re-id.ipynb
###Markdown
Person re-identification (ReID) using Triplet Loss and MobileNets

Paper: https://arxiv.org/pdf/1704.04861v1.pdf
Original code: https://github.com/cftang0827/human_recognition

Re-identification is the task of recognizing the same person in photos taken by different cameras, or by the same camera on different occasions, with non-overlapping fields of view (useful for tracking a person across scene cuts).

The model relies on a variant of the loss function known as **Triplet Loss** for person re-identification, and on **MobileNets**, which uses **Depthwise Separable Convolution** to remove a significant portion of the model size while keeping good performance. In addition, **SSD (Single Shot Detector)** is used for person detection; it proposes the regions of interest and classifies them in a single pass.

Importing the API
For this example an API was written; it is imported below, together with the other required libraries.
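To make the two main ingredients more concrete, here is a small, self-contained sketch. It is not taken from the repository: the function name, margin value, toy embeddings and channel counts are purely illustrative.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge form of the triplet loss on embedding vectors."""
    d_ap = np.linalg.norm(anchor - positive)   # distance to the same identity
    d_an = np.linalg.norm(anchor - negative)   # distance to a different identity
    return max(d_ap - d_an + margin, 0.0)      # zero once d_an exceeds d_ap + margin

# Toy 2-D embeddings: anchor and positive are the same person, negative is not.
a, p, n = np.array([0.1, 0.9]), np.array([0.2, 0.8]), np.array([0.9, 0.1])
print(triplet_loss(a, p, n))   # 0.0 here, because the negative is already far enough away

# Rough parameter count behind depthwise separable convolutions
# (3x3 kernel, 128 input channels, 256 output channels):
k, c_in, c_out = 3, 128, 256
standard  = k * k * c_in * c_out           # ordinary convolution
separable = k * k * c_in + c_in * c_out    # depthwise + 1x1 pointwise
print(standard, separable)                 # roughly 8.7x fewer parameters
```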
###Code
import api
import cv2
from matplotlib import pyplot as plt
import timeit
plt.ion()
%matplotlib inline
###Output
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:From /home/aalejo/human_recognition/api.py:7: The name tf.reset_default_graph is deprecated. Please use tf.compat.v1.reset_default_graph instead.
WARNING:tensorflow:From /home/aalejo/human_recognition/api.py:11: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.
INFO:tensorflow:Scale of 0 disables regularizer.
WARNING:tensorflow:From /home/aalejo/human_recognition/nets/mobilenet_v1.py:316: The name tf.variable_scope is deprecated. Please use tf.compat.v1.variable_scope instead.
WARNING:tensorflow:Entity <bound method Conv.call of <tensorflow.python.layers.convolutional.Conv2D object at 0x7faeb7db5588>> could not be transformed and will be executed as-is. Please report this to the AutgoGraph team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: converting <bound method Conv.call of <tensorflow.python.layers.convolutional.Conv2D object at 0x7faeb7db5588>>: AssertionError: Bad argument number for Name: 3, expecting 4
WARNING:tensorflow:From /home/aalejo/human_recognition/heads/fc1024.py:12: The name tf.GraphKeys is deprecated. Please use tf.compat.v1.GraphKeys instead.
WARNING:tensorflow:From /home/aalejo/human_recognition/api.py:17: The name tf.train.Saver is deprecated. Please use tf.compat.v1.train.Saver instead.
WARNING:tensorflow:From /home/aalejo/anaconda3/envs/pytorch/lib/python3.6/site-packages/tensorflow/python/training/saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from model/checkpoint-25000
###Markdown
The pre-trained model weights are loaded in api.py, line 17: tf.train.Saver().restore(sess, 'model/checkpoint-25000')

First example
For the first example, an image with three people is loaded:
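For reference, the save/restore mechanism that api.py relies on looks roughly like the toy round-trip below. This is a sketch only: it uses a single stand-in variable and a temporary checkpoint path, not the repository's MobileNet graph or its 'model/checkpoint-25000' file.

```python
import os, tempfile
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# One variable standing in for the real MobileNet weights.
w = tf.get_variable('w', initializer=tf.constant([1.0, 2.0, 3.0]))
saver = tf.train.Saver()
ckpt = os.path.join(tempfile.mkdtemp(), 'toy-checkpoint')

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, ckpt)       # write the weights to disk

with tf.Session() as sess:
    saver.restore(sess, ckpt)    # same call api.py uses with its pre-trained checkpoint
    print(sess.run(w))           # [1. 2. 3.]
```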
###Code
people = cv2.imread('test/people.jpg')[:,:,::-1]  # OpenCV reads BGR; reverse the channels to RGB for matplotlib
plt.imshow(people)
###Output
_____no_output_____
###Markdown
Next, the human_locations function is called; using the model pre-trained with MobileNet-SSD, it detects the three people in the photo.
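The cropping done by api.crop_human in the next cell conceptually amounts to slicing the image with each detected bounding box. The sketch below is only an illustration and assumes relative [ymin, xmin, ymax, xmax] boxes, a common SSD convention; the repository's actual box format may differ, so check api.human_locations if that matters.

```python
def crop_boxes(image, boxes):
    """Cut one crop per detection box given as relative (ymin, xmin, ymax, xmax)."""
    h, w = image.shape[:2]
    crops = []
    for ymin, xmin, ymax, xmax in boxes:
        y0, y1 = int(ymin * h), int(ymax * h)
        x0, x1 = int(xmin * w), int(xmax * w)
        crops.append(image[y0:y1, x0:x1])   # numpy slicing keeps the colour channels
    return crops

# e.g. crop_boxes(people, [(0.1, 0.2, 0.9, 0.4)]) would return one crop from `people`
```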
###Code
people_location = api.human_locations(people)
people_human = api.crop_human(people, people_location)
for human in people_human:
plt.imshow(human)
plt.show()
###Output
_____no_output_____
###Markdown
Second example
For the second example, two images are taken from a scene of the TV series The Big Bang Theory, showing Sheldon and a friend.
###Code
people2 = cv2.imread('test/001.jpeg')[:,:,::-1]
people3 = cv2.imread('test/002.jpeg')[:,:,::-1]
plt.imshow(people2)
plt.show()
plt.imshow(people3)
###Output
_____no_output_____
###Markdown
We apply person detection to the two people in both images:
###Code
people2_location = api.human_locations(people2)
people2_human = api.crop_human(people2, people2_location)
for human in people2_human:
plt.imshow(human)
plt.show()
people3_location = api.human_locations(people3)
people3_human = api.crop_human(people3, people3_location)
for human in people3_human:
plt.imshow(human)
plt.show()
###Output
_____no_output_____
###Markdown
The embedding vectors of the detected people are obtained from MobileNet:
###Code
t1 = timeit.default_timer()
human_1_1_vector = api.human_vector(people2_human[0])
human_1_2_vector = api.human_vector(people2_human[1])
human_2_1_vector = api.human_vector(people3_human[0])
human_2_2_vector = api.human_vector(people3_human[1])
t2 = timeit.default_timer()
print('Time elapsed: {} sec'.format(round(t2-t1, 3)))
###Output
Time elapsed: 0.708 sec
###Markdown
Re-identification
The distances between the detected people are computed; the shortest distances are those between detections of the same person.
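To read the pairwise comparisons below at a glance, the four embeddings computed above can also be stacked into a matrix and matched by nearest neighbour. This sketch assumes the vectors are 1-D NumPy arrays of equal length and that api.human_distance behaves approximately like a Euclidean distance; check the API if that assumption matters.

```python
import numpy as np

names = ['Sheldon (photo 1)', 'Friend (photo 1)', 'Sheldon (photo 2)', 'Friend (photo 2)']
vecs = np.stack([human_1_1_vector, human_1_2_vector,
                 human_2_1_vector, human_2_2_vector])

dists = np.linalg.norm(vecs[:, None, :] - vecs[None, :, :], axis=-1)  # 4x4 distance matrix
np.fill_diagonal(dists, np.inf)                                       # ignore self-matches
for i, name in enumerate(names):
    print('{} -> closest: {}'.format(name, names[int(np.argmin(dists[i]))]))
```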
###Code
# Sheldon (Photo 1) vs Friend (Photo 1)
api.human_distance(human_1_1_vector, human_1_2_vector)
# Sheldon (Photo 1) vs Sheldon (Photo 2)
api.human_distance(human_1_1_vector, human_2_1_vector)
# Sheldon (Photo 1) vs Friend (Photo 2)
api.human_distance(human_1_1_vector, human_2_2_vector)
# Friend (Photo 1) vs Sheldon (Photo 2)
api.human_distance(human_1_2_vector, human_2_1_vector)
# Friend (Photo 1) vs Friend (Photo 2)
api.human_distance(human_1_2_vector, human_2_2_vector)
# Sheldon (Photo 2) vs Friend (Photo 2)
api.human_distance(human_2_1_vector, human_2_2_vector)
###Output
_____no_output_____

py/learningAlignmentMapping.ipynb
###Markdown
Mapping TFBS to alignments

Dictionary positions with an alignment
Me and Bronski's conversation on how to do this

**Me**: Hey! I want to map nucleotide sequence positions after an alignment. I know you have done this before, so I would rather not reinvent the wheel. You did a dictionary in python, but how? Can I see your script? If this feature is embedded in a larger program it might be easier to just explain your strategy.

**Bronski**: So the strategy is to loop through an aligned sequence and create a dictionary where the keys are the original indices and the values are the indices in the alignment.

Here's a simple example:
###Code
aligned_seq = 'AGC---TTCATCA'
remap_dict = {}
nuc_list = ['A', 'a', 'G', 'g', 'C', 'c', 'T', 't', 'N', 'n']
counter = 0
for xInd, x in enumerate(aligned_seq):
if x in nuc_list:
remap_dict[counter] = xInd
counter += 1
print(remap_dict)
###Output
{0: 0, 1: 1, 2: 2, 3: 6, 4: 7, 5: 8, 6: 9, 7: 10, 8: 11, 9: 12}
###Markdown
My Attempt with ES2

**Breakdown of what I have to do**:
1. Read in the alignment file.
2. Separate each sequence into its own sequence
   - make a dictionary for each sequence
   - print out the sequence?
   - run the TFBS finder for each sequence
3. Make a vector for each sequence that records presence or absence at each position (a rough sketch of this step appears further down, after the notes on the remapping loop).
4. Figure out a way to visualize this.

Read in Alignment File
Use `Bio.AlignIO.read()`
- The first argument is a handle to read the data from, typically an open file (see Section 24.1), or a filename.
- The second argument is a lower case string specifying the alignment format. As in Bio.SeqIO we don't try and guess the file format for you! See [http://biopython.org/wiki/AlignIO](http://biopython.org/wiki/AlignIO) for a full listing of supported formats.
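Before switching to the real ES2 data, here is how Bronski's toy dictionary above would be used to lift a motif hit from original (ungapped) coordinates into alignment coordinates. The hit positions are made up purely for illustration.

```python
# A hypothetical TFBS hit at positions 3..6 (0-based, inclusive) of the ungapped sequence.
start, end = 3, 6
aln_start, aln_end = remap_dict[start], remap_dict[end]
print(aln_start, aln_end)                   # 6 9 with the toy example
print(aligned_seq[aln_start:aln_end + 1])   # 'TTCA'; in general this slice may contain gaps
```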
###Code
from Bio import AlignIO
alignment = AlignIO.read("../data/fasta/output_ludwig_eve-striped-2.fa", "fasta")
print(alignment)
for record in alignment:
print(record.id)
###Output
ludwig_eve-striped-2||MEMB002F|+
ludwig_eve-striped-2||MEMB002A|+
ludwig_eve-striped-2||MEMB003C|-
ludwig_eve-striped-2||MEMB002C|+
ludwig_eve-striped-2||MEMB003B|+
ludwig_eve-striped-2||MEMB003F|+
ludwig_eve-striped-2||MEMB003D|-
ludwig_eve-striped-2||MEMB002D|+
ludwig_eve-striped-2||MEMB002E|-
###Markdown
Buuuuuuuut, we don't really need the alignment as an alignment per se, although it is important for viewing and testing later. We need each separate sequence, so I am going to use SeqIO.parse.
###Code
from Bio import SeqIO
# read in alignment as a list of sequences
records = list(SeqIO.parse("../data/fasta/output_ludwig_eve-striped-2.fa", "fasta"))
# Testing with the first sequence
seqTest = records[0]
#print(seqTest.seq)
print(type(seqTest))
# Turn just the sequence into a string instead of fasta sequence
aligned_seq = str(seqTest.seq)
print(type(aligned_seq)) # check
###Output
<type 'str'>
###Markdown
**Notes on the loop**
- `enumerate()` yields each position index together with its element as it counts up.
- `xInd` is the enumerated index (the position in the aligned sequence), and `x` is the nucleotide at that position.
- `remap_dict[counter] = xInd` then builds the dictionary, with `counter` tracking the position in the ungapped sequence.
###Code
remap_dict = {}
nuc_list = ['A', 'a', 'G', 'g', 'C', 'c', 'T', 't', 'N', 'n']
counter = 0
for xInd, x in enumerate(aligned_seq):
if x in nuc_list:
remap_dict[counter] = xInd
counter += 1
#checking dictionary created
print(len(remap_dict)) # should be length of alignment
print(remap_dict[40]) #should print the value of the number key
print(type(remap_dict[40])) #Check data type
###Output
{0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 10: 10, 11: 11, 12: 12, 13: 13, 14: 14, 15: 15, 16: 16, 17: 17, 18: 18, 19: 19, 20: 20, 21: 21, 22: 22, 23: 23, 24: 24, 25: 25, 26: 26, 27: 27, 28: 28, 29: 29, 30: 30, 31: 31, 32: 32, 33: 36, 34: 37, 35: 38, 36: 39, 37: 40, 38: 41, 39: 42, 40: 43, 41: 44, 42: 45, 43: 46, 44: 47, 45: 48, 46: 49, 47: 59, 48: 60, 49: 61, 50: 62, 51: 63, 52: 64, 53: 65, 54: 66, 55: 67, 56: 68, 57: 69, 58: 70, 59: 71, 60: 72, 61: 73, 62: 74, 63: 75, 64: 76, 65: 77, 66: 78, 67: 79, 68: 80, 69: 81, 70: 82, 71: 83, 72: 84, 73: 85, 74: 86, 75: 87, 76: 88, 77: 89, 78: 90, 79: 91, 80: 92, 81: 93, 82: 94, 83: 97, 84: 98, 85: 99, 86: 100, 87: 101, 88: 102, 89: 103, 90: 104, 91: 105, 92: 106, 93: 139, 94: 140, 95: 141, 96: 142, 97: 143, 98: 144, 99: 145, 100: 146, 101: 147, 102: 148, 103: 149, 104: 150, 105: 151, 106: 152, 107: 153, 108: 154, 109: 160, 110: 161, 111: 162, 112: 163, 113: 164, 114: 165, 115: 166, 116: 167, 117: 168, 118: 169, 119: 170, 120: 171, 121: 180, 122: 181, 123: 182, 124: 183, 125: 184, 126: 185, 127: 186, 128: 187, 129: 188, 130: 189, 131: 190, 132: 191, 133: 192, 134: 200, 135: 201, 136: 202, 137: 203, 138: 204, 139: 205, 140: 206, 141: 207, 142: 208, 143: 209, 144: 210, 145: 211, 146: 212, 147: 213, 148: 214, 149: 215, 150: 216, 151: 217, 152: 218, 153: 219, 154: 220, 155: 221, 156: 222, 157: 223, 158: 224, 159: 225, 160: 226, 161: 227, 162: 228, 163: 243, 164: 244, 165: 245, 166: 246, 167: 247, 168: 248, 169: 249, 170: 250, 171: 251, 172: 252, 173: 253, 174: 254, 175: 255, 176: 256, 177: 257, 178: 258, 179: 259, 180: 260, 181: 261, 182: 262, 183: 263, 184: 264, 185: 265, 186: 266, 187: 267, 188: 268, 189: 269, 190: 270, 191: 271, 192: 272, 193: 273, 194: 274, 195: 275, 196: 276, 197: 277, 198: 278, 199: 279, 200: 280, 201: 281, 202: 282, 203: 283, 204: 284, 205: 285, 206: 286, 207: 287, 208: 288, 209: 289, 210: 290, 211: 291, 212: 292, 213: 293, 214: 294, 215: 295, 216: 296, 217: 316, 218: 317, 219: 318, 220: 319, 221: 320, 222: 321, 223: 322, 224: 323, 225: 324, 226: 325, 227: 326, 228: 327, 229: 328, 230: 329, 231: 330, 232: 331, 233: 332, 234: 333, 235: 334, 236: 335, 237: 336, 238: 337, 239: 338, 240: 339, 241: 340, 242: 341, 243: 342, 244: 343, 245: 344, 246: 345, 247: 346, 248: 347, 249: 348, 250: 349, 251: 350, 252: 351, 253: 352, 254: 353, 255: 354, 256: 355, 257: 356, 258: 357, 259: 358, 260: 359, 261: 360, 262: 361, 263: 362, 264: 363, 265: 364, 266: 365, 267: 366, 268: 367, 269: 368, 270: 369, 271: 370, 272: 371, 273: 372, 274: 373, 275: 374, 276: 375, 277: 376, 278: 377, 279: 378, 280: 379, 281: 380, 282: 381, 283: 382, 284: 383, 285: 384, 286: 385, 287: 386, 288: 387, 289: 388, 290: 389, 291: 397, 292: 398, 293: 399, 294: 400, 295: 401, 296: 402, 297: 403, 298: 404, 299: 405, 300: 406, 301: 407, 302: 408, 303: 409, 304: 410, 305: 411, 306: 413, 307: 414, 308: 415, 309: 416, 310: 417, 311: 418, 312: 419, 313: 420, 314: 421, 315: 422, 316: 423, 317: 424, 318: 425, 319: 426, 320: 427, 321: 429, 322: 430, 323: 431, 324: 432, 325: 433, 326: 434, 327: 435, 328: 436, 329: 437, 330: 438, 331: 439, 332: 440, 333: 441, 334: 442, 335: 443, 336: 444, 337: 445, 338: 446, 339: 447, 340: 448, 341: 449, 342: 450, 343: 451, 344: 452, 345: 453, 346: 454, 347: 455, 348: 456, 349: 457, 350: 458, 351: 459, 352: 474, 353: 475, 354: 476, 355: 477, 356: 478, 357: 479, 358: 480, 359: 481, 360: 482, 361: 483, 362: 484, 363: 485, 364: 486, 365: 487, 366: 488, 367: 489, 368: 490, 369: 491, 370: 492, 371: 493, 372: 494, 373: 495, 374: 496, 375: 497, 
376: 498, 377: 499, 378: 500, 379: 501, 380: 502, 381: 503, 382: 504, 383: 505, 384: 506, 385: 507, 386: 508, 387: 509, 388: 510, 389: 511, 390: 512, 391: 516, 392: 517, 393: 518, 394: 519, 395: 520, 396: 521, 397: 522, 398: 523, 399: 524, 400: 525, 401: 526, 402: 528, 403: 529, 404: 530, 405: 531, 406: 532, 407: 533, 408: 534, 409: 535, 410: 536, 411: 537, 412: 538, 413: 539, 414: 540, 415: 541, 416: 542, 417: 543, 418: 544, 419: 545, 420: 546, 421: 547, 422: 550, 423: 551, 424: 552, 425: 553, 426: 554, 427: 555, 428: 556, 429: 560, 430: 561, 431: 562, 432: 563, 433: 564, 434: 565, 435: 566, 436: 567, 437: 568, 438: 569, 439: 570, 440: 571, 441: 574, 442: 575, 443: 576, 444: 577, 445: 578, 446: 579, 447: 580, 448: 581, 449: 582, 450: 583, 451: 584, 452: 588, 453: 589, 454: 590, 455: 591, 456: 592, 457: 593, 458: 594, 459: 595, 460: 596, 461: 597, 462: 598, 463: 599, 464: 600, 465: 601, 466: 602, 467: 603, 468: 604, 469: 605, 470: 606, 471: 607, 472: 608, 473: 609, 474: 610, 475: 611, 476: 612, 477: 613, 478: 614, 479: 615, 480: 616, 481: 617, 482: 618, 483: 619, 484: 620, 485: 621, 486: 622, 487: 623, 488: 624, 489: 625, 490: 626, 491: 627, 492: 628, 493: 631, 494: 632, 495: 633, 496: 635, 497: 636, 498: 637, 499: 663, 500: 664, 501: 665, 502: 666, 503: 667, 504: 668, 505: 669, 506: 670, 507: 671, 508: 672, 509: 673, 510: 674, 511: 675, 512: 676, 513: 677, 514: 678, 515: 679, 516: 680, 517: 681, 518: 682, 519: 683, 520: 684, 521: 685, 522: 686, 523: 687, 524: 688, 525: 689, 526: 690, 527: 691, 528: 692, 529: 693, 530: 694, 531: 695, 532: 696, 533: 697, 534: 698, 535: 699, 536: 700, 537: 701, 538: 702, 539: 703, 540: 704, 541: 705, 542: 706, 543: 707, 544: 708, 545: 709, 546: 710, 547: 711, 548: 712, 549: 713, 550: 714, 551: 715, 552: 716, 553: 717, 554: 718, 555: 719, 556: 720, 557: 721, 558: 722, 559: 723, 560: 724, 561: 725, 562: 726, 563: 739, 564: 740, 565: 741, 566: 742, 567: 743, 568: 744, 569: 745, 570: 746, 571: 747, 572: 748, 573: 749, 574: 750, 575: 751, 576: 752, 577: 753, 578: 754, 579: 755, 580: 756, 581: 759, 582: 760, 583: 761, 584: 762, 585: 763, 586: 764, 587: 765, 588: 766, 589: 767, 590: 768, 591: 769, 592: 770, 593: 771, 594: 772, 595: 773, 596: 774, 597: 775, 598: 776, 599: 777, 600: 778, 601: 779, 602: 780, 603: 781, 604: 782, 605: 783, 606: 784, 607: 785, 608: 786, 609: 787, 610: 788, 611: 789, 612: 790, 613: 791, 614: 792, 615: 793, 616: 794, 617: 795, 618: 796, 619: 797, 620: 798, 621: 806, 622: 807, 623: 808, 624: 809, 625: 810, 626: 812, 627: 813, 628: 814, 629: 815, 630: 816, 631: 817, 632: 818, 633: 819, 634: 820, 635: 821, 636: 822, 637: 823, 638: 824, 639: 825, 640: 826, 641: 827, 642: 829, 643: 830, 644: 831, 645: 832, 646: 833, 647: 834, 648: 835, 649: 836, 650: 837, 651: 838, 652: 839, 653: 840, 654: 841, 655: 842, 656: 843, 657: 844, 658: 845, 659: 846, 660: 847, 661: 848, 662: 849, 663: 850, 664: 851, 665: 852, 666: 853, 667: 854, 668: 855, 669: 861, 670: 862, 671: 863, 672: 864, 673: 865, 674: 866, 675: 867, 676: 868, 677: 869, 678: 872, 679: 873, 680: 874, 681: 875, 682: 876, 683: 877, 684: 878, 685: 879, 686: 880, 687: 881, 688: 882, 689: 883, 690: 892, 691: 893, 692: 894, 693: 895, 694: 896, 695: 897, 696: 898, 697: 899, 698: 900, 699: 901, 700: 902, 701: 903, 702: 904, 703: 905, 704: 906, 705: 907, 706: 908, 707: 909, 708: 910, 709: 911, 710: 912, 711: 913, 712: 914, 713: 929, 714: 930, 715: 931, 716: 932, 717: 933, 718: 934, 719: 935, 720: 936, 721: 937, 722: 938, 723: 939, 724: 947, 725: 948, 726: 949, 727: 950, 728: 951, 729: 952, 730: 953, 731: 
954, 732: 955, 733: 956, 734: 957, 735: 960, 736: 961, 737: 962, 738: 963, 739: 964, 740: 965, 741: 966, 742: 967, 743: 968, 744: 969, 745: 970, 746: 971, 747: 972, 748: 973, 749: 974, 750: 975, 751: 976, 752: 977, 753: 978, 754: 979, 755: 980, 756: 981, 757: 982, 758: 983, 759: 984, 760: 985, 761: 986, 762: 987, 763: 988, 764: 989, 765: 990, 766: 991, 767: 992, 768: 993, 769: 994, 770: 995, 771: 996, 772: 997, 773: 998, 774: 999, 775: 1000, 776: 1001, 777: 1002, 778: 1003, 779: 1004, 780: 1005, 781: 1006, 782: 1007, 783: 1008, 784: 1009, 785: 1010, 786: 1011, 787: 1012, 788: 1013, 789: 1014, 790: 1015, 791: 1016, 792: 1017, 793: 1018, 794: 1019, 795: 1020, 796: 1023, 797: 1024, 798: 1026, 799: 1027, 800: 1028, 801: 1029, 802: 1030, 803: 1031, 804: 1032, 805: 1033, 806: 1034, 807: 1035, 808: 1036, 809: 1037, 810: 1038, 811: 1039, 812: 1040, 813: 1041, 814: 1042, 815: 1043, 816: 1044, 817: 1045, 818: 1046, 819: 1047, 820: 1048, 821: 1049, 822: 1050, 823: 1051, 824: 1052, 825: 1053, 826: 1054, 827: 1055, 828: 1056, 829: 1057, 830: 1058, 831: 1059, 832: 1060, 833: 1061, 834: 1063, 835: 1064, 836: 1065, 837: 1066, 838: 1067, 839: 1068, 840: 1069, 841: 1070, 842: 1071, 843: 1072, 844: 1073, 845: 1074, 846: 1075, 847: 1076, 848: 1077, 849: 1078, 850: 1079, 851: 1080, 852: 1081, 853: 1082, 854: 1083, 855: 1084, 856: 1085, 857: 1086, 858: 1087, 859: 1088, 860: 1089, 861: 1090, 862: 1091, 863: 1092, 864: 1093, 865: 1094, 866: 1095, 867: 1096, 868: 1097, 869: 1098, 870: 1099, 871: 1100, 872: 1101, 873: 1102, 874: 1103, 875: 1104, 876: 1105, 877: 1106, 878: 1107, 879: 1108, 880: 1109, 881: 1110, 882: 1111, 883: 1112, 884: 1113, 885: 1114, 886: 1115, 887: 1116, 888: 1117, 889: 1118, 890: 1119, 891: 1120, 892: 1121, 893: 1122, 894: 1123, 895: 1124, 896: 1125, 897: 1126, 898: 1127, 899: 1128, 900: 1129, 901: 1130, 902: 1131, 903: 1132, 904: 1133}
905
43
<type 'int'>
###Markdown
We need two sequences, one of which is not the alignment.

Putting the Remap together with TFBS
The last part is to create a vector that spans the entire alignment, holding 1 if the position has a bicoid site and 0 if not.
###Code
## Attempt at vector
bcdSites = [0] * len(aligned_seq)
#from loctaingTFB.ipy
TFBS = [10, 102, 137, -741, -680, -595, 309, -497, -485, 429, 453, 459, 465, -376, -347, -339, -308, 593, 600, -289, 613, 623, -240, 679, -128, -77, 825, 826, 886]
#Need to make positive. This is not what I need.
#TFBS_pos = [abs(k) for k in TFBS]
print((TFBS))
m = 7
# This is the range of the motif
for pos in TFBS:
print(aligned_seq[pos:pos+m])
###Output
[10, 102, 137, -741, -680, -595, 309, -497, -485, 429, 453, 459, 465, -376, -347, -339, -308, 593, 600, -289, 613, 623, -240, 679, -128, -77, 825, 826, 886]
ATAATTT
CCTCG--
--AACTG
--CTGTG
ACAG---
TCCATCC
-------
-------
-------
GAACGGT
GAGACAG
G------
-------
AACAGGC
AGTTGGG
TA-----
-TAATCC
GGGATTA
GCCGAGG
CACCTCA
CTTGGTA
CGATCC-
TGCGCCA
AAAGTCA
AATAAAT
GTC-CCA
CCC-TAA
CC-TAAT
------T
###Markdown
Now I need to make a new vector that says whether a bicoid site is present or absent at each position. Can the negative positions be used to query the dictionary? Likely not.
###Code
print(type(TFBS))
print(type(remap_dict))
print(TFBS)
# Okay, the problem is the negative numbers
another_key = [82, 85, 98]
print(len(remap_dict))
# So I need to convert the negative number first.
print([remap_dict[x] for x in another_key])
# Working on converting TFBS negative numbers.
TFBS_2 = []
for x in TFBS:
if x < 0:
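# negative positions index from the end of the 905 nt sequence (905 = len(remap_dict)), so shift them to forward coordinates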
TFBS_2.append(905 + x)
else:
TFBS_2.append(x)
print(TFBS_2)
###Output
[10, 102, 137, 164, 225, 310, 309, 408, 420, 429, 453, 459, 465, 529, 558, 566, 597, 593, 600, 616, 613, 623, 665, 679, 777, 828, 825, 826, 886]
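###Markdown
A minimal sketch (added, not from the original notebook) of the presence/absence vector described above, assuming `remap_dict`, `TFBS_2`, the motif length `m = 7` and `aligned_seq` as defined earlier; alignment columns that fall in gaps simply stay 0:
###Code
bcd_presence = [0] * len(aligned_seq)
for start in TFBS_2:
    for offset in range(m):
        pos = start + offset
        # only positions that exist in the ungapped sequence can be mapped to alignment columns
        if pos in remap_dict:
            bcd_presence[remap_dict[pos]] = 1
print(sum(bcd_presence), "alignment columns overlap a predicted bicoid site")
###Output
_____no_output_____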
|
tasks/twitter/twitter_pre.ipynb | ###Markdown
Preprocessing of the Twitter Dataset
###Code
import pandas as pd
import sys
sys.path.append("..")
from tqdm import tqdm_notebook
import glob
files = glob.glob("*")
files
tweets = pd.read_csv('adr_dataset_split.csv', sep = ",")
tweets.columns
from preprocess_bc import cleaner
tweets["text"] = tweets["text"].apply(lambda x: " ".join(cleaner(x)))
tweets.head(10).text[0]
from preprocess_bc import extract_vocabulary_
word_to_ix = extract_vocabulary_(min_df = 2, dataframe = tweets)
tweets["text"] = tweets["text"].apply(lambda x: ("<SOS> " + x + " <EOS>").split())
###Output
_____no_output_____
###Markdown
Forming our vocabulary, converting the text to index sequences, and saving it
###Code
from preprocess_bc import text_to_seq
ix_to_word = {v:k for k,v in word_to_ix.items()}
train_ix = text_to_seq(tweets[tweets.exp_split == "train"][["text","label"]].values, word_to_ix)
dev_ix = text_to_seq(tweets[tweets.exp_split == "dev"][["text","label"]].values, word_to_ix)
test_ix = text_to_seq(tweets[tweets.exp_split == "test"][["text","label"]].values, word_to_ix)
train_ix[0]
###Output
_____no_output_____
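###Markdown
Conceptually, `text_to_seq` replaces each token with its integer index in `word_to_ix`. A minimal, purely illustrative sketch of that idea (this is not the actual `preprocess_bc` implementation, which also handles the label column):
###Code
def to_indices(tokens, vocab, unk_token="<UNK>"):
    # map each token to its vocabulary index, falling back to index 0 if no <UNK> entry exists (illustration only)
    return [vocab.get(t, vocab.get(unk_token, 0)) for t in tokens]

to_indices("<SOS> the drug made me dizzy <EOS>".split(), word_to_ix)
###Output
_____no_output_____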
###Markdown
Preparing our embeddings
###Code
from preprocess_bc import pretrained_embeds, DataHolder_BC
pre = pretrained_embeds("fasttext.simple.300d", ix_to_word)
pretrained = pre.processed()
data = DataHolder_BC(train_ix, dev_ix, test_ix, word_to_ix, embeds = pretrained)
import pickle
pickle.dump(data, open("data.p", "wb"))
###Output
_____no_output_____ |
Data loading and formatting.ipynb | ###Markdown
In this notebook, useful data loading and transforming functions are demonstrated. The main sources of data are Quandl, Cryptocompare and Yahoo Finance.
###Code
# Put these at the top of every notebook, to get automatic reloading and inline plotting
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
import re
import dill as pickle
import itertools
from tqdm import tqdm, tqdm_notebook
from datetime import datetime
import numpy as np
import pandas as pd
from pandas import Series, DataFrame
pd.set_option('display.max_rows', 6)
import matplotlib as mpl
from matplotlib import pyplot as plt
import matplotlib.transforms as mtransforms
plt.rcParams['figure.figsize'] = [12, 4]
import quandl
from utils import *
###Output
_____no_output_____
###Markdown
Quandl
###Code
api_key = open(file='quandl_api').read().replace('\n', '')
quandl.ApiConfig.api_key = api_key
###Output
_____no_output_____
###Markdown
Exchange Data International
One may use [_Exchange Data International_](https://www.quandl.com/publishers/edi) free sample series as follows:
###Code
print(os.listdir(QUANDL_PATH + 'EDI/'))
# tickers, prices = get_quandl_edi(get_quandl_edi(list(QUANDL_FREE_SAMPLES_EDI.keys())), download=True) # The first time...
tickers, prices = get_quandl_edi(list(QUANDL_FREE_SAMPLES_EDI.keys()))
prices
print('Number of price series:', len(tickers))
j = np.random.choice(len(tickers) - 1)
ticker_j = list(tickers)[j]
print('j:', j, ' - ', ticker_j)
price_j = prices.loc[ticker_j]
price_j[['Open', 'High', 'Low', 'Close']].plot();
price_j
###Output
_____no_output_____
###Markdown
Sharadar Equity Prices
One may use [_Sharadar Equity Prices_](https://www.quandl.com/publishers/sharadar) free sample series as follows:
###Code
# tickers, prices = get_quandl_sharadar(download=True) # The first time...
tickers, prices = get_quandl_sharadar(free=False)
print('Number of price series:', len(tickers))
prices
j = np.random.choice(len(tickers) - 1)
ticker_j = list(tickers)[j]
print('j:', j, ' - ', ticker_j)
price_j = prices.loc[ticker_j]
price_j[['Open', 'High', 'Low', 'Close']].plot()
plt.axhline(c='grey')
plt.show()
price_j
###Output
_____no_output_____
###Markdown
Metadata...
###Code
shr_meta = pd.read_csv(QUANDL_PATH + 'Sharadar/SHARADAR-TICKERS.csv')
shr_meta.to_excel(QUANDL_PATH + 'Sharadar/SHARADAR-TICKERS.xlsx')
shr_meta.keys()
shr_meta.describe(include=np.object)
shr_meta.groupby('currency').count()
###Output
_____no_output_____
###Markdown
Train, Dev, Test samples
###Code
p_train = 0.7
p_dev = 0.15
N = len(tickers)
train, dev = round(p_train * N), round(p_dev * N)
test = N - train - dev
print('N:', N, ', Tain:', train, ', Dev:', dev, ', Test:', test)
np.random.seed(123)
tickers_full = list(np.random.permutation(tickers))
tickers_train = tickers_full[:train]
tickers_dev = tickers_full[train:(train + dev)]
tickers_test = tickers_full[-test:]
assert len(tickers_train + tickers_dev + tickers_test) == N
tickers_in_train_folder = [f.replace('.feather', '') for f in os.listdir(QUANDL_PATH + 'Sharadar/train/')]
set(tickers_train) == set(tickers_in_train_folder)
# # Only run once
# for t in tickers_train:
# prices.loc[t].reset_index().to_feather(fname=QUANDL_PATH + 'Sharadar/train/' + t + '.feather')
# # Only run once
# for t in tickers_dev:
# prices.loc[t].reset_index().to_feather(fname=QUANDL_PATH + 'Sharadar/dev/' + t + '.feather')
# # Only run once
# for t in tickers_test:
# prices.loc[t].reset_index().to_feather(fname=QUANDL_PATH + 'Sharadar/test/' + t + '.feather')
dir_train = os.listdir(QUANDL_PATH + 'Sharadar/train/')
tickers_train = [f.replace('.feather', '') for f in dir_train]
train_files = [QUANDL_PATH + 'Sharadar/train/' + f for f in dir_train]
prices_train = pd.read_feather(train_files[0]).assign(Ticker=tickers_train[0])
for i in tqdm_notebook(range(1, len(tickers_train))):
df_i = pd.read_feather(train_files[i]).assign(Ticker=tickers_train[i])
prices_train = pd.concat((prices_train, df_i), axis=0)
def select_train_dev_test(x, ptr=0.7, pdv=0.15, pts=0.15):
y = ['train'] * int(len(x) * ptr) + ['dev'] * int(len(x) * pdv)
y += ['test'] * (len(x) - len(y))
return(y)
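
# Quick illustration (added): each group's rows are labelled in order, roughly the first 70%
# as 'train', the next 15% as 'dev' and the remainder as 'test'.
print(pd.Series(select_train_dev_test(list(range(20)))).value_counts())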
prices_train = prices_train.sort_values(['Ticker', 'Date'])
prices_train = prices_train.assign(Set=np.nan)
prices_train['Set'] = prices_train.groupby(prices_train['Ticker'], sort=False).transform(select_train_dev_test)
prices_train = prices_train.reset_index().drop('index', axis=1)
prices_train.to_feather(fname=QUANDL_PATH + 'Sharadar/sharadar_train.feather')
dir_dev = os.listdir(QUANDL_PATH + 'Sharadar/dev/')
tickers_dev = [f.replace('.feather', '') for f in dir_dev]
dev_files = [QUANDL_PATH + 'Sharadar/dev/' + f for f in dir_dev]
prices_dev = pd.read_feather(dev_files[0]).assign(Ticker=tickers_dev[0])
for i in tqdm_notebook(range(1, len(tickers_dev))):
df_i = pd.read_feather(dev_files[i]).assign(Ticker=tickers_dev[i])
prices_dev = pd.concat((prices_dev, df_i), axis=0)
prices_dev = prices_dev.reset_index().drop('index', axis=1)
prices_dev.to_feather(fname=QUANDL_PATH + 'Sharadar/sharadar_dev.feather')
dir_test = os.listdir(QUANDL_PATH + 'Sharadar/test/')
tickers_test = [f.replace('.feather', '') for f in dir_test]
test_files = [QUANDL_PATH + 'Sharadar/test/' + f for f in dir_test]
prices_test = pd.read_feather(test_files[0]).assign(Ticker=tickers_test[0])
for i in tqdm_notebook(range(1, len(tickers_test))):
df_i = pd.read_feather(test_files[i]).assign(Ticker=tickers_test[i])
prices_test = pd.concat((prices_test, df_i), axis=0)
prices_test = prices_test.reset_index().drop('index', axis=1)
prices_test.to_feather(fname=QUANDL_PATH + 'Sharadar/sharadar_test.feather')
###Output
_____no_output_____
###Markdown
Train, dev, test split within train
###Code
t_train, p_train = get_sharadar_train()
ticker = np.random.choice(t_train)
price = p_train.loc[ticker]
pal = plt.get_cmap('Paired').colors
fig, ax = plt.subplots(figsize=(16, 5))
trans = mpl.transforms.blended_transform_factory(ax.transData, ax.transAxes)
ax.fill_between(price.index, 0, price.High.max(), where=(price.Set == 'train'), facecolor=pal[0],
alpha=0.25, transform=trans, label='Train')
ax.fill_between(price.index, 0, price.High.max(), where=(price.Set == 'dev'), facecolor=pal[2],
alpha=0.25, transform=trans, label='Dev')
ax.fill_between(price.index, 0, price.High.max(), where=(price.Set == 'test'), facecolor=pal[4],
alpha=0.25, transform=trans, label='Test')
plt.plot(price.Close, label='Close')
plt.axhline(0, c='grey')
plt.legend()
plt.title(ticker)
plt.show()
###Output
_____no_output_____
###Markdown
Cryptocompare
TO DO

Yahoo Finance
TO DO

Data Cleaning
* Positive volume.
* OHLC: open and close within [low, high].
* Positive prices.
* No NaNs.
###Code
from utils import *
tickers, prices = get_sharadar_train()
prices.query('Volume > 0', inplace=True)
prices = prices.assign(
Low = prices[['Open', 'High', 'Low', 'Close']].apply('min', axis=1),
High = prices[['Open', 'High', 'Low', 'Close']].apply('max', axis=1),
)
prices.query('High > 0', inplace=True)
prices.loc[prices.Open == 0, 'Open'] = prices.loc[prices.Open == 0, 'Close']
prices.loc[prices.Close == 0, 'Close'] = prices.loc[prices.Close == 0, 'Open']
prices.loc[np.all(prices[['Open', 'Close']] == 0, axis=1), ['Open', 'Close']] = \
prices.loc[np.all(prices[['Open', 'Close']] == 0, axis=1), ['High', 'High']]
prices.loc[prices.Low == 0, 'Low'] = \
prices.loc[prices.Low == 0, ['Open', 'High', 'Close']].apply('min', axis=1)
# plot_prices(prices.loc['WLM'][-500:])
# plot_prices(prices.loc['AGHC'][-150:])
prices.query('Low < 0')
###Output
_____no_output_____
###Markdown
NaNs
###Code
prices.loc[prices.Open.isna(), 'Open'] = prices.loc[prices.Open.isna(), 'Close']
prices.loc[prices.Open.isna(), 'Open'] = prices.loc[prices.Open.isna(), 'High']
prices.loc[prices.Close.isna(), 'Close'] = prices.loc[prices.Close.isna(), 'Low']
###Output
_____no_output_____
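###Markdown
A small diagnostic sketch (added) that counts any rows still violating the OHLC invariants targeted by the cleaning steps above:
###Code
violations = {
    'non-positive Low': int((~(prices['Low'] > 0)).sum()),
    'Open outside [Low, High]': int(((prices['Open'] < prices['Low']) | (prices['Open'] > prices['High'])).sum()),
    'Close outside [Low, High]': int(((prices['Close'] < prices['Low']) | (prices['Close'] > prices['High'])).sum()),
}
violations
###Output
_____no_output_____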
###Markdown
Data Transforms
Weekly and monthly OHLC prices.
###Code
weekly_j = daily_to_weekly_prices(price_j)
weekly_j[['Open', 'High', 'Low', 'Close']].plot();
###Output
_____no_output_____
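###Markdown
For reference, a typical way to build weekly OHLC bars with plain pandas, shown only as a sketch of what a helper like `daily_to_weekly_prices` may do internally (the actual `utils` implementation may differ), assuming `price_j` has a DatetimeIndex with the usual Open/High/Low/Close/Volume columns:
###Code
ohlc_rule = {'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last', 'Volume': 'sum'}
weekly_manual = price_j.resample('W').agg(ohlc_rule).dropna(how='all')
weekly_manual.head()
###Output
_____no_output_____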
###Markdown
Returns
See `utils.add_changes(df)`; a small sketch of the typical return computation follows below.

Technical Indicators
* Volatility.
* Simple moving averages.
* Support and resistance.
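A minimal sketch (added) of the kind of return columns a helper like `utils.add_changes` typically adds, assuming a `Close` column; the actual implementation may differ:
###Code
def add_simple_changes(df):
    df = df.copy()
    df['Change'] = df['Close'].pct_change()        # simple one-period return
    df['LogChange'] = np.log(df['Close']).diff()   # log return
    return df

add_simple_changes(price_j)[['Close', 'Change', 'LogChange']].tail()
###Output
_____no_output_____
###Markdown
The `add_changes` and `add_technical` helpers from `utils` are applied below: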
###Code
from utils import *
df = pd.read_feather('input/Quandl/Sharadar/train/AAON.feather')
df = add_changes(df)
df
add_technical(df).set_index('Date')[['Close', 'SMA_120', 'Support_120', 'Resistance_120']].plot()
add_technical(df).set_index('Date')[['kurt_SMA_5', 'kurt_SMA_60', 'kurt_SMA_120']].plot()
add_technical(df).keys()
###Output
_____no_output_____
###Markdown
Files in input/.../train
###Code
df = pd.read_feather('input/Quandl/Sharadar/train_0/AAON.feather')
df = clean_sharadar(df)
df = add_changes(df)
df = add_technical(df)
df = df.drop(['Dividends', 'Closeunadj', 'Lastupdated'], axis=1)
df.keys()
def transform_data(old_dir, new_dir):
fnames = os.listdir(old_dir)
for f in fnames:
df = pd.read_feather(os.path.join(old_dir, f))
df = clean_sharadar(df)
df = add_changes(df)
df = add_technical(df)
df = df.drop(['Dividends', 'Closeunadj', 'Lastupdated'], axis=1)
df.reset_index(drop=True).to_feather(os.path.join(new_dir, f))
transform_data('input/Quandl/Sharadar/train_0/',
'input/Quandl/Sharadar/train/')
transform_data('input/Quandl/Sharadar/dev_0/',
'input/Quandl/Sharadar/dev/')
transform_data('input/Quandl/Sharadar/test_0/',
'input/Quandl/Sharadar/test/')
###Output
//anaconda/envs/trend/lib/python3.6/site-packages/pandas/core/indexing.py:543: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self.obj[item] = s
|
vigenere-cryptoanalysis.ipynb | ###Markdown
<img src="http://www.cerm.unifi.it/chianti/images/logo%20unifi_positivo.jpg" alt="UniFI logo" style="float: left; width: 20%; height: 20%;">Massimo NocentiniApril 28, 2016: refactoringApril 11, 2016: breaking the cipherApril 8, 2016: big-bangAbstractVigenere cipher cryptoanalysis. Vigenere cipher cryptoanalysisThis notebook studies the cryptoanalysis of the [Vigenere][wiki] cipher, which is polyalphabetic, namely two occurrences $a_{1}, a_{2}$ of character $a$ belonging to the plaintext are associated with occurrences $c_{1}, c_{2}$ in the ciphertext, such that $c_{1}\neq c_{2}$ with very high probability.Our implementation closely follows a class lecture given by Prof. [Orazio Puglisi][puglisi] within Cryptography course at the University of Florence. In order to fully understand the last part, where the concept of *mutual coincidence index* is crucial, we rest on the explanation at pag. 20 of [this notes][notes].[wiki]:https://en.wikipedia.org/wiki/Vigen%C3%A8re_cipher[puglisi]:http://web.math.unifi.it/users/puglisi/[notes]:http://iml.univ-mrs.fr/~ritzenth/cours/crypto.general.pdf physical shifting ciphersA crypto ruler: ![][crypto-ruler][crypto-ruler]: https://upload.wikimedia.org/wikipedia/commons/f/fa/Cryptographic_sliding_rule-IMG_0533.jpg "Crypto-ruler" A Swiss disk cipher:![][swiss-disk][swiss-disk]:https://upload.wikimedia.org/wikipedia/en/3/3f/Confederate_cipher_disk.png "swiss disk"
###Code
import itertools
from itertools import *
from copy import copy, deepcopy
from heapq import *
from random import *
import matplotlib.pyplot as plt
from collections import Counter
from sympy import *
init_printing()
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 10.0)
###Output
_____no_output_____
###Markdown
---
alphabet
Let $\mathcal{A}$ be our alphabet, composed of the standard English characters plus the `space` character. It will be sufficient to encode simple and (not quite so) short messages. To define it as the group $\frac{\mathbb{Z}}{n\mathbb{Z}}$ in the Python language we use a `dict` object, which can be reversed because it is a bijection.
###Code
def make_alphabet_entry(i):
alpha = i + ord('a')
return chr(alpha),i
A = dict(map(make_alphabet_entry, range(26)))
A.update({' ':26})
inverse_A = {v:k for k,v in A.items()}
A
###Output
_____no_output_____
###Markdown
string encoding and the *plaintext*
We define a function `encode` that consumes a string and produces a list of integers in our field. In parallel, the function `decode` goes backwards: it consumes a list of integers and returns a string. Such functions are useful in order to use the cipher and analyze it with `str` objects instead of the coded version made of lists of integers.
###Code
def encode(s, alphabet=A): return list(map(lambda i: alphabet[i], s))
def decode(e, inverse_alphabet=inverse_A): return "".join(map(lambda i: inverse_alphabet[i], e))
###Output
_____no_output_____
###Markdown
The following plaintext is prose taken from [here]; before using it we have to strip out the punctuation marks:

[here]: http://www.bartleby.com/209/2.html
###Code
def clean_text(text):
remove_chars = [',', '.', ';', ':', '-', '(', ')', "'", '"']
for rc in remove_chars: text = text.replace(rc, '')
text = text.replace('\n', ' ')
return "".join(filter(lambda c: not c.isdigit(), text))
with open('rest_plain_text.txt', 'r') as f:
plain_text = clean_text(f.read().lower())
encoded_plain_text = encode(plain_text)
chunk = 500
"{}...".format(plain_text[:chunk])
###Output
_____no_output_____
###Markdown
With the following assert we ensure that function `decode` is the inverse of function `encode`:
###Code
assert decode(encode(plain_text)) == plain_text
###Output
_____no_output_____
###Markdown
key
Let $\textbf{k}=(k_{0},\ldots,k_{m-1}) \in \left(\frac{\mathbb{Z}}{n\mathbb{Z}}\right)^{m}$ be a *key* of length $m\in\mathbb{N}$. In order to have a meaningful analysis we build a function `generate_random_key` which generates a random key of random length, keeping it safe... we will uncover it only at the end to check our work:
###Code
def generate_random_key(given_key=None, required_length=None, max_length=None, alphabet=A):
if given_key is not None: return given_key
if required_length is None and max_length is None: max_length = len(alphabet)
# the minimum length of the key is 3 to build interesting cases
length = required_length if required_length else randint(3, max_length)
key = [0] * length
# -1 in the following max limit because it is inclusive in the sense of `randint`.
for i in range(length):
key[i] = randint(0, len(alphabet)-1)
return key
#key = encode("ericsmullyan")
secret_key = generate_random_key(required_length=17)
###Output
_____no_output_____
###Markdown
encryption and the *ciphertext*
Now we are in the position to define the `encrypt` and `decrypt` functions; both of them consume an *encoded* message, namely a list of integers, and a key. The encryption works by repeating the key as many times as necessary to match the length of the input $\textbf{x}=(x_{0},\ldots,x_{h})$, where $h > m-1$, otherwise the cipher is a *OneTimePad*, which is unbreakable,
$$(\underbrace{x_{0},\ldots,x_{m-1}}_{k_{0},\ldots,k_{m-1}}\underbrace{x_{m},\ldots,x_{2m-1}}_{k_{0},\ldots,k_{m-1}}\ldots\underbrace{x_{lm},\ldots,x_{h}}_{k_{0},\ldots,k_{h \mod m}}) = (y_{0},\ldots,y_{h}) = \textbf{y}$$
truncating the rightmost block to match the plaintext suffix. At last, the ciphertext $\textbf{y}$ is obtained by addition modulo $n$ of corresponding symbols, where $n$ is the length of the alphabet. Decryption follows the same scheme, using modular subtraction.
###Code
def encrypt(message, key, alphabet=A):
n = len(alphabet)
return [(p+v)%n for p,v in zip(message, cycle(key))]
def decrypt(cipher, key, alphabet=A):
n = len(alphabet)
return [(c-v)%n for c,v in zip(cipher, cycle(key))]
###Output
_____no_output_____
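###Markdown
A quick round-trip check of the two functions on a toy message (added as an illustration):
###Code
toy_key = encode("key")
toy_cipher = encrypt(encode("attack at dawn"), toy_key)
print(decode(toy_cipher))
assert decode(decrypt(toy_cipher, toy_key)) == "attack at dawn"
###Output
_____no_output_____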
###Markdown
the following is the *ciphertext* produced by the application of function `encrypt` to the plaintext:
###Code
cipher_text = encrypt(encoded_plain_text, secret_key)
'{}...'.format(decode(cipher_text)[:chunk])
###Output
_____no_output_____
###Markdown
Assuming we know the `secret_key`, we ensure that function `decrypt` is the inverse of function `encrypt`:
###Code
assert decode(decrypt(cipher_text, secret_key)) == plain_text
###Output
_____no_output_____
###Markdown
coincidence index
Let $I_{c}$ be the *coincidence index* of a sequence $\alpha$ over an alphabet $A$, defined as
$$ I_{c}(\alpha) = \sum_{i=0}^{n-1}{\frac{f_{a_{i}}^{(\alpha)}(f_{a_{i}}^{(\alpha)}-1)}{m(m-1)}} $$
where $n$ is the length of the alphabet, $m$ is the length of $\alpha$ and $f_{a_{i}}^{(\alpha)}$ is the frequency of symbol $a_{i}\in A$, namely the number of occurrences of $a_{i}$ in $\alpha$. In other words, $I_{c}$ is the probability of sampling two occurrences of the same symbol $a$, for all $a\in A$, from the sequence $\alpha$. The index $I_{c}$ is invariant to shifting by the same constant $v$. Let $\alpha,\beta$ be two sequences of integers of length $l$ such that $\beta_{i} \equiv_{n} \alpha_{i} + v$, for all $i\in\lbrace0,\ldots,l-1\rbrace$; moreover, let $q_{\gamma_{i}}^{(\gamma)}$ be the probability of sampling *two* occurrences of $\gamma_{i}$ from a sequence $\gamma$. We can state the relations
$$ I_{c}(\beta) = \sum_{a_{i}\in\frac{\mathbb{Z}}{n\mathbb{Z}}}{q_{a_{i}}^{(\beta)}} = \sum_{a_{i}\in\frac{\mathbb{Z}}{n\mathbb{Z}}}{q_{a_{i}-v}^{(\alpha)}} = \sum_{\hat{a}_{i}\in\frac{\mathbb{Z}}{n\mathbb{Z}}}{q_{\hat{a}_{i}}^{(\alpha)}} = I_{c}(\alpha)$$
where $\hat{a}_{i}\equiv_{n}a_{i}-v$, proving the invariance of $I_{c}$ when a sequence is produced from another one shifted by a constant $v\in\frac{\mathbb{Z}}{n\mathbb{Z}}$.
###Code
def frequencies(lst, alphabet=A, inverse_alphabet=inverse_A):
""" Produces a `dict` counting occcurrences of each object of the alphabet within the iterable.
`frequencies` consumes an iterable $lst$ and an alphabet $A$,
produces a dictionary of entries $(k,v)$ where $k$ is a character
in $A$ and $v$ is the number of occurrences of $k$ in $lst$
"""
counter = Counter(lst)
return {k:counter[v] for k,v in alphabet.items()} # Counter handles the case of missing key returning 0
def length_from_frequencies(freqs):
""" Returns the length of the original sequence by summation of symbols frequencies. """
return sum(freqs.values())
def coincidence_index(freqs, alphabet=A):
""" Produces the I_{c} relative to frequencies matched against an alphabet. """
denom = length_from_frequencies(freqs)
if denom in range(2): return None
def mapper(a):
v = freqs[a] if a in freqs else 0
return v*(v-1)
return sum(map(mapper, alphabet.keys()))/(denom*(denom-1))
def draw_frequencies_histogram(seq, alphabet=A, y_maxlimit=None, normed=None):
#plaintext_length = len(plain_text)
#freqs = [plaintext_frequencies[inverse_A[ia]] for ia in sorted(inverse_A.keys())]
n, bins, patches = plt.hist(seq, len(alphabet), normed=normed,facecolor='green', alpha=0.5)
plt.xlabel('alphabet symbols')
plt.ylabel('frequencies' + (', normed respect: {}'.format(str(normed)) if normed else ''))
plt.xticks(range(-1, len(alphabet)), sorted(alphabet.keys()))
if y_maxlimit: plt.ylim([0, y_maxlimit])
plt.grid(True)
plt.show()
return None
###Output
_____no_output_____
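###Markdown
As a quick illustration (added), $I_{c}$ computed on a chunk of structured English text is noticeably higher than on a uniformly random sequence of the same length over the same alphabet:
###Code
random_seq = [randint(0, len(A) - 1) for _ in range(500)]
print(coincidence_index(frequencies(encoded_plain_text[:500])), coincidence_index(frequencies(random_seq)))
###Output
_____no_output_____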
###Markdown
frequencies comparison
The following are the frequencies of alphabet symbols in the *plaintext*: in the analysis they are used as a `dict` produced by the function `frequencies`; moreover, it is possible to draw a histogram with the corresponding function.
###Code
plaintext_frequencies = frequencies(encoded_plain_text)
draw_frequencies_histogram(encoded_plain_text, y_maxlimit=1800)
plaintext_frequencies
###Output
_____no_output_____
###Markdown
The following histogram shows frequencies in the *ciphertext*: using the same `y_maxlimit` value, we see that they are spread "uniformly" over symbols.
###Code
draw_frequencies_histogram(cipher_text, y_maxlimit=1800)
###Output
_____no_output_____
###Markdown
This first look at the cipher concludes by computing the coincidence indexes of both the *plaintext* and the *ciphertext*:
###Code
print("coincidence index of *plaintext*: {}\ncoincidence index of *ciphertext*: {}".format(
coincidence_index(plaintext_frequencies), coincidence_index(frequencies(cipher_text))))
###Output
coincidence index of *plaintext*: 0.07720140650073147
coincidence index of *ciphertext*: 0.039775706179144346
###Markdown
finding the key *length* by spreading
The following set of functions allows us to probe the key *length* by repeatedly fixing a candidate length $l$, spreading the *ciphertext* into a matrix with $l$ columns, computing $I_{c}$ of each column and, finally, reporting $l$ if the majority of the $I_{c}$ scores is greater than a threshold (0.06 for English).
###Code
def spread(message, block_length):
return [message[i:i+block_length] for i in range(0, len(message), block_length)]
def col(spreaded, c, join=False, joiner=lambda c: ''.join(decode(c))):
column = [lst[c] if c < len(lst) else None for lst in spreaded]
ready = list(filter(lambda i: i is not None, column))
return joiner(ready) if join else ready
def decode_spreaded(spreaded, join_as_str=False):
decoded_spread = list(map(decode, spreaded))
return '\n'.join(decoded_spread) if join_as_str else decoded_spread
def analyze(cipher_text, max_key_length=None):
res = {}
# we discard the case where the key length equals the
# length of the cipher text, since it is the case of
# OneTimePad cipher, which is unbreakable!
for d in range(2, len(cipher_text) if max_key_length is None else max_key_length + 1):
spreaded = spread(cipher_text, d)
res[d] = []
for c in range(d):
ci = coincidence_index(frequencies(col(spreaded, c)))
if ci: res[d].append(ci)
return res
def guess_key_length(analysis, threshold=0.06):
candidates = {}
for k,v in analysis.items():
cs = list(filter(lambda i: i > threshold, v))
if cs and len(cs) > ceiling(k/2): candidates[k] = cs
return candidates
###Output
_____no_output_____
###Markdown
here are the $I_{c}$ scores that witness the candidate key length after spreading the *ciphertext* by columns:
###Code
analysis = analyze(cipher_text, max_key_length=20)
guess = guess_key_length(analysis)
guess
probing_key_length = 17
###Output
_____no_output_____
###Markdown
The following is the *ciphertext* spread over a matrix whose number of columns equals the candidate length found in the previous cell.
###Code
spreaded = decode_spreaded(spread(cipher_text, probing_key_length))
print("{}\n...\n{}".format("\n".join(spreaded[:10]), "\n".join(spreaded[-3:])))
###Output
thbeczxtx vubwyic
mbikkzchxnugnpvq
urbeeuqgoxrqeqzjp
mlwgfwvnlpszgy o
omxxqsibkqjqeiair
ntongydvdyplt mgx
endqcthbekbrejlxb
akibvvqrbposighx
georctbbxkflwlcfy
ndxyjkvgwtgcsijcq
...
thysbisoapbzobsly
llvektdrnfougrtzx
espf yd
###Markdown
mutual coincidence index
Once the length $m$ of the key $\textbf{k}$ has been "established", we're left with finding the actual key symbols $k_{i}$, for $i\in\{0,\ldots,m-1\}$. In order to fulfil this step, we need another object, which resembles the coincidence index $I_{c}$ but is more general, in the sense that sampling occurs on two given sequences instead of the same one. Formally, let $I_{mc}(\alpha, \beta)$ be the *index of mutual coincidence* of sequences $\alpha$ and $\beta$, defined as
$$I_{mc}(\alpha,\beta) = \sum_{a_{i}\in\frac{\mathbb{Z}}{n\mathbb{Z}}}{q_{a_{i}}^{(\alpha)}q_{a_{i}}^{(\beta)}}$$
where $q_{a_{i}}^{(\nu)}$ is the probability to draw an occurrence of symbol $a_{i}$ in sequence $\nu$. Let $\eta$ and $\gamma$ be two sequences of length $l$, produced by adding shifts $v_{\alpha}$ and $v_{\beta}$ to sequences $\alpha$ and $\beta$, respectively (formally, $\eta_{i} \equiv_{n} \alpha_{i} + v_{\alpha}$ and $\gamma_{i} \equiv_{n} \beta_{i} + v_{\beta}$, for all $i\in\lbrace0,\ldots,l-1\rbrace$). The inequality
$$ I_{mc}(\eta,\gamma) = \sum_{a_{i}\in\frac{\mathbb{Z}}{n\mathbb{Z}}}{q_{a_{i}}^{(\eta)}q_{a_{i}}^{(\gamma)}} = \sum_{a_{i}\in\frac{\mathbb{Z}}{n\mathbb{Z}}}{q_{a_{i}-v_{\alpha}}^{(\alpha)}q_{a_{i}-v_{\beta}}^{(\beta)}} = \sum_{a_{i}\in\frac{\mathbb{Z}}{n\mathbb{Z}}}{q_{a_{i}}^{(\alpha)}q_{a_{i}+v_{\alpha}-v_{\beta}}^{(\beta)}} \neq \sum_{a_{i}\in\frac{\mathbb{Z}}{n\mathbb{Z}}}{q_{a_{i}}^{(\alpha)}q_{a_{i}}^{(\beta)}} = I_{mc}(\alpha,\beta)$$
holds in general because of the residual factor $v_{\alpha}-v_{\beta}$ appearing in the subscript. We can define an even more general version of $I_{mc}$ as
$$ I_{mc}(\eta,\gamma,g) = \sum_{a_{i}\in\frac{\mathbb{Z}}{n\mathbb{Z}}}{q_{a_{i}-g}^{(\eta)}q_{a_{i}}^{(\gamma)}} = \sum_{a_{i}\in\frac{\mathbb{Z}}{n\mathbb{Z}}}{q_{a_{i}}^{(\alpha)}q_{a_{i}+v_{\alpha}-v_{\beta}+g}^{(\beta)}}$$
where the sequence $\eta$ is shifted back by $g\in\mathbb{N}$. Therefore we can state the equality with the usual definition according to
$$ I_{mc}(\eta,\gamma,g) = I_{mc}(\alpha,\beta) \leftrightarrow v_{\beta}-v_{\alpha}=g$$
proving the invariance of $I_{mc}$ when two sequences are produced by shifting two other ones by constant values $q,w$ respectively, provided that $g = w-q$.
###Code
def mutual_coincidence_index(fst_freqs, snd_freqs, offset=0,
alphabet=A, inverse_alphabet=inverse_A):
fst_len = length_from_frequencies(fst_freqs)
snd_len = length_from_frequencies(snd_freqs)
n = len(alphabet)
return sum(fst_freqs[k] * snd_freqs[inverse_alphabet[(v+offset) % n]]
for k,v in alphabet.items())/(fst_len * snd_len)
###Output
_____no_output_____
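###Markdown
A small illustration (added): comparing the first two columns of the spread *ciphertext*, $I_{mc}$ stays near the uniform value $1/n \approx 0.037$ for most shifts $g$ and rises towards $\approx 0.065$ only for the shift related to the difference of the corresponding key symbols:
###Code
spreaded_probe = spread(cipher_text, probing_key_length)
freqs_0 = frequencies(col(spreaded_probe, 0))
freqs_1 = frequencies(col(spreaded_probe, 1))
scores = {g: round(mutual_coincidence_index(freqs_0, freqs_1, g), 4) for g in range(len(A))}
max(scores.items(), key=lambda kv: kv[1])
###Output
_____no_output_____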
###Markdown
Previous generalization allows us to set the basis to break the cipher. It works as follows: for each pair $(\eta,\gamma)$ of different columns in the spread-matrix form of the *ciphertext*, fix a $g\in\frac{\mathbb{Z}}{n\mathbb{Z}}$ to compute $I_{mc}(\eta,\gamma,g)$: if such index value is close to $I_{c}(\nu)$, where $\nu$ is a structured sequence using the English language, namely close to $0.065$, then collect equation $k_{i(\beta)}-k_{i(\alpha)}=g$, where function $i(\nu)$ return the zero-based index of column $\nu$ in the matrix spread. Such equations are important since state difference relations over symbols of the key.
###Code
def build_offsets_eqs(cipher_text, key_length, indexed_sym, threshold=.06, alphabet=A):
n = len(alphabet)
eqs = {c:{} for c in range(key_length)}
spreaded = spread(cipher_text, key_length)
for c,a in itertools.product(range(key_length), repeat=2):
if a == c: continue
eqs[c][a]=[]
for g in range(1,n):
column_freqs = frequencies(col(spreaded, c))
another_freqs = frequencies(col(spreaded, a))
mci = mutual_coincidence_index(column_freqs, another_freqs, g)
if mci > threshold:
eqs[c][a].append(tuple([Eq(indexed_sym[a]-indexed_sym[c],g,evaluate=True), mci]))
return eqs
k_sym=IndexedBase('k')
eqs_dict = build_offsets_eqs(cipher_text, key_length=probing_key_length, indexed_sym=k_sym)
###Output
_____no_output_____
###Markdown
The following cell reports the set of difference equations collected with respect to the *first* column of the spread matrix: we observe that this set fails to instantiate $k_{6}$ because no likely equation gives a relation for it. On the contrary, it can be the case that collecting equations for a pair of columns with indexes $(c,a)$ yields one or more equations, namely $\{k_{a}-k_{c}=g_{u_{0}},\ldots,k_{a}-k_{c}=g_{u_{v}}\}$, for some $v\in\mathbb{N}$: to properly explore the whole key space, we have to consider the product of each list of "equally-likely" equations.
###Code
eqs_dict[0]
###Output
_____no_output_____
###Markdown
The following function implements the last observation, namely it produces the complete key space where we have to look for the good one.
###Code
def explode_key_space(eqs_dict):
res = {}
for c, eq_dict in eqs_dict.items():
eqs_list = []
for a, eqs in eq_dict.items():
if not eqs:
res[c] = [] # no equations in `eqs` causes the product to be empty as well
break
eqs_list.append([[a] + list(eq_pair) for eq_pair in eqs])
else: # if no empty eqs was found then it is meaningful to cross product them
res[c] = list(itertools.product(*eqs_list))
return res
eqs_dict_pure = explode_key_space(eqs_dict)
eqs_dict_pure
###Output
_____no_output_____
###Markdown
In order to instantiate candidate keys, for each column index $c$ we use the set of equations $\{k_{a}-k_{c}=g_{u}\}$, for some $u\in\mathbb{N}$ and $a\in\{0,\ldots,m-1\}\setminus\{c\}$, as follows: for each equation $k_{a}-k_{c}=g_{u}$, instantiate $k_{c} = s$, for $s\in\frac{\mathbb{Z}}{n\mathbb{Z}}$, and solve it with respect to $k_{a}$. Therefore for each column index $c$ we have a candidate key if every symbol $k_{i}$ has been instantiated.
###Code
def candidate_keys(eqs_dict, indexed_sym, alphabet=A):
key_length = len(eqs_dict)
n = len(alphabet)
candidates=set()
for c, eqs_tuples in eqs_dict.items():
for d in range(len(alphabet)):
for eq_tuple in eqs_tuples:
key = [indexed_sym[i] for i in range(key_length)]
key[c] = d
for a, eq, mci in eq_tuple:
subs_eq = eq.subs(indexed_sym[c],d)
key[a] = solve(subs_eq, indexed_sym[a])[0]
key[a] = key[a] % n
for k in key:
if isinstance(k, Indexed): break
else:
candidates.add(tuple(key)) # `tuple` application to make `key` hashable
return candidates
possible_keys = candidate_keys(eqs_dict_pure, k_sym)
###Output
_____no_output_____
###Markdown
As the last step, to filter out mistaken keys we use each candidate key to perform a decryption, checking the result against the symbol frequencies of an arbitrary piece of English prose:
###Code
def arbitrary_frequencies(length, filename='rest_plain_text.txt'):
with open(filename, 'r') as f:
text = clean_text(f.read().lower())
text = text[:length]
return frequencies(encode(text))
def attempt_keys(candidate_keys, cipher_text, threshold=.06, arbitrary_freqs=None):
sols = set()
for key in candidate_keys:
decrypted = decrypt(cipher_text, key)
freqs = frequencies(decrypted)
if arbitrary_freqs:
good = True
for k,v in arbitrary_freqs.items():
if 1-abs((freqs[k]-v)/v) < .6: good = False
if good: sols.add(decode(key))
else:
ci = coincidence_index(freqs)
if ci > threshold: sols.add(decode(key))#sols.add((ci, decode(key)))
return sols
sols = attempt_keys(possible_keys, cipher_text, arbitrary_freqs=arbitrary_frequencies(len(cipher_text)))
len(sols), sols
for sol in sols:
key = sol
print("key:({})\nplaintext:\n{}\n\n".format(
key, decode(decrypt(cipher_text, encode(key)))[:chunk]))
decode(secret_key)
###Output
_____no_output_____ |
notebooks/windows_examples.ipynb | ###Markdown
Benchmarking Scipy Signal vs cuSignal Time to Create Windows
###Code
import cusignal
from scipy import signal
###Output
_____no_output_____
###Markdown
General Parameters
###Code
# Num Points in Array - Reduce if getting out of memory errors
M = int(1e7)
###Output
_____no_output_____
###Markdown
Not Implemented
* Parzen
* Dolph-Chebyshev
* Slepian
* DPSS

Testing was performed on a 16GB NVIDIA GP100. Performance scales with data size, so presumably the speedups shown in these scipy.signal vs cusignal benchmarks will grow with more GPU RAM and larger window sizes.
###Code
%%time
HFT90D = [1, 1.942604, 1.340318, 0.440811, 0.043097]
cpu_window = signal.windows.general_cosine(M, HFT90D, sym=False)
%%time
HFT90D = [1, 1.942604, 1.340318, 0.440811, 0.043097]
gpu_window = cusignal.windows.general_cosine(M, HFT90D, sym=False)
###Output
CPU times: user 139 ms, sys: 306 ms, total: 444 ms
Wall time: 465 ms
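###Markdown
A quick numerical sanity check (added as a sketch) that the GPU window matches the CPU result; this assumes cusignal returns a CuPy array, so it is moved to host memory with `cupy.asnumpy` before comparing:
###Code
import numpy as np
import cupy as cp

np.allclose(cpu_window, cp.asnumpy(gpu_window))
###Output
_____no_output_____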
###Markdown
Boxcar
###Code
%%time
cpu_window = signal.windows.boxcar(M)
%%time
gpu_window = cusignal.windows.boxcar(M)
###Output
CPU times: user 1.7 ms, sys: 787 µs, total: 2.49 ms
Wall time: 1.21 ms
###Markdown
Triangular
###Code
%%time
cpu_window = signal.windows.triang(M)
%%time
gpu_window = cusignal.windows.triang(M)
###Output
CPU times: user 4.79 ms, sys: 387 µs, total: 5.18 ms
Wall time: 3.84 ms
###Markdown
Bohman
###Code
%%time
cpu_window = signal.windows.bohman(M)
%%time
gpu_window = cusignal.windows.bohman(M)
###Output
CPU times: user 101 µs, sys: 8.11 ms, total: 8.21 ms
Wall time: 6.16 ms
###Markdown
Blackman
###Code
%%time
cpu_window = signal.windows.blackman(M)
%%time
gpu_window = cusignal.windows.blackman(M)
###Output
CPU times: user 1.6 ms, sys: 816 µs, total: 2.41 ms
Wall time: 1.49 ms
###Markdown
Nuttall
###Code
%%time
cpu_window = signal.windows.nuttall(M)
%%time
gpu_window = cusignal.windows.nuttall(M)
###Output
CPU times: user 2.5 ms, sys: 0 ns, total: 2.5 ms
Wall time: 1.46 ms
###Markdown
Blackman-Harris
###Code
%%time
cpu_window = signal.windows.blackmanharris(M)
%%time
gpu_window = cusignal.windows.blackmanharris(M)
###Output
CPU times: user 1.79 ms, sys: 896 µs, total: 2.68 ms
Wall time: 1.58 ms
###Markdown
Flat Top
###Code
%%time
cpu_window = signal.windows.flattop(M)
%%time
gpu_window = cusignal.windows.flattop(M)
###Output
CPU times: user 2.42 ms, sys: 0 ns, total: 2.42 ms
Wall time: 1.47 ms
###Markdown
Bartlett
###Code
%%time
cpu_window = signal.windows.bartlett(M)
%%time
gpu_window = cusignal.windows.bartlett(M)
###Output
CPU times: user 6.07 ms, sys: 0 ns, total: 6.07 ms
Wall time: 4.7 ms
###Markdown
Hann
###Code
%%time
cpu_window = signal.windows.hann(M)
%%time
gpu_window = cusignal.windows.hann(M)
###Output
CPU times: user 2.35 ms, sys: 0 ns, total: 2.35 ms
Wall time: 1.49 ms
###Markdown
Tukey
###Code
%%time
cpu_window = signal.windows.tukey(M, alpha=0.5, sym=True)
%%time
gpu_window = cusignal.windows.tukey(M, alpha=0.5, sym=True)
###Output
CPU times: user 3.41 ms, sys: 404 µs, total: 3.81 ms
Wall time: 2.76 ms
###Markdown
Bartlett-Hann
###Code
%%time
cpu_window = signal.windows.barthann(M)
%%time
gpu_window = cusignal.windows.barthann(M)
###Output
CPU times: user 4.26 ms, sys: 139 µs, total: 4.4 ms
Wall time: 2.8 ms
###Markdown
General Hamming
###Code
%%time
cpu_window = signal.windows.general_hamming(M, alpha=0.5, sym=True)
%%time
gpu_window = cusignal.windows.general_hamming(M, alpha=0.5, sym=True)
###Output
CPU times: user 1.31 ms, sys: 0 ns, total: 1.31 ms
Wall time: 966 µs
###Markdown
Hamming
###Code
%%time
cpu_window = signal.windows.hamming(M)
%%time
gpu_window = cusignal.windows.hamming(M)
###Output
CPU times: user 1.79 ms, sys: 961 µs, total: 2.75 ms
Wall time: 1.9 ms
###Markdown
Kaiser
###Code
%%time
cpu_window = signal.windows.kaiser(M, beta=0.5)
%%time
gpu_window = cusignal.windows.kaiser(M, beta=0.5)
###Output
CPU times: user 5 ms, sys: 502 µs, total: 5.5 ms
Wall time: 4.57 ms
###Markdown
Gaussian
###Code
%%time
cpu_window = signal.windows.gaussian(M, std=7)
%%time
gpu_window = cusignal.windows.gaussian(M, std=7)
###Output
CPU times: user 3.26 ms, sys: 0 ns, total: 3.26 ms
Wall time: 2.38 ms
###Markdown
General Gaussian
###Code
%%time
cpu_window = signal.windows.general_gaussian(M, p=1.5, sig=7)
%%time
gpu_window = cusignal.windows.general_gaussian(M, p=1.5, sig=7)
###Output
CPU times: user 766 µs, sys: 386 µs, total: 1.15 ms
Wall time: 753 µs
###Markdown
Cosine
###Code
%%time
cpu_window = signal.windows.cosine(M)
%%time
gpu_window = cusignal.windows.cosine(M)
###Output
CPU times: user 2.28 ms, sys: 770 µs, total: 3.05 ms
Wall time: 2.04 ms
###Markdown
Exponential
###Code
%%time
cpu_window = signal.windows.exponential(M, tau=3.0)
%%time
gpu_window = cusignal.windows.exponential(M, tau=3.0)
###Output
CPU times: user 1.02 ms, sys: 527 µs, total: 1.55 ms
Wall time: 1.13 ms
|
cartier_jewelry_data_analysis.ipynb | ###Markdown
Executive Summary
We define seven questions about the Cartier dataset. The data were scraped from www.cartier.com and include information about the product categories and their prices; the tags cover the metals and gems used in the **jewelry**.
1. Which gems are most used in the products?
2. Which metal is most used in Cartier jewellery?
3. How much is the mean price of Cartier jewellery by metal type?
4. What is the mean price for every metal type in Cartier jewels?
5. How many gems are in every jewellery category?
6. What is the gem price in every jewellery category?
7. Which gem is the most expensive?
###Code
import re
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
cartier = pd.read_csv('dataset/cartier_catalog.csv')
cartier.head(5)
# Define tag_splitter // splits the tags column into metal and gem columns
def tag_splitter(dataframe , col_name , delimiter , metal , first_gem , second_gem , third_gem , fourth_gem):
dataframe['str_split'] = dataframe[col_name].str.split(delimiter)
dataframe[metal] = dataframe.str_split.str.get(0).str.strip()
dataframe[first_gem] = dataframe.str_split.str.get(1).str.strip()
dataframe[second_gem] = dataframe.str_split.str.get(2).str.strip()
dataframe[third_gem] = dataframe.str_split.str.get(3).str.strip()
dataframe[fourth_gem] = dataframe.str_split.str.get(4).str.strip()
dataframe.fillna(0 , inplace=True)
del dataframe['str_split']
# Recall tag_splitter
tag_splitter(cartier , 'tags' , ',' , 'metal' , 'gem' , 'second_gem' , 'third_gem' , 'fourth_gem')
###Output
_____no_output_____
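###Markdown
For reference, an equivalent and more idiomatic pandas approach (shown only as an alternative sketch, not used below) splits the tags column in a single call:
###Code
tags_split = cartier['tags'].str.split(',', expand=True).iloc[:, :5]
tags_split.columns = ['metal', 'gem', 'second_gem', 'third_gem', 'fourth_gem'][:tags_split.shape[1]]
tags_split = tags_split.apply(lambda s: s.str.strip()).fillna(0)
tags_split.head()
###Output
_____no_output_____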
###Markdown
1. Which gems are most used in the products?
###Code
# Drop redundant columns
cartier.drop(['ref' , 'image' , 'tags' , 'title' , 'description'] , axis = 1 , inplace=True)
gems = pd.concat([cartier["gem"],cartier["second_gem"],cartier["third_gem"],cartier["fourth_gem"]], axis= 0)
gems_values = gems.value_counts()[1:].to_frame()
gems_values.reset_index(inplace=True)
gems_values.columns = ['gem_type' , 'count']
plt.figure(figsize=(15, 5))
sns.barplot(x= 'gem_type', y= "count", data= gems_values,
palette= sns.cubehelix_palette(len(gems_values.gem_type), start=.5, rot=-.75, reverse= True))
plt.xlabel("Gems Type")
plt.ylabel("Count")
plt.title("Count of Gems")
plt.xticks(rotation= 90)
plt.show()
###Output
_____no_output_____
###Markdown
A glance at the above figure reveals that diamonds are the most used gem across all products: about **66 percent** of the products have diamonds in them, more than any other gem, making diamond the most popular gem. Onyx and emeralds occupy the next ranks.
###Code
# Dictionary for costum color palette
color_dict = {'yellow gold': "#fcc72d",
'platinum': "#e5e4e2",
'pink gold': "#e9cdd0",
'white gold': "#f9f3d1",
'non-rhodiumized white gold': "#C0C0C0"}
###Output
_____no_output_____
###Markdown
2. Which metal is most used in Cartier jewellery?
###Code
cartier_category_metal = cartier.groupby('categorie')['metal'].value_counts().to_frame()
cartier_category_metal.columns = ['count']
cartier_category_metal.reset_index(level = [0 , 1] , inplace=True)
plt.figure(figsize=(15, 7))
sns.barplot(x= "categorie", y= "count", hue= "metal", data= cartier_category_metal,
palette= color_dict)
plt.xlabel("Jewels Type")
plt.ylabel("Counts")
plt.legend(loc= "upper left")
plt.show()
###Output
_____no_output_____
###Markdown
The jewellery falls into 4 categories: **rings, earrings, necklaces and bracelets**. The ranking of metals is the same in every category: 1. White Gold, 2. Pink Gold, 3. Yellow Gold, 4. Platinum, 5. Non-Rhodiumized White Gold.

3. How much is the mean price of Cartier jewellery by metal type?
###Code
cartier_gp1 = cartier.groupby(["categorie", "metal"])["price"].mean().round(2).to_frame()
cartier_gp1 = cartier_gp1.reset_index()
plt.figure(figsize=(15, 7))
sns.barplot(x= 'categorie', y= 'price', hue= 'metal', data= cartier_gp1 , palette = color_dict)
plt.xlabel('Jewels Type')
plt.ylabel('Mean Price in $')
plt.legend(loc= "upper left")
plt.show()
###Output
_____no_output_____
###Markdown
In every category **Platinum** is the most valuable metal, with a huge difference in price in $. After that, White Gold takes second place, while the other metals are close in price within every category.

4. What is the mean price for every metal type in Cartier jewels?
###Code
cartier_gp2 = cartier.groupby("metal")["price"].mean().round(2).to_frame()
cartier_gp2.reset_index(inplace=True)
plt.figure(figsize=(15, 7))
sns.barplot(x= "metal" , y = 'price', data=cartier_gp2 , palette = color_dict)
plt.xlabel('Metal')
plt.ylabel('Mean Price in $')
plt.show()
###Output
_____no_output_____
###Markdown
As we saw earlier, platinum is the most valuable metal that Cartier uses in its jewels. The mean price of platinum jewels is more than **40000** dollars; white gold comes second. "Yellow Gold" and "Non-Rhodiumized White Gold" are about equal in mean price, and pink gold comes last with a mean of about **15000** dollars, roughly one third of the platinum mean.

5. How many gems are in every jewellery category?
###Code
cartier_gp_gem = cartier.groupby('categorie')['gem'].value_counts().to_frame()
cartier_gp_gem.columns = ['count']
cartier_gp_gem.reset_index(level = [0 , 1] , inplace=True)
cartier_gp_gem = cartier_gp_gem[cartier_gp_gem["gem"] != 0]
plt.figure(figsize=(15, 7))
sns.barplot(x= 'categorie', y= 'count', hue= 'gem', data= cartier_gp_gem , palette = sns.color_palette("cubehelix", 28))
plt.xlabel('Jewels Type')
plt.ylabel('Counts')
plt.legend(ncol=4, loc= 'upper left')
plt.show()
###Output
_____no_output_____
###Markdown
In all categories, **diamond** is the most popular gem in the making of jewelry. Well over 100 ring types include **diamonds**, in most cases more than one piece of **diamond**. The same holds for the other categories of jewelry such as **earrings**, **necklaces**, and **bracelets**. The variety of gems used in **rings** and **bracelets** is greater than in **earrings** and **necklaces**. Furthermore, **sapphires** are also a popular gem used in **ring** production.

6. What is the gem price in every jewellery category?
###Code
cartier_gp1_gem = cartier.groupby(["categorie", "gem"])["price"].mean().round(2).to_frame()
cartier_gp1_gem = cartier_gp1_gem.reset_index()
cartier_gp1_gem = cartier_gp1_gem[cartier_gp1_gem["gem"] != 0]
plt.figure(figsize=(15, 7))
sns.barplot(x= 'categorie', y= 'price', hue= 'gem', data= cartier_gp1_gem , palette = sns.color_palette("cubehelix", 28))
plt.xlabel('Jewels Type')
plt.ylabel('Mean Price in $')
plt.legend(ncol=4, loc= 'upper left')
plt.show()
###Output
_____no_output_____
###Markdown
For earrings, necklaces, and rings, rubies make a huge difference in the price of the jewelry, while for bracelets it is emeralds that are the key factor in price determination. A closer look at necklaces reveals that tsavorite garnet takes third place in price.

7. Which gem is the most expensive?
###Code
cartier_gp2_gem = cartier.groupby("gem")["price"].mean().round(2).to_frame()
cartier_gp2_gem.reset_index(inplace=True)
cartier_gp2_gem = cartier_gp2_gem[(cartier_gp2_gem['gem'] != 'white gold') &
(cartier_gp2_gem['gem'] != 'yellow gold') &
(cartier_gp2_gem['gem'] != 0)]
plt.figure(figsize=(15, 8))
sns.barplot(x= 'gem' , y = 'price', data=cartier_gp2_gem , palette = sns.color_palette("Set2"))
plt.xlabel('Gem Type')
plt.ylabel('Mean Price in $')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
This plot shows the mean price of products by the gems on them. As we could have predicted, products that have rubies on them are the most expensive jewelry among Cartier products. The next ranks belong to tsavorite garnets, emeralds, and chrysoprase. The gems in the middle of the above figure are roughly equal in mean price.
###Code
cartier = pd.read_csv('dataset/cartier_catalog.csv')
cartier = cartier.drop(["image"], axis= 1)
for key,value in cartier["description"].items():
# match patterns such as "18K yellow gold", "950/1000 platinum", "platinum" or "white gold"
regex = r"\d{2}K\s\w+\s\w+|\d{3}/\d{4}\s\w+|platinum|white gold"
gold = re.findall(regex, value)
cartier.loc[key, "description"] = gold[0]
cartier
###Output
_____no_output_____ |
07-Feature-Extractor.ipynb | ###Markdown
Refine the Data Create the items, users and interaction matrix- **Items**: item id + metadata features- **Users**: User id + metadata features- **Interaction**: Explicit Matrix, Implicit Matrix
###Code
import sys
sys.path.append("../")
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import re
###Output
_____no_output_____
###Markdown
Items. For the items, we will keep the following:- From `items_raw` dataframe - movie_id - title - year (release) - genre categories- From `item_features` dataframe - overview - language (original_language) - runtime - vote_average - vote_count
###Code
items_raw = pd.read_csv("data/items_raw.csv")
item_features = pd.read_csv("data/item_features.csv")
###Output
_____no_output_____
###Markdown
*1. Get `year` from `release_date`*
###Code
items_raw["release_date"] = pd.to_datetime(items_raw.release_date, infer_datetime_format=True)
items_raw["year"] = items_raw.release_date.apply(lambda x: str(x.year))
###Output
_____no_output_____
###Markdown
*2. Drop `imdb_url`, `video_release_date` & `release_date`*
###Code
items_main = items_raw.drop(['video_release_date', 'release_date', 'imdb_url'], axis=1).copy()
# Match Whitespace + ( + YEAR + )
# regex_year = re.compile(r'\s\(\d{4}\)')
# items["movie"] = items.title.str.replace(regex_year, "")
# items["movie"] = items.movie.str.strip()
###Output
_____no_output_____
###Markdown
*3. Get the additional features from the item_features*
###Code
items_addtl = item_features[['overview', 'original_language', 'runtime', 'vote_average', 'vote_count', "movie_id"]].copy()
###Output
_____no_output_____
###Markdown
*4. Merge the two dataframes*
###Code
items = pd.merge(left=items_main, right=items_addtl, on="movie_id", how="left")
items.head()
items.overview.isna().sum()
items.overview.fillna("None", inplace=True)
###Output
_____no_output_____
###Markdown
Getting the sentence vector
###Code
import spacy
nlp = spacy.load('en_core_web_lg')
doc = nlp(items["overview"][0])
doc.vector
def word_vec(sentence):
doc = nlp(sentence)
return doc.vector
%%time
overview_embedding = items["overview"].apply(word_vec)
overview_embedding = overview_embedding.to_list()
overview_embedding_list = []
for vec in overview_embedding:
overview_embedding_list.append(vec.tolist())
len(overview_embedding_list)
overview_embedding_df = pd.DataFrame(overview_embedding_list)
overview_embedding_df.head()
items.columns
item_similarity_df = pd.concat([
items[['movie_id', 'genre_unknown', 'Action', 'Adventure',
'Animation', 'Children', 'Comedy', 'Crime', 'Documentary', 'Drama',
'Fantasy', 'FilmNoir', 'Horror', 'Musical', 'Mystery', 'Romance',
'SciFi', 'Thriller', 'War', 'Western']],
overview_embedding_df],
axis=1)
item_similarity_df.head()
###Output
_____no_output_____
###Markdown
Build nearest neighbor model
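The `get_similar` helper used below comes from this project's own `reco` package, which is not shown in the notebook. As a rough sketch of what such a lookup could look like with scikit-learn's `NearestNeighbors` (the signature and return order are assumptions, not the package's actual API):

```python
from sklearn.neighbors import NearestNeighbors

def get_similar_sketch(embeddings_df, n_neighbors):
    """Return (distances, indices) of the n nearest items for every item."""
    nn = NearestNeighbors(n_neighbors=n_neighbors, metric="cosine")
    nn.fit(embeddings_df.values)
    distances, indices = nn.kneighbors(embeddings_df.values)
    return distances, indices
```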
###Code
from reco.recommend import get_similar
from sklearn.neighbors import NearestNeighbors
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import PIL
%%time
item_distances, item_similar_indices = get_similar(overview_embedding_df, 5)
item_similar_indices
def show_similar(item_index, item_similar_indices):
movie_ids = item_similar_indices[item_index]
#movie_ids = item_encoder.inverse_transform(s)
images = []
for movie_id in movie_ids:
img_path = 'data/posters/' + str(movie_id+1) + '.jpg'
images.append(mpimg.imread(img_path))
plt.figure(figsize=(20,10))
columns = 5
for i, image in enumerate(images):
        plt.subplot(len(images) // columns + 1, columns, i + 1)  # integer division: subplot expects ints
plt.axis('off')
plt.imshow(image)
show_similar(11, item_similar_indices)
###Output
_____no_output_____ |
extra/02_sparameters.ipynb | ###Markdown
Sparameters database. This is some work in progress to create a database of simulated S-parameters instead of relying on JSON files. This approach is different from building compact models.
###Code
import dataset
import pandas as pd
import matplotlib.pyplot as plt
import pp
import gdslib as gl
from gdslib.config import db
c = pp.c.mmi1x2()
sp = pp.sp.model_from_gdsfactory(c)
pd.DataFrame(sp[1])
sp = db['mzi']
sp.insert()
table = db['mzi']
c = pp.c.mmi1x2()
s = c.get_settings_model()
s
# Remove layout-only settings before inserting into the database
for key in ['cladding_offset', 'layer', 'layers_cladding']:
    c.settings.pop(key, None)
table.insert(s)
c = pp.c.mmi1x2(length_mmi=6)
s = c.get_settings_model()
table.insert(s)
for row in table.find(function_name='mmi1x2'):
print(row['length_mmi'])
###Output
_____no_output_____ |
pandas_06(Processing_missing_values).ipynb | ###Markdown
Handling missing values. 06-1 What is a missing value? Missing values and checking for them. A missing value (NaN) can be written as NaN, NAN, or nan; here we will refer to missing values as NaN. Checking for missing values. 1. First, import the missing value from numpy so we can use it.
###Code
from numpy import NaN, NAN, nan
###Output
_____no_output_____
###Markdown
2. Note that a missing value is a different concept from values such as 0 or ''. A missing value literally means the data itself is absent, so the notion of "equal" does not apply. The following compares a missing value with True, False, 0, and ''.
###Code
print(NaN == True)
print(NaN == False)
print(NaN == 0)
print(NaN == '')
###Output
False
###Markdown
3. As mentioned in step 2, a missing value has no value at all, so comparing it with itself prints False, not True.
###Code
print(NaN == NaN)
print(NaN == nan)
print(NaN == NAN)
print(nan == NAN)
###Output
False
###Markdown
4. So how do we check for missing values? Fortunately, pandas provides the isnull method for this. The following example checks for missing values with isnull.
###Code
import pandas as pd
print(pd.isnull(NaN))
print(pd.isnull(nan))
print(pd.isnull(NAN))
###Output
True
###Markdown
5. The opposite case (not missing) can also be checked. The following example uses the notnull method to test for non-missing values.
###Code
print(pd.notnull(NaN))
print(pd.notnull(42))
print(pd.notnull('missing'))
###Output
True
###Markdown
Why missing values occur. 1. Missing values created when joining datasets that contain missing values. The datasets used here already contain missing values. What happens when we join such datasets? Joining them produces even more missing values.
###Code
visited = pd.read_csv('C:/Users/김지상/Downloads/doit_pandas-master/doit_pandas-master/data/survey_visited.csv')
survey = pd.read_csv('C:/Users/김지상/Downloads/doit_pandas-master/doit_pandas-master/data/survey_survey.csv')
print(visited)
print(survey)
###Output
taken person quant reading
0 619 dyer rad 9.82
1 619 dyer sal 0.13
2 622 dyer rad 7.80
3 622 dyer sal 0.09
4 734 pb rad 8.41
5 734 lake sal 0.05
6 734 pb temp -21.50
7 735 pb rad 7.22
8 735 NaN sal 0.06
9 735 NaN temp -26.00
10 751 pb rad 4.35
11 751 pb temp -18.50
12 751 lake sal 0.10
13 752 lake rad 2.19
14 752 lake sal 0.09
15 752 lake temp -16.00
16 752 roe sal 41.60
17 837 lake rad 1.46
18 837 lake sal 0.21
19 837 roe sal 22.50
20 844 roe rad 11.25
###Markdown
2. Let's join the datasets loaded in step 1. You can see that many missing values appear.
###Code
vs = visited.merge(survey, left_on='ident', right_on='taken')
print(vs)
###Output
ident site dated taken person quant reading
0 619 DR-1 1927-02-08 619 dyer rad 9.82
1 619 DR-1 1927-02-08 619 dyer sal 0.13
2 622 DR-1 1927-02-10 622 dyer rad 7.80
3 622 DR-1 1927-02-10 622 dyer sal 0.09
4 734 DR-3 1939-01-07 734 pb rad 8.41
5 734 DR-3 1939-01-07 734 lake sal 0.05
6 734 DR-3 1939-01-07 734 pb temp -21.50
7 735 DR-3 1930-01-12 735 pb rad 7.22
8 735 DR-3 1930-01-12 735 NaN sal 0.06
9 735 DR-3 1930-01-12 735 NaN temp -26.00
10 751 DR-3 1930-02-26 751 pb rad 4.35
11 751 DR-3 1930-02-26 751 pb temp -18.50
12 751 DR-3 1930-02-26 751 lake sal 0.10
13 752 DR-3 NaN 752 lake rad 2.19
14 752 DR-3 NaN 752 lake sal 0.09
15 752 DR-3 NaN 752 lake temp -16.00
16 752 DR-3 NaN 752 roe sal 41.60
17 837 MSK-4 1932-01-14 837 lake rad 1.46
18 837 MSK-4 1932-01-14 837 lake sal 0.21
19 837 MSK-4 1932-01-14 837 roe sal 22.50
20 844 DR-1 1932-03-22 844 roe rad 11.25
###Markdown
3. Missing values created when entering data. Missing values can also arise from incorrectly entered data. Below, a Series and a DataFrame are created with column and row data that are missing, which produces missing values. If you inspect the scientists DataFrame, you can see a column named missing whose rows contain missing values.
###Code
num_legs = pd.Series({'goat': 4, 'amoeba': nan})
print(num_legs)
print(type(num_legs))
scientists = pd.DataFrame({
'Name': ['Rosaline Franklin', 'William Gosset'],
'Occupation': ['Chemist', 'Statistician'],
'Born': ['1920-07-25', '1876-06-13'],
'Died': ['1958-04-16', '1937-10-16'],
'missing': [NaN, nan]})
print(scientists)
print(type(scientists))
###Output
Name Occupation Born Died missing
0 Rosaline Franklin Chemist 1920-07-25 1958-04-16 NaN
1 William Gosset Statistician 1876-06-13 1937-10-16 NaN
<class 'pandas.core.frame.DataFrame'>
###Markdown
4. Missing values created when extracting data by a specified range. Extracting data that does not exist in a DataFrame produces missing values. This time we will load the Gapminder dataset for practice.
###Code
gapminder = pd.read_csv('C:/Users/김지상/Downloads/doit_pandas-master/doit_pandas-master/data/gapminder.tsv', sep='\t')
###Output
_____no_output_____
###Markdown
5. The following groups the gapminder DataFrame by year and computes the mean of the lifeExp column.
###Code
life_exp = gapminder.groupby(['year'])['lifeExp'].mean()
print(life_exp)
###Output
year
1952 49.057620
1957 51.507401
1962 53.609249
1967 55.678290
1972 57.647386
1977 59.570157
1982 61.533197
1987 63.212613
1992 64.160338
1997 65.014676
2002 65.694923
2007 67.007423
Name: lifeExp, dtype: float64
###Markdown
6. The following uses range to extract the years 2000-2009 from life_exp. Because this includes years that were never in life_exp to begin with, many missing values occur.
###Code
print(life_exp.loc[range(2000, 2010), ])
###Output
_____no_output_____
###Markdown
7. To avoid the problem above, extract the data using boolean indexing instead.
###Code
y2000 = life_exp[life_exp.index > 2000]
print(y2000)
###Output
year
2002 65.694923
2007 67.007423
Name: lifeExp, dtype: float64
###Markdown
Counting missing values. 1. Load the data as follows.
###Code
ebola = pd.read_csv('C:/Users/김지상/Downloads/doit_pandas-master/doit_pandas-master/data/country_timeseries.csv')
###Output
_____no_output_____
###Markdown
2. First, use the count method to get the number of non-missing values.
###Code
print(ebola.count())
###Output
Date 122
Day 122
Cases_Guinea 93
Cases_Liberia 83
Cases_SierraLeone 87
Cases_Nigeria 38
Cases_Senegal 25
Cases_UnitedStates 18
Cases_Spain 16
Cases_Mali 12
Deaths_Guinea 92
Deaths_Liberia 81
Deaths_SierraLeone 87
Deaths_Nigeria 38
Deaths_Senegal 22
Deaths_UnitedStates 18
Deaths_Spain 16
Deaths_Mali 12
dtype: int64
###Markdown
3. The result of step 2 is enough to count missing values easily. Since shape[0] holds the total number of rows, subtracting the number of non-missing values from shape[0] gives the number of missing values.
###Code
num_rows = ebola.shape[0]
num_missing = num_rows - ebola.count()
print(num_missing)
###Output
Date 0
Day 0
Cases_Guinea 29
Cases_Liberia 39
Cases_SierraLeone 35
Cases_Nigeria 84
Cases_Senegal 97
Cases_UnitedStates 104
Cases_Spain 106
Cases_Mali 110
Deaths_Guinea 30
Deaths_Liberia 41
Deaths_SierraLeone 35
Deaths_Nigeria 84
Deaths_Senegal 100
Deaths_UnitedStates 104
Deaths_Spain 106
Deaths_Mali 110
dtype: int64
###Markdown
4. The count method works, but combining count_nonzero with isnull also gives the number of missing values.
###Code
import numpy as np
print(np.count_nonzero(ebola.isnull()))
print(np.count_nonzero(ebola['Cases_Guinea'].isnull()))
###Output
29
###Markdown
5. The Series value_counts method computes the frequency of values in a column. To count the missing values in the Cases_Guinea column with value_counts, enter the following.
###Code
print(ebola.Cases_Guinea.value_counts(dropna=False).head())
###Output
NaN 29
86.0 3
112.0 2
390.0 2
495.0 2
Name: Cases_Guinea, dtype: int64
###Markdown
Handling missing values: replacing and dropping. 1. Replacing missing values. Passing 0 to the DataFrame's fillna method replaces missing values with 0. fillna is frequently used when the DataFrame to process is very large and memory must be used efficiently.
###Code
print(ebola.fillna(0).iloc[0:10, 0:5])
###Output
Date Day Cases_Guinea Cases_Liberia Cases_SierraLeone
0 1/5/2015 289 2776.0 0.0 10030.0
1 1/4/2015 288 2775.0 0.0 9780.0
2 1/3/2015 287 2769.0 8166.0 9722.0
3 1/2/2015 286 0.0 8157.0 0.0
4 12/31/2014 284 2730.0 8115.0 9633.0
5 12/28/2014 281 2706.0 8018.0 9446.0
6 12/27/2014 280 2695.0 0.0 9409.0
7 12/24/2014 277 2630.0 7977.0 9203.0
8 12/21/2014 273 2597.0 0.0 9004.0
9 12/20/2014 272 2571.0 7862.0 8939.0
###Markdown
2. Setting the method argument of fillna to ffill replaces each missing value with the value that appears before it. For example, the missing value in row 6 is filled with the value from row 5. Rows 0 and 1, however, are missing from the start, so they remain missing.
###Code
print(ebola.fillna(method='ffill').iloc[0:10, 0:5])
###Output
Date Day Cases_Guinea Cases_Liberia Cases_SierraLeone
0 1/5/2015 289 2776.0 NaN 10030.0
1 1/4/2015 288 2775.0 NaN 9780.0
2 1/3/2015 287 2769.0 8166.0 9722.0
3 1/2/2015 286 2769.0 8157.0 9722.0
4 12/31/2014 284 2730.0 8115.0 9633.0
5 12/28/2014 281 2706.0 8018.0 9446.0
6 12/27/2014 280 2695.0 8018.0 9409.0
7 12/24/2014 277 2630.0 7977.0 9203.0
8 12/21/2014 273 2597.0 7977.0 9004.0
9 12/20/2014 272 2571.0 7862.0 8939.0
###Markdown
3. Setting method to bfill fills earlier missing values with the first value that appears after them, i.e. the reverse direction of step 2. The drawback is that missing values at the end of the data cannot be filled this way.
###Code
print(ebola.fillna(method='bfill').iloc[0:10, 0:5])
###Output
Date Day Cases_Guinea Cases_Liberia Cases_SierraLeone
0 1/5/2015 289 2776.0 8166.0 10030.0
1 1/4/2015 288 2775.0 8166.0 9780.0
2 1/3/2015 287 2769.0 8166.0 9722.0
3 1/2/2015 286 2730.0 8157.0 9633.0
4 12/31/2014 284 2730.0 8115.0 9633.0
5 12/28/2014 281 2706.0 8018.0 9446.0
6 12/27/2014 280 2695.0 7977.0 9409.0
7 12/24/2014 277 2630.0 7977.0 9203.0
8 12/21/2014 273 2597.0 7862.0 9004.0
9 12/20/2014 272 2571.0 7862.0 8939.0
###Markdown
4. Steps 2 and 3 filled missing values with existing values as-is. The interpolate method instead uses the values on both sides of a missing value to compute an intermediate value and fills with that. This makes the DataFrame look as if it keeps evenly spaced values.
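For a concrete check against the output below: the missing Cases_Guinea value in row 3 lies halfway between 2769.0 (row 2) and 2730.0 (row 4), so linear interpolation fills it with $(2769.0 + 2730.0) / 2 = 2749.5$.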
###Code
print(ebola.interpolate().iloc[0:10, 0:5])
###Output
Date Day Cases_Guinea Cases_Liberia Cases_SierraLeone
0 1/5/2015 289 2776.0 NaN 10030.0
1 1/4/2015 288 2775.0 NaN 9780.0
2 1/3/2015 287 2769.0 8166.0 9722.0
3 1/2/2015 286 2749.5 8157.0 9677.5
4 12/31/2014 284 2730.0 8115.0 9633.0
5 12/28/2014 281 2706.0 8018.0 9446.0
6 12/27/2014 280 2695.0 7997.5 9409.0
7 12/24/2014 277 2630.0 7977.0 9203.0
8 12/21/2014 273 2597.0 7919.5 9004.0
9 12/20/2014 272 2571.0 7862.0 8939.0
###Markdown
5. Dropping missing values. If the missing values are not needed, they can simply be dropped. But dropping them indiscriminately may bias the data or leave too few rows, so the analyst must use good judgment. First, check the shape of the ebola data.
###Code
print(ebola.shape)
###Output
(122, 18)
###Markdown
6. Use the dropna method to drop missing values. Because every row containing a missing value is removed, a large amount of data is deleted.
###Code
ebola_dropna = ebola.dropna()
print(ebola_dropna.shape)
print(ebola_dropna)
###Output
Date Day Cases_Guinea Cases_Liberia Cases_SierraLeone \
19 11/18/2014 241 2047.0 7082.0 6190.0
Cases_Nigeria Cases_Senegal Cases_UnitedStates Cases_Spain Cases_Mali \
19 20.0 1.0 4.0 1.0 6.0
Deaths_Guinea Deaths_Liberia Deaths_SierraLeone Deaths_Nigeria \
19 1214.0 2963.0 1267.0 8.0
Deaths_Senegal Deaths_UnitedStates Deaths_Spain Deaths_Mali
19 0.0 1.0 0.0 6.0
###Markdown
Calculating with data that contains missing values. 1. The Guinea, Liberia, and SierraLeone columns contain missing values. What happens if we compute the total number of ebola cases from columns that contain missing values? Below, the Cases_Guinea, Cases_Liberia, and Cases_SierraLeone columns are summed to create a new Cases_multiple column.
###Code
ebola['Cases_multiple'] = ebola['Cases_Guinea'] + ebola['Cases_Liberia'] + ebola['Cases_SierraLeone']
###Output
_____no_output_____
###Markdown
2. Create a new DataFrame, ebola_subset, that includes the Cases_multiple column from step 1 and inspect its values. Any row with at least one missing value in Cases_Guinea, Cases_Liberia, or Cases_SierraLeone has NaN as its result (Cases_multiple). In other words, the calculation produced even more missing values.
###Code
ebola_subset = ebola.loc[:, ['Cases_Guinea', 'Cases_Liberia', 'Cases_SierraLeone', 'Cases_multiple']]
print(ebola_subset.head(n=10))
###Output
Cases_Guinea Cases_Liberia Cases_SierraLeone Cases_multiple
0 2776.0 NaN 10030.0 NaN
1 2775.0 NaN 9780.0 NaN
2 2769.0 8166.0 9722.0 20657.0
3 NaN 8157.0 NaN NaN
4 2730.0 8115.0 9633.0 20478.0
5 2706.0 8018.0 9446.0 20170.0
6 2695.0 NaN 9409.0 NaN
7 2630.0 7977.0 9203.0 19810.0
8 2597.0 NaN 9004.0 NaN
9 2571.0 7862.0 8939.0 19372.0
###Markdown
3. Summing the Cases_Guinea column with the sum method gives the total ebola cases. If sum is used with skipna=False, the missing values are included in the calculation, so the result itself becomes a missing value. To ignore missing values, set the skipna argument to True.
###Code
print(ebola.Cases_Guinea.sum(skipna = True))
print(ebola.Cases_Guinea.sum(skipna = False))
###Output
nan
|
TFLiveLessons/deep net in keras.ipynb | ###Markdown
Deep Net in Keras
###Code
#Building a deep net to classify MNIST digits
#### Set seed
import numpy as np
np.random.seed(42)
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
from keras.layers import Dropout
from keras.layers.normalization import BatchNormalization
from keras import regularizers
####Load data
(X_train,y_train),(X_test,y_test)=mnist.load_data()
X_train.shape
X_test.shape
y_test.shape
##### Pre-process data
X_train = X_train.reshape(60000,784).astype('float32')
X_test = X_test.reshape(10000,784).astype('float32')
X_train /=255
X_test /= 255
n_classes = 10
y_train= keras.utils.to_categorical(y_train,n_classes)
y_test = keras.utils.to_categorical(y_test,n_classes)
y_train[0]
###Output
_____no_output_____
###Markdown
Build a deep neural network
###Code
model = Sequential()
#model.add(Dense((64),activation='relu',input_shape=(784,),kernel_regularizer=regularizers.l2(0.01)))
model.add(Dense((64),activation='relu',input_shape=(784,)))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense((64),activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense((32),activation='relu'))
model.add(BatchNormalization())
model.add(Dropout(0.5))
model.add(Dense((10),activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 64) 50240
_________________________________________________________________
batch_normalization_1 (Batch (None, 64) 256
_________________________________________________________________
dropout_1 (Dropout) (None, 64) 0
_________________________________________________________________
dense_2 (Dense) (None, 64) 4160
_________________________________________________________________
batch_normalization_2 (Batch (None, 64) 256
_________________________________________________________________
dropout_2 (Dropout) (None, 64) 0
_________________________________________________________________
dense_3 (Dense) (None, 32) 2080
_________________________________________________________________
batch_normalization_3 (Batch (None, 32) 128
_________________________________________________________________
dropout_3 (Dropout) (None, 32) 0
_________________________________________________________________
dense_4 (Dense) (None, 10) 330
=================================================================
Total params: 57,450
Trainable params: 57,130
Non-trainable params: 320
_________________________________________________________________
###Markdown
Configure Model
###Code
model.compile(loss='categorical_crossentropy',optimizer='adam',metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train !!
###Code
model.fit(X_train,y_train,batch_size=128,epochs=20,verbose=1,validation_data=(X_test,y_test))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 3s - loss: 1.3157 - acc: 0.5710 - val_loss: 0.3512 - val_acc: 0.9054
Epoch 2/20
60000/60000 [==============================] - 2s - loss: 0.6466 - acc: 0.8109 - val_loss: 0.2784 - val_acc: 0.9166
Epoch 3/20
60000/60000 [==============================] - 3s - loss: 0.5165 - acc: 0.8544 - val_loss: 0.2309 - val_acc: 0.9326
Epoch 4/20
60000/60000 [==============================] - 3s - loss: 0.4573 - acc: 0.8764 - val_loss: 0.2137 - val_acc: 0.9393
Epoch 5/20
60000/60000 [==============================] - 3s - loss: 0.4187 - acc: 0.8877 - val_loss: 0.1873 - val_acc: 0.9458
Epoch 6/20
60000/60000 [==============================] - 2s - loss: 0.3911 - acc: 0.8947 - val_loss: 0.1746 - val_acc: 0.9488
Epoch 7/20
60000/60000 [==============================] - 2s - loss: 0.3720 - acc: 0.9014 - val_loss: 0.1736 - val_acc: 0.9497
Epoch 8/20
60000/60000 [==============================] - 2s - loss: 0.3550 - acc: 0.9071 - val_loss: 0.1687 - val_acc: 0.9539
Epoch 9/20
60000/60000 [==============================] - 2s - loss: 0.3431 - acc: 0.9110 - val_loss: 0.1610 - val_acc: 0.9533
Epoch 10/20
60000/60000 [==============================] - 2s - loss: 0.3360 - acc: 0.9116 - val_loss: 0.1650 - val_acc: 0.9539
Epoch 11/20
60000/60000 [==============================] - 2s - loss: 0.3219 - acc: 0.9143 - val_loss: 0.1523 - val_acc: 0.9577
Epoch 12/20
60000/60000 [==============================] - 3s - loss: 0.3163 - acc: 0.9171 - val_loss: 0.1557 - val_acc: 0.9560
Epoch 13/20
60000/60000 [==============================] - 3s - loss: 0.3104 - acc: 0.9182 - val_loss: 0.1529 - val_acc: 0.9568
Epoch 14/20
60000/60000 [==============================] - 3s - loss: 0.3028 - acc: 0.9205 - val_loss: 0.1438 - val_acc: 0.9610
Epoch 15/20
60000/60000 [==============================] - 3s - loss: 0.2973 - acc: 0.9207 - val_loss: 0.1409 - val_acc: 0.9605
Epoch 16/20
60000/60000 [==============================] - 3s - loss: 0.2929 - acc: 0.9226 - val_loss: 0.1497 - val_acc: 0.9588
Epoch 17/20
60000/60000 [==============================] - 3s - loss: 0.2895 - acc: 0.9244 - val_loss: 0.1395 - val_acc: 0.9606
Epoch 18/20
60000/60000 [==============================] - 4s - loss: 0.2870 - acc: 0.9230 - val_loss: 0.1380 - val_acc: 0.9616
Epoch 19/20
60000/60000 [==============================] - 3s - loss: 0.2846 - acc: 0.9242 - val_loss: 0.1426 - val_acc: 0.9618
Epoch 20/20
60000/60000 [==============================] - 3s - loss: 0.2788 - acc: 0.9267 - val_loss: 0.1363 - val_acc: 0.9618
|
GMM_vs_KMeans/Resolving the Sobel Filtering Issue.ipynb | ###Markdown
ADD-ON to Gaussian Mixture Model vs. KMeans with Fashion-MNIST Library Imports
###Code
import matplotlib.pyplot as plt
import numpy as np
from keras.datasets import fashion_mnist
from scipy import ndimage, misc
from sklearn.cluster import KMeans
from sklearn.metrics import homogeneity_score
from sklearn.metrics import silhouette_score
from sklearn.metrics import v_measure_score
from sklearn.mixture import GaussianMixture
import warnings
warnings.filterwarnings('ignore')
warnings.simplefilter('ignore')
###Output
_____no_output_____
###Markdown
Dataset Import
###Code
(x_train_origin, y_train), (x_test_origin, y_test) = fashion_mnist.load_data()
labelNames = ["T-shirt/Top", "Trouser", "Pullover", "Dress",
"Coat", "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]
print(f"Shape of x_train_origin: {x_train_origin.shape}")
###Output
Shape of x_train_origin: (60000, 28, 28)
###Markdown
Sobel filter function used in the assignment. More information on the sobel method, part of the ndimage module of the scipy library, can be found [here](https://docs.scipy.org/doc/scipy/reference/generated/scipy.ndimage.sobel.html).
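For reference, the standard Sobel kernels behind `ndimage.sobel` (up to sign and axis conventions; this is the textbook definition rather than something stated in the assignment) and the magnitude used below are:

$$K_x = \begin{bmatrix} -1 & 0 & 1 \\ -2 & 0 & 2 \\ -1 & 0 & 1 \end{bmatrix}, \qquad K_y = K_x^{\top}, \qquad \text{mag} = \sqrt{d_x^2 + d_y^2}.$$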
###Code
def sobel_filter(picture):
"""
Sobel filtering of a picture along the x and y axes, with normalization
"""
dx = ndimage.sobel(picture, 0) # horizontal derivative
dy = ndimage.sobel(picture, 1) # vertical derivative
mag = np.hypot(dx, dy) # magnitude
mag *= 255.0 / np.max(mag) # normalize
return np.asarray(mag, dtype=np.float32)/255.
def show_picture(picture):
"""
Displays a picture and its filtered counterpart
"""
fig = plt.figure()
plt.gray()
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
result = sobel_filter(picture)
ax1.imshow(picture)
ax2.imshow(result)
plt.show()
###Output
_____no_output_____
###Markdown
Using the function on a test example:
###Code
stock_picture = misc.ascent()
show_picture(stock_picture)
###Output
_____no_output_____
###Markdown
Using the function on an element from the dataset:
###Code
fmnist_picture = x_train_origin[0]
show_picture(fmnist_picture)
###Output
_____no_output_____
###Markdown
**We see that there is a problem with the filtering of the shoe, while the filtering of another stock image seems to work.** As such we must investigate what is wrong with the picture. Type CheckingStock picture:
###Code
print(type(stock_picture))
print(type(stock_picture[0])) #first row of pixels
print(type(stock_picture[0][0])) # first pixel of first row
###Output
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.int32'>
###Markdown
Fashion-MNIST picture:
###Code
print(type(fmnist_picture))
print(type(fmnist_picture[0])) # first row of pixels
print(type(fmnist_picture[0][0])) # first pixel of first row
###Output
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.uint8'>
###Markdown
We see that the pixel type of the stock photo (int32) differs from that of the fashion-MNIST photo (uint8). We test whether casting each pixel of the fashion-MNIST photo from uint8 to int32 fixes the filtering.
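A plausible explanation (an inference, not something stated in the original assignment) is that `uint8` arithmetic wraps around modulo 256 and cannot represent the negative values a derivative produces, so the Sobel responses are mangled before `np.hypot` is applied:

```python
import numpy as np

a = np.array([200], dtype=np.uint8)
b = np.array([100], dtype=np.uint8)
print(a + b)                                 # [44]: 300 wraps around modulo 256
print(a - np.array([250], dtype=np.uint8))   # [206]: negative values cannot be stored in uint8
```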
###Code
def new_sobel_filter(picture):
"""
(NEW) Sobel filtering of a picture along the x and y axes, with normalization
"""
dx = ndimage.sobel(picture.astype('int32'), 0) # horizontal derivative
dy = ndimage.sobel(picture.astype('int32'), 1) # vertical derivative
mag = np.hypot(dx, dy) # magnitude
mag *= 255.0 / np.max(mag) # normalize
return np.asarray(mag, dtype=np.float32)/255.
def new_show_picture(picture):
"""
(NEW) Displays a picture and its filtered counterpart
"""
fig = plt.figure()
plt.gray()
ax1 = fig.add_subplot(121)
ax2 = fig.add_subplot(122)
result = new_sobel_filter(picture)
ax1.imshow(picture)
ax2.imshow(result)
plt.show()
new_show_picture(fmnist_picture)
###Output
_____no_output_____
###Markdown
It works. As such, we can revamp the ``Other Explorations: Trying out MaxPooling and a Sobel Filter instead of PCA`` part from the main notebook. Revamping using Sobel filtering instead of PCA for data pre-processingThe goal of using the Sobel filtering instead of PCA was to see whether or not it would have an impact on the validity of a KMeans or GMM learning approaches, using the homogeneity and v-measure scores.**Reminder**:- The **homogeneity score** is a metric that checks whether the resulting clusters contain only data points which are members of a single class. It is *independent of the absolute values of the predicted labels*. The resulting score yields a real value between 0 and 1 with 1 standing for perfect homogeneous labeling- The **v-measure score** takes the homogeneity score and another underlying one (the completeness score, which measures whether all data points that belong to the same class are clustered within the same cluster) and provides a weighted measure: $\frac{(1+\beta).hc}{\beta.h+c}$ with $\beta$ a factor that favors either the homogeneity or the completeness. The value of this evaluation metric is that it is independent of the number of class lavels, clusters, and of the size of the data. The resulting score yields a real value between 0 and 1 with 1 standing for perfectly complete labeling.
###Code
def sobel_fMNIST(dataset):
return np.array(list(map(new_sobel_filter, dataset))).reshape(len(dataset), 784)
x_train_sobel = sobel_fMNIST(x_train_origin)
x_test_sobel = sobel_fMNIST(x_test_origin)
print(x_train_sobel.shape, x_test_sobel.shape)
def run_kmeans(x_train, x_test, y_test, n=10):
"""
Runs a KMeans model on the dataset
"""
model = KMeans(n_clusters=n, random_state=0).fit(x_train)
label_predictions = model.predict(x_test)
kmeans_h_score = homogeneity_score(y_test, label_predictions)
kmeans_v_score = v_measure_score(y_test, label_predictions)
print(f"KMeans with {n} clusters yields a {round(kmeans_h_score,3)} homogeneity score " +
f"and a {round(kmeans_v_score,3)} v-measure score")
return label_predictions
def run_gmm(x_train, x_test, y_test, n=10, cov="full", init_params="kmeans"):
"""
Runs a GMM model on the dataset
"""
model = GaussianMixture(n_components=n,
covariance_type=cov,
init_params=init_params,
random_state=0).fit(x_train)
label_predictions = model.predict(x_test)
gmm_h_score = homogeneity_score(y_test, label_predictions)
gmm_v_score = v_measure_score(y_test, label_predictions)
print(f"GMM with {n} clusters yields a {round(gmm_h_score,3)} homogeneity score " +
f"and a {round(gmm_v_score,3)} v-measure score")
return label_predictions
def visualize_clusters(predictions, y_test, labels):
"""
Visualizes the clustering resulting from the training.
"""
clusters={}
for i in range(10):
clusters[i]=[0]*10
for index, item in enumerate(predictions):
clusters[item][y_test[index]] += 1
for i in range(len(clusters)):
fig = plt.figure(figsize=(10,3))
plt.bar(labels, clusters[i], color="purple", width=0.4)
plt.title(f"Amount of items of a specific label assigned to cluster {i}")
plt.show()
kmeans_label_predictions = run_kmeans(x_train_sobel, x_test_sobel, y_test)
visualize_clusters(kmeans_label_predictions, y_test, labelNames)
###Output
_____no_output_____
###Markdown
We see that the results obtained are marginally better than those of the assignment model that used PCA, which yielded a homogeneity score of 0.493 (Sobel KMeans yields 0.516) and a v-measure score of 0.505 (Sobel KMeans yields 0.519). **Interesting note**: though Sobel filtering seemingly produces worse clustering for bags and shoe-type items, **it solves the trousers/dresses separation that PCA failed at** (see clusters 3 and 4, which clearly separate trousers from dresses).
###Code
gmm_label_predictions = run_gmm(x_train_sobel, x_test_sobel, y_test)
visualize_clusters(gmm_label_predictions, y_test, labelNames)
###Output
_____no_output_____
###Markdown
We see that the results obtained are marginally worse than those of the assignment model that used PCA, which yielded a homogeneity score of 0.548 (Sobel GMM yields 0.514) and a v-measure score of 0.555 (Sobel GMM yields 0.519). The Sobel GMM yields scores closer to the Sobel KMeans than to the PCA GMM. **Interesting note**: Sobel filtering seems to **solve the trousers/dresses separation that PCA failed at** (see clusters 4 and 5, which clearly separate trousers from dresses), **albeit less cleanly than Sobel KMeans**. Comparing the Sobel GMM procedures to the results of the assignment. We recall the results we obtained for the GMM:

| Model number | k components | Covariance parameter | Initialization parameter | Homogeneity score (test set) | V-measure score (test set) |
|---|---|---|---|---|---|
| 11 | 12 | tied | kmeans | 0.618 | 0.606 |
| 19 | 14 | tied | kmeans | 0.633 | 0.608 |
| 35 | 18 | tied | kmeans | 0.637 | 0.589 |
| 59 | 24 | tied | kmeans | 0.642 | 0.563 |
| 91 | 32 | tied | kmeans | 0.642 | 0.541 |
| 121 | 40 | full | kmeans | 0.669 | 0.537 |

We want to see how these configurations compare when trained on the Sobel features.
###Code
model = {11:{"n":12,"covariance_type":"tied","init_params":"kmeans"},
19:{"n":14,"covariance_type":"tied","init_params":"kmeans"},
35:{"n":18,"covariance_type":"tied","init_params":"kmeans"},
59:{"n":24,"covariance_type":"tied","init_params":"kmeans"},
91:{"n":32,"covariance_type":"tied","init_params":"kmeans"},
121:{"n":40,"covariance_type":"full","init_params":"kmeans"}}
for model in model.values():
print(f"Model to be run: {model}")
run_gmm(x_train_sobel, x_test_sobel, y_test,
n=model["n"], cov=model["covariance_type"],
init_params=model["init_params"])
###Output
Model to be run: {'n': 12, 'covariance_type': 'tied', 'init_params': 'kmeans'}
GMM with 12 clusters yields a 0.497 homogeneity score and a 0.488 v-measure score
Model to be run: {'n': 14, 'covariance_type': 'tied', 'init_params': 'kmeans'}
GMM with 14 clusters yields a 0.491 homogeneity score and a 0.468 v-measure score
Model to be run: {'n': 18, 'covariance_type': 'tied', 'init_params': 'kmeans'}
GMM with 18 clusters yields a 0.53 homogeneity score and a 0.482 v-measure score
Model to be run: {'n': 24, 'covariance_type': 'tied', 'init_params': 'kmeans'}
GMM with 24 clusters yields a 0.577 homogeneity score and a 0.498 v-measure score
Model to be run: {'n': 32, 'covariance_type': 'tied', 'init_params': 'kmeans'}
GMM with 32 clusters yields a 0.6 homogeneity score and a 0.488 v-measure score
Model to be run: {'n': 40, 'covariance_type': 'full', 'init_params': 'kmeans'}
GMM with 40 clusters yields a 0.635 homogeneity score and a 0.517 v-measure score
|
notebooks/02.5-make-projection-dfs/indv-id/macaque-indv.ipynb | ###Markdown
Collect data
###Code
DATASET_ID = 'macaque_coo'
from avgn.visualization.projections import (
scatter_projections,
draw_projection_transitions,
)
df_loc = DATA_DIR / 'syllable_dfs' / DATASET_ID / 'macaque.pickle'
syllable_df = pd.read_pickle(df_loc)
syllable_df[:3]
###Output
_____no_output_____
###Markdown
cluster
###Code
specs = list(syllable_df.spectrogram.values)
specs = [i / np.max(i) for i in specs]
specs_flattened = flatten_spectrograms(specs)
np.shape(specs_flattened)
from cuml.manifold.umap import UMAP as cumlUMAP
cuml_umap = cumlUMAP()
z = np.vstack(list(cuml_umap.fit_transform(specs_flattened)))
###Output
/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/ipykernel_launcher.py:1: UserWarning: Parameter should_downcast is deprecated, use convert_dtype in fit, fit_transform and transform methods instead.
"""Entry point for launching an IPython kernel.
/mnt/cube/tsainbur/conda_envs/tpy3/lib/python3.6/site-packages/ipykernel_launcher.py:2: UserWarning: Parameter should_downcast is deprecated, use convert_dtype in fit, fit_transform and transform methods instead.
###Markdown
variation across populations
###Code
fig, ax = plt.subplots(figsize=(15,15))
scatter_projections(projection=z, alpha=.5, labels = syllable_df.indv.values, s=10, ax = ax)
#ax.set_xlim([-15,15])
from avgn.visualization.projections import scatter_spec
np.shape(z), np.shape(specs)
from avgn.utils.general import save_fig
from avgn.utils.paths import FIGURE_DIR, ensure_dir
scatter_spec(
z,
specs,
column_size=15,
#x_range = [-5.5,7],
#y_range = [-10,10],
pal_color="hls",
color_points=False,
enlarge_points=20,
figsize=(10, 10),
scatter_kwargs = {
'labels': syllable_df.indv.values,
'alpha':1.0,
's': 3,
"color_palette": 'Set2',
'show_legend': False
},
matshow_kwargs = {
'cmap': plt.cm.Greys
},
line_kwargs = {
'lw':1,
'ls':"solid",
'alpha':0.25,
},
draw_lines=True
);
#save_fig(FIGURE_DIR / 'macaque_coo', dpi=300, save_jpg=True)
nex = -1
scatter_spec(
z,
specs,
column_size=8,
#x_range = [-4.5,4],
#y_range = [-4.5,5.5],
pal_color="hls",
color_points=False,
enlarge_points=0,
figsize=(10, 10),
range_pad = 0.15,
scatter_kwargs = {
'labels': syllable_df.indv.values,
'alpha':1,
's': 1,
'show_legend': False,
"color_palette": 'Set1',
},
matshow_kwargs = {
'cmap': plt.cm.Greys
},
line_kwargs = {
'lw':3,
'ls':"dashed",
'alpha':0.25,
},
draw_lines=True,
n_subset= 1000,
border_line_width = 3,
);
save_fig(FIGURE_DIR / 'discrete_umap' / 'indv' / 'macaque', dpi=300, save_jpg=True, save_png=True)
###Output
_____no_output_____ |
Data-Science-Projects-master/new_read.ipynb | ###Markdown
...Random forest using SMOTE method
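The SMOTE resampling itself happens earlier in the notebook and is not shown in this excerpt. A minimal, hypothetical sketch of that step with the `imblearn` package (assuming its `SMOTE.fit_resample` API; variable names are illustrative, and `X`, `y`, `rf` are defined earlier in the notebook):

```python
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
X_train_sm, y_train_sm = SMOTE(random_state=0).fit_resample(X_train, y_train)
rf.fit(X_train_sm, y_train_sm)
```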
###Code
acc1 # SMOTE method
from sklearn.metrics import cohen_kappa_score
cohen_kappa_score(y_test, y_pred)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
# plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for diabetes readmission')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
###Output
_____no_output_____
###Markdown
...Random forest using Random over sampling method
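Likewise, the random over-sampling step referenced by this section is not shown here; a hypothetical sketch with `imblearn` (assumed API, names illustrative):

```python
from imblearn.over_sampling import RandomOverSampler

# X_train, y_train and rf as defined earlier in the notebook
X_train_ros, y_train_ros = RandomOverSampler(random_state=0).fit_resample(X_train, y_train)
rf.fit(X_train_ros, y_train_ros)
```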
###Code
#Random over sampling method
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
acc2 = rf.score(X_test, y_test)
print(acc2)
from sklearn.metrics import cohen_kappa_score
print(cohen_kappa_score(y_test, y_pred))
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
# plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for diabetes readmission')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
###Output
_____no_output_____
###Markdown
...Regular Random forest Method
###Code
# regular random forest method (no resampling)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .3, random_state=i)
rf.fit(X_train, y_train)
y_pred = rf.predict(X_test)
acc3 = rf.score(X_test, y_test)
acc3
from sklearn.metrics import cohen_kappa_score
cohen_kappa_score(y_test, y_pred)
# plot ROC curve
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred)
plt.plot(fpr, tpr)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.title('ROC curve for diabetes readmission')
plt.xlabel('False Positive Rate (1 - Specificity)')
plt.ylabel('True Positive Rate (Sensitivity)')
plt.grid(True)
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred))
####Prediction on new data
new=data[500:575]
# create a (features) and b (response)
p = new.drop(['readmitted'], axis=1)
q = new['readmitted']
new.shape
data.shape
y_pred_new = rf.predict(p)
y_pred_new
np.array(q)
###Output
_____no_output_____ |
courses/dl1/rz-final-skin-lesion.ipynb | ###Markdown
ISIC 2018 Lesion Diagnosis Challenge (https://challenge2018.isic-archive.com). Step 0: Setup
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
## Using Fast.ai library from Paperspace
from fastai.imports import *
from fastai.torch_imports import *
from fastai.transforms import *
from fastai.conv_learner import *
from fastai.model import *
from fastai.dataset import *
from fastai.sgdr import *
from fastai.plots import *
torch.cuda.set_device(0)
torch.backends.cudnn.enabled
###Output
_____no_output_____
###Markdown
Get data from https://challenge2018.isic-archive.com/task3/training/
###Code
PATH = "data/lessionChallenge/"
sz = 224
arch = resnext101_64
bs = 24
label_csv = f'{PATH}labels.csv'
n = len(list(open(label_csv))) - 1 # header is not counted (-1)
val_idxs = get_cv_idxs(n) # random 20% data for validation set
n
len(val_idxs)
# If you haven't downloaded weights.tgz yet, download the file.
# http://forums.fast.ai/t/error-when-trying-to-use-resnext50/7555
# http://forums.fast.ai/t/lesson-2-in-class-discussion/7452/222
#!wget -O fastai/weights.tgz http://files.fast.ai/models/weights.tgz
#!tar xvfz fastai/weights.tgz -C fastai
###Output
_____no_output_____
###Markdown
Step 1: Initial exploration
###Code
!ls {PATH}
label_df = pd.read_csv(label_csv)
label_df.head()
label_df.pivot_table(index="disease", aggfunc=len).sort_values('images', ascending=False)
#We have unbalanced classes
#to-do
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv',
val_idxs=val_idxs, suffix='.jpg', tfms=tfms, bs=bs)
fn = PATH + data.trn_ds.fnames[0]; fn
img = PIL.Image.open(fn); img
img.size
size_d = {k: PIL.Image.open(PATH + k).size for k in data.trn_ds.fnames}
row_sz, col_sz = list(zip(*size_d.values()))
row_sz = np.array(row_sz); col_sz = np.array(col_sz)
row_sz[:5]
plt.hist(row_sz);
plt.hist(row_sz[row_sz < 1000])
plt.hist(col_sz);
plt.hist(col_sz[col_sz < 1000])
len(data.classes), data.classes[:5]
###Output
_____no_output_____
###Markdown
Step 2: Initial model
###Code
def get_data(sz, bs): # sz: image size, bs: batch size
tfms = tfms_from_model(arch, sz, aug_tfms=transforms_side_on, max_zoom=1.1)
data = ImageClassifierData.from_csv(PATH, 'train', f'{PATH}labels.csv',
val_idxs=val_idxs, suffix='.jpg', tfms=tfms, bs=bs)
# http://forums.fast.ai/t/how-to-train-on-the-full-dataset-using-imageclassifierdata-from-csv/7761/13
# http://forums.fast.ai/t/how-to-train-on-the-full-dataset-using-imageclassifierdata-from-csv/7761/37
return data if sz > 300 else data.resize(340, 'tmp') # Reading the jpgs and resizing is slow for big images, so resizing them all to 340 first saves time
#Source:
# def resize(self, targ, new_path):
# new_ds = []
# dls = [self.trn_dl,self.val_dl,self.fix_dl,self.aug_dl]
# if self.test_dl: dls += [self.test_dl, self.test_aug_dl]
# else: dls += [None,None]
# t = tqdm_notebook(dls)
# for dl in t: new_ds.append(self.resized(dl, targ, new_path))
# t.close()
# return self.__class__(new_ds[0].path, new_ds, self.bs, self.num_workers, self.classes)
#File: ~/fastai/courses/dl1/fastai/dataset.py
###Output
_____no_output_____
###Markdown
Precompute
###Code
data = get_data(sz, bs)
learn = ConvLearner.pretrained(arch, data, precompute=True)
learn.summary()
learn.fit(2e-2, 3)
###Output
_____no_output_____
###Markdown
Step 3: Evaluation Metric
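The metric implemented below is the mean of the per-class recalls over the 7 diagnosis classes, i.e. a balanced multiclass accuracy: $$\text{balanced accuracy} = \frac{1}{7}\sum_{c=1}^{7}\frac{\mathrm{cm}_{c,c}}{\sum_{j}\mathrm{cm}_{c,j}}$$ where $\mathrm{cm}$ is the confusion matrix with true classes as rows.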
###Code
from sklearn.metrics import confusion_matrix
def get_balanced_accuracy(probs, y, classes):
    """Balanced accuracy: the mean of the per-class recalls over the 7 diagnosis classes."""
    preds = np.argmax(probs, axis=1)
    cm = confusion_matrix(y, preds)
    plot_confusion_matrix(cm, classes)
    # Per-class recall = diagonal entry / row sum; balanced accuracy is their mean
    per_class_recall = np.diag(cm) / cm.sum(axis=1)
    return per_class_recall.mean()
log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
get_balanced_accuracy(probs,y,data.classes)
###Output
[[ 24 6 22 0 3 7 0]
[ 14 53 17 0 2 20 0]
[ 10 9 121 1 8 57 0]
[ 0 0 3 2 2 15 0]
[ 12 7 33 0 62 125 0]
[ 5 11 38 0 14 1268 2]
[ 0 3 2 0 0 5 20]]
###Markdown
Step 4: Find Learning Rate
###Code
learn = ConvLearner.pretrained(arch, data, precompute=True)
lrf=learn.lr_find()
learn.sched.plot_lr()
learn.sched.plot()
###Output
_____no_output_____
###Markdown
The loss is still clearly improving at lr=(1e-2), so that's what we use. Note that the optimal learning rate can change as we train the model, so you may want to re-run this function from time to time. Step 5: Augment
###Code
from sklearn import metrics
learn.fit(1e-2, 3)
learn.precompute = False
learn.fit(1e-2, 3, cycle_len=1)
learn.save('resnet101_224_pre')
learn.load('resnet101_224_pre')
###Output
_____no_output_____
###Markdown
Step 6: Fine-tuning and differential learning rate annealing
###Code
learn.unfreeze()
lr=np.array([1e-2/9,1e-2/3,1e-2])
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
learn.save('resnext101_224_all')
learn.load('resnext101_224_all')
log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
get_balanced_accuracy(probs,y,data.classes)
#learn.load('final_challenge1')
###Output
_____no_output_____
###Markdown
Step 7: Increase Size
###Code
bs=12
learn.set_data(get_data(299, bs))
learn.freeze()
learn.fit(1e-3, 3, cycle_len=1)
log_preds,y = learn.TTA()
probs = np.mean(np.exp(log_preds),0)
get_balanced_accuracy(probs,y,data.classes)
learn.unfreeze()
lr=np.array([1e-2/9,1e-2/3,1e-2])
learn.fit(lr, 3, cycle_len=1, cycle_mult=2)
learn.save('resnet101_299_unfrozen')
learn.load('resnet101_299_unfrozen')
###Output
_____no_output_____ |
plot_cv_indices.ipynb | ###Markdown
Visualizing cross-validation behavior in scikit-learn. Choosing the right cross-validation object is a crucial part of fitting a model properly. There are many ways to split data into training and test sets in order to avoid model overfitting, to standardize the number of groups in test sets, etc. This example visualizes the behavior of several common scikit-learn objects for comparison.
###Code
from sklearn.model_selection import (TimeSeriesSplit, KFold, ShuffleSplit,
StratifiedKFold, GroupShuffleSplit,
GroupKFold, StratifiedShuffleSplit)
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Patch
np.random.seed(1338)
cmap_data = plt.cm.Paired
cmap_cv = plt.cm.coolwarm
n_splits = 4
###Output
_____no_output_____
###Markdown
Visualize our data. First, we must understand the structure of our data. It has 100 randomly generated input datapoints, 3 classes split unevenly across datapoints, and 10 "groups" split evenly across datapoints. As we'll see, some cross-validation objects do specific things with labeled data, others behave differently with grouped data, and others do not use this information. To begin, we'll visualize our data.
###Code
# Generate the class/group data
n_points = 100
X = np.random.randn(100, 10)
percentiles_classes = [.1, .3, .6]
y = np.hstack([[ii] * int(100 * perc)
for ii, perc in enumerate(percentiles_classes)])
# Evenly spaced groups repeated once
groups = np.hstack([[ii] * 10 for ii in range(10)])
def visualize_groups(classes, groups, name):
# Visualize dataset groups
fig, ax = plt.subplots()
ax.scatter(range(len(groups)), [.5] * len(groups), c=groups, marker='_',
lw=50, cmap=cmap_data)
ax.scatter(range(len(groups)), [3.5] * len(groups), c=classes, marker='_',
lw=50, cmap=cmap_data)
ax.set(ylim=[-1, 5], yticks=[.5, 3.5],
yticklabels=['Data\ngroup', 'Data\nclass'], xlabel="Sample index")
visualize_groups(y, groups, 'no groups')
###Output
_____no_output_____
###Markdown
Define a function to visualize cross-validation behavior. We'll define a function that lets us visualize the behavior of each cross-validation object. We'll perform 4 splits of the data. On each split, we'll visualize the indices chosen for the training set (in blue) and the test set (in red).
###Code
def plot_cv_indices(cv, X, y, group, ax, n_splits, lw=10):
"""Create a sample plot for indices of a cross-validation object."""
# Generate the training/testing visualizations for each CV split
for ii, (tr, tt) in enumerate(cv.split(X=X, y=y, groups=group)):
# Fill in indices with the training/test groups
indices = np.array([np.nan] * len(X))
indices[tt] = 1
indices[tr] = 0
# Visualize the results
ax.scatter(range(len(indices)), [ii + .5] * len(indices),
c=indices, marker='_', lw=lw, cmap=cmap_cv,
vmin=-.2, vmax=1.2)
# Plot the data classes and groups at the end
ax.scatter(range(len(X)), [ii + 1.5] * len(X),
c=y, marker='_', lw=lw, cmap=cmap_data)
ax.scatter(range(len(X)), [ii + 2.5] * len(X),
c=group, marker='_', lw=lw, cmap=cmap_data)
# Formatting
yticklabels = list(range(n_splits)) + ['class', 'group']
ax.set(yticks=np.arange(n_splits+2) + .5, yticklabels=yticklabels,
xlabel='Sample index', ylabel="CV iteration",
ylim=[n_splits+2.2, -.2], xlim=[0, 100])
ax.set_title('{}'.format(type(cv).__name__), fontsize=15)
return ax
###Output
_____no_output_____
###Markdown
Let's see how it looks for the `KFold` cross-validation object:
###Code
fig, ax = plt.subplots()
cv = KFold(n_splits)
plot_cv_indices(cv, X, y, groups, ax, n_splits)
###Output
_____no_output_____
###Markdown
As you can see, by default the KFold cross-validation iterator does not take either datapoint class or group into consideration. We can change this by using the ``StratifiedKFold`` like so.
###Code
fig, ax = plt.subplots()
cv = StratifiedKFold(n_splits)
plot_cv_indices(cv, X, y, groups, ax, n_splits)
###Output
_____no_output_____
###Markdown
In this case, the cross-validation retained the same ratio of classes across each CV split. Next we'll visualize this behavior for a number of CV iterators. Visualize cross-validation indices for many CV objects. Let's visually compare the cross-validation behavior for many scikit-learn cross-validation objects. Below we will loop through several common cross-validation objects, visualizing the behavior of each. Note how some use the group/class information while others do not.
###Code
cvs = [KFold, GroupKFold, ShuffleSplit, StratifiedKFold,
GroupShuffleSplit, StratifiedShuffleSplit, TimeSeriesSplit]
for cv in cvs:
this_cv = cv(n_splits=n_splits)
fig, ax = plt.subplots(figsize=(6, 3))
plot_cv_indices(this_cv, X, y, groups, ax, n_splits)
ax.legend([Patch(color=cmap_cv(.8)), Patch(color=cmap_cv(.02))],
['Testing set', 'Training set'], loc=(1.02, .8))
# Make the legend fit
plt.tight_layout()
fig.subplots_adjust(right=.7)
plt.show()
###Output
_____no_output_____ |
Spark/.ipynb_checkpoints/Lab03-Resposta-checkpoint.ipynb | ###Markdown
 **Modelos de Classificação** Este laboratório irá cobrir os passos para tratar a base de dados de taxa de cliques (click-through rate - CTR) e criar um modelo de classificação para tentar determinar se um usuário irá ou não clicar em um banner. Para isso utilizaremos a base de dados [Criteo Labs](http://labs.criteo.com/) que foi utilizado em uma competição do [Kaggle](https://www.kaggle.com/c/criteo-display-ad-challenge). ** Nesse notebook: **+ *Parte 1:* Utilização do one-hot-encoding (OHE) para transformar atributos categóricos em numéricos+ *Parte 2:* Construindo um dicionário OHE+ *Parte 3:* Geração de atributos OHE na base de dados CTR + *Visualização 1:* Frequência de atributos+ *Parte 4:* Predição de CTR e avaliação da perda logarítimica (logloss) + *Visualização 2:* Curva ROC+ *Parte 5:* Reduzindo a dimensão dos atributos através de hashing (feature hashing) Referências de métodos: [Spark's Python API](https://spark.apache.org/docs/latest/api/python/pyspark.htmlpyspark.RDD)e [NumPy Reference](http://docs.scipy.org/doc/numpy/reference/index.html) ** Part 1: Utilização do one-hot-encoding (OHE) para transformar atributos categóricos em numéricos ** ** (1a) One-hot-encoding ** Para um melhor entendimento do processo da codificação OHE vamos trabalhar com uma base de dados pequena e sem rótulos. Cada objeto dessa base pode conter três atributos, o primeiro indicando o animal, o segundo a cor e o terceiro qual animal que ele come. No esquema OHE, queremos representar cada tupla `(IDatributo, categoria)` através de um atributo binário. Nós podemos fazer isso no Python criando um dicionário que mapeia cada possível tupla em um inteiro que corresponde a sua posição no vetor de atributos binário. Para iniciar crie um dicionário correspondente aos atributos categóricos da base construída logo abaixo. Faça isso manualmente.
###Code
# Data for manual OHE
# Note: the first data point does not include any value for the optional third feature
sampleOne = [(0, 'mouse'), (1, 'black')]
sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
sampleDataRDD = sc.parallelize([sampleOne, sampleTwo, sampleThree])
# EXERCICIO
sampleOHEDictManual = {}
sampleOHEDictManual[(0,'mouse')] = 0
sampleOHEDictManual[(0,'cat')] = 1
sampleOHEDictManual[(0,'bear')] = 2
sampleOHEDictManual[(1,'black')] = 3
sampleOHEDictManual[(1,'tabby')] = 4
sampleOHEDictManual[(2,'mouse')] = 5
sampleOHEDictManual[(2,'salmon')] = 6
# TEST One-hot-encoding (1a)
assert (0, 'mouse') in sampleOHEDictManual, "(0, 'mouse') not in sampleOHEDictManual"
assert (0, 'cat') in sampleOHEDictManual, "(0, 'cat') not in sampleOHEDictManual"
assert (0, 'bear') in sampleOHEDictManual, "(0, 'bear') not in sampleOHEDictManual"
assert (1, 'black') in sampleOHEDictManual, "(1, 'black') not in sampleOHEDictManual"
assert (1, 'tabby') in sampleOHEDictManual, "(1, 'tabby') not in sampleOHEDictManual"
assert (2, 'mouse') in sampleOHEDictManual, "(2, 'mouse') not in sampleOHEDictManual"
assert (2, 'salmon') in sampleOHEDictManual, "(2, 'salmon') not in sampleOHEDictManual"
###Output
_____no_output_____
###Markdown
**(1b) Sparse Vectors ** Categorical data points usually have a small set of non-zero OHE features relative to the total number of possible features. Taking advantage of this property, we can represent our data as sparse vectors, saving storage space and computation. In the next exercise, convert the vectors whose names are prefixed with `Dense` into sparse vectors. Use the [SparseVector](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.htmlpyspark.mllib.linalg.SparseVector) class to represent them and verify that both representations return the same result when computing dot products. Use `SparseVector(size, *args)` to create a new sparse vector, where `size` is the vector length and `args` can be a dictionary, a list of (index, value) tuples, or two separate arrays of indices and values sorted by index.
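A quick illustration of the three accepted constructor forms mentioned above (toy values, not part of the exercise):

```python
from pyspark.mllib.linalg import SparseVector

v1 = SparseVector(4, {1: 3.0, 3: 4.0})       # dict of index -> value
v2 = SparseVector(4, [(1, 3.0), (3, 4.0)])   # list of (index, value) tuples
v3 = SparseVector(4, [1, 3], [3.0, 4.0])     # separate index and value arrays
print(v1 == v2 == v3)                        # all three describe the same vector
```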
###Code
import numpy as np
from pyspark.mllib.linalg import SparseVector
# EXERCICIO
aDense = np.array([0., 3., 0., 4.])
aSparse = SparseVector(4, [(1,3),(3,4)])
bDense = np.array([0., 0., 0., 1.])
bSparse = SparseVector(4, [(3,1)])
w = np.array([0.4, 3.1, -1.4, -.5])
print (aDense.dot(w))
print (aSparse.dot(w))
print (bDense.dot(w))
print (bSparse.dot(w))
# TEST Sparse Vectors (1b)
assert isinstance(aSparse, SparseVector), 'aSparse needs to be an instance of SparseVector'
assert isinstance(bSparse, SparseVector), 'aSparse needs to be an instance of SparseVector'
assert aDense.dot(w) == aSparse.dot(w), 'dot product of aDense and w should equal dot product of aSparse and w'
assert bDense.dot(w) == bSparse.dot(w), 'dot product of bDense and w should equal dot product of bSparse and w'
###Output
_____no_output_____
###Markdown
**(1c) OHE features as sparse vectors ** Now let's represent our OHE features as sparse vectors. Using the `sampleOHEDictManual` dictionary, create a sparse vector for each sample in our dataset. Every feature that occurs in a sample should have the value 1.0. For example, the vector for a point with features 2 and 4 should be `[0.0, 0.0, 1.0, 0.0, 1.0, 0.0, 0.0]`.
###Code
# Reminder of the sample features
# sampleOne = [(0, 'mouse'), (1, 'black')]
# sampleTwo = [(0, 'cat'), (1, 'tabby'), (2, 'mouse')]
# sampleThree = [(0, 'bear'), (1, 'black'), (2, 'salmon')]
# EXERCICIO
sampleOneOHEFeatManual = SparseVector(7, [(sampleOHEDictManual[x],1.0) for x in sampleOne])
sampleTwoOHEFeatManual = SparseVector(7, [(sampleOHEDictManual[x],1.0) for x in sampleTwo])
sampleThreeOHEFeatManual = SparseVector(7, [(sampleOHEDictManual[x],1.0) for x in sampleThree])
# TEST OHE Features as sparse vectors (1c)
assert isinstance(sampleOneOHEFeatManual, SparseVector), 'sampleOneOHEFeatManual needs to be a SparseVector'
assert isinstance(sampleTwoOHEFeatManual, SparseVector), 'sampleTwoOHEFeatManual needs to be a SparseVector'
assert isinstance(sampleThreeOHEFeatManual, SparseVector), 'sampleThreeOHEFeatManual needs to be a SparseVector'
###Output
_____no_output_____
###Markdown
**(1d) OHE encoding function ** Let's create a function that generates a sparse vector encoded by an OHE dictionary. It should follow a procedure similar to the previous exercise.
###Code
# EXERCICIO
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
"""Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
You should ensure that the indices used to create a SparseVector are sorted.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
"""
return SparseVector(numOHEFeats, [(OHEDict[x], 1.0) for x in rawFeats])
# Calculate the number of features in sampleOHEDictManual
numSampleOHEFeats = len(sampleOHEDictManual)
# Run oneHotEnoding on sampleOne
sampleOneOHEFeat = oneHotEncoding(sampleOne, sampleOHEDictManual, numSampleOHEFeats)
print (sampleOneOHEFeat)
# TEST Define an OHE Function (1d)
assert sampleOneOHEFeat == sampleOneOHEFeatManual, 'sampleOneOHEFeat should equal sampleOneOHEFeatManual'
###Output
_____no_output_____
###Markdown
**(1e) Apply OHE to a dataset ** Finally, use the function from part (1d) to create OHE features for all 3 objects of the artificial dataset.
###Code
# EXERCICIO
sampleOHEData = sampleDataRDD.map(lambda x: oneHotEncoding(x, sampleOHEDictManual, numSampleOHEFeats))
print (sampleOHEData.collect())
# TEST Apply OHE to a dataset (1e)
sampleOHEDataValues = sampleOHEData.collect()
assert len(sampleOHEDataValues) == 3, 'sampleOHEData should have three elements'
###Output
_____no_output_____
###Markdown
** Part 2: Building an OHE dictionary ** **(2a) RDD of `(featureID, category)` tuples ** Create an RDD of distinct `(featureID, category)` pairs. For our dataset you should generate `(0, 'bear')`, `(0, 'cat')`, `(0, 'mouse')`, `(1, 'black')`, `(1, 'tabby')`, `(2, 'mouse')`, `(2, 'salmon')`. Note that `'black'` appears twice in our dataset but contributes only one RDD item: `(1, 'black')`; on the other hand, `'mouse'` appears twice and contributes two items: `(0, 'mouse')` and `(2, 'mouse')`. Hint: use [flatMap](https://spark.apache.org/docs/latest/api/python/pyspark.htmlpyspark.RDD.flatMap) and [distinct](https://spark.apache.org/docs/latest/api/python/pyspark.htmlpyspark.RDD.distinct).
###Code
# EXERCICIO
sampleDistinctFeats = (sampleDataRDD
.flatMap(lambda x: x)
.distinct()
)
# TEST Pair RDD of (featureID, category) (2a)
assert sorted(sampleDistinctFeats.collect()) == [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),(1, 'tabby'), (2, 'mouse'), (2, 'salmon')], 'incorrect value for sampleDistinctFeats'
###Output
_____no_output_____
###Markdown
** (2b) OHE dictionary of unique features ** Now let's create an RDD of tuples for each `(featureID, category)` in `sampleDistinctFeats`. The key of each tuple is the original tuple itself, and the value is an integer ranging from 0 to the number of tuples - 1. Then convert this `RDD` into a dictionary using `collectAsMap`. Use [zipWithIndex](https://spark.apache.org/docs/latest/api/python/pyspark.htmlpyspark.RDD.zipWithIndex) followed by [collectAsMap](https://spark.apache.org/docs/latest/api/python/pyspark.htmlpyspark.RDD.collectAsMap).
###Code
# EXERCICIO
sampleOHEDict = (sampleDistinctFeats
.zipWithIndex()
.collectAsMap())
print (sampleOHEDict)
# TEST OHE Dictionary from distinct features (2b)
assert sorted(sampleOHEDict.keys()) == [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'),(1, 'tabby'), (2, 'mouse'), (2, 'salmon')], 'sampleOHEDict has unexpected keys'
assert sorted(sampleOHEDict.values()) == list(range(7)), 'sampleOHEDict has unexpected values'
###Output
_____no_output_____
###Markdown
**(2c) Automated creation of the OHE dictionary ** Now use the code from the previous exercises to create a function that returns an OHE dictionary from the categorical features of a dataset.
###Code
# EXERCICIO
def createOneHotDict(inputData):
"""Creates a one-hot-encoder dictionary based on the input data.
Args:
inputData (RDD of lists of (int, str)): An RDD of observations where each observation is
made up of a list of (featureID, value) tuples.
Returns:
dict: A dictionary where the keys are (featureID, value) tuples and map to values that are
unique integers.
"""
return (inputData
.flatMap(lambda x: x)
.distinct()
.zipWithIndex()
.collectAsMap()
)
sampleOHEDictAuto = createOneHotDict(sampleDataRDD)
print (sampleOHEDictAuto)
# TEST Automated creation of an OHE dictionary (2c)
assert sorted(sampleOHEDictAuto.keys()) == [(0, 'bear'), (0, 'cat'), (0, 'mouse'), (1, 'black'), (1, 'tabby'), (2, 'mouse'), (2, 'salmon')], 'sampleOHEDictAuto has unexpected keys'
assert sorted(sampleOHEDictAuto.values()) == list(range(7)), 'sampleOHEDictAuto has unexpected values'
###Output
_____no_output_____
###Markdown
**Part 3: Parse CTR data and generate OHE features** Before starting this part, let's load the dataset and check its format. Note that the first field is the label of each record: 0 if the user did not click on the banner and 1 if they did. The remaining features are either numeric or strings representing anonymous categories. We will treat all features as categorical.
###Code
import os.path
fileName = os.path.join('Data', 'dac_sample.txt')
if os.path.isfile(fileName):
rawData = (sc
.textFile(fileName, 2)
.map(lambda x: x.replace('\t', ','))) # work with either ',' or '\t' separated data
print (rawData.take(1))
###Output
['0,1,1,5,0,1382,4,15,2,181,1,2,,2,68fd1e64,80e26c9b,fb936136,7b4723c4,25c83c98,7e0ccccf,de7995b8,1f89b562,a73ee510,a8cd5504,b2cb9c98,37c9c164,2824a5f6,1adce6ef,8ba8b39a,891b62e7,e5ba7672,f54016b9,21ddcdc9,b1252a9d,07b5194c,,3a171ecb,c5c50484,e8b83407,9727dd16']
###Markdown
**(3a) Loading and splitting the data ** As in the previous notebook, let's split the data into training, validation, and test sets. Use the [randomSplit](https://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.randomSplit) method with the specified weights and random seed to create the sets, then [cache](https://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.RDD.cache) each RDD, since we will use each of them frequently throughout this exercise.
###Code
# EXERCICIO
weights = [.8, .1, .1]
seed = 42
# Use randomSplit with weights and seed
rawTrainData, rawValidationData, rawTestData = rawData.randomSplit(weights, seed)
# Cache the data
rawTrainData.cache()
rawValidationData.cache()
rawTestData.cache()
nTrain = rawTrainData.count()
nVal = rawValidationData.count()
nTest = rawTestData.count()
print (nTrain, nVal, nTest, nTrain + nVal + nTest)
print (rawData.take(1))
# TEST Loading and splitting the data (3a)
assert all([rawTrainData.is_cached, rawValidationData.is_cached, rawTestData.is_cached]), 'you must cache the split data'
assert nTrain == 80053, 'incorrect value for nTrain'
assert nVal == 9941, 'incorrect value for nVal'
assert nTest == 10006, 'incorrect value for nTest'
###Output
_____no_output_____
###Markdown
** (3b) Feature extraction ** As a next step, create a function to be applied to each record of the RDD to generate an RDD of (featureID, category) tuples. Ignore the first field, which is the label, and generate a list of tuples for the remaining features. Use [enumerate](https://docs.python.org/2/library/functions.html#enumerate) to create these tuples.
###Code
# EXERCICIO
def parsePoint(point):
"""Converts a comma separated string into a list of (featureID, value) tuples.
Note:
featureIDs should start at 0 and increase to the number of features - 1.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
Returns:
list: A list of (featureID, value) tuples.
"""
return list(enumerate(point.split(',')[1:]))
parsedTrainFeat = rawTrainData.map(parsePoint)
numCategories = (parsedTrainFeat
.flatMap(lambda x: x)
.distinct()
.map(lambda x: (x[0], 1))
.reduceByKey(lambda x, y: x + y)
.sortByKey()
.collect()
)
print (numCategories[2][1])
# TEST Extract features (3b)
assert numCategories[2][1] == 864, 'incorrect implementation of parsePoint'
assert numCategories[32][1] == 4, 'incorrect implementation of parsePoint'
###Output
_____no_output_____
###Markdown
**(3c) Create the OHE dictionary for this dataset ** Note that the parsePoint function returns a list of `(featureID, category)` tuples, which is the same format used by the `createOneHotDict` function. Use the `parsedTrainFeat` RDD to create an OHE dictionary.
###Code
# EXERCICIO
ctrOHEDict = createOneHotDict(parsedTrainFeat)
numCtrOHEFeats = len(ctrOHEDict.keys())
print (numCtrOHEFeats)
print (ctrOHEDict[(0, '')])
# TEST Create an OHE dictionary from the dataset (3c)
assert numCtrOHEFeats == 234358, 'incorrect number of features in ctrOHEDict'
assert (0, '') in ctrOHEDict, 'incorrect features in ctrOHEDict'
###Output
_____no_output_____
###Markdown
** (3d) Applying OHE to the dataset ** Now let's use the OHE dictionary to create an RDD of [LabeledPoint](http://spark.apache.org/docs/1.3.1/api/python/pyspark.mllib.html#pyspark.mllib.regression.LabeledPoint) objects using OHE features. Complete the `parseOHEPoint` function. Hint: this function is an extension of the `parsePoint` function created earlier and uses the `oneHotEncoding` function.
###Code
from pyspark.mllib.regression import LabeledPoint
# EXERCICIO
def parseOHEPoint(point, OHEDict, numOHEFeats):
"""Obtain the label and feature vector for this raw observation.
Note:
You must use the function `oneHotEncoding` in this implementation or later portions
of this lab may not function as expected.
Args:
point (str): A comma separated string where the first value is the label and the rest
are features.
OHEDict (dict of (int, str) to int): Mapping of (featureID, value) to unique integer.
numOHEFeats (int): The number of unique features in the training dataset.
Returns:
LabeledPoint: Contains the label for the observation and the one-hot-encoding of the
raw features based on the provided OHE dictionary.
"""
splitted = point.split(',')
label, features = float(splitted[0]), list(enumerate(splitted[1:]))
return LabeledPoint(label, oneHotEncoding(features, OHEDict, numOHEFeats) )
OHETrainData = rawTrainData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHETrainData.cache()
print (OHETrainData.take(1))
# Check that oneHotEncoding function was used in parseOHEPoint
backupOneHot = oneHotEncoding
oneHotEncoding = None
withOneHot = False
try: parseOHEPoint(rawTrainData.take(1)[0], ctrOHEDict, numCtrOHEFeats)
except TypeError: withOneHot = True
oneHotEncoding = backupOneHot
# TEST Apply OHE to the dataset (3d)
numNZ = sum(parsedTrainFeat.map(lambda x: len(x)).take(5))
numNZAlt = sum(OHETrainData.map(lambda lp: len(lp.features.indices)).take(5))
assert numNZ == numNZAlt, 'incorrect implementation of parseOHEPoint'
assert withOneHot, 'oneHotEncoding not present in parseOHEPoint'
###Output
_____no_output_____
###Markdown
**Visualization 1: Feature frequency ** Let's now visualize the number of times each of the 233,286 OHE features appears in the training set. To do this, we first count how many times each feature appears in the data, then place each feature into a histogram bucket. The buckets have sizes that are powers of 2, so the first bucket counts features that appear exactly once ( $ \scriptsize 2^0 $ ), the second features that appear twice ( $ \scriptsize 2^1 $ ), the third features that appear 3 to 4 times ( $ \scriptsize 2^2 $ ), the fourth bucket is for features that occur five to eight times ( $ \scriptsize 2^3 $ ), and so on. The scatter plot below shows the log of the bucket size versus the log of the number of features that fell into that bucket.
###Code
def bucketFeatByCount(featCount):
"""Bucket the counts by powers of two."""
for i in range(11):
size = 2 ** i
if featCount <= size:
return size
return -1
featCounts = (OHETrainData
.flatMap(lambda lp: lp.features.indices)
.map(lambda x: (x, 1))
.reduceByKey(lambda x, y: x + y))
featCountsBuckets = (featCounts
.map(lambda x: (bucketFeatByCount(x[1]), 1))
.filter(lambda kv: kv[0] != -1)
.reduceByKey(lambda x, y: x + y)
.collect())
print (featCountsBuckets)
import matplotlib.pyplot as plt
% matplotlib inline
x, y = zip(*featCountsBuckets)
x, y = np.log(x), np.log(y)
def preparePlot(xticks, yticks, figsize=(10.5, 6), hideLabels=False, gridColor='#999999',
gridWidth=1.0):
"""Template for generating the plot layout."""
plt.close()
fig, ax = plt.subplots(figsize=figsize, facecolor='white', edgecolor='white')
ax.axes.tick_params(labelcolor='#999999', labelsize='10')
for axis, ticks in [(ax.get_xaxis(), xticks), (ax.get_yaxis(), yticks)]:
axis.set_ticks_position('none')
axis.set_ticks(ticks)
axis.label.set_color('#999999')
if hideLabels: axis.set_ticklabels([])
plt.grid(color=gridColor, linewidth=gridWidth, linestyle='-')
    # In Python 3 `map` is lazy, so apply the side effect with an explicit loop
    for position in ['bottom', 'top', 'left', 'right']:
        ax.spines[position].set_visible(False)
return fig, ax
# generate layout and plot data
fig, ax = preparePlot(np.arange(0, 10, 1), np.arange(4, 14, 2))
ax.set_xlabel(r'$\log_e(bucketSize)$'), ax.set_ylabel(r'$\log_e(countInBucket)$')
plt.scatter(x, y, s=14**2, c='#d6ebf2', edgecolors='#8cbfd0', alpha=0.75)
pass
###Output
_____no_output_____
###Markdown
**(3e) Unseen features ** Naturally, we will need to apply this same procedure to the other sets (validation and test), but those sets may contain features that were not observed in the training set. We need to adapt the `oneHotEncoding` function to ignore features that do not exist in the dictionary.
###Code
# EXERCICIO
def oneHotEncoding(rawFeats, OHEDict, numOHEFeats):
"""Produce a one-hot-encoding from a list of features and an OHE dictionary.
Note:
If a (featureID, value) tuple doesn't have a corresponding key in OHEDict it should be
ignored.
Args:
rawFeats (list of (int, str)): The features corresponding to a single observation. Each
feature consists of a tuple of featureID and the feature's value. (e.g. sampleOne)
OHEDict (dict): A mapping of (featureID, value) to unique integer.
numOHEFeats (int): The total number of unique OHE features (combinations of featureID and
value).
Returns:
SparseVector: A SparseVector of length numOHEFeats with indicies equal to the unique
identifiers for the (featureID, value) combinations that occur in the observation and
with values equal to 1.0.
"""
return SparseVector(numOHEFeats, [(OHEDict[x], 1.0) for x in rawFeats if x in OHEDict])
OHEValidationData = rawValidationData.map(lambda point: parseOHEPoint(point, ctrOHEDict, numCtrOHEFeats))
OHEValidationData.cache()
print (OHEValidationData.take(1))
# TEST Handling unseen features (3e)
numNZVal = (OHEValidationData
.map(lambda lp: len(lp.features.indices))
.sum())
assert numNZVal == 367585, 'incorrect number of features'
###Output
_____no_output_____
###Markdown
** Part 4: CTR prediction and log-loss evaluation ** ** (4a) Logistic regression ** One classifier we can use on this dataset is logistic regression, which gives us the probability that a banner-click event occurs. We will use [LogisticRegressionWithSGD](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.classification.LogisticRegressionWithSGD) to train a model on `OHETrainData` with the given parameter configuration. `LogisticRegressionWithSGD` returns a [LogisticRegressionModel](https://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#pyspark.mllib.regression.LogisticRegressionModel). Then, print `LogisticRegressionModel.weights` and `LogisticRegressionModel.intercept` to inspect the resulting model.
###Code
from pyspark.mllib.classification import LogisticRegressionWithSGD
# fixed hyperparameters
numIters = 50
stepSize = 10.
regParam = 1e-6
regType = 'l2'
includeIntercept = True
# EXERCICIO
model0 = LogisticRegressionWithSGD.train(OHETrainData, numIters, stepSize, regParam=regParam, regType=regType, intercept=includeIntercept)
sortedWeights = sorted(model0.weights)
print (sortedWeights[:5], model0.intercept)
# TEST Logistic regression (4a)
assert np.allclose(model0.intercept, 0.5616041364601837), 'incorrect value for model0.intercept'
assert np.allclose(sortedWeights[0:5], [-0.46297159426279577, -0.39040230182817892, -0.3871281985827924, -0.35815003494268316, -0.34963241495474701]), 'incorrect value for model0.weights'
###Output
_____no_output_____
###Markdown
** (4b) Log loss ** One way to evaluate a binary classifier is through the log loss, defined as: $$ \begin{align} \scriptsize \ell_{log}(p, y) = \begin{cases} -\log (p) & \text{if } y = 1 \\\ -\log(1-p) & \text{if } y = 0 \end{cases} \end{align} $$ where $ \scriptsize p$ is a probability between 0 and 1 and $ \scriptsize y$ is the binary label (0 or 1). Log loss is a widely used evaluation criterion when the goal is to predict rare events. Write a function to compute the log loss, and evaluate it on a few sample inputs.
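For intuition, a quick hand computation of a few values (these match the test cases used below):
```python
import numpy as np
print(-np.log(0.5))    # ~0.693  -> loss for p = 0.5,  y = 1
print(-np.log(0.99))   # ~0.010  -> loss for p = 0.01, y = 0
print(-np.log(0.01))   # ~4.605  -> loss for p = 0.01, y = 1
```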
###Code
# EXERCICIO
import numpy as np
def computeLogLoss(p, y):
"""Calculates the value of log loss for a given probabilty and label.
Note:
log(0) is undefined, so when p is 0 we need to add a small value (epsilon) to it
and when p is 1 we need to subtract a small value (epsilon = 1e-11) from it.
Args:
p (float): A probabilty between 0 and 1.
y (int): A label. Takes on the values 0 and 1.
Returns:
float: The log loss value.
"""
if p == 0.0:
p += 1e-11
elif p == 1.0:
p -= 1e-11
return -(y*np.log(p) + (1. - y)*np.log(1-p))
print (computeLogLoss(.5, 1))
print( computeLogLoss(.5, 0))
print (computeLogLoss(.99, 1))
print (computeLogLoss(.99, 0))
print (computeLogLoss(.01, 1))
print (computeLogLoss(.01, 0))
print (computeLogLoss(0, 1))
print (computeLogLoss(1, 1))
print (computeLogLoss(1, 0))
# TEST Log loss (4b)
assert np.allclose([computeLogLoss(.5, 1), computeLogLoss(.01, 0), computeLogLoss(.01, 1)], [0.69314718056, 0.0100503358535, 4.60517018599]), 'computeLogLoss is not correct'
assert np.allclose([computeLogLoss(0, 1), computeLogLoss(1, 1), computeLogLoss(1, 0)], [25.3284360229, 1.00000008275e-11, 25.3284360229]), 'computeLogLoss needs to bound p away from 0 and 1 by epsilon'
###Output
_____no_output_____
###Markdown
** (4c) Baseline log loss ** Now, let's use the function from Part (4b) to compute a baseline for the log-loss metric on our training set. One way to compute a baseline is to always predict the mean of the observed labels. First compute the mean of the labels in the data, and then compute the average log loss for the training set.
###Code
# EXERCICIO
# Note that our dataset has a very high click-through rate by design
# In practice click-through rate can be one to two orders of magnitude lower
classOneFracTrain = OHETrainData.map(lambda lp: lp.label).mean()
print (classOneFracTrain)
logLossTrBase = OHETrainData.map(lambda lp: computeLogLoss(classOneFracTrain, lp.label)).mean()
print( 'Baseline Train Logloss = {0:.3f}\n'.format(logLossTrBase))
# TEST Baseline log loss (4c)
assert np.allclose(classOneFracTrain, 0.2271245299988764), 'incorrect value for classOneFracTrain'
assert np.allclose(logLossTrBase, 0.535778466496), 'incorrect value for logLossTrBase'
###Output
_____no_output_____
###Markdown
** (4d) Predicted probability ** The model generated in Part (4a) has a method called `predict`, but that method returns only 0's and 1's. To compute the probability of an event, let's create a function `getP` that takes as parameters the point x, the weight vector `w`, and the `intercept`. Evaluate the linear model at the point x and apply the [sigmoid function](http://en.wikipedia.org/wiki/Sigmoid_function) $ \scriptsize \sigma(t) = (1+ e^{-t})^{-1} $ to return the predicted probability for x.
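A quick numeric sanity check of the sigmoid, which also shows why clipping the raw prediction to $[-20, 20]$ is harmless (the values are already numerically saturated there):
```python
import numpy as np
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
print(sigmoid(0.0))    # 0.5
print(sigmoid(20.0))   # ~0.999999998
print(sigmoid(-20.0))  # ~2.1e-09
```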
###Code
# EXERCICIO
from math import exp # exp(-t) = e^-t
def getP(x, w, intercept):
"""Calculate the probability for an observation given a set of weights and intercept.
Note:
We'll bound our raw prediction between 20 and -20 for numerical purposes.
Args:
x (SparseVector): A vector with values of 1.0 for features that exist in this
observation and 0.0 otherwise.
w (DenseVector): A vector of weights (betas) for the model.
intercept (float): The model's intercept.
Returns:
float: A probability between 0 and 1.
"""
# calculate rawPrediction = w.x + intercept
rawPrediction = x.dot(w) + intercept
# Bound the raw prediction value
rawPrediction = min(rawPrediction, 20)
rawPrediction = max(rawPrediction, -20)
# calculate (1+e^-rawPrediction)^-1
return 1./(1. + np.exp(-rawPrediction))
trainingPredictions = OHETrainData.map(lambda lp: getP(lp.features, model0.weights, model0.intercept))
print (trainingPredictions.take(5))
# TEST Predicted probability (4d)
assert np.allclose(trainingPredictions.sum(), 18198.8525175), 'incorrect value for trainingPredictions'
###Output
_____no_output_____
###Markdown
** (4e) Evaluate the model ** Finally, create a function `evaluateResults` that computes the model's average log loss on a dataset. Then run this function on our training set.
###Code
# EXERCICIO
def evaluateResults(model, data):
"""Calculates the log loss for the data given the model.
Args:
model (LogisticRegressionModel): A trained logistic regression model.
data (RDD of LabeledPoint): Labels and features for each observation.
Returns:
float: Log loss for the data.
"""
return (data
            .map(lambda lp: (lp.label, getP(lp.features, model.weights, model.intercept)))
.map(lambda lp: computeLogLoss(lp[1], lp[0]))
.mean()
)
logLossTrLR0 = evaluateResults(model0, OHETrainData)
print ('OHE Features Train Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'.format(logLossTrBase, logLossTrLR0))
# TEST Evaluate the model (4e)
assert np.allclose(logLossTrLR0, 0.45704573867), 'incorrect value for logLossTrLR0'
###Output
0.45704573867
###Markdown
** (4f) Validation log loss ** Now apply the model to our validation set, compute the average log loss, and compare it with our baseline.
###Code
# EXERCICIO
logLossValBase = OHEValidationData.map(lambda lp: computeLogLoss(classOneFracTrain, lp.label)).mean()
logLossValLR0 = evaluateResults(model0, OHEValidationData)
print ('OHE Features Validation Logloss:\n\tBaseline = {0:.3f}\n\tLogReg = {1:.3f}'.format(logLossValBase, logLossValLR0))
# TEST Validation log loss (4f)
assert np.allclose(logLossValBase, 0.526558409461), 'incorrect value for logLossValBase'
assert np.allclose(logLossValLR0, 0.458434994198), 'incorrect value for logLossValLR0'
###Output
_____no_output_____
###Markdown
**Visualization 2: ROC curve ** The ROC curve shows us the trade-off between the false positive rate and the true positive rate as we lower the prediction threshold. A random model is represented by the dashed line. Ideally our model should form a curve above that line.
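The code below traces the curve with a cumulative-sum trick; here is the same idea on a tiny hypothetical example (labels already sorted by decreasing predicted score):
```python
import numpy as np
labels = np.array([1, 1, 0, 1, 0])
tp = labels.cumsum()                        # true positives as the threshold is lowered
fp = np.arange(1, len(labels) + 1) - tp     # false positives
print(tp / tp[-1])                          # TPR: [0.33 0.67 0.67 1.   1.  ]
print(fp / (len(labels) - tp[-1]))          # FPR: [0.   0.   0.5  0.5  1.  ]
```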
###Code
labelsAndScores = OHEValidationData.map(lambda lp:
(lp.label, getP(lp.features, model0.weights, model0.intercept)))
labelsAndWeights = labelsAndScores.collect()
labelsAndWeights.sort(key=lambda kv: kv[1], reverse=True)
labelsByWeight = np.array([k for (k, v) in labelsAndWeights])
length = labelsByWeight.size
truePositives = labelsByWeight.cumsum()
numPositive = truePositives[-1]
falsePositives = np.arange(1.0, length + 1, 1.) - truePositives
truePositiveRate = truePositives / numPositive
falsePositiveRate = falsePositives / (length - numPositive)
# Generate layout and plot data
fig, ax = preparePlot(np.arange(0., 1.1, 0.1), np.arange(0., 1.1, 0.1))
ax.set_xlim(-.05, 1.05), ax.set_ylim(-.05, 1.05)
ax.set_ylabel('True Positive Rate (Sensitivity)')
ax.set_xlabel('False Positive Rate (1 - Specificity)')
plt.plot(falsePositiveRate, truePositiveRate, color='#8cbfd0', linestyle='-', linewidth=3.)
plt.plot((0., 1.), (0., 1.), linestyle='--', color='#d6ebf2', linewidth=2.) # Baseline model
pass
###Output
_____no_output_____ |
C9_C10.ipynb | ###Markdown
Projection plots for CERN HL YR David Straub, 2018
###Code
import flavio
from wilson import Wilson
flavio.__version__
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
from projections import *
with open(PDAT, 'rb') as f:
plotdata = pickle.load(f)
###Output
_____no_output_____
###Markdown
Hack to change number of sigma contours
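Assuming `delta_chi2(n, dof=2)` follows the usual definition (the 2-d.o.f. Δχ² with the same coverage as an n-sigma interval in one dimension), the contour levels it returns can be reproduced with scipy:
```python
from scipy import stats
for n in (1, 2, 3):
    coverage = stats.chi2.cdf(n**2, df=1)      # coverage of an n-sigma interval in 1D
    print(n, stats.chi2.ppf(coverage, df=2))   # ~2.30, ~6.18, ~11.83 for 2 d.o.f.
```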
###Code
def makesigma(pdat, levels):
_pdat = pdat.copy()
_pdat['levels'] = [-1] + [flavio.statistics.functions.delta_chi2(n, dof=2) for n in levels]
return _pdat
###Output
_____no_output_____
###Markdown
Plot
###Code
plt.figure(figsize=(6, 6))
opt = dict(filled=False, interpolation_factor=50)
opt_d = dict(filled=False, interpolation_factor=50, contour_args=dict(linestyles=':'))
for sc in ['SM', 'Scenario I', 'Scenario II']:
fpl.contour(**makesigma(plotdata['Phase II stat+sys ' + sc], (3,)),
label=r'Phase II $3\sigma$' if sc == 'SM' else None,
col=0, **opt)
fpl.contour(**makesigma(plotdata['Phase I stat+sys ' + sc], (3,)),
label=r'Phase I $3\sigma$' if sc == 'SM' else None,
col=0, **opt_d)
# fpl.contour(**makesigma(plotdata['Phase II stat ' + sc], (2,)),
# label=r'Phase II stat $2\sigma$' if sc == 'SM' else None,
# col=1, **opt)
# fpl.contour(**makesigma(plotdata['Phase I stat ' + sc], (2,)),
# label=r'Phase I stat $2\sigma$' if sc == 'SM' else None,
# col=1, **opt_d)
plt.xlabel(r'$C_9^{bs\mu\mu}$')
plt.ylabel(r'$C_{10}^{bs\mu\mu}$')
plt.scatter([0], [0], marker='*', label='SM', c='k')
plt.scatter([-1.4], [0], marker='.', label='NP $C_9$', c='k')
plt.scatter([-0.7], [0.7], marker='x', label=r'NP $C_9=-C_{10}$', c='k')
plt.xlim([-1.8, 0.8])
plt.ylim([-1.3, 1.3])
plt.legend(loc='lower right');
fpl.flavio_branding(version=False)
plt.savefig('YR_C9_C10.pdf', bbox_inches='tight')
###Output
_____no_output_____ |
notebooks/Simple_convex_example-3_ways.ipynb | ###Markdown
A simple example, solved three ways
1. CVXPY + MOSEK
2. SD ADMM
3. Coordinate descent
###Code
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy import signal
from time import time
import seaborn as sns
import cvxpy as cvx
sns.set_style('darkgrid')
import sys
sys.path.append('..')
from osd import Problem
from osd.components import MeanSquareSmall, SmoothSecondDifference, SparseFirstDiffConvex
from osd.signal_decomp_bcd import run_bcd
from osd.utilities import progress
from osd.signal_decomp_admm import calc_obj
TOL = 1e-5
###Output
_____no_output_____
###Markdown
Data generation
###Code
np.random.seed(42)
t = np.linspace(0, 1000, 200)
signal1 = np.sin(2 * np.pi * t * 1 / (500.))
signal2 = signal.square(2 * np.pi * t * 1 / (450.))
X_real = np.zeros((3, len(t)), dtype=float)
X_real[0] = 0.15 * np.random.randn(len(signal1))
X_real[1] = signal1
X_real[2] = signal2
y = np.sum(X_real, axis=0)
K, T = X_real.shape
plt.figure(figsize=(10, 6))
plt.plot(t, np.sum(X_real[1:], axis=0), label='true signal minus noise')
plt.plot(t, y, alpha=0.5, label='observed signal')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
CVXPY + MOSEK
###Code
c1 = MeanSquareSmall(size=T)
c2 = SmoothSecondDifference(weight=1e3/T)
c3 = SparseFirstDiffConvex(weight=2e0/T, vmax=1, vmin=-1)
components = [c1, c2, c3]
problem1 = Problem(y, components)
problem1.decompose(how='cvx')
opt_obj_val = problem1.objective_value
opt_obj_val
problem1.plot_decomposition(X_real=X_real);
###Output
_____no_output_____
###Markdown
SD ADMM
###Code
problem2 = Problem(y, components)
problem2.decompose(how='admm', stopping_tolerance=TOL)
problem2.objective_value
plt.figure()
plt.plot(problem2.admm_result['obj_vals'] - opt_obj_val)
plt.axvline(problem2.admm_result['it'], color='red', ls='--')
plt.title('suboptimality as compared to CVXPY')
plt.yscale('log')
plt.show()
plt.figure()
plt.plot(problem2.admm_result['optimality_residual'], label='residual')
plt.axvline(problem2.admm_result['it'], color='red', ls='--')
plt.yscale('log')
plt.legend()
plt.title('internal optimality residual')
plt.show()
problem2.objective_value
problem2.plot_decomposition(X_real=X_real);
###Output
_____no_output_____
###Markdown
Coordinate Descent
###Code
problem3 = Problem(y, components)
problem3.decompose(how='bcd', stopping_tolerance=TOL)
problem3.objective_value
len(problem3.bcd_result['obj_vals'])
plt.figure()
plt.plot(problem3.bcd_result['obj_vals'] - opt_obj_val, label='coordinate descent')
plt.plot(problem2.admm_result['obj_vals'] - opt_obj_val, label='SD ADMM')
plt.title('suboptimality as compared to CVXPY')
plt.yscale('log')
plt.legend()
plt.show()
plt.figure()
plt.plot(problem3.bcd_result['optimality_residual'], label='coordinate descent')
plt.plot(problem2.admm_result['optimality_residual'], label='SD ADMM')
plt.yscale('log')
plt.title('internal optimality residual')
plt.legend()
plt.show()
plt.scatter(problem3.bcd_result['optimality_residual'], problem3.bcd_result['obj_vals'] - opt_obj_val,
label='sd-bcd', marker='.')
plt.scatter(problem2.admm_result['optimality_residual'], problem2.admm_result['obj_vals'] - opt_obj_val,
label='sd-admm', marker='.')
plt.xscale('log')
plt.yscale('log')
# plt.xlim(plt.ylim())
plt.xlabel('optimality residual')
plt.ylabel('subpotimality as compared to cvxpy')
# plt.gca().set_aspect('equal')
plt.legend()
plt.title('Comparison of algorithm optimality residual\nto actual difference between objective value and CVXPY value');
problem3.plot_decomposition(X_real=X_real);
###Output
_____no_output_____
###Markdown
new work 9/16
- The dual variable is $-2/T$ times the first component
- The ADMM dual state variable converges to $1/\rho$ times the SD consistency dual variable
###Code
plt.figure()
plt.plot(problem1.problem.variables()[0].value * (-2 / T),
problem1.problem.constraints[-1].dual_value)
plt.figure()
plt.plot(problem2.admm_result['u'] * (2/T),
problem1.problem.constraints[-1].dual_value,
marker='.', ls='none')
plt.figure()
plt.plot(problem1.problem.variables()[0].value,
(-1) * problem2.admm_result['u'], marker='.', ls='none');
###Output
_____no_output_____
###Markdown
A simple example, solved three ways
This notebook demonstrates a decomposition of a sine wave and a square wave using a very simple, convex SD model. We start by showing the default behavior, and then demonstrate the three methods that may be invoked using keyword arguments to solve a convex problem:
1. CVXPY
2. SD ADMM
3. Coordinate descent
###Code
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from scipy import signal
from time import time
import seaborn as sns
import cvxpy as cvx
import sys
sys.path.append('..')
from osd import Problem
from osd.classes import MeanSquareSmall, SmoothSecondDifference, SparseFirstDiffConvex
rms = lambda x: np.sqrt(np.average(np.power(x, 2)))
###Output
_____no_output_____
###Markdown
Data generation
###Code
np.random.seed(42)
t = np.linspace(0, 1000, 200)
signal1 = np.sin(2 * np.pi * t * 1 / (500.))
signal2 = signal.square(2 * np.pi * t * 1 / (450.))
X_real = np.zeros((3, len(t)), dtype=float)
X_real[0] = 0.15 * np.random.randn(len(signal1))
X_real[1] = signal1
X_real[2] = signal2
y = np.sum(X_real, axis=0)
K, T = X_real.shape
plt.figure(figsize=(10, 6))
plt.plot(t, np.sum(X_real[1:], axis=0), label='true signal minus noise')
plt.plot(t, y, alpha=0.5, marker='.', label='observed signal')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Default settings
###Code
c1 = MeanSquareSmall(size=T)
c2 = SmoothSecondDifference(weight=1e3/T)
c3 = SparseFirstDiffConvex(weight=2e0/T, vmax=1, vmin=-1)
classes = [c1, c2, c3]
problem = Problem(y, classes)
problem.decompose()
problem.plot_decomposition(X_real=X_real);
###Output
_____no_output_____
###Markdown
CVXPY
###Code
c1 = MeanSquareSmall(size=T)
c2 = SmoothSecondDifference(weight=1e3/T)
c3 = SparseFirstDiffConvex(weight=2e0/T, vmax=1, vmin=-1)
classes = [c1, c2, c3]
problem1 = Problem(y, classes)
problem1.decompose(how='cvx')
opt_obj_val = problem1.objective_value
opt_obj_val
problem1.plot_decomposition(X_real=X_real);
###Output
_____no_output_____
###Markdown
SD ADMM
###Code
problem2 = Problem(y, classes)
problem2.decompose(how='admm', verbose=True)
problem2.objective_value
plt.figure()
plt.plot(problem2.admm_result['obj_vals'] - opt_obj_val, label='suboptimality as compared to CVXPY')
plt.plot(problem2.admm_result['optimality_residual'], label='internal residual')
plt.yscale('log')
plt.legend()
plt.show()
problem2.plot_decomposition(X_real=X_real);
###Output
_____no_output_____
###Markdown
Coordinate Descent
###Code
problem3 = Problem(y, classes)
problem3.decompose(how='bcd')
problem3.objective_value
plt.figure()
plt.plot(problem3.bcd_result['obj_vals'] - opt_obj_val, label='suboptimality as compared to CVXPY')
plt.plot(problem3.bcd_result['optimality_residual'], label='internal residual')
plt.yscale('log')
plt.legend()
plt.show()
problem3.plot_decomposition(X_real=X_real);
###Output
_____no_output_____
###Markdown
Comparisons between ADMM and BCD
###Code
plt.figure()
plt.plot(problem3.bcd_result['obj_vals'] - opt_obj_val, label='coordinate descent')
plt.plot(problem2.admm_result['obj_vals'] - opt_obj_val, label='SD ADMM')
plt.title('suboptimality as compared to CVXPY')
plt.yscale('log')
plt.legend()
plt.show()
plt.figure()
plt.plot(problem3.bcd_result['optimality_residual'], label='coordinate descent')
plt.plot(problem2.admm_result['optimality_residual'], label='SD ADMM')
plt.yscale('log')
plt.title('internal optimality residual')
plt.legend()
plt.show()
###Output
_____no_output_____ |
PythonNotebooks/indexFileNavigation/index_file_navigation_boundingbox.ipynb | ###Markdown
ABSTRACT
All CMEMS in situ data products can be found and downloaded after [registration](http://marine.copernicus.eu/services-portfolio/register-now/) via the [CMEMS catalogue](http://marine.copernicus.eu/services-portfolio/access-to-products/). That channel is advisable only for sporadic netCDF downloading, because interaction with the web user interface is not practical when working operationally. In this context, the use of scripts for FTP file transfer is a much more advisable approach.
Since every line of the index files contains information about the netCDFs stored within the different directories ([see at tips why](https://github.com/CopernicusMarineInsitu/INSTACTraining/blob/master/tips/README.md)), it is possible for users to loop over its lines to download only those that match a number of specifications such as spatial coverage, time coverage, provider, data_mode, parameters, or file_name-related criteria (region, data type, TS or PF, platform code, and/or platform category, timestamp).
PREREQUISITES
- [credentials](http://marine.copernicus.eu/services-portfolio/register-now/)
- aimed [in situ product name](http://cmems-resources.cls.fr/documents/PUM/CMEMS-INS-PUM-013.pdf)
- aimed [hosting distribution unit](https://github.com/CopernicusMarineInsitu/INSTACTraining/blob/master/tips/README.md)
- aimed [index file](https://github.com/CopernicusMarineInsitu/INSTACTraining/blob/master/tips/README.md)
i.e.:
###Code
user = # type your CMEMS user name within quotes
password = # type your CMEMS password within quotes
product_name = 'INSITU_BAL_NRT_OBSERVATIONS_013_032' #type aimed CMEMS in situ product
distribution_unit = 'cmems.smhi.se' #type aimed hosting institution
index_file = 'index_latest.txt' #type aimed index file name
###Output
_____no_output_____
###Markdown
DOWNLOAD 1. Index file download
###Code
import ftplib
ftp=ftplib.FTP(distribution_unit,user,password)
ftp.cwd("Core")
ftp.cwd(product_name)
remote_filename= index_file
local_filename = remote_filename
local_file = open(local_filename, 'wb')
ftp.retrbinary('RETR ' + remote_filename, local_file.write)
local_file.close()
ftp.quit()
#ready when 221 Goodbye.!
###Output
_____no_output_____
###Markdown
QUICK VIEW
###Code
import numpy as np
import pandas as pd
from random import randint
index = np.genfromtxt(index_file, skip_header=6, unpack=False, delimiter=',', dtype=None,
names=['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',
'geospatial_lon_min', 'geospatial_lon_max',
'time_coverage_start', 'time_coverage_end',
'provider', 'date_update', 'data_mode', 'parameters'])
dataset = randint(0,len(index)) #ramdom line of the index file
values = [index[dataset]['catalog_id'], '<a href='+index[dataset]['file_name']+'>'+index[dataset]['file_name']+'</a>', index[dataset]['geospatial_lat_min'], index[dataset]['geospatial_lat_max'],
index[dataset]['geospatial_lon_min'], index[dataset]['geospatial_lon_max'], index[dataset]['time_coverage_start'],
index[dataset]['time_coverage_end'], index[dataset]['provider'], index[dataset]['date_update'], index[dataset]['data_mode'],
index[dataset]['parameters']]
headers = ['catalog_id', 'file_name', 'geospatial_lat_min', 'geospatial_lat_max',
'geospatial_lon_min', 'geospatial_lon_max',
'time_coverage_start', 'time_coverage_end',
'provider', 'date_update', 'data_mode', 'parameters']
df = pd.DataFrame(values, index=headers, columns=[dataset])
df.style
###Output
_____no_output_____
###Markdown
FILTERING CRITERIA
###Code
from shapely.geometry import box, multipoint
import shapely
###Output
_____no_output_____
###Markdown
Regarding the above glimpse, it is possible to filter by 12 criteria. As an example, we will next set up a filter to download only those files that contain data within a defined bounding box.
1. Aimed bounding box
###Code
targeted_geospatial_lat_min = 55.0 # enter min latitude of your bounding box
targeted_geospatial_lat_max = 70.0 # enter max latitude of your bounding box
targeted_geospatial_lon_min = 12.0 # enter min longitude of your bounding box
targeted_geospatial_lon_max = 26.00 # enter max longitude of your bounding box
targeted_bounding_box = box(targeted_geospatial_lon_min, targeted_geospatial_lat_min, targeted_geospatial_lon_max, targeted_geospatial_lat_max)
###Output
_____no_output_____
###Markdown
2. netCDF filtering/selection
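Before running the full loop, here is the geometric test applied below in isolation (toy coordinates, assuming the same `shapely` objects as above):
```python
from shapely.geometry import box
target = box(12.0, 55.0, 26.0, 70.0)          # lon_min, lat_min, lon_max, lat_max
candidate = box(18.0, 60.0, 19.0, 61.0)       # one file's spatial coverage
print(target.contains(candidate.centroid))    # True -> this file would be selected
```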
###Code
selected_netCDFs = [];
for netCDF in index:
file_name = netCDF['file_name']
geospatial_lat_min = float(netCDF['geospatial_lat_min'])
geospatial_lat_max = float(netCDF['geospatial_lat_max'])
geospatial_lon_min = float(netCDF['geospatial_lon_min'])
geospatial_lon_max = float(netCDF['geospatial_lon_max'])
bounding_box = shapely.geometry.box(geospatial_lon_min, geospatial_lat_min, geospatial_lon_max, geospatial_lat_max)
bounding_box_centroid = bounding_box.centroid
if (targeted_bounding_box.contains(bounding_box_centroid)):
selected_netCDFs.append(file_name)
print("total: " +str(len(selected_netCDFs)))
###Output
total: 2336
###Markdown
SELECTION DOWNLOAD
###Code
for nc in selected_netCDFs:
last_idx_slash = nc.rfind('/')
ncdf_file_name = nc[last_idx_slash+1:]
folders = nc.split('/')[3:len(nc.split('/'))-1]
host = nc.split('/')[2] #or distribution unit
ftp=ftplib.FTP(host,user,password)
for folder in folders:
ftp.cwd(folder)
local_file = open(ncdf_file_name, 'wb')
ftp.retrbinary('RETR '+ncdf_file_name, local_file.write)
local_file.close()
ftp.quit()
###Output
_____no_output_____ |
notebooks/not_for_release/1.0-jmb-event-identification.ipynb | ###Markdown
GDELT Pilot John Brandt Data Acquisition with BigQuery
###Code
import pandas as pd
import os
from google.cloud import bigquery
client = bigquery.Client()
# The [project:dataset.table@-ms-] decorator below is legacy-SQL syntax
job_config = bigquery.QueryJobConfig(use_legacy_sql=True)
query = (
    "SELECT SourceCommonName, Amounts, V2Locations, V2Organizations, V2Themes FROM [gdelt-bq:gdeltv2.gkg@-604800000-] "
    'WHERE Amounts LIKE "%trees%" '
    'AND Amounts LIKE "%planted%"'
)
query_job = client.query(
    query,
    location="US",
    job_config=job_config,
)
for row in query_job:  # API request - fetches results
    # Row values can be accessed by field name or index
    assert row[0] == row.SourceCommonName == row["SourceCommonName"]
    print(row)
###Output
_____no_output_____
###Markdown
Load in data
Because each BigQuery call costs about \$0.25 USD, I will load in a CSV during each jupyter session. This notebook currently works with weekly references to tree plantings, but will be functionalized to work with other event detections in the future.
###Code
files = os.listdir("../data/external")
data = pd.read_csv("../data/external/" + files[2])
###Output
_____no_output_____
###Markdown
Locating events
In order to tie events to locations and organizations, we assign each event the location referenced closest to it in the text. For each event detected, create a dictionary of the form {index : (number, action)}. This will be matched against a similar location dictionary of the form {index : location} by a search over the character offsets.
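The matching itself reduces to a nearest-offset lookup; a toy illustration of that step (hypothetical character offsets):
```python
ref_idx = 120                                   # character offset of the detected amount
loc_idxs = [40, 110, 300]                       # offsets of the candidate locations
closest = min(loc_idxs, key=lambda x: abs(x - ref_idx))
print(closest)   # 110 -> that location gets assigned to the event
```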
###Code
def locate_event(i):
#print("\nData point {}".format(i))
amount = data.iloc[i, 1]
locs = [x for x in str(data.iloc[i, 2]).split(";")]
refs = ([x for x in amount.split(";") if "tree" in x]) # Split up the references into value, action, index
refs = ([x for x in refs if "plant" in x])
values, actions, indexes = [], [], []
# Parse into separate lists
# Generate key, (number, action) dictionary for each entry
for ref in refs:
parsed = ref.split(",")
values.append(int(parsed[0]))
actions.append(parsed[1])
indexes.append(int(parsed[2]))
refs = dict(zip(indexes, zip(values, actions))) # {index: (number, action)}
locs_dict = {}
for loc in locs: # Generate key, value pair for each location in each entry
dict_i = {}
locs_dict.update( { loc.split("#")[-1] : loc.split("#")[:-1] }) # {index : location}
if list(locs_dict.keys()) == ['nan']: # if no location, return null
return None, None
if len(list(refs.keys())) == 0: # if no references, return null
return None, None
refs_idx = [int(x) for x in list(refs.keys())][0]
locs_idx = [int(x) for x in list(locs_dict.keys())]
loc_idx = min(locs_idx, key=lambda x:abs(x-refs_idx))
location = locs_dict.get(str(loc_idx))
return refs, location
for i in range(105, 112):
refs, location = locate_event(i)
print("Reference {}: \n {} \n {}\n".format(i, refs, location))
# TODO: merge refs, location into dataframe
# TODO: join original themes, locations, events, and people to above DF
# TODO: confidence
# FIXME: Why do some of them not have references?
# TODO: Implement summary statistics of matched ref, loc
# TODO: Geocode proposed loc
# TODO: Event isolation / deduplication
# TODO: Develop SVM / RandomForests classifier for (False positive / Planned / Implemented)
# TODO: Port to REST API
# TODO: Attach confidence to each point (define an algorithm)
# TODO: Export to leaflet dashboard
###Output
_____no_output_____ |
week06_rnn/week0_10_Names_generation_from_scratch.ipynb | ###Markdown
seq2seq practice: Generating names with recurrent neural networks
This time you'll find yourself delving into the heart (and other intestines) of recurrent neural networks on a class of toy problems.
Struggle to find a name for a variable? Let's see how you'll come up with a name for your son or daughter. Surely no human has expertise over what is a good child name, so let us train an RNN instead.
It's dangerous to go alone, take these:
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
###Output
_____no_output_____
###Markdown
Our data
The dataset contains ~8k earthling names from different cultures, all in Latin transcript.
This notebook has been designed to allow you to quickly swap names for something similar: deep learning article titles, IKEA furniture, pokemon names, etc.
###Code
start_token = " "
def read_names(path_to_file):
global start_token
with open(path_to_file) as f:
names = f.read()[:-1].split('\n')
names = [start_token + line for line in names]
return names
try:
names = read_names('names')
except FileNotFoundError:
!wget https://raw.githubusercontent.com/girafe-ai/ml-mipt/master/datasets/names_dataset/names -nc -O names
names = read_names('./names')
print ('n samples = ',len(names))
for x in names[::1000]:
print (x)
MAX_LENGTH = max(map(len, names))
print("max length =", MAX_LENGTH)
plt.title('Sequence length distribution')
plt.hist(list(map(len, names)),bins=25);
###Output
max length = 16
###Markdown
Text processing
First we need to collect a "vocabulary" of all unique tokens, i.e. unique characters. We can then encode inputs as a sequence of character ids.
###Code
tokens = list(set("".join(names)))
num_tokens = len(tokens)
print ('num_tokens = ', num_tokens)
assert 50 < num_tokens < 60, "Names should contain within 50 and 60 unique tokens depending on encoding"
###Output
num_tokens = 55
###Markdown
Convert characters to integers
Torch is built for crunching numbers, not strings. To train our neural network, we'll need to replace characters with their indices in the tokens list.
Let's compose a dictionary that does this mapping.
###Code
token_to_id = {tokens[i]:i for i in range(len(tokens))}
assert len(tokens) == len(token_to_id), "dictionaries must have same size"
for i in range(num_tokens):
assert token_to_id[tokens[i]] == i, "token identifier must be it's position in tokens list"
print("Seems alright!")
def to_matrix(names, max_len=None, pad=token_to_id[' '], dtype='int32', batch_first = True):
"""Casts a list of names into rnn-digestable matrix"""
max_len = max_len or max(map(len, names))
names_ix = np.zeros([len(names), max_len], dtype) + pad
for i in range(len(names)):
line_ix = [token_to_id[c] for c in names[i]]
names_ix[i, :len(line_ix)] = line_ix
if not batch_first: # convert [batch, time] into [time, batch]
names_ix = np.transpose(names_ix)
return names_ix
names[:2]
#Example: cast 4 random names to matrices, pad with zeros
print('\n'.join(names[::2000]))
print(to_matrix(names[::2000]))
###Output
Abagael
Glory
Prissie
Giovanne
[[51 11 31 18 35 18 22 49 51]
[51 14 49 54 37 9 51 51 51]
[51 1 37 23 6 6 23 22 51]
[51 14 23 54 50 18 17 17 22]]
###Markdown
Recurrent neural network
We can rewrite a recurrent neural network as a consecutive application of a dense layer to the input $x_t$ and the previous RNN state $h_t$. This is exactly what we're going to do now.
Since we're training a language model, there should also be:
* An embedding layer that converts character id $x_t$ to a vector.
* An output layer that predicts probabilities of the next character.
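In symbols, one step of the cell implemented below is (with $W$, $b$ the parameters of the concatenated linear layer and $V$, $c$ those of the output layer):
$$ h_t = \tanh\big(W\,[\mathrm{emb}(x_t);\,h_{t-1}] + b\big), \qquad \log p(x_{t+1}\mid h_t) = \log\mathrm{softmax}(V h_t + c) $$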
###Code
import torch, torch.nn as nn
import torch.nn.functional as F
class CharRNNCell(nn.Module):
"""
Implement the scheme above as torch module
"""
def __init__(self, num_tokens=len(tokens), embedding_size=16, rnn_num_units=64):
super(self.__class__,self).__init__()
self.num_units = rnn_num_units
self.embedding = nn.Embedding(num_tokens, embedding_size)
self.rnn_update = nn.Linear(embedding_size + rnn_num_units, rnn_num_units)
self.rnn_to_logits = nn.Linear(rnn_num_units, num_tokens)
def forward(self, x, h_prev):
"""
This method computes h_next(x, h_prev) and log P(x_next | h_next)
We'll call it repeatedly to produce the whole sequence.
:param x: batch of character ids, containing vector of int64
:param h_prev: previous rnn hidden states, containing matrix [batch, rnn_num_units] of float32
"""
# get vector embedding of x
x_emb = self.embedding(x)
# compute next hidden state using self.rnn_update
# hint: use torch.cat(..., dim=...) for concatenation
x_and_h = torch.cat([x_emb, h_prev], dim=1)
h_next = self.rnn_update(x_and_h)
h_next = torch.tanh(h_next)
assert h_next.size() == h_prev.size()
#compute logits for next character probs
logits = F.log_softmax(self.rnn_to_logits(h_next), -1)
return h_next, logits
def initial_state(self, batch_size):
""" return rnn state before it processes first input (aka h0) """
return torch.zeros(batch_size, self.num_units, requires_grad=True)
char_rnn = CharRNNCell()
criterion = nn.NLLLoss()
###Output
_____no_output_____
###Markdown
RNN loop
Once we've defined a single RNN step, we can apply it in a loop to get predictions on each step.
###Code
def rnn_loop(char_rnn, batch_ix):
"""
Computes log P(next_character) for all time-steps in names_ix
:param names_ix: an int32 matrix of shape [batch, time], output of to_matrix(names)
"""
batch_size, max_length = batch_ix.size()
hid_state = char_rnn.initial_state(batch_size)
logprobs = []
for x_t in batch_ix.transpose(0,1):
hid_state, logits = char_rnn(x_t, hid_state) # <-- here we call your one-step code
logprobs.append(F.log_softmax(logits, -1))
return torch.stack(logprobs, dim=1)
batch_ix = to_matrix(names[:5])
batch_ix = torch.tensor(batch_ix, dtype=torch.int64)
logp_seq = rnn_loop(char_rnn, batch_ix)
assert torch.max(logp_seq).data.numpy() <= 0
assert tuple(logp_seq.size()) == batch_ix.shape + (num_tokens,)
###Output
_____no_output_____
###Markdown
Likelihood and gradients
We can now train our neural network to minimize crossentropy (maximize log-likelihood) with the actual next tokens.
To do so in a vectorized manner, we take `batch_ix[:, 1:]` - a matrix of token ids shifted one step to the left, so the i-th element is actually the "next token" for the i-th prediction
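A tiny illustration of that one-step shift (hypothetical token ids):
```python
batch = [[5, 3, 9, 2]]                    # e.g. " a b c" as ids
inputs = [row[:-1] for row in batch]      # predictions are made at positions [5, 3, 9]
targets = [row[1:] for row in batch]      # and compared against              [3, 9, 2]
print(inputs, targets)
```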
###Code
predictions_logp = logp_seq[:, :-1]
actual_next_tokens = batch_ix[:, 1:]
# .contiguous() method checks that tensor is stored in the memory correctly to
# get its view of desired shape.
loss = criterion(predictions_logp.contiguous().view(-1, num_tokens),
actual_next_tokens.contiguous().view(-1))
loss.backward()
for w in char_rnn.parameters():
assert w.grad is not None and torch.max(torch.abs(w.grad)).data.numpy() != 0, \
"Loss is not differentiable w.r.t. a weight with shape %s. Check forward method." % (w.size(),)
###Output
_____no_output_____
###Markdown
The training loop
We train our char-rnn exactly the same way we train any deep learning model: by minibatch SGD.
The only difference is that this time we sample strings, not images or sound.
###Code
from IPython.display import clear_output
from random import sample
char_rnn = CharRNNCell()
criterion = nn.NLLLoss()
opt = torch.optim.Adam(char_rnn.parameters())
history = []
MAX_LENGTH = 16
for i in range(1000):
batch_ix = to_matrix(sample(names, 32), max_len=MAX_LENGTH)
batch_ix = torch.tensor(batch_ix, dtype=torch.int64)
logp_seq = rnn_loop(char_rnn, batch_ix)
# compute loss
predictions_logp = logp_seq[:, :-1]
actual_next_tokens = batch_ix[:, 1:]
loss = criterion(predictions_logp.contiguous().view(-1, num_tokens),
actual_next_tokens.contiguous().view(-1))
# train with backprop
# YOUR CODE HERE
loss.backward()
opt.step()
opt.zero_grad()
history.append(loss.data.numpy())
if (i+1)%100==0:
clear_output(True)
plt.plot(history,label='loss')
plt.legend()
plt.show()
assert np.mean(history[:10]) > np.mean(history[-10:]), "RNN didn't converge."
###Output
_____no_output_____
###Markdown
RNN: sampling
Once we've trained our network a bit, let's get to actually generating stuff. All we need is the single RNN step function you have defined in `char_rnn.forward`.
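The `temperature` knob rescales the logits before the softmax, $$ p_\tau(i) = \frac{\exp(z_i/\tau)}{\sum_j \exp(z_j/\tau)}, $$ so a small $\tau$ approaches greedy (argmax) sampling while a large $\tau$ flattens the distribution towards uniform.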
###Code
def generate_sample(char_rnn, seed_phrase=' ', max_length=MAX_LENGTH, temperature=1.0):
'''
The function generates text given a phrase of length at least SEQ_LENGTH.
:param seed_phrase: prefix characters. The RNN is asked to continue the phrase
:param max_length: maximum output length, including seed_phrase
:param temperature: coefficient for sampling. higher temperature produces more chaotic outputs,
smaller temperature converges to the single most likely output
'''
# convert to list of numbers
x_sequence = [token_to_id[token] for token in seed_phrase]
# convert to tensor
x_sequence = torch.tensor([x_sequence], dtype=torch.int64)
hid_state = char_rnn.initial_state(batch_size=1)
#feed the seed phrase, if any
for i in range(len(seed_phrase) - 1):
hid_state, _ = char_rnn(x_sequence[:, i], hid_state)
#start generating
for _ in range(max_length - len(seed_phrase)):
hid_state, logits = char_rnn(x_sequence[:, -1], hid_state)
p_next = F.softmax(logits / temperature, dim=-1).data.numpy()[0]
# sample next token and push it back into x_sequence
        # i.e. choose a random index according to the probabilities we obtained
next_ix = np.random.choice(num_tokens,p=p_next)
next_ix = torch.tensor([[next_ix]], dtype=torch.int64)
x_sequence = torch.cat([x_sequence, next_ix], dim=1)
return ''.join([tokens[ix] for ix in x_sequence.data.numpy()[0]])
for _ in range(10):
print(generate_sample(char_rnn, temperature=1.1))
for _ in range(50):
print(generate_sample(char_rnn, seed_phrase=' Deb'))
###Output
Deboll
Debana
Deberela
Debhror
Debana
Debice
Debere
Debam
Debise
Debere
Debily
Debrein
Debk
Debiy
Debey
Debcel
Debo
Debashel
Debelea
Debnotecie
Debsotuenonli
Debina
Debmerle
Debian
Debelheeteen
Deber
Debon
Debel
Debnory
Debane
Debened
Debva
Debhie
Debbi
Deb
Debe
Deb
Debig
Debmera
Debanne
Deby
Debii
Debi
Debe
Debetmino
Deben
Deboro
Deble
Debtiph
Debrtie
###Markdown
More seriously
What we just did is a manual low-level implementation of an RNN. While it's cool, I guess you won't like the idea of re-writing it from scratch on every occasion. As you might have guessed, torch has a solution for this. To be more specific, there are two options:
* `nn.RNNCell(emb_size, rnn_num_units)` - implements a single step of an RNN just like you did. Basically concat-linear-tanh
* `nn.RNN(emb_size, rnn_num_units)` - implements the whole rnn_loop for you.
There's also `nn.LSTMCell` vs `nn.LSTM`, `nn.GRUCell` vs `nn.GRU`, etc.
In this example we'll rewrite the char_rnn and rnn_loop using the high-level RNN API.
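As a rough, untrained sketch of the shapes involved when `nn.RNN` replaces the manual loop (toy sizes, already-embedded inputs):
```python
import torch, torch.nn as nn
emb_size, hid, T, B = 16, 64, 10, 4
rnn = nn.RNN(emb_size, hid, batch_first=True)
x = torch.randn(B, T, emb_size)      # a batch of embedded sequences
h_seq, h_last = rnn(x)               # h_seq: [B, T, hid], h_last: [1, B, hid]
print(h_seq.shape, h_last.shape)
```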
###Code
class CharRNNLoop(nn.Module):
def __init__(self, num_tokens=num_tokens, emb_size=16, rnn_num_units=64):
super(self.__class__, self).__init__()
self.emb = nn.Embedding(num_tokens, emb_size)
self.rnn = nn.LSTM(emb_size, rnn_num_units, batch_first=True)
self.hid_to_logits = nn.Linear(rnn_num_units, num_tokens)
def forward(self, x):
assert isinstance(x.data, torch.LongTensor)
h_seq, _ = self.rnn(self.emb(x))
next_logits = self.hid_to_logits(h_seq)
next_logp = F.log_softmax(next_logits, dim=-1)
return next_logp
model = CharRNNLoop()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
history = []
# the model applies over the whole sequence
batch_ix = to_matrix(sample(names, 32), max_len=MAX_LENGTH)
batch_ix = torch.LongTensor(batch_ix)
logp_seq = model(batch_ix)
loss = criterion(logp_seq[:, :-1].contiguous().view(-1, num_tokens),
batch_ix[:, 1:].contiguous().view(-1))
loss.backward()
MAX_LENGTH = 16
for i in range(10):
batch_ix = to_matrix(sample(names, 32), max_len=MAX_LENGTH)
batch_ix = torch.tensor(batch_ix, dtype=torch.int64)
logp_seq = model(batch_ix)
# compute loss
# YOUR CODE HERE
loss = criterion(logp_seq[:, :-1].contiguous().view(-1, num_tokens),
batch_ix[:, 1:].contiguous().view(-1))
# print(logp_seq)
# print(batch_ix)
loss.backward()
opt.step()
opt.zero_grad()
# train with backprop
# YOUR CODE HERE
history.append(loss.data.numpy())
if (i+1)%100==0:
clear_output(True)
plt.plot(history,label='loss')
plt.legend()
plt.show()
assert np.mean(history[:10]) > np.mean(history[-10:]), "RNN didn't converge."
def generate_sample(char_rnn=model, seed_phrase=' ', max_length=MAX_LENGTH, temperature=1.0):
'''
The function generates text given a phrase of length at least SEQ_LENGTH.
:param seed_phrase: prefix characters. The RNN is asked to continue the phrase
:param max_length: maximum output length, including seed_phrase
:param temperature: coefficient for sampling. higher temperature produces more chaotic outputs,
smaller temperature converges to the single most likely output
'''
x_sequence = [token_to_id[token] for token in seed_phrase]
x_sequence = torch.tensor([x_sequence], dtype=torch.int64)
#start generating
for _ in range(4):
p_next = char_rnn(x_sequence).data.numpy()
# print(p_next[0][-1:][0], x_sequence)
p_next = torch.tensor(p_next[0][-1:][0])
p_next = torch.softmax(p_next/ temperature, dim=-1)
# sample next token and push it back into x_sequence
# выбираем рандомный номер из распределения вероятностей что мы получили
next_ix = np.random.choice(num_tokens,p=p_next.data.numpy())
next_ix = torch.tensor([[next_ix]], dtype=torch.int64)
x_sequence = torch.cat([x_sequence, next_ix], dim=1)
return ''.join([tokens[ix] for ix in x_sequence.data.numpy()[0]])
# generate_sample(seed_phrase=" ")
for _ in range(10):
print(generate_sample(seed_phrase=' Em', temperature=1) )
print(generate_sample(seed_phrase=" "))
print(generate_sample(seed_phrase=" "))
print(generate_sample(seed_phrase=" "))
print(generate_sample(seed_phrase=" "))
###Output
Jomi
Lend
Tran
Romo
|
.ipynb_checkpoints/qam-checkpoint.ipynb | ###Markdown
Quadrature Amplitude Modulation
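The passband signal built below is the real part of the complex baseband symbols mixed up to the carrier: $$ s(t) = \mathrm{Re}\{ (I + jQ)\, e^{j 2\pi f_0 t} \} = I\cos(2\pi f_0 t) - Q\sin(2\pi f_0 t), $$ which corresponds to the `np.real(M.ravel()*np.exp(1j*2*np.pi*f0*t))` line in the code (up to the normalization factor).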
###Code
import numpy as np
import matplotlib.pyplot as plt
fs = 44100 # sampling rate
baud = 300 # symbol rate
Nbits = 10 # number of bits
Ns = fs//baud
f0 = 1800
#code = { 2: -2+2j, 6: -1+2j, 14: 1+2j, 10: 2+2j,
# 3: -2+1j, 7: -1-1j, 15: 1+1j, 11: 2+1j,
# 1: -2-1j, 5: -1-1j, 13: 1-1j, 9: 2-1j,
# 0: -2-2j, 4: -1-2j, 12: 1-2j, 8: 2-2j}
Nbits = 16 # number of bits
N = Nbits * Ns
code = np.array((-2-2j, -2-1j,-2+2j,-2+1j,-1-2j,-1-1j,-1+2j,-1+1j,+2-2j,+2-1j,+2+2j,+2+1j,1-2j,+1-1j,1+2j,1+1j))/2
np.random.seed(seed=1)
bits = np.int16(np.random.rand(Nbits,1)*16)
bits
plt.scatter(code.real, code.imag, color='red')
M = np.tile(code[bits],(1,Ns))
print(M.shape)
print(M)
t = np.r_[0.0:N]/fs
t
QAM = np.real(M.ravel()*np.exp(1j*2*np.pi*f0*t))/np.sqrt(2)/2
QAM
plt.figure(figsize = (16,4))
plt.plot(t,QAM.real)
plt.xlabel('time [s]')
plt.title("QAM=16 of the sequence:"+ np.array2string(np.transpose(bits)))
###Output
_____no_output_____ |
course-exercises/week4-Clustering-and-Similarity/Class-examples.ipynb | ###Markdown
Document retrieval from Wikipedia data
###Code
import turicreate
###Output
_____no_output_____
###Markdown
Load some text data from Wikipedia
###Code
people = turicreate.SFrame('../../data/people_wiki.sframe')
people
###Output
_____no_output_____
###Markdown
Explore data Taking a look at the entry for President Obama
###Code
obama = people[people['name'] == 'Barack Obama']
obama
obama['text']
###Output
_____no_output_____
###Markdown
Explore the entry for actor George Clooney
###Code
clooney = people[people['name'] == 'George Clooney']
clooney['text']
###Output
_____no_output_____
###Markdown
Word counts for Obama article
###Code
obama['word_count'] = turicreate.text_analytics.count_words(obama['text'])
obama
print (obama['word_count'])
###Output
[{'cuba': 1, 'relations': 1, 'sought': 1, 'combat': 1, 'ending': 1, 'withdrawal': 1, 'state': 1, 'islamic': 1, 'by': 1, 'gains': 1, 'unconstitutional': 1, '8': 1, 'marriage': 1, 'down': 1, 'urged': 1, 'process': 1, 'which': 1, 'filed': 1, 'administration': 1, 'while': 1, 'americans': 1, 'full': 1, 'lgbt': 1, 'called': 1, 'proposition': 1, 'shooting': 1, 'elementary': 1, 'gun': 1, 'related': 1, 'policies': 1, 'promoted': 1, 'supreme': 1, 'has': 4, '2012': 1, 'reelected': 1, '2012obama': 1, 'continued': 1, 'budget': 1, 'californias': 1, 'limit': 1, 'nations': 1, 'briefs': 1, 'not': 1, 'or': 1, 'lengthy': 1, '63': 1, 'lost': 1, '2011': 3, 'laden': 1, 'against': 1, 'osama': 1, 'obamacare': 1, 'that': 1, 'defense': 1, 'national': 2, 'operation': 1, 'ordered': 3, 'made': 1, 'earning': 1, 'control': 4, 'victory': 1, 'arms': 1, 'afghanistan': 2, 'economic': 1, 'three': 1, 'troop': 1, 'primaries': 1, 'war': 1, 'involvement': 3, 'with': 3, 'foreign': 2, 'relief': 2, 'repeal': 1, 'convention': 1, 'street': 1, 'referred': 1, 'military': 4, 'constitutional': 1, 'consumer': 1, 'care': 1, 'major': 1, 'patient': 1, 'start': 1, 'whether': 1, 'federal': 1, 'term': 3, 'initiatives': 1, 'ask': 1, 'domestic': 2, 'increased': 1, 'other': 1, 'over': 1, 'democratic': 4, 'creation': 1, 'romney': 1, 'job': 1, 'school': 3, 'russia': 1, 'reauthorization': 1, 'degree': 1, 'insurance': 1, 'unemployment': 1, 'normalize': 1, 'tax': 1, 'reinvestment': 1, 'recovery': 1, 'form': 1, 'into': 1, 'recession': 1, 'libya': 1, 'states': 3, 'great': 1, 'sandy': 1, 'legislation': 1, 'stimulus': 1, 'mitt': 1, 'levels': 1, 'reform': 1, '1961': 1, 'as': 6, 'doddfrank': 1, 'operations': 1, 'sworn': 1, 'years': 1, 'laureateduring': 1, 'prize': 1, '1996': 1, 'named': 1, 'nine': 1, '2009': 3, '20': 2, 'included': 1, 'began': 1, 'on': 2, '13th': 1, 'inaugurated': 1, 'to': 14, 'general': 1, 'second': 2, 'rodham': 1, 'two': 1, 'john': 1, 'address': 1, 'sufficient': 1, 'protection': 2, 'hold': 1, 'won': 1, 'hillary': 1, 'born': 2, 'close': 1, 'after': 4, 'a': 7, 'treaty': 1, 'seats': 1, 'november': 2, 'election': 3, 'wall': 1, 'presidential': 2, 'often': 1, 'july': 1, 'organizer': 1, 'primary': 2, 'nobel': 1, 'party': 3, 'court': 1, 'march': 1, '1992': 1, 'bm': 1, 'keynote': 1, 'campaign': 3, '2000in': 1, 'clinton': 1, 'during': 2, 'ended': 1, 'he': 7, 'house': 2, 'bin': 1, 'and': 21, 'senate': 3, 'attention': 1, 'district': 1, 'new': 1, 'worked': 1, 'representing': 1, 'mccain': 1, 'current': 1, 'january': 3, 'terms': 1, '2004': 3, '2010': 2, '2008': 1, 'from': 3, 'months': 1, 'policy': 2, 'at': 2, 'strike': 1, 'for': 4, 'civil': 1, 'taught': 1, 'brk': 1, 'attorney': 1, 'running': 1, 'before': 1, 'then': 1, 'chicago': 2, '4': 1, 'community': 1, '1997': 1, 'resulted': 1, 'debt': 1, 'illinois': 2, 'harvard': 2, 'us': 6, 'review': 1, 'nomination': 1, 'taxpayer': 1, 'death': 1, 'law': 6, 'husen': 1, 'dont': 2, 'spending': 1, 'served': 2, 'equality': 1, 'iraq': 4, 'where': 1, 'columbia': 1, 'unsuccessfully': 1, 'was': 5, 'hawaii': 1, 'total': 1, '2007': 1, 'hook': 1, 'debate': 1, 'honolulu': 1, 'raise': 1, 'barack': 1, 'tell': 1, 'affordable': 1, 'first': 3, 'obama': 9, 'hussein': 1, '2013': 1, 'republicans': 1, 'in': 30, 'office': 2, 'american': 3, 'graduate': 1, 'is': 2, 'peace': 1, 'regained': 1, 'represent': 1, 'defeating': 1, 'signed': 3, 'president': 4, 'united': 3, 'african': 1, 'representatives': 2, 'nominee': 2, '44th': 1, 'delegates': 1, 'the': 40, 'his': 11, 'august': 1, 'republican': 2, 'defeated': 1, 'university': 2, 'received': 1, 
'of': 18, 'receive': 1, 'response': 3, 'ii': 1, 'act': 8, 'rights': 1}]
###Markdown
Find most common words in Obama article
###Code
obama.stack('word_count',new_column_name=['word','count'])
obama_word_count_table = obama[['word_count']].stack('word_count', new_column_name = ['word','count'])
obama_word_count_table
obama_word_count_table.sort('count',ascending=False)
###Output
_____no_output_____
###Markdown
Compute TF-IDF for the entire corpus of articles
###Code
people['word_count'] = turicreate.text_analytics.count_words(people['text'])
people
people['tfidf'] = turicreate.text_analytics.tf_idf(people['text'])
people
###Output
_____no_output_____
###Markdown
Examine the TF-IDF for the Obama article
###Code
obama = people[people['name'] == 'Barack Obama']
obama[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
###Output
_____no_output_____
###Markdown
Examine the TF-IDF for Clooney
###Code
clooney = people[people['name'] == 'George Clooney']
clooney[['tfidf']].stack('tfidf',new_column_name=['word','tfidf']).sort('tfidf',ascending=False)
###Output
_____no_output_____
###Markdown
Manually evaluate the distance between certain people's articles
###Code
clinton = people[people['name'] == 'Bill Clinton']
beckham = people[people['name'] == 'David Beckham']
###Output
_____no_output_____
###Markdown
Is Obama closer to Clinton or to Beckham?
###Code
turicreate.distances.cosine(obama['tfidf'][0],clinton['tfidf'][0])
turicreate.distances.cosine(obama['tfidf'][0],beckham['tfidf'][0])
###Output
_____no_output_____
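###Markdown
Since a smaller cosine distance means the articles are more similar, a short comparison of the two distances makes the answer explicit. A minimal sketch, reusing the `obama`, `clinton` and `beckham` SFrames defined above:
###Code
# Compare the two cosine distances directly: the smaller one means "closer"
d_clinton = turicreate.distances.cosine(obama['tfidf'][0], clinton['tfidf'][0])
d_beckham = turicreate.distances.cosine(obama['tfidf'][0], beckham['tfidf'][0])
print('Obama is closer to', 'Clinton' if d_clinton < d_beckham else 'Beckham')
###Output
_____no_output_____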
###Markdown
Apply nearest neighbors for retrieval of Wikipedia articles Build the NN model
###Code
knn_model = turicreate.nearest_neighbors.create(people,features=['tfidf'],label='name')
###Output
<unknown>:36: DeprecationWarning: invalid escape sequence \c
###Markdown
Use model for retrieval... for example, who is closest to Obama?
###Code
knn_model.query(obama)
###Output
_____no_output_____
###Markdown
Other examples of retrieval
###Code
swift = people[people['name'] == 'Taylor Swift']
knn_model.query(swift)
jolie = people[people['name'] == 'Angelina Jolie']
knn_model.query(jolie)
arnold = people[people['name'] == 'Arnold Schwarzenegger']
knn_model.query(arnold)
###Output
_____no_output_____ |
notebook/01_data_extract.ipynb | ###Markdown
1. Load data
###Code
# Imports assumed by the cells in this notebook (pandas/numpy for loading, seaborn for the pairplots)
import numpy as np
import pandas as pd
import seaborn as sns

# Load csv file
df_train = pd.read_csv('../data_csv/aug_train.csv')
df_test = pd.read_csv('../data_csv/aug_test.csv')
test_target = np.load('../data_csv/jobchange_test_target_values.npy')
test_target = pd.DataFrame(test_target,columns=['target'])
test_target['enrollee_id'] = df_test['enrollee_id']
test_target = test_target[['enrollee_id','target']]
test_target.to_csv('../data_csv/test_target.csv')
# test_target = pd.read_csv('../data_csv/test_target.csv')
test_target
# df
# Check each column
# terminal install: conda install -c conda-forge pandas-profiling
from pandas_profiling import ProfileReport as pr
profile = pr(df_train, minimal=True).to_notebook_iframe()
###Output
_____no_output_____
###Markdown
2. Examine and impute missing values
###Code
df_train.info()
# Pairplot
sns.pairplot(df_train, corner=True, height=1.5, plot_kws={'size': 3}, hue='target');
# Examine data
df_train['company_type'].value_counts()
df_train['enrolled_university'].value_counts()
df_train['education_level'].value_counts()
df_train['experience'].value_counts()
df_train['company_size'].value_counts()
df_train['company_type'].value_counts()
df_train['last_new_job'].value_counts()
# Replace string with float/int
df_train['experience'] = df_train['experience'].replace('>20','25')
df_train['experience'] = df_train['experience'].replace('<1','0.5')
df_train['experience'] = df_train['experience'].astype('float')
df_train['last_new_job'] = df_train['last_new_job'].replace('>4','5')
df_train['last_new_job'] = df_train['last_new_job'].replace('never','0')
# Impute/fill NaN
df_train['gender'] = df_train['gender'].replace(np.nan, 'unknown')
df_train['enrolled_university'] = df_train['enrolled_university'].replace(np.nan, 'unknown')
df_train['education_level'] = df_train['education_level'].replace(np.nan, 'unknown')
df_train['major_discipline'] = df_train['major_discipline'].replace(np.nan, 'unknown')
df_train['education_level'] = df_train['education_level'].replace(np.nan, 'unknown')
df_train['experience'] = df_train['experience'].fillna(value = df_train['experience'].median())
df_train['company_size'] = df_train['company_size'].fillna(value = df_train['company_size'].value_counts().index[0])
df_train['company_type'] = df_train['company_type'].replace(np.nan, 'unknown')
df_train['last_new_job'] = df_train['last_new_job'].fillna(value = df_train['last_new_job'].median()).astype('int')
df_train['target'] = df_train['target'].astype('int')
df_train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 19158 entries, 0 to 19157
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 enrollee_id 19158 non-null int64
1 city 19158 non-null object
2 city_development_index 19158 non-null float64
3 gender 19158 non-null object
4 relevent_experience 19158 non-null object
5 enrolled_university 19158 non-null object
6 education_level 19158 non-null object
7 major_discipline 19158 non-null object
8 experience 19158 non-null float64
9 company_size 19158 non-null object
10 company_type 19158 non-null object
11 last_new_job 19158 non-null int64
12 training_hours 19158 non-null int64
13 target 19158 non-null int64
dtypes: float64(2), int64(4), object(8)
memory usage: 2.0+ MB
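###Markdown
The same replace/impute steps are repeated verbatim for the test set in section 4, so they could be wrapped in a single helper. Below is a minimal sketch of that idea; `clean_jobchange_df` is a hypothetical name and simply mirrors the cleaning applied to `df_train` above:
###Code
# Hypothetical helper bundling the cleaning steps used in this notebook (not part of the original flow)
def clean_jobchange_df(df):
    df = df.copy()
    # Replace string categories with numeric values
    df['experience'] = df['experience'].replace({'>20': '25', '<1': '0.5'}).astype('float')
    df['last_new_job'] = df['last_new_job'].replace({'>4': '5', 'never': '0'}).astype('float')
    # Impute/fill NaN
    for col in ['gender', 'enrolled_university', 'education_level', 'major_discipline', 'company_type']:
        df[col] = df[col].replace(np.nan, 'unknown')
    df['experience'] = df['experience'].fillna(df['experience'].median())
    df['company_size'] = df['company_size'].fillna(df['company_size'].value_counts().index[0])
    df['last_new_job'] = df['last_new_job'].fillna(df['last_new_job'].median()).astype('int')
    df['target'] = df['target'].astype('int')
    return df
###Output
_____no_output_____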
###Markdown
3. Pickle
###Code
df_train.to_pickle('../dump/df_train.csv')
###Output
_____no_output_____
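###Markdown
A quick way to confirm the dump was written correctly is to read it straight back; a minimal check using the same path as above:
###Code
# Read the pickle back and check that it round-trips
df_check = pd.read_pickle('../dump/df_train.csv')
df_check.shape
###Output
_____no_output_____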
###Markdown
4. Repeat for test set Examine and impute missing values
###Code
df_test['target'] = test_target['target']  # assign only the target column (test_target also carries enrollee_id)
df_test.info()
# Pairplot
sns.pairplot(df_test, corner=True, height=1.5, plot_kws={'size': 3}, hue='target');
# Examine data (test set)
df_test['company_type'].value_counts()
df_test['enrolled_university'].value_counts()
df_test['education_level'].value_counts()
df_test['experience'].value_counts()
df_test['company_size'].value_counts()
df_test['company_type'].value_counts()
df_test['last_new_job'].value_counts()
# Replace string with float/int
df_test['experience'] = df_test['experience'].replace('>20','25')
df_test['experience'] = df_test['experience'].replace('<1','0.5')
df_test['experience'] = df_test['experience'].astype('float')
df_test['last_new_job'] = df_test['last_new_job'].replace('>4','5')
df_test['last_new_job'] = df_test['last_new_job'].replace('never','0')
# Impute/fill NaN
df_test['gender'] = df_test['gender'].replace(np.nan, 'unknown')
df_test['enrolled_university'] = df_test['enrolled_university'].replace(np.nan, 'unknown')
df_test['education_level'] = df_test['education_level'].replace(np.nan, 'unknown')
df_test['major_discipline'] = df_test['major_discipline'].replace(np.nan, 'unknown')
df_test['education_level'] = df_test['education_level'].replace(np.nan, 'unknown')
df_test['experience'] = df_test['experience'].fillna(value = df_test['experience'].median())
df_test['company_size'] = df_test['company_size'].fillna(value = df_test['company_size'].value_counts().index[0])
df_test['company_type'] = df_test['company_type'].replace(np.nan, 'unknown')
df_test['last_new_job'] = df_test['last_new_job'].fillna(value = df_test['last_new_job'].median()).astype('int')
df_test['target'] = df_test['target'].astype('int')
df_test.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 2129 entries, 0 to 2128
Data columns (total 14 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 enrollee_id 2129 non-null int64
1 city 2129 non-null object
2 city_development_index 2129 non-null float64
3 gender 2129 non-null object
4 relevent_experience 2129 non-null object
5 enrolled_university 2129 non-null object
6 education_level 2129 non-null object
7 major_discipline 2129 non-null object
8 experience 2129 non-null float64
9 company_size 2129 non-null object
10 company_type 2129 non-null object
11 last_new_job 2129 non-null int64
12 training_hours 2129 non-null int64
13 target 2129 non-null int64
dtypes: float64(2), int64(4), object(8)
memory usage: 233.0+ KB
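###Markdown
Before modelling, it is worth checking that the cleaned train and test frames share the same schema. A minimal check, assuming `df_train` and `df_test` as cleaned above:
###Code
# Confirm both cleaned frames have matching columns and dtypes
assert list(df_train.columns) == list(df_test.columns), 'column mismatch'
print((df_train.dtypes == df_test.dtypes).all())
###Output
_____no_output_____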
###Markdown
5. Pickle the test set
###Code
df_test.to_pickle('../dump/df_test.csv')
###Output
_____no_output_____ |
lung_cancer/control_group_analysis/tgi_koch_2009_reparametrised_model/population_inference/lognormal/find_maps.ipynb | ###Markdown
Infer Population Model Parameters from Individuals in Lung Cancer Control Group
###Code
import os
import arviz as az
import matplotlib.pyplot as plt
import numpy as np
import pints
from scipy.optimize import minimize, basinhopping
import xarray as xr
import erlotinib as erlo
###Output
_____no_output_____
###Markdown
Show control group data
###Code
# Get data
data = erlo.DataLibrary().lung_cancer_control_group()
# Create scatter plot
fig = erlo.plots.PDTimeSeriesPlot()
fig.add_data(data, biomarker='Tumour volume')
fig.set_axis_labels(xlabel=r'$\text{Time in day}$', ylabel=r'$\text{Tumour volume in cm}^3$')
# Show figure
fig.show()
###Output
_____no_output_____
###Markdown
**Figure 1:** Visualisation of the measured tumour growth in 8 mice with patient-derived lung cancer implants. Build model
###Code
# Define mechanistic model
path = erlo.ModelLibrary().tumour_growth_inhibition_model_koch_reparametrised()
mechanistic_model = erlo.PharmacodynamicModel(path)
mechanistic_model.set_parameter_names(names={
'myokit.tumour_volume': 'Tumour volume in cm^3',
'myokit.critical_volume': 'Critical volume in cm^3',
'myokit.drug_concentration': 'Drug concentration in mg/L',
'myokit.kappa': 'Potency in L/mg/day',
'myokit.lambda': 'Exponential growth rate in 1/day'})
mechanistic_model.set_output_names({
'myokit.tumour_volume': 'Tumour volume'})
# Define error model
error_model = erlo.ConstantAndMultiplicativeGaussianErrorModel()
# Define population model
population_model = [
erlo.LogNormalModel(), # Initial tumour volume
erlo.LogNormalModel(), # Critical tumour volume
erlo.LogNormalModel(), # Tumour growth rate
erlo.PooledModel(), # Base noise
erlo.PooledModel()] # Relative noise
# Build model
problem = erlo.ProblemModellingController(
mechanistic_model, error_model)
problem.fix_parameters({
'Drug concentration in mg/L': 0,
'Potency in L/mg/day': 0})
problem.set_population_model(population_model)
###Output
_____no_output_____
###Markdown
Prior predictive checks Population model
###Code
# Define prior distribution
log_priors = [
pints.TruncatedGaussianLogPrior(mean=0.1, sd=1, a=0, b=np.inf), # Mean Initial tumour volume
pints.TruncatedGaussianLogPrior(mean=1, sd=1, a=0, b=np.inf), # Std. Initial tumour volume
pints.TruncatedGaussianLogPrior(mean=1, sd=1, a=0, b=np.inf), # Mean Critical tumour volume
pints.TruncatedGaussianLogPrior(mean=1, sd=1, a=0, b=np.inf), # Std. Critical tumour volume
pints.TruncatedGaussianLogPrior(mean=0.1, sd=1, a=0, b=np.inf), # Mean Growth rate
pints.TruncatedGaussianLogPrior(mean=1, sd=1, a=0, b=np.inf), # Std. Growth rate
pints.TruncatedGaussianLogPrior(mean=0.1, sd=1, a=0, b=np.inf), # Pooled Sigma base
pints.TruncatedGaussianLogPrior(mean=0.1, sd=0.1, a=0, b=np.inf)] # Pooled Sigma rel.
log_prior = pints.ComposedLogPrior(*log_priors)
# Define prior predictive model
predictive_model = problem.get_predictive_model()
model = erlo.PriorPredictiveModel(predictive_model, log_prior)
# Sample from prior predictive model
seed = 42
n_samples = 100
times = np.linspace(0, 30)
samples = model.sample(times, n_samples, seed)
# Visualise prior predictive model
fig = erlo.plots.PDPredictivePlot()
fig.add_prediction(data=samples, bulk_probs=[0.3, 0.6, 0.9])
fig.set_axis_labels(xlabel=r'$\text{Time in day}$', ylabel=r'$\text{Tumour volume in cm}^3$')
fig.show()
###Output
_____no_output_____
###Markdown
**Figure 3:** Approximate prior predictive model for the tumour growth in a population over time. The shaded areas indicate the 30%, 60% and 90% bulk of the prior predictive model (from dark to light). The prior predictive model was approximated by sampling 100 parameters from the prior distribution, and subsequent sampling of 50 equidistant time points from the predictive model for each parameter set. Find maximum a posteriori estimates
###Code
# Define log-posterior (needed by `fun` and the optimisation cells below)
problem.set_data(data)
problem.set_log_prior(log_priors)
log_posterior = problem.get_log_posterior()
def fun(log_parameters):
score, sens = log_posterior.evaluateS1(np.exp(log_parameters))
return (-score, -sens)
# Run optimisation
initial_parameters = np.log(erlo.InferenceController(log_posterior)._initial_params[0, 0])
print(fun(initial_parameters))
result = minimize(fun=fun, x0=initial_parameters, method='L-BFGS-B', jac=True)
result
np.exp(result.x) # 408.5831950704941
np.exp(result.x) # 406.1479936996002
np.exp(result.x) # 219.54113709105013
np.exp(result.x) # 36.90877472832281
###Output
_____no_output_____
###Markdown
Running 3 times produces three vastly different results!
###Code
# Run optimisation
initial_parameters = np.log(erlo.InferenceController(log_posterior)._initial_params[0, 0])
minimizer_kwargs = {"method":"L-BFGS-B", "jac":True}
result = basinhopping(
func=fun, x0=initial_parameters, minimizer_kwargs=minimizer_kwargs, niter=10000)
result
np.exp(result.x) # -98.49277693232358
log_posterior.get_parameter_names(include_ids=True)
###Output
_____no_output_____ |
Wit-APPV3-final-backup.ipynb | ###Markdown
Table of Contents
###Code
from __future__ import unicode_literals
from youtube_search import YoutubeSearch
import youtube_dl
import os
from pydub import AudioSegment
from pydub.silence import split_on_silence
import speech_recognition as sr
import bs4 as bs
import urllib.request
import re
import nltk
import heapq
import pubchempy as pcp
import smtplib,ssl
import requests
import json
import pandas as pd
import time
import yagmail
from wit import Wit
chemicals=[]
ureceiver='[email protected]'
usender="[email protected]"
access_token_wit='FE5AVTCKI4WL7X2S4RCZPS4L7D53S5QP'
#question='Can I combine the benzene and methanol?'
question=input('Please ask a question about the chemical compound that you want to know about:')
def wit_request(question,access_token_wit):
client = Wit(access_token=access_token_wit)
resp=client.message(msg=question)
data_extraction=json.dumps(resp)
data=json.loads(data_extraction)
def depth(data):
if "entities" in data:
return 1 + max([0] + list(map(depth, data['entities'])))
else:
return 1
levels_with_entities=depth(data)
#print(json.dumps(data,indent=2,sort_keys=True))
def json_extract(obj,key):
arr=[]
def extract(obj,arr,key):
if isinstance(obj,dict):
for k,v in obj.items():
if isinstance(v,(dict,list)):
extract(v,arr,key)
elif v==key:
if obj not in arr:
arr.append(obj)
elif isinstance(obj,list):
for item in obj:
extract(item,arr,key)
return (arr)
values=extract(obj,arr,key)
return values
#get intents
intent=resp['intents'][0]['name']
#extract chemicals that wit.ai found
result_confirm=json_extract(data,'chemical_substance')
chemicals=[]
number_chemicals=len(result_confirm)
for q in range(number_chemicals):
chemicals.append(result_confirm[q]['value'])
#print(json.dumps(result_confirm,indent=2,sort_keys=True))
#print('result confirm:',chemicals,intent)#result_confirm)
return (chemicals,intent)
def summarizing_video(chemical_compound):
confirmation_video=""
summary=''
formatted_article_text=''
max_elements=1
#results=YoutubeSearch('Benzene',max_results=5).to_json()
results=YoutubeSearch(chemical_compound,max_results=max_elements).to_dict()
#print(results)
def validate_reply(confirmation_video):
confirmation_verified=''
if confirmation_video=='YES' or confirmation_video=='NO':
confirmation_verified=confirmation_video
return confirmation_verified
else:
print('Please confirm that you want me to transcribe it?')
confirmation_video=input('(yes/no):').upper()
return validate_reply(confirmation_video)
for i in range(max_elements):
url="https://www.youtube.com/watch?v="+results[i]['id']
title_video=results[i]['title']
duration=results[i]['duration']
views=results[i]['views']
print('I found this video, do you want me to transcribe it?\n')
print('****************')
print("Title: ",title_video)
print('Duration',duration)
print("Url",url)
print("Views",views)
print("***************")
confirmation_video=input('Transcribing video? (yes/no):').upper()
confirmation_verified=validate_reply(confirmation_video)
print('out',confirmation_verified)
if confirmation_verified=='YES':
print('in',confirmation_verified)
ydl_opts={
'format':'bestaudio/best',
'postprocessors': [{
'key':'FFmpegExtractAudio',
'preferredcodec':'wav',
'preferredquality':'192',
}],
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
ydl.download([url])
info_dict = ydl.extract_info(url)
fn = ydl.prepare_filename(info_dict)
path=fn[:-4]+"wav"
r=sr.Recognizer()
print("started..")
def get_large_audio_transcription(path):
sound=AudioSegment.from_wav(path)
chunks=split_on_silence(sound,
min_silence_len=500,
silence_thresh=sound.dBFS-14,
keep_silence=500,)
folder_name="audio-chunks"
if not os.path.isdir(folder_name):
os.mkdir(folder_name)
whole_text=""
for i,audio_chunk in enumerate(chunks,start=1):
chunk_filename=os.path.join(folder_name,f"chunk{i}.wav")
audio_chunk.export(chunk_filename,format="wav")
with sr.AudioFile(chunk_filename) as source:
audio_listened=r.record(source)
try:
text=r.recognize_google(audio_listened,language="en-US")
except sr.UnknownValueError as e:
pass
#print("Error:",str(e))
else:
text=f"{text.capitalize()}. "
#print(chunk_filename,":",text)
whole_text+=text
return whole_text
# (starting here:)#path="Audacity FFMpeg codec install for Windows-v2J6fT65Ydc.wav"
#print("\nFull text:",get_large_audio_transcription(path))
article_text=get_large_audio_transcription(path)
article_text=re.sub(r'\[[0-9]*\]',' ',article_text)
article_text=re.sub(r'\s+',' ',article_text)
            formatted_article_text=re.sub('[^a-zA-Z]',' ',article_text)
formatted_article_text=re.sub(r'\s+',' ',formatted_article_text)
#print(formatted_article_text) #final text from audio
print('*********************')
print("Summaryzing..")
#tokenization
sentence_list=nltk.sent_tokenize(article_text)
stopwords=nltk.corpus.stopwords.words('english')
word_frequencies={}
for word in nltk.word_tokenize(formatted_article_text):
if word not in stopwords:
if word not in word_frequencies.keys():
word_frequencies[word]=1
else:
word_frequencies[word]+=1
#print(list(map(str,word_frequencies)))
#word frequency
maximum_frequency=max(word_frequencies.values())
for word in word_frequencies.keys():
word_frequencies[word]=(word_frequencies[word]/maximum_frequency)
#print(word_frequencies)
#sentence score
sentence_scores={}
for sent in sentence_list:
for word in nltk.word_tokenize(sent.lower()):
if word in word_frequencies.keys():
if len(sent.split(' '))<50:
if sent not in sentence_scores.keys():
sentence_scores[sent]=word_frequencies[word]
else:
sentence_scores[sent]+=word_frequencies[word]
#top 7 most frequent sentences
summary_sentences=heapq.nlargest(10,sentence_scores,key=sentence_scores.get)
summary=' '.join(summary_sentences)
return (summary,formatted_article_text)
#find the email
def send_email(ureceiver,usender,body,result_content,email_title):
if not result_content:
result_content='No records found, please search with synonyms.'
receiver=ureceiver
body="Hello, Buddy!"
body+="This is an email with the information requested on the chat. Hope you find it hepful"
yag=yagmail.SMTP("[email protected]","VentaProduct51g1")
email_sent=yag.send(to=receiver,
subject=email_title,
contents=result_content)
if not email_sent:
email_confirmation='Email Sent'
else:
email_confirmation='Email Not Sent'
return email_confirmation
#find info safe storage
def info_safe_storage():
API_ENDPOINT='https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/'+str(cid)+'/JSON'
dat={}
#print(newintent)
headers = {'authorization': 'Bearer ','Content-Type': 'application/json'}
resp=requests.post(API_ENDPOINT,headers=headers,json=dat)
textt=json.loads(resp.content)
def json_extract(obj,key):
arr=[]
def extract(obj,arr,key):
if isinstance(obj,dict):
for k,v in obj.items():
if isinstance(v,(dict,list)):
extract(v,arr,key)
elif v==key:
arr.append(obj)
elif isinstance(obj,list):
for item in obj:
extract(item,arr,key)
return (arr)
values=extract(obj,arr,key)
return values
result_safe=json_extract(textt,'Information for Safe Storage')
result_storage=json_extract(textt,'Information for Storage Conditions')
#print(result_storage)
result_validate_safe=json_extract(result_safe,'Not Classified')
result_validate_storage=json_extract(result_storage,'Not Classified')
response_title='Handling and Storage Summary:\n'
if len(result_safe[0])==0 and len(result_storage[0])==0:
if result_validate_storage[0]['validate']=='Not Classified' and result_validate_safe[0]['validate']=='Not Classified':
response_api="There are are not records of hazard classification so that it may not be dangerous, please look for other professional resources"
response=response_title+response_api
else:
print('Continue')
handling_storage={}
safe_storage={}
for key,value in result_storage[0].items():
if value not in handling_storage.values():
handling_storage[key]=value
for key,value in result_safe[0].items():
if value not in safe_storage.values():
safe_storage[key]=value
#print(handling_storage)
#print(json.dumps(handling_storage,indent=2,sort_keys=True))
elements_storage=len(handling_storage['Information'])
elements_safe=len(safe_storage['Information'])
response_storage_res=""
response_safe_res=""
for i in range(elements_storage):
response_storage=handling_storage['Information'][i]['Value']['StringWithMarkup'][0]['String']
response_storage_res+=response_storage
for x in range(elements_safe):
response_safe=safe_storage['Information'][x]['Value']['StringWithMarkup'][0]['String']
response_safe_res+=response_safe
#print("---",safe_storage['Information'][x]['Value']['StringWithMarkup'][0]['String'])
response=response_storage_res+response_safe_res
#print(response_storage_res,"----",response_safe_res)
return response
#find toxicity documentation
def toxicity(chemical_compound,cid):
API_ENDPOINT='https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/'+str(cid)+'/JSON'
dat={}
headers = {'authorization': 'Bearer ','Content-Type': 'application/json'}
resp=requests.post(API_ENDPOINT,headers=headers,json=dat)
textt=json.loads(resp.content)
def json_extract(obj,key):
arr=[]
def extract(obj,arr,key):
if isinstance(obj,dict):
for k,v in obj.items():
if isinstance(v,(dict,list)):
extract(v,arr,key)
elif v==key:
arr.append(obj)
elif isinstance(obj,list):
for item in obj:
extract(item,arr,key)
return (arr)
values=extract(obj,arr,key)
return values
result=json_extract(textt,'Toxicity Summary')
result_validate=json_extract(result,'Not Classified')
#print(json.dumps(result_validate,indent=2,sort_keys=True))
response_title='Toxicity Summary:\n'
if len(result[0])==0:
if len(result_validate)>=1:
response_api="There are are not records of hazard classification so that it may not be dangerous, please look for other professional resources"
result_toxicity=response_title+response_api
else:
toxicity={}
for key,value in result[0].items():
if value not in toxicity.values():
toxicity[key]=value
result_toxicity=toxicity['Information'][0]['Value']['StringWithMarkup'][0]['String']
return result_toxicity
#find handling & storage characteristics
def handling_store(chemical_compound,cid):
API_ENDPOINT='https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/'+str(cid)+'/JSON'
dat={}
#print(newintent)
headers = {'authorization': 'Bearer ','Content-Type': 'application/json'}
resp=requests.post(API_ENDPOINT,headers=headers,json=dat)
textt=json.loads(resp.content)
def json_extract(obj,key):
arr=[]
def extract(obj,arr,key):
if isinstance(obj,dict):
for k,v in obj.items():
if isinstance(v,(dict,list)):
extract(v,arr,key)
elif v==key:
arr.append(obj)
elif isinstance(obj,list):
for item in obj:
extract(item,arr,key)
return (arr)
values=extract(obj,arr,key)
return values
result=json_extract(textt,'Handling and Storage')
result_validate=json_extract(result,'Not Classified')
#print('handling55:',result)
#print(json.dumps(result,indent=2,sort_keys=True))
result_handling_storage=''
response_title='Handling and Storage:\n\n'
    # if no data was returned
if len(result[0])==0:
if result_validate[0]['validate']=='Not Classified':
response_api="There are are not records of hazard classification so that it may not be dangerous, please look for other professional resources"
result_handling_storage=response_title+response_api
#print('No results:',result_handling_storage)
else:
handling_storage={}
result_handling_storage=''
for key,value in result[0].items():
if value not in handling_storage.values():
handling_storage[key]=value
result_handling_storage=handling_storage['Section'][0]['Information'][0]['Value']['StringWithMarkup'][0]['String']
return result_handling_storage
def ghs_classification(chemical_compound,cid):
API_ENDPOINT='https://pubchem.ncbi.nlm.nih.gov/rest/pug_view/data/compound/'+str(cid)+'/JSON'
dat={}
#print(newintent)
headers = {'authorization': 'Bearer ','Content-Type': 'application/json'}
resp=requests.post(API_ENDPOINT,headers=headers,json=dat)
textt=json.loads(resp.content)
def json_extract(obj,key):
arr=[]
def extract(obj,arr,key):
if isinstance(obj,dict):
for k,v in obj.items():
if isinstance(v,(dict,list)):
extract(v,arr,key)
elif v==key:
arr.append(obj)
elif isinstance(obj,list):
for item in obj:
extract(item,arr,key)
return (arr)
values=extract(obj,arr,key)
return values
result=json_extract(textt,'GHS Classification')
result_validate=json_extract(result,'Not Classified')
#print(json.dumps(result_validate,indent=2,sort_keys=True))
#print(json.dumps(result,indent=2,sort_keys=True))
response_title="GHS Classification:\n"
if len(result[0])==0:
if result_validate[0]['validate']=='Not Classified':
response_api="There are are not records of hazard classification so that it may not be dangerous, please look for other professional resources"
response_ghs_classification=response_title+response_api
else:
results=json_extract(textt,'Pictogram(s)')
ghs_classification={}
for key,value in results[0].items():
if value not in ghs_classification.values():
ghs_classification[key]=value
#print(json.dumps(ghs_classification,indent=2,sort_keys=True))
response_api=""
response=''
number_classified=len(ghs_classification['Value']['StringWithMarkup'][0]['Markup'])
#print("number:",number_classified)
ghs_class=ghs_classification['Value']['StringWithMarkup'][0]['Markup']
for ghs in range(number_classified):
#print(ghs_class[ghs]['Extra'])
response=ghs_class[ghs]['Extra']+" "
response_api+=response
response_ghs_classification=response_title+response_api
#print(response_ghs_classification)
return response_ghs_classification
def content_sorted(chemicals,title_total_content,intent_chemical_request):
chemical={}
content=''
response=''
cid=''
for chemical_compound in chemicals:
for compound in pcp.get_compounds(chemical_compound,'name'):
chemical['cid']=compound.cid
cid=compound.cid
#print("--->",chemical_compound,chemical['cid'])
if intent_chemical_request=="confirm_storage_compatibility":
title="Chemical Compatibility Info: "+chemical_compound.upper()+"\n\n"
content_csc=handling_store(chemical_compound,cid)
response=title+content_csc
elif intent_chemical_request=="get_ghs_classification":
title="GHS Info: "
response_ghs=ghs_classification(chemical_compound,cid)
content_title=title+chemical_compound.upper()+"\n\n"
content_ghs= content_title+"\n\n"+response_ghs
toxicity_title='Toxicity Summary: '+chemical_compound.upper()+"\n\n"
result_content_toxicity=toxicity(chemical_compound,chemical['cid'])
content_toxicity=toxicity_title+result_content_toxicity
response=content_ghs+content_toxicity
elif intent_chemical_request=="info_storage_compatibility":
title_si="Storage Information: "+chemical_compound.upper()+"\n\n"
result_content_ghs=ghs_classification(chemical_compound,cid)
total_content_si=title_si+result_content_ghs
title_sc="Storage Compatibility Info: "+chemical_compound.upper()+"\n\n"
content_csc=handling_store(chemical_compound,cid)
total_content_csc=title_sc+content_csc
response=total_content_si+total_content_csc
#content_chemical=chemical_compound.upper()+"\n"
#content= content_chemical+response+"\n\n"+content
content= response+"\n\n"+content
full_content=title_total_content+content+"\n\n"
print('ful text:',full_content)
return full_content
#-----Wit.Ai request to identify intents and more tasks----
def validate_reply(confirmation_video):
confirmation_verified=''
if confirmation_video=='YES' or confirmation_video=='NO':
confirmation_verified=confirmation_video
return confirmation_verified
else:
confirmation_video=input('Please confirm that you want me to transcribe it? (yes/no):').upper()
return validate_reply(confirmation_video)
chemical={}
chemicals=[]
wit_results=wit_request(question,access_token_wit)
print('Wit results New:',wit_results)
#request trutful information from PubChemAPI
for chem in wit_results[0]:
chemicals.append(chem)
chemical_compound=chem
intent=wit_results[1]
for compound in pcp.get_compounds(chemicals,'name'):
chemical['cid']=compound.cid
cid=compound.cid
body='PubChem Library API'
#intent='get_ghs_classification'
if intent=="confirm_storage_compatibility":
#result_content=content_sorted("Globally Harmonized System Hazard Classification",handling_store(chemical_compound,chemical['cid']))
email_title="Compatibility - Globally Harmonized System Hazard Classification: "+chemical_compound.upper()
result_content=content_sorted(chemicals,email_title,intent)
email_confirmation=send_email(ureceiver,usender,body,result_content,email_title)
print("Sending confirmation;",email_confirmation)
elif intent=="get_ghs_classification":
email_title="Summarized Documentation: "+chemical_compound.upper()
full_content=content_sorted(chemicals,email_title,intent)
email_confirmation=send_email(ureceiver,usender,body,full_content,email_title)
else:#info_storage_classification
email_title="Storage Information (Globally Harmonized System Hazard Classification): "+chemical_compound.upper()
result_content=content_sorted(chemicals,email_title,intent)
email_confirmation=send_email(ureceiver,usender,body,result_content,email_title)
confirmation_video=''
answer=input('Would you like me to look for a video about '+chemical_compound+' on youtube video to transcribe into text and make a summary of the video? (yes/no) ').upper()#+str(chem)).upper()
#Extract full text and summa
if answer=='YES' or answer=='NO':
confirmation_video=answer
else:
confirmation_video=validate_reply(confirmation_video)
if answer=='YES' or confirmation_video=='YES':
#print(chem,"---->")
summary_text,full_video_text=summarizing_video(chemical_compound)
print(full_video_text,"----",summary_text)
email_title="Video Summary: "+chemical_compound
full_documentation=summary_text+"\n\n"+"Full Text: "+full_video_text
if full_video_text:
full_text=send_email(ureceiver,usender,body,full_documentation,email_title)
print(full_text)
Email_confirmed="Email Sent"
print("You will receive an email soon with the full text and summary")
else:
print("There woon't be any email sent")
else:
pass
if email_confirmation == "Email Sent":
print('Email Sent: You will receive your summarized info on your registered email soon.')
else:
print('Not Email Sent: There was not enough information to send you.')
#print(email_confirmation)
#print(full_content)
print('Finished...')
###Output
Please ask a question about the chemical compound that you want to know about:can I mix benzene and hexane?
Wit results New: (['Benzene', 'hexane'], 'confirm_storage_compatibility')
ful text: Compatibility - Globally Harmonized System Hazard Classification: BENZENEChemical Compatibility Info: BENZENE
Excerpt from ERG Guide 130 [Flammable Liquids (Water-Immiscible / Noxious)]: ELIMINATE all ignition sources (no smoking, flares, sparks or flames in immediate area). All equipment used when handling the product must be grounded. Do not touch or walk through spilled material. Stop leak if you can do it without risk. Prevent entry into waterways, sewers, basements or confined areas. A vapor-suppressing foam may be used to reduce vapors. Absorb or cover with dry earth, sand or other non-combustible material and transfer to containers. Use clean, non-sparking tools to collect absorbed material. LARGE SPILL: Dike far ahead of liquid spill for later disposal. Water spray may reduce vapor, but may not prevent ignition in closed spaces. (ERG, 2016)
Sending confirmation; Email Sent
Would you like me to look for a video about Benzene on youtube video to transcribe into text and make a summary of the video? (yes/no) yes
I found this video, do you want me to transcribe it?
****************
Title: What Is Benzene | Organic Chemistry | Chemistry | FuseSchool
Duration 3:50
Url https://www.youtube.com/watch?v=xD7Z7SHix-w
Views 75,728 vistas
***************
Transcribing video? (yes/no):yes
out YES
in YES
[youtube] xD7Z7SHix-w: Downloading webpage
[download] Destination: What Is Benzene _ Organic Chemistry _ Chemistry _ FuseSchool-xD7Z7SHix-w.webm
[download] 100% of 2.67MiB in 00:00
[ffmpeg] Destination: What Is Benzene _ Organic Chemistry _ Chemistry _ FuseSchool-xD7Z7SHix-w.wav
Deleting original file What Is Benzene _ Organic Chemistry _ Chemistry _ FuseSchool-xD7Z7SHix-w.webm (pass -k to keep)
[youtube] xD7Z7SHix-w: Downloading webpage
[download] Destination: What Is Benzene _ Organic Chemistry _ Chemistry _ FuseSchool-xD7Z7SHix-w.webm
[download] 100% of 2.67MiB in 00:00
[ffmpeg] Destination: What Is Benzene _ Organic Chemistry _ Chemistry _ FuseSchool-xD7Z7SHix-w.wav
Deleting original file What Is Benzene _ Organic Chemistry _ Chemistry _ FuseSchool-xD7Z7SHix-w.webm (pass -k to keep)
started..
*********************
Summaryzing..
Benzene is a colorless liquid at room temperature. Is boiling point. Is 82 degrees celsius. It's found naturally in crude oil smells a little like petrol. On an atomic level benzene is made up of six carbon.. Covalently bonded in a ring. Each carbon is also covalently bonded to one hydrogen. This makes it a hydrocarbon. Carbon usually forms for single covalent bonds and these cabins at the moment sammy have three. The molecule not just something very special. The woman tied election for me carbon becomes. Conjugated into the ring. This means they have free movement around all six carbons and this gives benzene the property of. Aromaticity. Aromaticity simplement nice-smelling scientifically. There are various ways of showing benzene's aromaticity like these for example. The most calm by you'll see benzene drawn in a textbook is like this. This is done only for convenience. Can be dangerous isaac's carcinogenic. That is known to cause cancer in sufficient doses. The magic. When will i be able to join benzene ring. The other organic molecules. Some of the most important and useful drugs are made this way. Example is aspirin. One of the most famous painkillers in the world. The important thing to remember here is it's almost impossible to do an addition reaction on benzene. This destroys the aromaticity of the ring a process which need a lot of energy. What joseph. Is substitution reactions. Why one of the protons it kicked out of the compounds by another species we takes its place on the molecule keeps his iron deficiency. This can happen more than once and the benzene is said to be mono die or trisubstituted. What sort of chemical species will we need to cause a substitution reaction to happen. Remember the anthem in the benzene being attacked is one of the coffins and these coffins have lots of electrons moving around them. Pause the video and have a thing. Them redeem. Beyonce's we will need an electrophile something that will be attracted to all those electrons. Benzene is quite a stable molecule as a case so we need a strong electrophile. So species with a positive charge is often necessary. For example of cl + made from a chlorine acid. Nitronium ion. No2 +. Well they exist and are widely used in benzene chemistry. In fact + is reacted with benzene to form trinitrotoluene. Also called dynamite. Very powerful explosive. ---- Remember the anthem in the benzene being attacked is one of the coffins and these coffins have lots of electrons moving around them. The important thing to remember here is it's almost impossible to do an addition reaction on benzene. On an atomic level benzene is made up of six carbon.. Covalently bonded in a ring. There are various ways of showing benzene's aromaticity like these for example. Benzene is quite a stable molecule as a case so we need a strong electrophile. This means they have free movement around all six carbons and this gives benzene the property of. The most calm by you'll see benzene drawn in a textbook is like this. Why one of the protons it kicked out of the compounds by another species we takes its place on the molecule keeps his iron deficiency. What sort of chemical species will we need to cause a substitution reaction to happen. In fact + is reacted with benzene to form trinitrotoluene.
Email Sent
You will receive an email soon with the full text and summary
ful text: Compatibility - Globally Harmonized System Hazard Classification: HEXANEChemical Compatibility Info: HEXANE
Excerpt from ERG Guide 128 [Flammable Liquids (Water-Immiscible)]: ELIMINATE all ignition sources (no smoking, flares, sparks or flames in immediate area). All equipment used when handling the product must be grounded. Do not touch or walk through spilled material. Stop leak if you can do it without risk. Prevent entry into waterways, sewers, basements or confined areas. A vapor-suppressing foam may be used to reduce vapors. Absorb or cover with dry earth, sand or other non-combustible material and transfer to containers. Use clean, non-sparking tools to collect absorbed material. LARGE SPILL: Dike far ahead of liquid spill for later disposal. Water spray may reduce vapor, but may not prevent ignition in closed spaces. (ERG, 2016)
Chemical Compatibility Info: BENZENE
Excerpt from ERG Guide 130 [Flammable Liquids (Water-Immiscible / Noxious)]: ELIMINATE all ignition sources (no smoking, flares, sparks or flames in immediate area). All equipment used when handling the product must be grounded. Do not touch or walk through spilled material. Stop leak if you can do it without risk. Prevent entry into waterways, sewers, basements or confined areas. A vapor-suppressing foam may be used to reduce vapors. Absorb or cover with dry earth, sand or other non-combustible material and transfer to containers. Use clean, non-sparking tools to collect absorbed material. LARGE SPILL: Dike far ahead of liquid spill for later disposal. Water spray may reduce vapor, but may not prevent ignition in closed spaces. (ERG, 2016)
Sending confirmation; Email Sent
|
NLP/Learn_by_deeplearning.ai/Course 3 - Sequence Models/Labs/Week 2/C3_W2_lecture_notebook_RNNs.ipynb | ###Markdown
Vanilla RNNs, GRUs and the `scan` function In this notebook, you will learn how to define the forward method for vanilla RNNs and GRUs. Additionally, you will see how to define and use the function `scan` to compute forward propagation for RNNs.By completing this notebook, you will:- Be able to define the forward method for vanilla RNNs and GRUs- Be able to define the `scan` function to perform forward propagation for RNNs- Understand how forward propagation is implemented for RNNs.
###Code
import numpy as np
from numpy import random
from time import perf_counter
###Output
_____no_output_____
###Markdown
An implementation of the `sigmoid` function is provided below so you can use it in this notebook.
###Code
def sigmoid(x): # Sigmoid function
return 1.0 / (1.0 + np.exp(-x))
###Output
_____no_output_____
###Markdown
Part 1: Forward method for vanilla RNNs and GRUs In this part of the notebook, you'll see the implementation of the forward method for a vanilla RNN and you'll implement that same method for a GRU. For this exercise you'll use a set of random weights and variables with the following dimensions: Embedding size (`emb`): 128; Hidden state size (`h_dim`): (16,1). The weights `w_` and biases `b_` are initialized with dimensions (`h_dim`, `emb + h_dim`) and (`h_dim`, 1). We expect the hidden state `h_t` to be a column vector with size (`h_dim`, 1), and the initial hidden state `h_0` is a vector of zeros.
###Code
random.seed(1) # Random seed, so your results match ours
emb = 128 # Embedding size
T = 256 # Number of variables in the sequences
h_dim = 16 # Hidden state dimension
h_0 = np.zeros((h_dim, 1)) # Initial hidden state
# Random initialization of weights and biases
w1 = random.standard_normal((h_dim, emb+h_dim))
w2 = random.standard_normal((h_dim, emb+h_dim))
w3 = random.standard_normal((h_dim, emb+h_dim))
b1 = random.standard_normal((h_dim, 1))
b2 = random.standard_normal((h_dim, 1))
b3 = random.standard_normal((h_dim, 1))
X = random.standard_normal((T, emb, 1))
weights = [w1, w2, w3, b1, b2, b3]
###Output
_____no_output_____
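###Markdown
A quick shape check on the arrays defined above confirms the dimensions described in this section: each weight matrix is `(h_dim, emb + h_dim)`, each bias and hidden state is `(h_dim, 1)`, and the input sequence is `(T, emb, 1)`:
###Code
# Sanity-check the shapes of the randomly initialized variables
print('w1:', w1.shape)    # (16, 144)
print('b1:', b1.shape)    # (16, 1)
print('h_0:', h_0.shape)  # (16, 1)
print('X:', X.shape)      # (256, 128, 1)
###Output
_____no_output_____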
###Markdown
1.1 Forward method for vanilla RNNs The vanilla RNN cell is quite straight forward. Its most general structure is presented in the next figure: As you saw in the lecture videos, the computations made in a vanilla RNN cell are equivalent to the following equations:\begin{equation}h^{<t>}=g(W_{h}[h^{<t-1>},x^{<t>}] + b_h)\label{eq: htRNN}\end{equation} \begin{equation}\hat{y}^{<t>}=g(W_{yh}h^{<t>} + b_y)\label{eq: ytRNN}\end{equation}where $[h^{<t-1>},x^{<t>}]$ means that $h^{<t-1>}$ and $x^{<t>}$ are concatenated together. In the next cell we provide the implementation of the forward method for a vanilla RNN.
###Code
def forward_V_RNN(inputs, weights): # Forward propagation for a a single vanilla RNN cell
x, h_t = inputs
# weights.
wh, _, _, bh, _, _ = weights
# new hidden state
h_t = np.dot(wh, np.concatenate([h_t, x])) + bh
h_t = sigmoid(h_t)
return h_t, h_t
###Output
_____no_output_____
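###Markdown
Before moving on to GRUs, you can run the vanilla RNN cell once on a single input to check that it returns a hidden state of shape `(h_dim, 1)`:
###Code
# Single forward step of the vanilla RNN cell
h_1, _ = forward_V_RNN([X[1], h_0], weights)
print(h_1.shape)  # expected: (16, 1)
###Output
_____no_output_____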
###Markdown
As you can see, we omitted the computation of $\hat{y}^{}$. This was done for the sake of simplicity, so you can focus on the way that hidden states are updated here and in the GRU cell. 1.2 Forward method for GRUs A GRU cell have more computations than the ones that vanilla RNNs have. You can see this visually in the following diagram:As you saw in the lecture videos, GRUs have relevance $\Gamma_r$ and update $\Gamma_u$ gates that control how the hidden state $h^{}$ is updated on every time step. With these gates, GRUs are capable of keeping relevant information in the hidden state even for long sequences. The equations needed for the forward method in GRUs are provided below: \begin{equation}\Gamma_r=\sigma{(W_r[h^{}, x^{}]+b_r)}\end{equation}\begin{equation}\Gamma_u=\sigma{(W_u[h^{}, x^{}]+b_u)}\end{equation}\begin{equation}c^{}=\tanh{(W_h[\Gamma_r*h^{},x^{}]+b_h)}\end{equation}\begin{equation}h^{}=\Gamma_u*c^{}+(1-\Gamma_u)*h^{}\end{equation}In the next cell, please implement the forward method for a GRU cell by computing the update `u` and relevance `r` gates, and the candidate hidden state `c`.
###Code
def forward_GRU(inputs, weights): # Forward propagation for a single GRU cell
x, h_t = inputs
# weights.
wu, wr, wc, bu, br, bc = weights
# Update gate
### START CODE HERE (1-2 lINES) ###
u = np.dot(wu, np.concatenate([h_t, x])) + bu
u = sigmoid(u)
### END CODE HERE ###
# Relevance gate
### START CODE HERE (1-2 lINES) ###
r = np.dot(wr, np.concatenate([h_t, x])) + br
    r = sigmoid(r)
### END CODE HERE ###
# Candidate hidden state
### START CODE HERE (1-2 lINES) ###
c = np.dot(wc, np.concatenate([r * h_t, x])) + bc
c = np.tanh(c)
### END CODE HERE ###
# New Hidden state h_t
h_t = u* c + (1 - u)* h_t
return h_t, h_t
###Output
_____no_output_____
###Markdown
Run the following cell to check your implementation.
###Code
forward_GRU([X[1],h_0], weights)[0]
###Output
_____no_output_____
###Markdown
Expected output:array([[ 9.77779014e-01], [-9.97986240e-01], [-5.19958083e-01], [-9.99999886e-01], [-9.99707004e-01], [-3.02197037e-04], [-9.58733503e-01], [ 2.10804828e-02], [ 9.77365398e-05], [ 9.99833090e-01], [ 1.63200940e-08], [ 8.51874303e-01], [ 5.21399924e-02], [ 2.15495959e-02], [ 9.99878828e-01], [ 9.77165472e-01]]) Part 2: Implementation of the `scan` function In the lectures you saw how the `scan` function is used for forward propagation in RNNs. It takes as inputs:- `fn` : the function to be called recurrently (i.e. `forward_GRU`)- `elems` : the list of inputs for each time step (`X`)- `weights` : the parameters needed to compute `fn`- `h_0` : the initial hidden state`scan` goes through all the elements `x` in `elems`, calls the function `fn` with arguments ([`x`, `h_t`],`weights`), stores the computed hidden state `h_t` and appends the result to a list `ys`. Complete the following cell by calling `fn` with arguments ([`x`, `h_t`],`weights`).
###Code
def scan(fn, elems, weights, h_0=None): # Forward propagation for RNNs
h_t = h_0
ys = []
for x in elems:
### START CODE HERE (1 lINE) ###
y, h_t = fn([x, h_t], weights)
### END CODE HERE ###
ys.append(y)
return ys, h_t
###Output
_____no_output_____
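###Markdown
A quick check that `scan` walks the whole sequence: it should return one hidden state per time step (`T = 256`), each of shape `(h_dim, 1)`:
###Code
# Run scan with the vanilla RNN cell and verify the number and shape of the returned states
ys, h_T = scan(forward_V_RNN, X, weights, h_0)
print(len(ys), ys[0].shape)  # expected: 256 (16, 1)
###Output
_____no_output_____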
###Markdown
Part 3: Comparison between vanilla RNNs and GRUs You have already seen how forward propagation is computed for vanilla RNNs and GRUs. As a quick recap, you need to have a forward method for the recurrent cell and a function like `scan` to go through all the elements from a sequence using a forward method. You saw that GRUs performed more computations than vanilla RNNs, and you can check that they have 3 times more parameters. In the next two cells, we compute forward propagation for a sequence with 256 time steps (`T`) for an RNN and a GRU with the same hidden state `h_t` size (`h_dim`=16).
###Code
# vanilla RNNs
tic = perf_counter()
ys, h_T = scan(forward_V_RNN, X, weights, h_0)
toc = perf_counter()
RNN_time=(toc-tic)*1000
print (f"It took {RNN_time:.2f}ms to run the forward method for the vanilla RNN.")
# GRUs
tic = perf_counter()
ys, h_T = scan(forward_GRU, X, weights, h_0)
toc = perf_counter()
GRU_time=(toc-tic)*1000
print (f"It took {GRU_time:.2f}ms to run the forward method for the GRU.")
###Output
It took 46.94ms to run the forward method for the GRU.
|
splitbyimages_v1.ipynb | ###Markdown
split datasets. version: 1. info: split json into train.json, val.json and test.json. WARNING: splitbyimages is not ideal (use splitbyannotations), because you can fail to have all the classes in train (or val, or test). This approach was used because some datasets, like TAO, are missing annotations and images. author: nuno costa
###Code
from annotate_v5 import *
import platform
import numpy as np
import time
import pandas as pd
from IPython.display import Image, display
import copy
import os
from shutil import copyfile
import matplotlib.pyplot as plt
from matplotlib.image import imread
from matplotlib.patches import Rectangle
import random
#Define root dir dependent on OS
rdir='D:/external_datasets/MOLA/annotations/'
if str(platform.platform()).find('linux')>-1: rdir=rdir.replace('D:/','/mnt/d/')
print('OS: {}'.format(platform.platform()))
print('root dir: {}'.format(rdir))
###Output
OS: Windows-10-10.0.21292-SP0
root dir: D:/external_datasets/MOLA/annotations/
###Markdown
1. Init vars
###Code
train=70
val=20
test=100-(train+val)
injsonfile='coco2017_reorder_cleanclass.json'
infilename=injsonfile.split('.')[0]
# init json
molajson = json.load(open(rdir+injsonfile))
for k in molajson:
print(k, len(molajson[k]))
###Output
info 6
licenses 8
images 123287
annotations 1170251
categories 80
###Markdown
2. Import ids NOTE: work with ids and index so you can use numpy for faster operations
###Code
# categories id
cats=[]
catids=[]
for c in molajson['categories']:
catids.append(c['id'])
cats.append(c['name'])
#print(cats)
# images filepath and id
imgs=[]
imgids=[]
for c in molajson['images']:
imgs.append(c['file_name'])
imgids.append(c['id'])
# annotations category_id
ann_catids=[]
ann_ids=[]
ann_imgids=[]
for an in tqdm(molajson['annotations']):
ann_catids.append(an['category_id'])
ann_ids.append(an['id'])
ann_imgids.append(an['image_id'])
print(len(ann_ids))
#TEST dupplicates v1 - slow
# duplicates_l=list(set([x for x in ann_ids if ann_ids.count(x) > 1])) # duplicates l
#TEST dupplicates v2 - fast
#from collections import Counter
#duplicates_l=[item for item, count in Counter(ann_ids).items() if count > 1]
#TEST duplicates v3 -faster
u, c = np.unique(np.array(ann_ids), return_counts=True)
duplicates_l= u[c > 1].tolist()
print(len(duplicates_l))
###Output
273469
###Markdown
3. split by imagesQUESTION Seeded random or not?
###Code
#init
train_imgids=[]
val_imgids=[]
test_imgids=[]
#size
train_size=len(imgids) * train // 100 #floor division
val_size=len(imgids) * val // 100
test_size=len(imgids) * test // 100
#select images
random.shuffle(imgids)
train_imgids.extend(imgids[:train_size])
val_imgids.extend(imgids[train_size:train_size+val_size]) # contiguous slices so no images are skipped between splits
test_imgids.extend(imgids[train_size+val_size:train_size+val_size+test_size])
print((len(train_imgids)/len(imgids))*100)
print((len(val_imgids)/len(imgids))*100)
print((len(test_imgids)/len(imgids))*100)
# match annotations to each split through their image_id (ann_imgids), not category_id
ann_imgids_np=np.array(ann_imgids)
train_ann_catidx=[]
val_ann_catidx=[]
test_ann_catidx=[]
for imgid in tqdm(train_imgids):
    ann_idx_np = np.where(ann_imgids_np==imgid)[0] # indices of annotations belonging to this image
    if ann_idx_np.size == 0: continue
    train_ann_catidx.extend(ann_idx_np.tolist())
for imgid in tqdm(val_imgids):
    ann_idx_np = np.where(ann_imgids_np==imgid)[0] # indices of annotations belonging to this image
    if ann_idx_np.size == 0: continue
    val_ann_catidx.extend(ann_idx_np.tolist())
for imgid in tqdm(test_imgids):
    ann_idx_np = np.where(ann_imgids_np==imgid)[0] # indices of annotations belonging to this image
    if ann_idx_np.size == 0: continue
    test_ann_catidx.extend(ann_idx_np.tolist())
print((len(train_ann_catidx)/len(ann_catids))*100)
print((len(val_ann_catidx)/len(ann_catids))*100)
print((len(test_ann_catidx)/len(ann_catids))*100)
l_dup=[train_ann_catidx, val_ann_catidx,test_ann_catidx ]
for i in l_dup:
print('original: ', len(i))
u, c = np.unique(np.array(i), return_counts=True)
duplicates_l= u[c > 1].tolist()
print('duplicate: ',len(duplicates_l))
###Output
original: 112356
duplicate: 0
original: 7964
duplicate: 0
original: 4984
duplicate: 0
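###Markdown
Because this split is done by images (see the warning at the top of the notebook), some categories can end up missing from a split. A minimal coverage check, reusing `catids`, `ann_catids` and the per-split annotation index lists built above:
###Code
# Check which categories are covered by the annotations assigned to each split
for name, idxs in zip(['train', 'val', 'test'], [train_ann_catidx, val_ann_catidx, test_ann_catidx]):
    covered = {ann_catids[i] for i in idxs}
    missing = set(catids) - covered
    print('{}: {}/{} categories covered, missing: {}'.format(name, len(covered), len(catids), sorted(missing)))
###Output
_____no_output_____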
###Markdown
4. Save split jsons
###Code
percent_idx=[train_ann_catidx,val_ann_catidx, test_ann_catidx]
percent_names=['train', 'val', 'test']
newjson=copy.copy(molajson)
annotations=copy.copy(molajson['annotations'])
for i, percent_i in enumerate(tqdm(percent_idx)):
#get new annotations
newjson['annotations']=[annotations[index] for index in percent_i]
# save
print('\n >> SAVING {}...'.format(percent_names[i]))
outpath=rdir+'splitimg_{}/'.format(infilename)
assure_path_exists(outpath)
outjsonfile=outpath+'{}.json'.format(percent_names[i]) #rdir+'{}_{}.json'.format(percent_names[i],infilename)
with open(outjsonfile, 'w') as f:
json.dump(newjson, f)
print("JSON SAVED : {} \n".format(outjsonfile))
for k in molajson:
print(k, len(newjson[k]))
###Output
0%| | 0/3 [00:00<?, ?it/s]
###Markdown
5. TEST SPLIT ANNOTATIONS DUPLICATES
###Code
injsonfile='mola_mix_aggressive.json'
outjsonfile=rdir+'splitimg_{}/'.format(infilename)+'test.json' # same directory prefix used when saving the splits above
# init json
molajson = json.load(open(outjsonfile))
for k in molajson:
print(k, len(molajson[k]))
# annotations category_id
ann_ids=[]
for an in tqdm(molajson['annotations']):
ann_ids.append(an['id'])
print(len(ann_ids))
#TEST duplicates v3 -faster
u, c = np.unique(np.array(ann_ids), return_counts=True)
duplicates_l= u[c > 1].tolist()
print(len(duplicates_l))
###Output
100%|██████████████████████████████████████████████████████████████████| 133266/133266 [00:00<00:00, 1497403.10it/s] |
Natural Language Processing/Bag of Word/natural_language_processing.ipynb | ###Markdown
Natural Language Processing Importing the libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
Importing the dataset
###Code
dataset = pd.read_csv('Restaurant_Reviews.tsv', delimiter = '\t', quoting = 3)
###Output
_____no_output_____
###Markdown
Cleaning the texts
###Code
import re
import nltk
nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer # used for stemming words
corpus = []
for i in range(0, dataset.shape[0]):
review = re.sub('[^a-zA-Z]', ' ', dataset['Review'][i])
review = review.split()
ps = PorterStemmer()
all_stopwords = stopwords.words('english')
all_stopwords.remove('not')
review = [ps.stem(word) for word in review if not word in set(all_stopwords)]
review = ' '.join(review)
corpus.append(review)
print(corpus)
###Output
['wow love place', 'crust not good', 'not tasti textur nasti', 'stop late may bank holiday rick steve recommend love', 'the select menu great price', 'now i get angri i want damn pho', 'honeslti tast that fresh', 'the potato like rubber could tell made ahead time kept warmer', 'the fri great', 'a great touch', 'servic prompt', 'would not go back', 'the cashier care ever i say still end wayyy overpr', 'i tri cape cod ravoli chicken cranberri mmmm', 'i disgust i pretti sure human hair', 'i shock sign indic cash', 'highli recommend', 'waitress littl slow servic', 'thi place not worth time let alon vega', 'not like', 'the burritto blah', 'the food amaz', 'servic also cute', 'i could care less the interior beauti', 'so perform', 'that right red velvet cake ohhh stuff good', 'they never brought salad ask', 'thi hole wall great mexican street taco friendli staff', 'took hour get food tabl restaur food luke warm our sever run around like total overwhelm', 'the worst salmon sashimi', 'also combo like burger fri beer decent deal', 'thi like final blow', 'i found place accid i could not happier', 'seem like good quick place grab bite familiar pub food favor look elsewher', 'overal i like place lot', 'the redeem qualiti restaur inexpens', 'ampl portion good price', 'poor servic waiter made feel like i stupid everi time came tabl', 'my first visit hiro delight', 'servic suck', 'the shrimp tender moist', 'there not deal good enough would drag establish', 'hard judg whether side good gross melt styrofoam want eat fear get sick', 'on posit note server attent provid great servic', 'frozen puck disgust worst peopl behind regist', 'the thing i like prime rib dessert section', 'it bad food damn gener', 'the burger good beef cook right', 'if want sandwich go firehous', 'my side greek salad greek dress tasti pita hummu refresh', 'we order duck rare pink tender insid nice char outsid', 'he came run us realiz husband left sunglass tabl', 'their chow mein good', 'they horribl attitud toward custom talk one custom enjoy food', 'the portion huge', 'love friendli server great food wonder imagin menu', 'the heart attack grill downtown vega absolut flat line excus restaur', 'not much seafood like string pasta bottom', 'the salad right amount sauc not power scallop perfectli cook', 'the rip banana not rip petrifi tasteless', 'at least think refil water i struggl wave minut', 'thi place receiv star appet', 'the cocktail handmad delici', 'we definit go back', 'we glad found place', 'great food servic huge portion give militari discount', 'alway great time do gringo', 'updat went back second time still amaz', 'we got food appar never heard salt batter fish chewi', 'a great way finish great', 'the deal includ tast drink jeff went beyond expect', 'realli realli good rice time', 'the servic meh', 'it took min get milkshak noth chocol milk', 'i guess i known place would suck insid excalibur i use common sens', 'the scallop dish quit appal valu well', 'time veri bad custom servic', 'the sweet potato fri good season well', 'today second time i lunch buffet pretti good', 'there much good food vega i feel cheat wast eat opportun go rice compani', 'come like experienc underwhelm relationship parti wait person ask break', 'walk place smell like old greas trap other eat', 'the turkey roast beef bland', 'thi place', 'the pan cake everyon rave tast like sugari disast tailor palat six year old', 'i love pho spring roll oh yummi tri', 'the poor batter meat ratio made chicken tender unsatisfi', 'all i say food amaz', 'omelet die', 
'everyth fresh delici', 'in summari larg disappoint dine experi', 'it like realli sexi parti mouth outrag flirt hottest person parti', 'never hard rock casino will never ever step forward in it again', 'best breakfast buffet', 'say bye bye tip ladi', 'we never go', 'will back', 'food arriv quickli', 'it not good', 'on side cafe serv realli good food', 'our server fantast found wife love roast garlic bone marrow ad extra meal anoth marrow go', 'the good thing waiter help kept bloddi mari come', 'best buffet town price cannot beat', 'i love mussel cook wine reduct duck tender potato dish delici', 'thi one better buffet i', 'so went tigerlilli fantast afternoon', 'the food delici bartend attent person and got great deal', 'the ambienc wonder music play', 'will go back next trip', 'sooooo good', 'real sushi lover let honest yama not good', 'at least min pass us order food arriv busi', 'thi realli fantast thai restaur definit worth visit', 'nice spici tender', 'good price', 'check', 'it pretti gross', 'i better atmospher', 'kind hard mess steak', 'although i much like look sound place actual experi bit disappoint', 'i know place manag serv blandest food i ever eaten prepar indian cuisin', 'worst servic boot least worri', 'servic fine waitress friendli', 'the guy steak steak love son steak best worst place said best steak ever eaten', 'we thought ventur away get good sushi place realli hit spot night', 'host staff lack better word bitch', 'bland not like place number reason i want wast time bad review i leav', 'phenomen food servic ambianc', 'i return', 'definit worth ventur strip pork belli return next time i vega', 'thi place way overpr mediocr food', 'penn vodka excel', 'they good select food includ massiv meatloaf sandwich crispi chicken wrap delish tuna melt tasti burger', 'the manag rude', 'delici nyc bagel good select cream chees real lox caper even', 'great subway fact good come everi subway not meet expect', 'i serious solid breakfast', 'thi one best bar food vega', 'he extrem rude realli mani restaur i would love dine weekend vega', 'my drink never empti made realli great menu suggest', 'don', 'the waiter help friendli rare check us', 'my husband i ate lunch disappoint food servic', 'and red curri much bamboo shoot tasti', 'nice blanket moz top feel like done cover subpar food', 'the bathroom clean place well decor', 'the menu alway chang food qualiti go servic extrem slow', 'the servic littl slow consid serv peopl server food come slow pace', 'i give thumb', 'we watch waiter pay lot attent tabl ignor us', 'my fianc i came middl day greet seat right away', 'thi great restaur mandalay bay', 'we wait forti five minut vain', 'crostini came salad stale', 'some highlight great qualiti nigiri', 'staff friendli joint alway clean', 'differ cut piec day still wonder tender well well flavor', 'i order voodoo pasta first time i realli excel pasta sinc go gluten free sever year ago', 'place good', 'unfortun must hit bakeri leftov day everyth order stale', 'i came back today sinc reloc still not impress', 'i seat immedi', 'their menu divers reason price', 'avoid cost', 'restaur alway full never wait', 'delici', 'thi place hand one best place eat phoenix metro area', 'so go look good food', 'i never treat bad', 'bacon hella salti', 'we also order spinach avocado salad ingredi sad dress liter zero tast', 'thi realli vega fine dine use right menu hand ladi price list', 'the waitress friendli', 'lordi khao soi dish not miss curri lover', 'everyth menu terrif also thrill made amaz accommod vegetarian 
daughter', 'perhap i caught night judg review i not inspir go back', 'the servic leav lot desir', 'the atmospher modern hip maintain touch cozi', 'not weekli haunt definit place come back everi', 'we liter sat minut one ask take order', 'the burger absolut flavor meat total bland burger overcook charcoal flavor', 'i also decid not send back waitress look like verg heart attack', 'i dress treat rude', 'it probabl dirt', 'love place hit spot i want someth healthi not lack quantiti flavor', 'i order lemon raspberri ice cocktail also incred', 'the food suck expect suck could imagin', 'interest decor', 'what i realli like crepe station', 'also serv hot bread butter home made potato chip bacon bit top origin good', 'watch prepar delici food', 'both egg roll fantast', 'when order arriv one gyro miss', 'i salad wing ice cream dessert left feel quit satisfi', 'i not realli sure joey vote best hot dog valley reader phoenix magazin', 'the best place go tasti bowl pho', 'the live music friday total blow', 'i never insult felt disrespect', 'veri friendli staff', 'it worth drive', 'i heard good thing place exceed everi hope i could dream', 'food great serivc', 'the warm beer help', 'great brunch spot', 'servic friendli invit', 'veri good lunch spot', 'i live sinc first last time i step foot place', 'the worst experi ever', 'must night place', 'the side delish mix mushroom yukon gold pure white corn beateou', 'if bug never show i would given sure side wall bug climb kitchen', 'for minut wait salad realiz come time soon', 'my friend love salmon tartar', 'won go back', 'extrem tasti', 'waitress good though', 'soggi not good', 'the jamaican mojito delici', 'which small not worth price', 'food rich order accordingli', 'the shower area outsid rins not take full shower unless mind nude everyon see', 'the servic bit lack', 'lobster bisqu bussel sprout risotto filet all need salt pepper cours none tabl', 'hope bode go busi someon cook come', 'it either cold not enough flavor bad', 'i love bacon wrap date', 'thi unbeliev bargain', 'the folk otto alway make us feel welcom special', 'as main also uninspir', 'thi place i first pho amaz', 'thi wonder experi made place must stop whenev town', 'if food bad enough enjoy deal world worst annoy drunk peopl', 'veri fun chef', 'order doubl cheeseburg got singl patti fall apart pictur upload yeah still suck', 'great place coupl drink watch sport event wall cover tv', 'if possibl give zero star', 'the descript said yum yum sauc anoth said eel sauc yet anoth said spici mayo well none roll sauc', 'i say would hardest decis honestli m dish tast suppos tast amaz', 'if not roll eye may stay not sure go back tri', 'everyon attent provid excel custom servic', 'horribl wast time money', 'now dish quit flavour', 'by time side restaur almost empti excus', 'it busi either also build freez cold', 'like review said pay eat place', 'drink took close minut come one point', 'serious flavor delight folk', 'much better ayc sushi place i went vega', 'the light dark enough set mood', 'base sub par servic i receiv effort show gratitud busi i go back', 'owner realli great peopl', 'there noth privileg work eat', 'the greek dress creami flavor', 'overal i think i would take parent place made similar complaint i silent felt', 'now pizza good peanut sauc tasti', 'we tabl servic pretti fast', 'fantast servic', 'i well would given godfath zero star possibl', 'they know make', 'tough short flavor', 'i hope place stick around', 'i bar vega not ever recal charg tap water', 'the restaur atmospher exquisit', 
'good servic clean inexpens boot', 'the seafood fresh gener portion', 'plu buck', 'the servic not par either', 'thu far visit twice food absolut delici time', 'just good i year ago', 'for self proclaim coffe cafe i wildli disappoint', 'the veggitarian platter world', 'you cant go wrong food', 'you beat', 'stop place madison ironman friendli kind staff', 'the chef friendli good job', 'i better not dedic boba tea spot even jenni pho', 'i like patio servic outstand', 'the goat taco skimp meat wow flavor', 'i think not', 'i mac salad pretti bland i not get', 'i went bachi burger friend recommend not disappoint', 'servic stink', 'i wait wait', 'thi place not qualiti sushi not qualiti restaur', 'i would definit recommend wing well pizza', 'great pizza salad', 'thing went wrong they burn saganaki', 'we wait hour breakfast i could done time better home', 'thi place amaz', 'i hate disagre fellow yelper husband i disappoint place', 'wait hour never got either pizza mani around us came later', 'just know slow', 'the staff great food delish incred beer select', 'i live neighborhood i disappoint i back conveni locat', 'i know pull pork could soooo delici', 'you get incred fresh fish prepar care', 'befor i go i gave star rate pleas know third time eat bachi burger write review', 'i love fact everyth menu worth', 'never i dine place', 'the food excel servic good', 'good beer drink select good food select', 'pleas stay away shrimp stir fri noodl', 'the potato chip order sad i could probabl count mani chip box probabl around', 'food realli bore', 'good servic check', 'thi greedi corpor never see anoth dime', 'will never ever go back', 'as much i like go back i get pass atroci servic never return', 'in summer dine charm outdoor patio delight', 'i not expect good', 'fantast food', 'she order toast english muffin came untoast', 'the food good', 'never go back', 'great food price high qualiti hous made', 'the bu boy hand rude', 'by point friend i basic figur place joke mind make publicli loudli known', 'back good bbq lighter fare reason price tell public back old way', 'and consid two us left full happi go wrong', 'all bread made hous', 'the downsid servic', 'also fri without doubt worst fri i ever', 'servic except food good review', 'a coupl month later i return amaz meal', 'favorit place town shawarrrrrrma', 'the black eye pea sweet potato unreal', 'you disappoint', 'they could serv vinaigrett may make better overal dish still good', 'i go far mani place i never seen restaur serv egg breakfast especi', 'when mom i got home immedi got sick bite salad', 'the server not pleasant deal alway honor pizza hut coupon', 'both truli unbeliev good i glad went back', 'we fantast servic pleas atmospher', 'everyth gross', 'i love place', 'great servic food', 'first bathroom locat dirti seat cover not replenish plain yucki', 'the burger i got gold standard burger kind disappoint', 'omg food delicioso', 'there noth authent place', 'spaghetti noth special whatsoev', 'of dish salmon best great', 'the veget fresh sauc feel like authent thai', 'it worth drive tucson', 'the select probabl worst i seen vega none', 'pretti good beer select', 'thi place like chipotl better', 'classi warm atmospher fun fresh appet succul steak basebal steak', 'star brick oven bread app', 'i eaten multipl time time food delici', 'we sat anoth ten minut final gave left', 'he terribl', 'everyon treat equal special', 'it take min pancak egg', 'it delici', 'on good side staff genuin pleasant enthusiast real treat', 'sadli gordon ramsey steak place shall 
sharpli avoid next trip vega', 'as alway even wonder food delici', 'best fish i ever life', 'the bathroom next door nice', 'the buffet small food offer bland', 'thi outstand littl restaur best food i ever tast', 'pretti cool i would say', 'definit turn doubt i back unless someon els buy', 'server great job handl larg rowdi tabl', 'i find wast food despic food', 'my wife lobster bisqu soup lukewarm', 'would come back i sushi crave vega', 'the staff great ambianc great', 'he deserv star', 'i left stomach ach felt sick rest day', 'they drop ball', 'the dine space tini elegantli decor comfort', 'they custom order way like usual eggplant green bean stir fri love', 'and bean rice mediocr best', 'best taco town far', 'i took back money got outta', 'in interest part town place amaz', 'rude inconsider manag', 'the staff not friendli wait time serv horribl one even say hi first minut', 'i back', 'they great dinner', 'the servic outshin i definit recommend halibut', 'the food terribl', 'will never ever go back and have told mani peopl what had happen', 'i recommend unless car break front starv', 'i come back everi time i vega', 'thi place deserv one star food', 'thi disgrac', 'def come back bowl next time', 'if want healthi authent ethic food tri place', 'i continu come ladi night andddd date night highli recommend place anyon area', 'i sever time past experi alway great', 'we walk away stuf happi first vega buffet experi', 'servic excel price pretti reason consid vega locat insid crystal shop mall aria', 'to summar food incred nay transcend noth bring joy quit like memori pneumat condiment dispens', 'i probabl one peopl ever go ian not like', 'kid pizza alway hit lot great side dish option kiddo', 'servic perfect famili atmospher nice see', 'cook perfect servic impecc', 'thi one simpli disappoint', 'overal i disappoint qualiti food bouchon', 'i account know i get screw', 'great place eat remind littl mom pop shop san francisco bay area', 'today first tast buldogi gourmet hot dog i tell i ever thought possibl', 'left frustrat', 'i definit soon', 'food realli good i got full petti fast', 'servic fantast', 'total wast of time', 'i know kind best ice tea', 'come hungri leav happi stuf', 'for servic i give star', 'i assur disappoint', 'i take littl bad servic food suck', 'gave tri eat crust teeth still sore', 'but i complet gross', 'i realli enjoy eat', 'first time go i think i quickli becom regular', 'our server nice even though look littl overwhelm need stay profession friendli end', 'from dinner companion told everyth fresh nice textur tast', 'on ground right next tabl larg smear step track everywher pile green bird poop', 'furthermor even find hour oper websit', 'we tri like place time i think done', 'what mistak', 'no complaint', 'thi serious good pizza i expert connisseur topic', 'waiter jerk', 'strike want rush', 'these nicest restaur owner i ever come across', 'i never come', 'we love biscuit', 'servic quick friendli', 'order appet took minut pizza anoth minut', 'so absolutley fantast', 'it huge awkward lb piec cow th gristl fat', 'definit come back', 'i like steiner dark feel like bar', 'wow spici delici', 'if not familiar check', 'i take busi dinner dollar elsewher', 'i love go back', 'anyway fs restaur wonder breakfast lunch', 'noth special', 'each day week differ deal delici', 'not mention combin pear almond bacon big winner', 'will not back', 'sauc tasteless', 'the food delici spici enough sure ask spicier prefer way', 'my ribey steak cook perfectli great mesquit flavor', 'i think go back 
anytim soon', 'food gooodd', 'i far sushi connoisseur i definit tell differ good food bad food certainli bad food', 'i insult', 'the last time i lunch bad', 'the chicken wing contain driest chicken meat i ever eaten', 'the food good i enjoy everi mouth enjoy relax venu coupl small famili group etc', 'nargil i think great', 'best tater tot southwest', 'we love place', 'definit not worth i paid', 'the vanilla ice cream creami smooth profiterol choux pastri fresh enough', 'im az time new spot', 'the manag worst', 'the insid realli quit nice clean', 'the food outstand price reason', 'i think i run back carli anytim soon food', 'thi due fact took minut acknowledg anoth minut get food kept forget thing', 'love margarita', 'thi first vega buffet not disappoint', 'veri good though', 'the one note ventil could use upgrad', 'great pork sandwich', 'don wast time', 'total letdown i would much rather go camelback flower shop cartel coffe', 'third chees friend burger cold', 'we enjoy pizza brunch', 'the steak well trim also perfectli cook', 'we group claim would handl us beauti', 'i love', 'we ask bill leav without eat bring either', 'thi place jewel la vega exactli i hope find nearli ten year live', 'seafood limit boil shrimp crab leg crab leg definit not tast fresh', 'the select food not best', 'delici i absolut back', 'thi small famili restaur fine dine establish', 'they toro tartar cavier extraordinari i like thinli slice wagyu white truffl', 'i dont think i back long time', 'it attach ga station rare good sign', 'how awesom', 'i back mani time soon', 'the menu much good stuff could not decid', 'wors humili worker right front bunch horribl name call', 'conclus veri fill meal', 'their daili special alway hit group', 'and tragedi struck', 'the pancak also realli good pretti larg', 'thi first crawfish experi delici', 'their monster chicken fri steak egg time favorit', 'waitress sweet funni', 'i also tast mom multi grain pumpkin pancak pecan butter amaz fluffi delici', 'i rather eat airlin food serious', 'cant say enough good thing place', 'the ambianc incred', 'the waitress manag friendli', 'i would not recommend place', 'overal i impress noca', 'my gyro basic lettuc', 'terribl servic', 'thoroughli disappoint', 'i much pasta i love homemad hand made pasta thin pizza', 'give tri happi', 'by far best cheesecurd ever', 'reason price also', 'everyth perfect night', 'the food good typic bar food', 'drive get', 'at first glanc love bakeri cafe nice ambianc clean friendli staff', 'anyway i not think go back', 'point finger item menu order disappoint', 'oh thing beauti restaur', 'if gone go now', 'a greasi unhealthi meal', 'first time might last', 'those burger amaz', 'similarli deliveri man not say word apolog food minut late', 'and way expens', 'be sure order dessert even need pack go tiramisu cannoli die', 'thi first time i wait next', 'the bartend also nice', 'everyth good tasti', 'thi place two thumb way', 'the best place vega breakfast check sat sun', 'if love authent mexican food want whole bunch interest yet delici meat choos need tri place', 'terribl manag', 'an excel new restaur experienc frenchman', 'if zero star i would give zero star', 'great steak great side great wine amaz dessert', 'worst martini ever', 'the steak shrimp opinion best entre gc', 'i opportun today sampl amaz pizza', 'we wait thirti minut seat although vacant tabl folk wait', 'the yellowtail carpaccio melt mouth fresh', 'i tri go back even empti', 'no i go eat potato i found stranger hair', 'just spici enough perfect actual', 
'last night second time dine i happi i decid go back', 'not even hello right', 'the dessert bit strang', 'my boyfriend i came first time recent trip vega could not pleas qualiti food servic', 'i realli recommend place go wrong donut place', 'nice ambianc', 'i would recommend save room', 'i guess mayb went night disgrac', 'howev recent experi particular locat not good', 'i know not like restaur someth', 'avoid thi establish', 'i think restaur suffer not tri hard enough', 'all tapa dish delici', 'i heart place', 'my salad bland vinegrett babi green heart palm', 'after two i felt disgust', 'a good time', 'i believ place great stop huge belli hanker sushi', 'gener portion great tast', 'i never go back place never ever recommend place anyon', 'the server went back forth sever time not even much are help', 'food delici', 'an hour serious', 'i consid theft', 'eew thi locat need complet overhaul', 'we recent wit poor qualiti manag toward guest well', 'wait wait wait', 'he also came back check us regularli excel servic', 'our server super nice check us mani time', 'the pizza tast old super chewi not good way', 'i swung give tri deepli disappoint', 'servic good compani better', 'the staff also friendli effici', 'as servic i fan quick serv nice folk', 'boy sucker dri', 'over rate', 'if look authent thai food go els', 'their steak recommend', 'after i pull car i wait anoth minut acknowledg', 'great food great servic clean friendli set', 'all i assur i back', 'i hate thing much cheap qualiti black oliv', 'my breakfast perpar great beauti present giant slice toast lightli dust powder sugar', 'the kid play area nasti', 'great place fo take eat', 'the waitress friendli happi accomod vegan veggi option', 'omg i felt like i never eaten thai food dish', 'it extrem crumbi pretti tasteless', 'it pale color instead nice char no flavor', 'the crouton also tast homemad extra plu', 'i got home see driest damn wing ever', 'it regular stop trip phoenix', 'i realli enjoy crema caf expand i even told friend best breakfast', 'not good money', 'i miss wish one philadelphia', 'we got sit fairli fast end wait minut place order anoth minut food arriv', 'they also best chees crisp town', 'good valu great food great servic', 'couldn ask satisfi meal', 'the food good', 'it awesom', 'i want leav', 'we made drive way north scottsdal i not one bit disappoint', 'i not eat', 'the owner realli realli need quit soooooo cheap let wrap freak sandwich two paper not one', 'i check place coupl year ago not impress', 'the chicken i got definit reheat ok wedg cold soggi', 'sorri i not get food anytim soon', 'an absolut must visit', 'the cow tongu cheek taco amaz', 'my friend not like bloodi mari', 'despit hard i rate busi actual rare give star', 'they realli want make experi good one', 'i not return', 'i chicken pho tast bland', 'veri disappoint', 'the grill chicken tender yellow saffron season', 'drive thru mean not want wait around half hour food somehow end go make us wait wait', 'pretti awesom place', 'ambienc perfect', 'best luck rude non custom servic focus new manag', 'ani grandmoth make roast chicken better one', 'i ask multipl time wine list time ignor i went hostess got one', 'the staff alway super friendli help especi cool bring two small boy babi', 'four star food guy blue shirt great vibe still let us eat', 'the roast beef sandwich tast realli good', 'same even i drastic sick', 'high qualiti chicken chicken caesar salad', 'order burger rare came done', 'we promptli greet seat', 'tri go lunch madhous', 'i proven dead wrong 
sushi bar not qualiti great servic fast food impecc', 'after wait hour seat i not greatest mood', 'thi good joint', 'the macaron insan good', 'i not eat', 'our waiter attent friendli inform', 'mayb cold would somewhat edibl', 'thi place lot promis fail deliv', 'veri bad experi', 'what mistak', 'food averag best', 'great food', 'we go back anytim soon', 'veri veri disappoint order big bay plater', 'great place relax awesom burger beer', 'it perfect sit famili meal get togeth friend', 'not much flavor poorli construct', 'the patio seat comfort', 'the fri rice dri well', 'hand favorit italian restaur', 'that scream legit book somethat also pretti rare vega', 'it not fun experi', 'the atmospher great love duo violinist play song request', 'i person love hummu pita baklava falafel baba ganoush amaz eggplant', 'veri conveni sinc stay mgm', 'the owner super friendli staff courteou', 'both great', 'eclect select', 'the sweet potato tot good onion ring perfect close i', 'the staff attent', 'and chef gener time even came around twice take pictur', 'the owner use work nobu place realli similar half price', 'googl mediocr i imagin smashburg pop', 'dont go', 'i promis disappoint', 'as sushi lover avoid place mean', 'what great doubl cheeseburg', 'awesom servic food', 'a fantast neighborhood gem', 'i wait go back', 'the plantain worst i ever tast', 'it great place i highli recommend', 'servic slow not attent', 'i gave star i give star', 'your staff spend time talk', 'dessert panna cotta amaz', 'veri good food great atmospher', 'damn good steak', 'total brunch fail', 'price reason flavor spot sauc home made slaw not drench mayo', 'the decor nice piano music soundtrack pleasant', 'the steak amaz rge fillet relleno best seafood plate ever', 'good food good servic', 'it absolut amaz', 'i probabl back honest', 'definit back', 'the sergeant pepper beef sandwich auju sauc excel sandwich well', 'hawaiian breez mango magic pineappl delight smoothi i tri far good', 'went lunch servic slow', 'we much say place walk expect amaz quickli disappoint', 'i mortifi', 'needless say never back', 'anyway the food definit not fill price pay expect', 'the chip came drip greas mostli not edibl', 'i realli impress strip steak', 'have go sinc everi meal awesom', 'our server nice attent serv staff', 'the cashier friendli even brought food', 'i work hospit industri paradis valley refrain recommend cibo longer', 'the atmospher fun', 'would not recommend other', 'servic quick even go order like like', 'i mean realli get famou fish chip terribl', 'that said mouth belli still quit pleas', 'not thing', 'thumb up', 'if read pleas go', 'i love grill pizza remind legit italian pizza', 'onli pro larg seat area nice bar area great simpl drink menu the best brick oven pizza homemad dough', 'they realli nice atmospher', 'tonight i elk filet special suck', 'after one bite i hook', 'we order old classic new dish go time sore disappoint everyth', 'cute quaint simpl honest', 'the chicken delici season perfect fri outsid moist chicken insid', 'the food great alway compliment chef', 'special thank dylan t recommend order all yummi tummi', 'awesom select beer', 'great food awesom servic', 'one nice thing ad gratuiti bill sinc parti larger expect tip', 'a fli appl juic a fli', 'the han nan chicken also tasti', 'as servic i thought good', 'the food bare lukewarm must sit wait server bring us', 'ryan bar definit one edinburgh establish i revisit', 'nicest chines restaur i', 'overal i like food servic', 'they also serv indian naan bread hummu spici pine 
nut sauc world', 'probabl never come back recommend', 'friend pasta also bad bare touch', 'tri airport experi tasti food speedi friendli servic', 'i love decor chines calligraphi wall paper', 'never anyth complain', 'the restaur clean famili restaur feel', 'it way fri', 'i not sure long stood long enough begin feel awkwardli place', 'when i open sandwich i impress not good way', 'will not back', 'there warm feel servic i felt like guest special treat', 'an extens menu provid lot option breakfast', 'i alway order vegetarian menu dinner wide array option choos', 'i watch price inflat portion get smaller manag attitud grow rapidli', 'wonder lil tapa ambienc made feel warm fuzzi insid', 'i got enjoy seafood salad fabul vinegrett', 'the wonton thin not thick chewi almost melt mouth', 'level spici perfect spice whelm soup', 'we sat right time server get go fantast', 'main thing i enjoy crowd older crowd around mid', 'when i side town definit spot i hit', 'i wait minut get drink longer get arepa', 'thi great place eat', 'the jalapeno bacon soooo good', 'the servic poor that nice', 'food good servic good price good', 'the place not clean food oh stale', 'the chicken dish ok beef like shoe leather', 'but servic beyond bad', 'i happi', 'tast like dirt', 'one place phoenix i would defin go back', 'the block amaz', 'it close hous low key non fanci afford price good food', 'both hot sour egg flower soup absolut star', 'my sashimi poor qualiti soggi tasteless', 'great time famili dinner sunday night', 'food not tasti not say real tradit hunan style', 'what bother slow servic', 'the flair bartend absolut amaz', 'their frozen margarita way sugari tast', 'these good order twice', 'so nutshel the restaraunt smell like combin dirti fish market sewer', 'my girlfriend veal bad', 'unfortun not good', 'i pretti satifi experi', 'join club get awesom offer via email', 'perfect someon like beer ice cold case even colder', 'bland flavorless good way describ bare tepid meat', 'the chain i fan beat place easili', 'the nacho must have', 'we not come back', 'i mani word say place everyth pretti well', 'the staff super nice quick even crazi crowd downtown juri lawyer court staff', 'great atmospher friendli fast servic', 'when i receiv pita huge lot meat thumb', 'onc food arriv meh', 'pay hot dog fri look like came kid meal wienerschnitzel not idea good meal', 'the classic main lobster roll fantast', 'my brother law work mall ate day guess sick night', 'so good i go review place twice herea tribut place tribut event held last night', 'the chip salsa realli good salsa fresh', 'thi place great', 'mediocr food', 'onc get insid impress place', 'i super pissd', 'and servic super friendli', 'whi sad littl veget overcook', 'thi place nice surpris', 'they golden crispi delici', 'i high hope place sinc burger cook charcoal grill unfortun tast fell flat way flat', 'i could eat bruschetta day devin', 'not singl employe came see ok even need water refil final serv us food', 'lastli mozzarella stick best thing order', 'the first time i ever came i amaz experi i still tell peopl awesom duck', 'the server neglig need made us feel unwelcom i would not suggest place', 'the servic terribl though', 'thi place overpr not consist boba realli overpr', 'it pack', 'i love place', 'i say dessert yummi', 'the food terribl', 'the season fruit fresh white peach pure', 'it kept get wors wors i offici done', 'thi place honestli blown', 'but i definit would not eat', 'do not wast money', 'i love put food nice plastic contain oppos cram littl paper 
takeout box', 'the cr pe delic thin moist', 'aw servic', 'won ever go', 'food qualiti horribl', 'for price i think place i would much rather gone', 'the servic fair best', 'i love sushi i found kabuki price hip servic', 'do favor stay away dish', 'veri poor servic', 'no one tabl thought food averag worth wait', 'best servic food ever maria server good friendli made day', 'they excel', 'i paid bill not tip i felt server terribl job', 'just lunch great experi', 'i never bland food surpris consid articl read focus much spice flavor', 'food way overpr portion fuck small', 'i recent tri caballero i back everi week sinc', 'buck head realli expect better food', 'the food came good pace', 'i ate twice last visit especi enjoy salmon salad', 'i back', 'we could not believ dirti oyster', 'thi place deserv star', 'i would not recommend place', 'in fact i go round star awesom', 'to disbelief dish qualifi worst version food i ever tast', 'bad day not i low toler rude custom servic peopl job nice polit wash dish otherwis', 'potato great biscuit', 'i probabl would not go', 'so flavor perfect amount heat', 'the price reason servic great', 'the wife hate meal coconut shrimp friend realli not enjoy meal either', 'my fella got huevo ranchero look appeal', 'went happi hour great list wine', 'some may say buffet pricey i think get pay place get quit lot', 'i probabl come back', 'worst food servic i', 'thi place pretti good nice littl vibe restaur', 'talk great custom servic cours back', 'hot dish not hot cold dish close room temp i watch staff prepar food bare hand glove everyth deep fri oil', 'i love fri bean', 'alway pleasur deal', 'they plethora salad sandwich everyth i tri get seal approv', 'thi place awesom want someth light healthi summer', 'for sushi strip place go', 'the servic great even manag came help tabl', 'the feel dine room colleg cook cours high class dine servic slow best', 'i start review two star i edit give one', 'worst sushi ever eat besid costco', 'all excel restaur highlight great servic uniqu menu beauti set', 'my boyfriend sat bar complet delight experi', 'weird vibe owner', 'there hardli meat', 'i better bagel groceri store', 'go to place gyro', 'i love owner chef one authent japanes cool dude', 'now burger good pizza use amaz doughi flavorless', 'i found six inch long piec wire salsa', 'the servic terribl food mediocr', 'we defin enjoy', 'i order albondiga soup warm tast like tomato soup frozen meatbal', 'on three differ occas i ask well done medium well three time i got bloodiest piec meat plate', 'i two bite refus eat anymor', 'the servic extrem slow', 'after minut wait i got tabl', 'serious killer hot chai latt', 'no allergi warn menu waitress absolut clue meal not contain peanut', 'my boyfriend tri mediterranean chicken salad fell love', 'their rotat beer tap also highlight place', 'price bit concern mellow mushroom', 'worst thai ever', 'if stay vega must get breakfast least', 'i want first say server great perfect servic', 'the pizza select good', 'i strawberri tea good', 'highli unprofession rude loyal patron', 'overal great experi', 'spend money elsewher', 'their regular toast bread equal satisfi occasion pat butter mmmm', 'the buffet bellagio far i anticip', 'and drink weak peopl', 'my order not correct', 'also i feel like chip bought not made hous', 'after disappoint dinner went elsewher dessert', 'the chip sal amaz', 'we return', 'thi new fav vega buffet spot', 'i serious cannot believ owner mani unexperienc employe run around like chicken head cut', 'veri sad', 'felt insult 
disrespect could talk judg anoth human like', 'how call steakhous properli cook steak i understand', 'i not impress concept food', 'the thing i crazi guacamol i like pur ed', 'there realli noth postino hope experi better', 'i got food poison buffet', 'they brought fresh batch fri i think yay someth warm', 'what should hilari yummi christma eve dinner rememb biggest fail entir trip us', 'needless say i go back anytim soon', 'thi place disgust', 'everi time i eat i see care teamwork profession degre', 'the ri style calamari joke', 'howev much garlic fondu bare edibl', 'i could bare stomach meal complain busi lunch', 'it bad i lost heart finish', 'it also took forev bring us check ask', 'we one make scene restaur i get definit lost love one', 'disappoint experi', 'the food par denni say not good', 'if want wait mediocr food downright terribl servic place', 'waaaaaayyyyyyyyyi rate i say', 'we go back', 'the place fairli clean food simpli worth', 'thi place lack style', 'the sangria half glass wine full ridicul', 'don bother come', 'the meat pretti dri i slice brisket pull pork', 'the build seem pretti neat bathroom pretti trippi i eat', 'it equal aw', 'probabl not hurri go back', 'slow seat even reserv', 'not good stretch imagin', 'the cashew cream sauc bland veget undercook', 'the chipolt ranch dip saus tasteless seem thin water heat', 'it bit sweet not realli spici enough lack flavor', 'i veri disappoint', 'thi place horribl way overpr', 'mayb vegetarian fare i twice i thought averag best', 'it busi know', 'the tabl outsid also dirti lot time worker not alway friendli help menu', 'the ambianc not feel like buffet set douchey indoor garden tea biscuit', 'con spotti servic', 'the fri not hot neither burger', 'but came back cold', 'then food came disappoint ensu', 'the real disappoint waiter', 'my husband said rude not even apolog bad food anyth', 'the reason eat would fill night bing drink get carb stomach', 'insult profound deuchebaggeri go outsid smoke break serv solidifi', 'if someon order two taco think may part custom servic ask combo ala cart', 'she quit disappoint although blame need place door', 'after rave review i wait eat disappoint', 'del taco pretti nasti avoid possibl', 'it not hard make decent hamburg', 'but i like', 'hell i go back', 'we gotten much better servic pizza place next door servic receiv restaur', 'i know big deal place i back ya', 'i immedi said i want talk manag i not want talk guy shot firebal behind bar', 'the ambianc much better', 'unfortun set us disapppoint entre', 'the food good', 'your server suck wait correct server heimer suck', 'what happen next pretti put', 'bad caus i know famili own i realli want like place', 'overpr get', 'i vomit bathroom mid lunch', 'i kept look time soon becom minut yet still food', 'i place eat circumst would i ever return top list', 'we start tuna sashimi brownish color obvious fresh', 'food averag', 'it sure beat nacho movi i would expect littl bit come restaur', 'all ha long bay bit flop', 'the problem i charg sandwich bigger subway sub offer better amount veget', 'shrimp when i unwrap i live mile brushfir liter ice cold', 'it lack flavor seem undercook dri', 'it realli impress place close', 'i would avoid place stay mirag', 'the refri bean came meal dri crusti food bland', 'spend money time place els', 'a ladi tabl next us found live green caterpillar in salad', 'present food aw', 'i tell disappoint i', 'i think food flavor textur lack', 'appetit instantli gone', 'overal i not impress would not go back', 'the whole experi 
underwhelm i think go ninja sushi next time', 'then i wast enough life pour salt wound draw time took bring check']
###Markdown
Creating the Bag of Words model
###Code
from sklearn.feature_extraction.text import CountVectorizer
# Keep only the 1600 most frequent tokens as features
cv = CountVectorizer(max_features=1600)
# Build the document-term matrix from the cleaned corpus
X = cv.fit_transform(corpus).toarray()
# The sentiment labels are in the last column of the original dataset
y = dataset.iloc[:, -1].values
print(len(X[0]))
###Output
1600
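###Markdown
As a quick sanity check on the vectorizer, it helps to inspect a few of the 1600 tokens that were kept. A minimal sketch, assuming scikit-learn >= 1.0 (older versions expose `get_feature_names()` instead of `get_feature_names_out()`):
###Code
# Peek at some learned vocabulary terms and the most frequent tokens overall
feature_names = cv.get_feature_names_out()
print(feature_names[:20])
token_totals = X.sum(axis=0)
top_idx = token_totals.argsort()[-10:][::-1]
print([(feature_names[i], int(token_totals[i])) for i in top_idx])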
###Markdown
Splitting the dataset into the Training set and Test set
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
###Output
_____no_output_____
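###Markdown
Before fitting a model, a quick check of the label balance in each split confirms the 80/20 split did not skew the classes. A small sketch (it assumes `numpy` is already imported as `np`, as used in the prediction cell below):
###Code
# Count the occurrences of each class label in both splits
print("train:", dict(zip(*np.unique(y_train, return_counts=True))))
print("test: ", dict(zip(*np.unique(y_test, return_counts=True))))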
###Markdown
Training the classifier on the Training set (a Random Forest is fitted here; Naive Bayes, SVM and Logistic Regression are left as commented-out alternatives)
###Code
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
# classifier = GaussianNB()
classifier = RandomForestClassifier(n_estimators=1000, criterion='entropy')
# classifier = SVC(kernel='rbf')
# classifier = LogisticRegression()
classifier.fit(X_train, y_train)
###Output
_____no_output_____
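###Markdown
Several alternative classifiers are left commented out in the cell above. A quick way to choose between them is k-fold cross-validation on the training split; the sketch below is illustrative (the candidate models and the 5-fold setting are not part of the original notebook):
###Code
from sklearn.model_selection import cross_val_score
candidates = {
    'GaussianNB': GaussianNB(),
    'RandomForest': RandomForestClassifier(n_estimators=1000, criterion='entropy'),
    'SVC (rbf)': SVC(kernel='rbf'),
    'LogisticRegression': LogisticRegression(),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X_train, y_train, cv=5, scoring='accuracy')
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")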
###Markdown
Predicting the Test set results
###Code
y_pred = classifier.predict(X_test)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
###Output
[[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 0]
[1 1]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 1]
[1 1]
[1 1]
[0 0]
[0 1]
[0 1]
[0 1]
[0 1]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[1 1]
[1 1]
[0 0]
[0 1]
[0 0]
[0 0]
[0 0]
[1 0]
[1 0]
[1 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 1]
[0 0]
[0 0]
[0 1]
[0 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[1 0]
[0 1]
[0 0]
[0 1]
[0 1]
[0 1]
[0 0]
[1 1]
[1 1]
[1 1]
[0 1]
[0 0]
[0 0]
[0 1]
[1 1]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[0 1]
[0 1]
[1 0]
[0 1]
[1 1]
[1 1]
[1 0]
[1 1]
[0 0]
[0 1]
[1 1]
[0 0]
[0 1]
[0 1]
[1 1]
[0 0]
[1 0]
[0 1]
[1 0]
[1 1]
[1 1]
[1 1]
[1 1]
[0 1]
[0 0]
[1 1]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[0 0]
[0 0]
[1 1]
[1 1]
[1 1]
[1 1]
[1 1]
[0 0]
[1 1]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[0 1]
[1 1]
[0 1]
[1 1]
[1 1]
[1 1]
[0 0]
[0 0]
[1 1]
[0 1]
[1 1]
[0 0]
[0 0]
[0 0]
[1 1]
[0 1]
[1 0]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[0 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 0]
[0 0]
[0 1]
[0 0]
[1 1]
[1 1]
[0 0]
[0 1]]
###Markdown
Making the Confusion Matrix
###Code
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
###Output
[[87 10]
[36 67]]
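###Markdown
Accuracy alone hides the trade-off between false positives and false negatives that the confusion matrix exposes, so it is worth printing the per-class precision, recall and F1 as well. A short sketch using only quantities already computed above (the label names assume 0 = negative review, 1 = positive review):
###Code
from sklearn.metrics import classification_report
print(classification_report(y_test, y_pred, target_names=['negative', 'positive']))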
|
workspace/pcb_data_aug/Process&Train_PCB.ipynb | ###Markdown
Data Augmentation on the PCB Defect Dataset
This notebook contains all the code needed to apply TAO augmentation to subsets of the PCB defect dataset, to showcase how augmentation can be used to improve KPIs for small datasets. This notebook requires the TAO Launcher, Docker and NGC to be set up; the GitHub README has steps on setting up the prerequisites. This notebook also requires preprocess_pcb.py to be in the same directory to function.
This notebook takes the following steps:
1) Download and unpack the PCB defect dataset
2) Convert the dataset to KITTI format
3) Split the dataset into test and train subsets
4) Map local directories to the TAO launcher
5) Generate offline augmentation spec files and apply augmentation to the training sets
6) Generate TF Records for the test and training sets
7) Download pretrained object detection weights needed for the trainings
8) Launch trainings and evaluation
The last section of this notebook contains all the commands needed to run training and evaluation on all 6 datasets. Steps 1-7 only need to be run once. The trainings in step 8 can be run in any order once steps 1-7 have successfully run. A common test set of 500 images is used for validation on all trainings.
Datasets:
100 subset x1
100 subset x10
100 subset x20
500 subset x1
500 subset x10
500 subset x20
###Code
!python3 -m pip install matplotlib
import os
from preprocess_pcb import convert_annotation, create_subset
#paths relative to local repository
repo_home = os.path.join(os.getcwd(), "../../")
model_home = os.path.join(repo_home, "workspace/models")
dataset_home = os.path.join(repo_home, "datasets/pcb_defect")
exp_home = os.path.join(repo_home, "workspace/pcb_data_aug")
#paths for inside the container
dataset_home_cont = "/datasets/pcb_defect/"
exp_home_cont = "/tlt_exp/pcb_data_aug/"
###Output
_____no_output_____
###Markdown
Download and unpack the PCB defect dataset
###Code
%cd $dataset_home
#download and unzip
!wget https://www.dropbox.com/s/h0f39nyotddibsb/VOC_PCB.zip
!unzip VOC_PCB.zip
###Output
_____no_output_____
###Markdown
Convert the dataset to KITTI format
###Code
#setup folders for dataset images and labels
os.makedirs("original/images", exist_ok=True)
os.makedirs("original/labels", exist_ok=True)
!cp -r VOC_PCB/JPEGImages/. original/images
#Setup Paths and make label folder
xml_label_path = "VOC_PCB/Annotations"
kitti_label_output = "original/labels"
#Convert labels to kitti and put into output folder
for x in os.listdir(xml_label_path):
current_label_path = os.path.join(xml_label_path, x)
convert_annotation(current_label_path, kitti_label_output)
###Output
_____no_output_____
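###Markdown
To sanity-check the conversion, it helps to print one of the generated label files; a valid KITTI object line holds the class name followed by the truncation, occlusion, alpha, bounding-box, dimension, location and rotation fields. A quick inspection sketch:
###Code
# Print the first converted KITTI label file
sample_label = sorted(os.listdir(kitti_label_output))[0]
with open(os.path.join(kitti_label_output, sample_label)) as f:
    print(sample_label)
    print(f.read())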
###Markdown
Split the dataset into test and train subsets
###Code
#Setup folders for dataset subset
test_500 = os.path.join(exp_home, "test_500_list.txt")
train_100 = os.path.join(exp_home, "train_100_list.txt")
train_500 = os.path.join(exp_home, "train_500_list.txt")
os.makedirs("500_subset_test_x1", exist_ok=True)
os.makedirs("100_subset_train_x1", exist_ok=True)
os.makedirs("500_subset_train_x1", exist_ok=True)
#Create the subsets based on predefined lists
create_subset("original", test_500, "500_subset_test_x1")
create_subset("original", train_100, "100_subset_train_x1")
create_subset("original", train_500, "500_subset_train_x1")
## Map local directories to the TAO launcher
# Mapping up the local directories to the TAO docker.
import json
mounts_file = os.path.expanduser("~/.tao_mounts.json")
# Define the dictionary with the mapped drives
drive_map = {
"Mounts": [
# Mapping the data directory
{
"source": os.path.join(repo_home, "datasets"),
"destination": "/datasets"
},
# Mapping the specs directory.
{
"source": os.path.join(repo_home, "workspace"),
"destination": "/tlt_exp"
},
]
}
# Writing the mounts file.
with open(mounts_file, "w") as mfile:
json.dump(drive_map, mfile, indent=4)
###Output
_____no_output_____
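###Markdown
A mis-mapped mount is a common reason for TAO commands failing to find data or spec files, so echoing the file that was just written is a cheap verification step (simple sketch):
###Code
# Confirm the drive mappings the TAO launcher will use
with open(mounts_file) as mfile:
    print(mfile.read())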
###Markdown
Generate offline augmentation spec files and apply augmentation to the training sets
###Code
from preprocess_pcb import gen_random_aug_spec, combine_kitti, visualize_images
from random import randint
#Input dataset folder to augment, augment output folder and number of augmentations. Requires local paths and container paths
#For each augment a randomized spec file and augmented dataset is produced
#Also outputs a dataset with all combined augmentations
def generate_augments(dataset_folder, dataset_folder_cont, output_folder, output_folder_cont, num_augments):
for i in range(0,num_augments):
spec_out = os.path.join(output_folder, "aug_spec" + str(i) + ".txt")
spec_out_cont = os.path.join(output_folder_cont, "aug_spec" + str(i) + ".txt")
gen_random_aug_spec(600,600,"jpg", spec_out)
!cat $spec_out
aug_folder = os.path.join(output_folder, "aug" + str(i))
aug_folder_cont = os.path.join(output_folder_cont, "aug" + str(i))
!tao augment -a $spec_out_cont -o $aug_folder_cont -d $dataset_folder_cont
if i == 0:
d1 = dataset_folder
d2 = aug_folder
d3 = os.path.join(output_folder, "combined_x2")
combine_kitti(d1,d2,d3)
else:
d1 = os.path.join(output_folder, "combined_x" + str(i+1))
d2 = aug_folder
d3 = os.path.join(output_folder, "combined_x" + str(i+2))
combine_kitti(d1,d2,d3)
#generate augmentations for 100 image subset
dataset_folder = "100_subset_train_x1" #folder for the existing dataset to be augmented. This folder will not be modified
dataset_folder_cont = os.path.join(dataset_home_cont, "100_subset_train_x1")
output_folder = "100_subset_train_aug" #folder for the augmented output. Does not need to exist
output_folder_cont = os.path.join(dataset_home_cont, output_folder)
num_augments = 19 #number of augmented datasets to generate
os.makedirs(output_folder, exist_ok=True)
generate_augments(dataset_folder,dataset_folder_cont,output_folder, output_folder_cont, num_augments)
#Display some of the augmented images
#Rerun to see new images each time
aug_choice = str(randint(0,num_augments-1))
visualize_images(os.path.join(output_folder, "aug"+aug_choice+"/images"), num_images=8)
#generate augmentations for 500 image subset
dataset_folder = "500_subset_train_x1" #folder for the existing dataset to be augmented. This folder will not be modified
dataset_folder_cont = os.path.join(dataset_home_cont, "500_subset_train_x1")
output_folder = "500_subset_train_aug" #folder for the augmented output. Does not need to exist
output_folder_cont = os.path.join(dataset_home_cont, "500_subset_train_aug")
num_augments = 19 #number of augmented datasets to generate
os.makedirs(output_folder, exist_ok=True)
generate_augments(dataset_folder, dataset_folder_cont, output_folder, output_folder_cont, num_augments)
#Display some of the augmented images
#Rerun to see new images each time
aug_choice = str(randint(0,num_augments-1))
visualize_images(os.path.join(output_folder, "aug"+aug_choice+"/images"), num_images=8)
#Place important datasets in the dataset folder
!mv 100_subset_train_aug/combined_x10 100_subset_train_x10
!mv 100_subset_train_aug/combined_x20 100_subset_train_x20
!mv 500_subset_train_aug/combined_x10 500_subset_train_x10
!mv 500_subset_train_aug/combined_x20 500_subset_train_x20
###Output
_____no_output_____
###Markdown
Generate TF Records for the test and training sets
###Code
#Returns the tf record config as a string with the given dataset path
#root directory path must be inside the container
def gen_tf_spec(dataset_path):
spec_str = f"""
kitti_config {{
root_directory_path: "/datasets/pcb_defect/{dataset_path}"
image_dir_name: "images"
label_dir_name: "labels"
image_extension: ".jpg"
partition_mode: "random"
num_partitions: 2
val_split: 20
num_shards: 10
}}
"""
return spec_str
#Loop through all datasets to generate tf records
dataset_paths = ["500_subset_test_x1", "500_subset_train_x1", "500_subset_train_x10", "500_subset_train_x20", "100_subset_train_x1", "100_subset_train_x10", "100_subset_train_x20"]
for path in dataset_paths:
record_path = os.path.join(dataset_home, path, "tfrecord_spec.txt")
record_path_cont = os.path.join(dataset_home_cont, path, "tfrecord_spec.txt")
record_output = os.path.join(dataset_home, path, "tfrecords_rcnn/")
record_output_cont = os.path.join(dataset_home_cont, path, "tfrecords_rcnn/")
print("************" + record_path)
with open(record_path, "w+") as spec:
spec.write(gen_tf_spec(path))
!tao faster_rcnn dataset_convert -d $record_path_cont -o $record_output_cont
###Output
_____no_output_____
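###Markdown
If a conversion fails, the failure usually only surfaces much later during training, so it is worth counting the generated shard files per dataset first. A small verification sketch using the local paths defined above:
###Code
for path in dataset_paths:
    record_dir = os.path.join(dataset_home, path, "tfrecords_rcnn")
    n_files = len(os.listdir(record_dir)) if os.path.isdir(record_dir) else 0
    print(f"{path}: {n_files} tfrecord files")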
###Markdown
Downloads pretrained object detection weights needed for the trainings
###Code
#requires NGC to be configured
os.makedirs(os.path.join(model_home, "fasterRCNN"), exist_ok=True)
%cd $model_home/fasterRCNN
!ngc registry model download-version "nvidia/tlt_pretrained_object_detection:resnet18"
###Output
_____no_output_____
###Markdown
Launch trainings and evaluation
Each cell in this section will train and evaluate the model on one dataset in the experiment. The results will be output to the respective experiment folder. The trainings may take several hours depending on your hardware.
###Code
experiments_cont = os.path.join(exp_home_cont, "experiments")
experiments = os.path.join(exp_home, "experiments")
!tao faster_rcnn train -e $experiments_cont/offline_aug/100_subset_train_x1/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_aug/100_subset_train_x1/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_aug/100_subset_train_x1/eval_log.txt
!cat $experiments/offline_aug/100_subset_train_x1/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_aug/100_subset_train_x10/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_aug/100_subset_train_x10/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_aug/100_subset_train_x10/eval_log.txt
!cat $experiments/offline_aug/100_subset_train_x10/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_aug/100_subset_train_x20/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_aug/100_subset_train_x20/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_aug/100_subset_train_x20/eval_log.txt
!cat $experiments/offline_aug/100_subset_train_x20/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_aug/500_subset_train_x1/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_aug/500_subset_train_x1/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_aug/500_subset_train_x1/eval_log.txt
!cat $experiments/offline_aug/500_subset_train_x1/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_aug/500_subset_train_x10/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_aug/500_subset_train_x10/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_aug/500_subset_train_x10/eval_log.txt
!cat $experiments/offline_aug/500_subset_train_x10/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_aug/500_subset_train_x20/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_aug/500_subset_train_x20/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_aug/500_subset_train_x20/eval_log.txt
!cat $experiments/offline_aug/500_subset_train_x20/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_online_aug/100_subset_train_x1/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_online_aug/100_subset_train_x1/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_online_aug/100_subset_train_x1/eval_log.txt
!cat $experiments/offline_online_aug/100_subset_train_x1/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_online_aug/100_subset_train_x10/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_online_aug/100_subset_train_x10/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_online_aug/100_subset_train_x10/eval_log.txt
!cat $experiments/offline_online_aug/100_subset_train_x10/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_online_aug/100_subset_train_x20/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_online_aug/100_subset_train_x20/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_online_aug/100_subset_train_x20/eval_log.txt
!cat $experiments/offline_online_aug/100_subset_train_x20/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_online_aug/500_subset_train_x1/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_online_aug/500_subset_train_x1/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_online_aug/500_subset_train_x1/eval_log.txt
!cat $experiments/offline_online_aug/500_subset_train_x1/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_online_aug/500_subset_train_x10/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_online_aug/500_subset_train_x10/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_online_aug/500_subset_train_x10/eval_log.txt
!cat $experiments/offline_online_aug/500_subset_train_x10/eval_log.txt
!tao faster_rcnn train -e $experiments_cont/offline_online_aug/500_subset_train_x20/training_spec.txt -k tlt_encode
!tao faster_rcnn evaluate -e $experiments_cont/offline_online_aug/500_subset_train_x20/training_spec.txt -k tlt_encode --log_file $experiments_cont/offline_online_aug/500_subset_train_x20/eval_log.txt
!cat $experiments/offline_online_aug/500_subset_train_x20/eval_log.txt
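# Once the runs above complete, the per-dataset results can be pulled out of the
# eval_log.txt files to compare the x1/x10/x20 augmentation levels side by side.
# Illustrative sketch only: it assumes each log contains a line with "mAP" followed
# by a decimal value, which may differ between TAO versions.
import re
for root, _, files in os.walk(experiments):
    if "eval_log.txt" in files:
        with open(os.path.join(root, "eval_log.txt")) as log:
            matches = re.findall(r"mAP.*?([0-9]*\.[0-9]+)", log.read())
        if matches:
            print(f"{os.path.relpath(root, experiments)}: mAP {matches[-1]}")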
###Output
_____no_output_____ |
tutorials/notebook/cx_site_chart_examples/heatmap_6.ipynb | ###Markdown
Example: CanvasXpress heatmap Chart No. 6
This example page demonstrates how to use the Python package to create a chart that matches the CanvasXpress online example located at: https://www.canvasxpress.org/examples/heatmap-6.html
This example is generated using the reproducible JSON obtained from the above page and the `canvasxpress.util.generator.generate_canvasxpress_code_from_json_file()` function. Everything required for the chart to render is included in the code below. Simply run the code block.
###Code
from canvasxpress.canvas import CanvasXpress
from canvasxpress.js.collection import CXEvents
from canvasxpress.render.jupyter import CXNoteBook
cx = CanvasXpress(
render_to="heatmap6",
data={
"z": {
"Type": [
"Pro",
"Tyr",
"Pho",
"Kin",
"Oth",
"Pro",
"Tyr",
"Pho",
"Kin",
"Oth",
"Pro",
"Tyr",
"Pho",
"Kin",
"Oth",
"Pro",
"Tyr",
"Pho",
"Kin",
"Oth",
"Pro",
"Tyr",
"Pho",
"Kin",
"Oth",
"Pro",
"Tyr",
"Pho",
"Kin",
"Oth",
"Pro",
"Tyr",
"Pho",
"Kin",
"Oth",
"Pro",
"Tyr",
"Pho",
"Kin",
"Oth"
],
"Sens": [
1,
2,
3,
4,
1,
2,
3,
4,
1,
2,
3,
4,
1,
2,
3,
4,
1,
2,
3,
4,
1,
2,
3,
4,
1,
2,
3,
4,
1,
2,
3,
4,
1,
2,
3,
4,
1,
2,
3,
4
]
},
"x": {
"Treatment": [
"Control",
"Control",
"Control",
"Control",
"Control",
"TreatmentA",
"TreatmentB",
"TreatmentA",
"TreatmentB",
"TreatmentA",
"TreatmentB",
"TreatmentA",
"TreatmentB",
"TreatmentA",
"TreatmentB",
"TreatmentA",
"TreatmentB",
"TreatmentA",
"TreatmentB",
"TreatmentA",
"TreatmentB",
"TreatmentA",
"TreatmentB",
"TreatmentA",
"TreatmentB"
],
"Site": [
"Site1",
"Site1",
"Site1",
"Site1",
"Site1",
"Site2",
"Site2",
"Site2",
"Site2",
"Site2",
"Site2",
"Site2",
"Site2",
"Site2",
"Site2",
"Site3",
"Site3",
"Site3",
"Site3",
"Site3",
"Site3",
"Site3",
"Site3",
"Site3",
"Site3"
],
"Dose-Type": [
"",
"",
"",
"",
"",
"Dose1",
"Dose1",
"Dose2",
"Dose2",
"Dose3",
"Dose3",
"Dose4",
"Dose4",
"Dose5",
"Dose5",
"Dose1",
"Dose1",
"Dose2",
"Dose2",
"Dose3",
"Dose3",
"Dose4",
"Dose4",
"Dose5",
"Dose5"
],
"Dose": [
0,
0,
0,
0,
0,
5,
5,
10,
10,
15,
15,
20,
20,
25,
25,
5,
5,
10,
10,
15,
15,
20,
20,
25,
25
]
},
"y": {
"smps": [
"S1",
"S2",
"S3",
"S4",
"S5",
"S6",
"S7",
"S8",
"S9",
"S10",
"S11",
"S12",
"S13",
"S14",
"S15",
"S16",
"S17",
"S18",
"S19",
"S20",
"S21",
"S22",
"S23",
"S24",
"S25"
],
"vars": [
"V1",
"V2",
"V3",
"V4",
"V5",
"V6",
"V7",
"V8",
"V9",
"V10",
"V11",
"V12",
"V13",
"V14",
"V15",
"V16",
"V17",
"V18",
"V19",
"V20",
"V21",
"V22",
"V23",
"V24",
"V25",
"V26",
"V27",
"V28",
"V29",
"V30",
"V31",
"V32",
"V33",
"V34",
"V35",
"V36",
"V37",
"V38",
"V39",
"V40"
],
"data": [
[
0.784,
1.036,
-0.641,
1.606,
2.208,
3.879,
0.333,
2.265,
-1.55,
1.678,
-0.639,
-0.533,
-0.078,
0.433,
0.391,
1.013,
0.928,
0.812,
0.072,
3.564,
0.47,
1.836,
0.351,
3.139,
-2.207
],
[
0.222,
0.716,
0.993,
-0.913,
0.996,
1.235,
1.396,
1.817,
0.162,
1.137,
-0.126,
1.56,
1.003,
1.86,
0.43,
0.696,
0.777,
1.6,
0.175,
2.423,
0.044,
3.881,
-0.757,
1.486,
0.01
],
[
0.486,
2.15,
-0.069,
-0.468,
0.402,
0.725,
-1.697,
0.653,
0.101,
2.852,
-0.27,
0.414,
-0.789,
1.877,
1.555,
2.511,
0.07,
0.244,
-0.41,
2.345,
2.401,
-0.033,
0.951,
2.053,
0.725
],
[
-1.857,
0.698,
0.528,
1.024,
-0.035,
2.181,
-0.015,
3.68,
-1.13,
-0.842,
-1.759,
1.784,
-0.673,
0.147,
0.765,
1.585,
0.33,
1.481,
-0.362,
1.456,
-0.719,
0.961,
1.296,
2.375,
0.208
],
[
-1.19,
1.564,
-2.407,
0.642,
-0.51,
4.116,
-0.379,
0.786,
1.508,
3.119,
1.011,
1.54,
1.184,
1.821,
-0.217,
2.752,
0.083,
1.663,
0.568,
2.48,
-1.207,
1.222,
0.296,
1.055,
1.078
],
[
0.256,
1.214,
1.919,
0.577,
1.07,
1.53,
1.537,
3.063,
0.481,
2.332,
-1.466,
0.167,
0.428,
1.401,
-1.716,
3.524,
-0.822,
1.073,
-1.825,
3.923,
-0.542,
2.637,
-1.296,
0.759,
0.836
],
[
-0.443,
0.286,
0.379,
1.076,
0.478,
3.443,
-0.287,
1.206,
-1.275,
2.275,
1.101,
2.821,
-0.638,
0.922,
-0.205,
2.318,
0.494,
1.648,
-0.585,
1.963,
-0.636,
1.229,
0.998,
1.523,
-1.01
],
[
1.023,
-0.417,
0.865,
1.645,
0.324,
1.94,
0.122,
-0.171,
0.352,
1.42,
-0.436,
3.076,
0.434,
0.986,
-1.912,
3.899,
-0.212,
0.716,
0.782,
0.534,
1.939,
1.374,
-0.083,
2.318,
0.982
],
[
-2.33,
0.575,
-0.543,
-0.38,
-2.153,
1.717,
-1.219,
0.725,
0.273,
1.908,
0.488,
1.426,
0.108,
2.586,
0.653,
0.317,
0.112,
3.138,
0.212,
1.393,
-0.506,
1.87,
0.332,
1.893,
1.017
],
[
0.841,
0.146,
0.716,
-0.233,
-0.206,
0.237,
-0.307,
2.499,
-1.619,
1.957,
-0.12,
3.058,
0.511,
3.598,
0.286,
0.922,
0.164,
0.782,
-3.468,
0.262,
0.812,
0.798,
1.209,
2.964,
-1.47
],
[
-0.099,
1.666,
-1.635,
1.475,
-0.186,
0.781,
-1.919,
1.472,
-0.109,
1.588,
-0.379,
0.862,
-1.455,
2.386,
2.783,
0.98,
-0.136,
1.042,
0.532,
1.778,
0.463,
0.647,
0.92,
2.427,
-0.07
],
[
0.663,
-1.411,
-0.69,
-0.216,
-0.735,
1.878,
-0.073,
1.568,
-1.254,
3.792,
-0.345,
3.384,
0.206,
1.572,
0.134,
2.035,
-0.26,
2.42,
0.437,
2.164,
-0.063,
5.027,
-0.166,
3.878,
-1.313
],
[
-0.647,
-1.152,
3.437,
-0.3,
0.358,
1.766,
0.067,
0.149,
-1.005,
1.191,
-1.299,
1.326,
-2.378,
1.8,
-0.858,
2.019,
-1.357,
2.278,
-0.711,
2.196,
-0.243,
3.326,
-0.215,
2.25,
-0.504
],
[
-0.264,
-1.328,
1.228,
1.247,
0.692,
1.763,
-0.978,
2.781,
-0.058,
2.223,
0.796,
2.414,
-1.834,
3.203,
0.459,
2.914,
0.375,
3.309,
0.946,
0.943,
-1.365,
2.452,
0.474,
0.503,
0.025
],
[
0.253,
-0.529,
-0.429,
-1.111,
0.398,
2.332,
-1.334,
2.202,
-0.585,
1.885,
0.398,
1.788,
0.972,
2.025,
-0.835,
0.622,
0.001,
0.837,
-0.776,
2.257,
0.682,
1.304,
2.407,
4.038,
0.518
],
[
-0.876,
-1.41,
0.538,
-1.04,
-0.717,
-0.889,
3.129,
1.202,
3.398,
0.398,
3.857,
1.372,
4.813,
-1.311,
4.029,
-0.432,
3.01,
0.756,
4.688,
0.294,
4.61,
0.859,
4.498,
1.794,
3.319
],
[
-0.363,
0.042,
-0.253,
-0.076,
-1.27,
-0.904,
2.931,
-0.119,
2.669,
-0.165,
6.023,
-0.65,
2.031,
1.424,
2.844,
-1.019,
4.062,
-0.025,
2.637,
-0.317,
4.228,
-0.142,
3.013,
0.611,
3.74
],
[
-1.674,
-0.318,
-0.726,
-1.271,
1.753,
-1.678,
3.341,
-1.772,
3.814,
-1.391,
2.622,
0.677,
3.307,
-0.92,
3.545,
0.305,
2.808,
0.836,
4.532,
-0.378,
4.87,
-0.044,
4.061,
1.684,
5.486
],
[
-0.288,
0.165,
-0.468,
1.219,
-3.353,
-0.578,
3.414,
-0.674,
4.755,
0.033,
4.025,
0.44,
4.186,
1.136,
2.505,
0.436,
3.293,
-0.868,
4.746,
-0.545,
3.666,
-0.295,
3.206,
-0.966,
4.678
],
[
-0.558,
-0.855,
-1.114,
-0.623,
0.368,
-0.182,
4.37,
0.563,
3.75,
0.189,
2.717,
-1.708,
5.274,
0.741,
2.537,
-1.583,
2.832,
-1.515,
3.829,
0.358,
5.306,
0.388,
3.284,
0.661,
3.804
],
[
1.693,
-1.53,
0.057,
-0.217,
0.511,
0.309,
3.998,
0.925,
1.045,
0.379,
2.024,
0.331,
3.612,
0.151,
5.808,
-1.429,
3.402,
-0.297,
4.692,
-0.439,
4.521,
-0.816,
4.693,
0.323,
2.869
],
[
-0.234,
1.999,
-1.603,
-0.292,
-0.91,
-0.766,
6.167,
1.242,
4.219,
-1.291,
6.974,
-0.443,
4.039,
0.72,
3.808,
1.465,
2.86,
2.736,
4.675,
-0.554,
3.091,
0.057,
4.311,
-0.005,
2.605
],
[
0.529,
-1.721,
2.207,
-0.873,
-1.364,
1.139,
3.146,
1.277,
3.963,
-0.234,
4.581,
-1.266,
3.743,
-0.84,
3.682,
-0.566,
4.249,
0.599,
4.202,
0.125,
4.136,
-0.67,
3.433,
-0.954,
3.97
],
[
-0.529,
0.375,
0.204,
-0.529,
1.001,
0.244,
3.922,
-0.904,
3.479,
-0.926,
4.171,
-0.047,
2.158,
0.467,
2.277,
0.429,
3.903,
-1.013,
3.182,
0.73,
3.318,
-1.663,
4.222,
0.264,
3.538
],
[
2.302,
-0.218,
-1.184,
-0.644,
0.118,
-1.35,
4.497,
1.262,
5.131,
-1.095,
4.354,
-1.364,
4.376,
-0.936,
3.278,
0.753,
4.903,
-2.193,
3.336,
0.722,
3.92,
-1.341,
4.762,
1.756,
4.032
],
[
0.957,
1.309,
-1.317,
1.254,
-0.397,
0.004,
3.34,
1.233,
4.681,
-0.875,
2.497,
0.207,
1.703,
-0.614,
3.171,
-0.034,
2.59,
0.968,
3.576,
0.946,
3.85,
1.128,
4.015,
0.633,
3.148
],
[
-0.789,
-1.139,
0.066,
0.418,
0.366,
-0.932,
3.982,
0.151,
4.018,
0.74,
5.374,
0.067,
6.07,
1.178,
6.316,
1.948,
3.389,
0.383,
5.084,
-0.251,
3.874,
-0.715,
3.101,
-0.172,
4.867
],
[
-0.26,
-0.005,
-0.12,
-0.422,
0.629,
1.242,
3.954,
-0.027,
4.352,
-0.074,
4.369,
0.196,
4.847,
-0.763,
3.042,
-1.643,
3.952,
-1.358,
4.105,
-0.257,
4.168,
0.047,
1.782,
-0.585,
5.465
],
[
1.882,
0.869,
-1.305,
1.095,
1.002,
-0.897,
3.248,
1.113,
5.83,
0.298,
4.811,
-0.128,
3.263,
0.186,
4.244,
1.314,
2.832,
0.222,
3.899,
-1.279,
4.133,
-1.523,
4.49,
0.966,
4.658
],
[
-1.052,
0.429,
0.646,
0.642,
1.037,
-1.046,
1.724,
-0.698,
5.316,
-0.403,
2.821,
-0.108,
5.52,
-0.352,
3.298,
-0.716,
2.672,
1.499,
3.919,
0.202,
3.409,
0.841,
5.47,
1.225,
1.988
],
[
-1.862,
-0.589,
0.205,
1.281,
-1.256,
0.924,
4.189,
-1.219,
3.137,
0.142,
5.869,
0.529,
2.138,
-0.034,
3.921,
-1.097,
5.402,
1.468,
5.034,
0.088,
3.055,
1.587,
3.374,
0.377,
2.939
],
[
-0.315,
-0.369,
0.634,
0.495,
-1.059,
-0.481,
1.329,
1.105,
5.3,
0.135,
6.515,
0.001,
4.161,
1.686,
4.747,
-0.911,
3.24,
-1.461,
4.64,
0.698,
5.006,
-1.072,
4.608,
-0.317,
5.208
],
[
0.558,
0.793,
-1.713,
0.055,
2.242,
0.588,
3.785,
2.949,
2.175,
2.055,
3.328,
0.236,
3.549,
-0.009,
1.477,
0.538,
3.116,
-0.621,
5.203,
0.736,
3.606,
-0.313,
4.402,
-1.039,
3.894
],
[
-1.332,
-1.134,
0.153,
0.66,
1.764,
-0.588,
3.417,
-0.547,
3.849,
-1.521,
3.332,
0.88,
5.449,
0.179,
4.596,
0.626,
4.006,
0.33,
2.969,
-0.42,
2.606,
-0.485,
4.581,
-0.199,
5.008
],
[
0.29,
0.228,
0.117,
-0.587,
-2.116,
0.188,
4.009,
0.551,
3.127,
0.682,
3.858,
-1.053,
4.388,
-1.46,
1.468,
0.434,
4.221,
0.782,
2.992,
0.056,
5.223,
-0.747,
6.549,
-0.959,
3.714
],
[
-0.015,
-1.665,
1.007,
0.278,
-0.091,
1.919,
3.861,
-0.318,
3.026,
-1.642,
5.379,
2.097,
4.396,
0.802,
3.66,
0.544,
2.156,
0.87,
4.044,
0.3,
4.422,
-0.788,
4.677,
-0.215,
4.643
],
[
-0.984,
0.915,
0.944,
-1.975,
-1.717,
0.16,
4.748,
1.521,
4.091,
-0.386,
3.802,
-1.134,
5.701,
-0.402,
5.682,
-0.987,
4.281,
0.844,
3.427,
1.368,
3.358,
-1.748,
3.792,
2.125,
5.137
],
[
-0.399,
-0.613,
2.211,
0.238,
2.799,
0.687,
5.522,
0.534,
5.233,
-0.395,
4.387,
-1.733,
4.494,
-1.26,
4.693,
1.679,
4.477,
0.399,
2.508,
1.683,
3.185,
0.865,
4.958,
0.602,
4.371
],
[
1.205,
-0.562,
1.134,
0.202,
0.209,
0.692,
2.419,
0.397,
2.429,
0.911,
6.341,
0.713,
4.548,
-0.688,
3.947,
0.439,
4.69,
-0.324,
3.07,
0.265,
3.757,
-1.535,
5.434,
-0.017,
4.125
],
[
-0.298,
0.118,
1.653,
1.519,
-0.821,
-0.85,
4.602,
1.073,
5.087,
0.155,
6.987,
-0.716,
2.912,
0.581,
2.112,
-0.426,
3.523,
0.188,
4.548,
0.155,
4.256,
0.775,
2.607,
-0.697,
5.338
]
]
}
},
config={
"colorSmpDendrogramBy": "Treatment",
"colorSpectrum": [
"magenta",
"blue",
"black",
"red",
"gold"
],
"colorSpectrumZeroValue": 0,
"graphType": "Heatmap",
"heatmapIndicatorHeight": 50,
"heatmapIndicatorHistogram": True,
"heatmapIndicatorPosition": "topLeft",
"heatmapIndicatorWidth": 60,
"samplesClustered": True,
"title": "R Heatmap",
"variablesClustered": True
},
width=613,
height=613,
events=CXEvents(),
after_render=[],
other_init_params={
"version": 35,
"events": False,
"info": False,
"afterRenderInit": False,
"noValidate": True
}
)
display = CXNoteBook(cx)
display.render(output_file="heatmap_6.html")
###Output
_____no_output_____ |
notebooks/Case Scenario HSDN.ipynb | ###Markdown
Use BioKEEN Programmatically to Train and Evaluate a KGE Model on HSDN
###Code
import json
import logging
import os
import sys
import time
import warnings
import matplotlib
import matplotlib.pyplot as plt
import numpy as np
import biokeen
import pykeen
warnings.filterwarnings('ignore', category=UserWarning)
logging.basicConfig(level=logging.INFO)
logging.getLogger('biokeen').setLevel(logging.INFO)
print(sys.version)
print(time.asctime())
print(f'PyKEEN Version: {pykeen.constants.VERSION}')
print(f'BioKEEN Version: {biokeen.constants.VERSION}')
output_directory = os.path.join(
os.path.expanduser('~'),
'Desktop',
'biokeen_test'
)
###Output
_____no_output_____
###Markdown
Step 1: Configure your experiment
###Code
config = dict(
training_set_path = 'bio2bel:hsdn',
execution_mode = 'Training_mode',
kg_embedding_model_name = 'TransE',
embedding_dim = 50,
normalization_of_entities = 2, # corresponds to L2
scoring_function = 1, # corresponds to L1
margin_loss = 1,
learning_rate = 0.01,
batch_size = 128,
num_epochs = 1000,
test_set_ratio = 0.1,
filter_negative_triples = True,
random_seed = 2,
preferred_device = 'cpu',
)
###Output
_____no_output_____
###Markdown
Step 2: Run BioKEEN to Train and Evaluate the Model
###Code
results = pykeen.run(
config=config,
output_directory=output_directory,
)
print('Keys:', *sorted(results.results.keys()), sep='\n ')
###Output
Keys:
entity_to_embedding
entity_to_id
eval_summary
final_configuration
losses
relation_to_embedding
relation_to_id
trained_model
###Markdown
Step 3: Show Exported Results 3.1: Show Trained Model
###Code
results.results['trained_model']
###Output
_____no_output_____
###Markdown
3.2: Plot losses
###Code
losses = results.results['losses']
epochs = np.arange(len(losses))
plt.title(r'Loss Per Epoch')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.plot(epochs, losses)
plt.show()
###Output
_____no_output_____
###Markdown
3.3: Show Evaluation Results
###Code
print(json.dumps(results.results['eval_summary'], indent=2))
###Output
{
"mean_rank": 24.59951219512195,
"hits@k": {
"1": 0.12097560975609756,
"3": 0.2531707317073171,
"5": 0.3297560975609756,
"10": 0.4678048780487805
}
}
|
Notebooks/ORF_CNN_117.ipynb | ###Markdown
ORF recognition by CNN. Same as 116 but let 5'UTR vary from 0 to 6 so memorizing specific STOP positions is harder.
###Code
import time
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
PC_SEQUENCES=4000 # how many protein-coding sequences
NC_SEQUENCES=4000 # how many non-coding sequences
PC_TESTS=1000
NC_TESTS=1000
RNA_LEN=36 # how long is each sequence
CDS_LEN=30 # include bases in start, residues, stop
ALPHABET=4 # how many different letters are possible
INPUT_SHAPE_2D = (RNA_LEN,ALPHABET,1) # Conv2D needs 3D inputs
INPUT_SHAPE = (RNA_LEN,ALPHABET) # Conv1D needs 2D inputs
FILTERS = 16 # how many different patterns the model looks for
NEURONS = 16
DROP_RATE = 0.4
WIDTH = 3 # how wide each pattern is, in bases
STRIDE_2D = (1,1) # For Conv2D how far in each direction
STRIDE = 1 # For Conv1D, how far between pattern matches, in bases
EPOCHS=25 # how many times to train on all the data
SPLITS=5 # SPLITS=3 means train on 2/3 and validate on 1/3
FOLDS=5 # train the model this many times (range 1 to SPLITS)
import sys
try:
from google.colab import drive
IN_COLAB = True
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
#drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py')
with open('RNA_gen.py', 'w') as f:
f.write(r.text)
from RNA_gen import *
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')
with open('RNA_prep.py', 'w') as f:
f.write(r.text)
from RNA_prep import *
except:
print("CoLab not working. On my PC, use relative paths.")
IN_COLAB = False
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_gen import *
from SimTools.RNA_describe import ORF_counter
from SimTools.RNA_prep import *
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
if not assert_imported_RNA_gen():
print("ERROR: Cannot use RNA_gen.")
if not assert_imported_RNA_prep():
print("ERROR: Cannot use RNA_prep.")
from os import listdir
import csv
from zipfile import ZipFile
import numpy as np
import pandas as pd
from scipy import stats # mode
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Conv1D,Conv2D
from keras.layers import Flatten,MaxPooling1D,MaxPooling2D
from keras.losses import BinaryCrossentropy
# tf.keras.losses.BinaryCrossentropy
import matplotlib.pyplot as plt
from matplotlib import colors
mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1
np.set_printoptions(precision=2)
import random
def partition_random_sequences(goal_per_class):
between_bases = CDS_LEN - 6
utr5_bases = random.randint(0,RNA_LEN-CDS_LEN)
utr3_bases = RNA_LEN - utr5_bases - CDS_LEN
pc_seqs=[]
nc_seqs=[]
oc = ORF_counter()
trials = 0
pc_cnt = 0
nc_cnt = 0
bases=['A','C','G','T']
while pc_cnt<goal_per_class or nc_cnt<goal_per_class:
trials += 1
one_seq = "".join(random.choices(bases,k=utr5_bases))
one_seq += 'ATG'
random_cnt = random.randint(1,between_bases-3)
one_seq += "".join(random.choices(bases,k=random_cnt))
random_stop = random.choice(['TAA','TAG','TGA']) # random frame
one_seq += random_stop
remaining_cnt = between_bases - 3 - random_cnt
one_seq += "".join(random.choices(bases,k=remaining_cnt))
#one_seq += "".join(random.choices(bases,k=between_bases))
random_stop = random.choice(['TAA','TAG','TGA']) # in frame
one_seq += random_stop
one_seq += "".join(random.choices(bases,k=utr3_bases))
oc.set_sequence(one_seq)
cds_len = oc.get_max_cds_len() + 3
if cds_len >= CDS_LEN and pc_cnt<goal_per_class:
pc_cnt += 1
pc_seqs.append(one_seq)
elif cds_len < CDS_LEN and nc_cnt<goal_per_class:
nc_cnt += 1
nc_seqs.append(one_seq)
print ("It took %d trials to reach %d per class."%(trials,goal_per_class))
return pc_seqs,nc_seqs
pc_all,nc_all=partition_random_sequences(10) # just testing
pc_all,nc_all=partition_random_sequences(PC_SEQUENCES+PC_TESTS)
print("Use",len(pc_all),"PC seqs")
print("Use",len(nc_all),"NC seqs")
# Describe the sequences
def describe_sequences(list_of_seq):
oc = ORF_counter()
num_seq = len(list_of_seq)
rna_lens = np.zeros(num_seq)
orf_lens = np.zeros(num_seq)
for i in range(0,num_seq):
rna_len = len(list_of_seq[i])
rna_lens[i] = rna_len
oc.set_sequence(list_of_seq[i])
orf_len = oc.get_max_orf_len()
orf_lens[i] = orf_len
print ("Average RNA length:",rna_lens.mean())
print ("Average ORF length:",orf_lens.mean())
print("Simulated sequences prior to adjustment:")
print("PC seqs")
describe_sequences(pc_all)
print("NC seqs")
describe_sequences(nc_all)
pc_train=pc_all[:PC_SEQUENCES]
nc_train=nc_all[:NC_SEQUENCES]
pc_test=pc_all[PC_SEQUENCES:]
nc_test=nc_all[NC_SEQUENCES:]
# Use code from our SimTools library.
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles
print("Data ready.")
def make_DNN():
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
#dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same",
input_shape=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
#dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
#dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
#dnn.add(MaxPooling1D())
dnn.add(Flatten())
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=np.float32))
dnn.compile(optimizer='adam',
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
#ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE)
#bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
#model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"])
return dnn
model = make_DNN()
print(model.summary())
from keras.callbacks import ModelCheckpoint
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS) # this does not shuffle
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
do_cross_validation(X,y)
from keras.models import load_model
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
###Output
_____no_output_____ |
keras_single_hidden_layer/keras-single-hidden-layer-architecure.ipynb | ###Markdown
Transform and normalize images??
###Code
# Imports assumed by the cells below (csv, numpy, matplotlib and the Keras layers are
# used later but are not imported anywhere in the notebook as shown).
import csv
import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
# NOTE: `train_images` is used further down but its loading step is not shown here;
# it is assumed to already contain the raw training images as a numpy array.
# load the training labels into a dictionary,
# also load it into a list, haven't decided which one is better yet
labels_dic = {}
labels = []
with open('../train_labels.csv') as csvDataFile:
csvReader = csv.reader(csvDataFile)
for row in csvReader:
try:
labels_dic[int(row[0])] = int(row[1])
labels.append(int(row[1]))
except: print(row)
len(labels_dic)
# normalize images
train_images = np.array(train_images, dtype="float") / 255.0
labels = np.array(labels)
train_labels = []
for i in labels:
label = np.zeros(10)
label[i]=1
train_labels.append(label)
train_labels = np.array(train_labels)
print(labels.shape,train_images.shape)
# flatten and train
train_images_flatten = np.array([i.flatten("C") for i in train_images])
train_images_flatten.shape
###Output
_____no_output_____
###Markdown
Creating the Keras model, following this guide: https://medium.com/@pallawi.ds/ai-starter-train-and-test-your-first-neural-network-classifier-in-keras-from-scratch-b6a5f3b3ebc4
###Code
# define the 784-350-200-10 architecture using keras
model = Sequential()
# we construct our nn architecture - a feedforward nn
# our input layer has 28 x 28 x 1 = 784 raw pixels
model.add(Dense(350, input_shape=(784,), activation="sigmoid"))
model.add(Dense(200, activation="sigmoid"))
model.add(Dense(10, activation="softmax"))
print("printing summary of model")
model.summary()
###Output
printing summary of model
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_3 (Dense) (None, 350) 274750
_________________________________________________________________
dense_4 (Dense) (None, 200) 70200
_________________________________________________________________
dense_5 (Dense) (None, 10) 2010
=================================================================
Total params: 346,960
Trainable params: 346,960
Non-trainable params: 0
_________________________________________________________________
###Markdown
Compile the model. You can compile a network (model) as many times as you want; you need to compile the model if you wish to change the loss function, optimizer or metrics. You need a compiled model to train (because training uses the loss function and the optimizer), but it is not necessary to compile the model when testing it on new data.
###Code
# initialize our initial learning rate and # of epochs to train for
INIT_LR = 0.01
EPOCHS = 100
# compile the model using SGD as our optimizer and categorical
# cross-entropy loss (if you only have two classes use binary_crossentropy)
print("[INFO] training network...")
opt = SGD(lr=INIT_LR) # stochastic gradient descent
model.compile(loss="categorical_crossentropy", optimizer=opt,
metrics=["accuracy"])
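# Note (sketch): compiling again later only swaps the loss, optimizer or metrics; the
# weights already learned are kept. For example, to continue with a smaller learning
# rate one could re-run compile (left commented out so it does not affect this run):
# model.compile(loss="categorical_crossentropy",
#               optimizer=SGD(lr=0.001),
#               metrics=["accuracy"])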
# split the train into a train and valid set
ratio = 0.8
cut = int(ratio*len(train_images_flatten))
trainX = train_images_flatten[:cut]
trainY = train_labels[:cut]
valX = train_images_flatten[cut:]
valY = train_labels[cut:]
train_labels.shape
# train the neural network
H = model.fit(trainX, trainY,
validation_data=(valX, valY),
epochs=EPOCHS, batch_size=32)
#evaluate the network
print("[INFO] evaluating network...")
predictions = model.predict(trainX, batch_size=32)
#Uncomment to see the predicted probabilty for each class in every test image
# print ("predictions---------------->",predictions)
#Uncomment to print the predicted labels in each image
# print("predictions.argmax(axis=1)",predictions.argmax(axis=1))
### print the performance report of the prediction
#print(classification_report(valY.argmax(axis=1),
# predictions.argmax(axis=1),
# target_names=[str(i) for i in range(10)]))
# plot the training loss and accuracy for each epoch
N = np.arange(0, EPOCHS)
plt.style.use("ggplot")
plt.figure()
plt.plot(N, H.history["loss"], label="train_loss")
plt.plot(N, H.history["val_loss"], label="val_loss")
plt.plot(N, H.history["accuracy"], label="train_acc")
plt.plot(N, H.history["val_accuracy"], label="val_acc")
plt.title("Training Loss and Accuracy (simple_multiclass_classification)")
plt.xlabel("Epoch #")
plt.ylabel("Loss/Accuracy")
plt.legend()
plt.savefig("training_performance.png")
###Output
[INFO] evaluating network...
###Markdown
save the model
###Code
# save the model to disk
print("[INFO] serializing network...")
model.save("keras_two_hidden_layer_784_350_200_10_795_valmodel.model")
# import the necessary packages
from keras.models import load_model
import argparse
import pickle
import cv2
import os
import pandas as pd
model = load_model("keras_two_hidden_layer_784_350_200_10_795_valmodel.model")
x_test = np.load('../test_images.npy').squeeze()
x_test = x_test.reshape(len(x_test), -1)
# scale the test images the same way the training images were scaled above
x_test = np.array(x_test, dtype="float") / 255.0
y_test = model.predict(x_test)
df_test = pd.read_csv('submission_2.csv')
# predict() returns one probability per class, so take the argmax to get the label
df_test['label'] = y_test.argmax(axis=1)
df_test.to_csv('submission_2.csv', index=False)
###Output
_____no_output_____ |
examples/mordl.ipynb | ###Markdown
UPOS
###Code
from corpuscula.corpus_utils import download_ud, UniversalDependencies, \
AdjustedForSpeech
import junky
from mordl import UposTagger, FeatsTagger
BERT_MODEL_FN = 'xlm-roberta-base'
MODEL_FN = 'upos-bert_model'
SEED = 42
BERT_MAX_LEN, BERT_EPOCHS, BERT_BATCH_SIZE = 0, 3, 8
DEVICE = 'cuda:0'
corpus_name = 'UD_Russian-SynTagRus'
download_ud(corpus_name, overwrite=False)
corpus = UniversalDependencies(corpus_name)
#corpus = AdjustedForSpeech(corpus)
tagger = UposTagger()
tagger.load_train_corpus(corpus.train)
tagger.load_test_corpus(corpus.dev)
_ = tagger.train(MODEL_FN, device=DEVICE, control_metric='accuracy',
max_epochs=None, min_epochs=0, bad_epochs=5,
max_grad_norm=None, tags_to_remove=None, word_emb_type='bert',
word_emb_path=BERT_MODEL_FN, word_transform_kwargs={
'max_len': BERT_MAX_LEN, 'hidden_ids': 10, 'aggregate_subtokens_op': 'absmax'
# BertDataset.transform() (for BERT-descendant models)
# params:
# {'max_len': 0, 'batch_size': 64, 'hidden_ids': '10',
# 'aggregate_hiddens_op': 'cat',
# 'aggregate_subtokens_op': 'absmax', 'to': junky.CPU,
# 'loglevel': 1}
# WordDataset.transform() (for other models) params:
# {'check_lower': True}
},
stage1_params=None,
# {'lr': .0001, 'betas': (0.9, 0.999), 'eps': 1e-8,
# 'weight_decay': 0, 'amsgrad': False,
# 'max_epochs': None, 'min_epochs': None,
# 'bad_epochs': None, 'batch_size': None,
# 'max_grad_norm': None}
stage2_params=None,
# {'lr': .001, 'momentum': .9, 'weight_decay': 0,
# 'dampening': 0, 'nesterov': False,
# 'max_epochs': None, 'min_epochs': None,
# 'bad_epochs': None, 'batch_size': None,
# 'max_grad_norm': None}
stage3_params={
'save_as': MODEL_FN.replace('-bert_model', '_' + BERT_MODEL_FN)
+ f'_len{BERT_MAX_LEN}_ep{BERT_EPOCHS}_bat{BERT_BATCH_SIZE}_seed{SEED}',
'epochs': BERT_EPOCHS,
'batch_size': BERT_BATCH_SIZE,
'lr': 2e-5, 'num_warmup_steps': 3,
# {'save_as': None, 'max_epochs': 3, 'batch_size': 8,
# 'lr': 2e-5, 'betas': (0.9, 0.999), 'eps': 1e-8,
# 'weight_decay': .01, 'amsgrad': False,
# 'num_warmup_steps': 3, 'max_grad_norm': 1.}
},
stages=[1, 2, 3, 1, 2], save_stages=True, load_from=None,
learn_on_padding=True, remove_padding_intent=False,
seed=SEED, start_time=None, keep_embs=False,
rnn_emb_dim=None, cnn_emb_dim=None, cnn_kernels=range(1, 7),
emb_bn=True, emb_do=.2,
final_emb_dim=512, pre_bn=True, pre_do=.5,
lstm_layers=1, lstm_do=0, tran_layers=0, tran_heads=8,
post_bn=True, post_do=.4)
res_dev = 'corpora/_dev_' + MODEL_FN.replace('_model', '.conllu')
res_test = 'corpora/_test_' + MODEL_FN.replace('_model', '.conllu')
tagger = UposTagger(embs=globals()['tagger'].embs
if 'tagger' in globals() else
None)
tagger.load(MODEL_FN)
junky.clear_tqdm()
_ = tagger.predict(corpus.dev, clone_ds=True, save_to=res_dev)
_ = tagger.evaluate(corpus.dev)
_ = tagger.evaluate(corpus.dev, res_dev)
_ = tagger.predict(corpus.test, save_to=res_test)
_ = tagger.evaluate(corpus.test, clone_ds=True)
_ = tagger.evaluate(corpus.test, res_test)
corp_gold = list(corpus.test())
corp_test = list(tagger._get_corpus(res_test))
tags = sorted(set(x['UPOS'] for x in corp_gold
for x in x[0] if x['UPOS']))
for tag in tags:
print('{}: {}'.format(
tag, tagger.evaluate(corp_gold, corp_test,
label=tag, log_file=None)
))
###Output
Load corpus
[=======] 6491
Corpus has been loaded: 6491 sentences, 117523 tokens
###Markdown
FEATS
###Code
from corpuscula.corpus_utils import download_ud, UniversalDependencies, \
AdjustedForSpeech
import junky
from mordl import FeatsTagger
BERT_MODEL_FN = 'xlm-roberta-base'
MODEL_FN = 'feats-bert_model'
SEED=42
BERT_MAX_LEN, BERT_EPOCHS, BERT_BATCH_SIZE = 0, 3, 8
DEVICE='cuda:0'
corpus_name = 'UD_Russian-SynTagRus'
download_ud(corpus_name, overwrite=False)
corpus = UniversalDependencies(corpus_name)
#corpus = AdjustedForSpeech(corpus)
tagger = FeatsTagger()
tagger.load_train_corpus(corpus.train)
tagger.load_test_corpus(corpus.dev)
_ = tagger.train(MODEL_FN, device=DEVICE, control_metric='accuracy',
max_epochs=None, min_epochs=0, bad_epochs=5,
max_grad_norm=None, tags_to_remove=None, word_emb_type='bert',
word_emb_path=BERT_MODEL_FN, word_transform_kwargs={
'max_len': BERT_MAX_LEN, 'hidden_ids': 10, 'aggregate_subtokens_op': 'absmax'
# BertDataset.transform() (for BERT-descendant models)
# params:
# {'max_len': 0, 'batch_size': 64, 'hidden_ids': '10',
# 'aggregate_hiddens_op': 'cat',
# 'aggregate_subtokens_op': 'absmax', 'to': junky.CPU,
# 'loglevel': 1}
# WordDataset.transform() (for other models) params:
# {'check_lower': True}
},
stage1_params=None,
# {'lr': .0001, 'betas': (0.9, 0.999), 'eps': 1e-8,
# 'weight_decay': 0, 'amsgrad': False,
# 'max_epochs': None, 'min_epochs': None,
# 'bad_epochs': None, 'batch_size': None,
# 'max_grad_norm': None}
stage2_params=None,
# {'lr': .001, 'momentum': .9, 'weight_decay': 0,
# 'dampening': 0, 'nesterov': False,
# 'max_epochs': None, 'min_epochs': None,
# 'bad_epochs': None, 'batch_size': None,
# 'max_grad_norm': None}
stage3_params={
'save_as': MODEL_FN.replace('-bert_model', '_' + BERT_MODEL_FN)
+ f'_len{BERT_MAX_LEN}_ep{BERT_EPOCHS}_bat{BERT_BATCH_SIZE}_seed{SEED}',
'epochs': BERT_EPOCHS,
'batch_size': BERT_BATCH_SIZE,
'lr': 2e-5, 'num_warmup_steps': 3,
# {'save_as': None, 'max_epochs': 3, 'batch_size': 8,
# 'lr': 2e-5, 'betas': (0.9, 0.999), 'eps': 1e-8,
# 'weight_decay': .01, 'amsgrad': False,
# 'num_warmup_steps': 3, 'max_grad_norm': 1.}
},
stages=[1, 2, 3, 1, 2], save_stages=True, load_from=None,
learn_on_padding=True, remove_padding_intent=False,
seed=SEED, start_time=None, keep_embs=False,
rnn_emb_dim=None, cnn_emb_dim=200, cnn_kernels=range(1, 7),
upos_emb_dim=200, emb_bn=True, emb_do=.2,
final_emb_dim=512, pre_bn=True, pre_do=.5,
lstm_layers=1, lstm_do=0, tran_layers=0, tran_heads=8,
post_bn=True, post_do=.4)
res_dev = 'corpora/_dev_' + MODEL_FN.replace('_model', '.conllu')
res_test = 'corpora/_test_' + MODEL_FN.replace('_model', '.conllu')
tagger = FeatsTagger(embs=globals()['tagger'].embs
if 'tagger' in globals() else
None)
tagger.load(MODEL_FN)
junky.clear_tqdm()
_ = tagger.predict(corpus.dev, clone_ds=True, save_to=res_dev)
_ = tagger.evaluate(corpus.dev)
_ = tagger.evaluate(corpus.dev, res_dev)
_ = tagger.predict(corpus.test, save_to=res_test)
_ = tagger.evaluate(corpus.test, clone_ds=True)
_ = tagger.evaluate(corpus.test, res_test)
corp_gold = list(corpus.test())
corp_test = list(tagger._get_corpus(res_test))
tags = sorted(set(x for x in corp_gold
for x in x[0]
for x in x['FEATS'].keys()))
for tag in tags:
print('{}: {}'.format(
tag, tagger.evaluate(corp_gold, corp_test,
feats=tag, log_file=None)
))
###Output
Load corpus
[=====> 5600
###Markdown
LEMMA. For lemmata, besides *BERT* word embeddings one can use *FastText*. In that case, model performance on the *SynTagRus* test dataset is just slightly worse (0.9945 vs. 0.9948), and we think it could be tuned further if needed. So we give here training snippets for both versions of the tagger: *BERT* (next snippet) and *FastText* (see further below). *BERT* Lemmata Tagger
###Code
from corpuscula.corpus_utils import download_ud, UniversalDependencies, \
AdjustedForSpeech
import junky
from mordl import LemmaTagger
BERT_MODEL_FN = 'xlm-roberta-base'
MODEL_FN = 'lemma-bert_model'
SEED=42
BERT_MAX_LEN, BERT_EPOCHS, BERT_BATCH_SIZE = 0, 2, 8
DEVICE='cuda:0'
corpus_name = 'UD_Russian-SynTagRus'
download_ud(corpus_name, overwrite=False)
corpus = UniversalDependencies(corpus_name)
#corpus = AdjustedForSpeech(corpus)
tagger = LemmaTagger()
tagger.load_train_corpus(corpus.train)
tagger.load_test_corpus(corpus.dev)
_ = tagger.train(MODEL_FN, device=DEVICE, control_metric='accuracy',
max_epochs=None, min_epochs=0, bad_epochs=5,
max_grad_norm=None, tags_to_remove=None, word_emb_type='bert',
word_emb_path=BERT_MODEL_FN, word_transform_kwargs=None,
# BertDataset.transform() (for BERT-descendant models)
# params:
# {'max_len': 0, 'batch_size': 64, 'hidden_ids': '10',
# 'aggregate_hiddens_op': 'cat',
# 'aggregate_subtokens_op': 'absmax', 'to': junky.CPU,
# 'loglevel': 1}
# WordDataset.transform() (for other models) params:
# {'check_lower': True}
stage1_params=None,
# {'lr': .0001, 'betas': (0.9, 0.999), 'eps': 1e-8,
# 'weight_decay': 0, 'amsgrad': False,
# 'max_epochs': None, 'min_epochs': None,
# 'bad_epochs': None, 'batch_size': None,
# 'max_grad_norm': None}
stage2_params=None,
# {'lr': .001, 'momentum': .9, 'weight_decay': 0,
# 'dampening': 0, 'nesterov': False,
# 'max_epochs': None, 'min_epochs': None,
# 'bad_epochs': None, 'batch_size': None,
# 'max_grad_norm': None}
stage3_params={
'save_as': MODEL_FN.replace('-bert_model', '_' + BERT_MODEL_FN)
+ f'_len{BERT_MAX_LEN}_ep{BERT_EPOCHS}_bat{BERT_BATCH_SIZE}_seed{SEED}',
'epochs': BERT_EPOCHS,
'batch_size': BERT_BATCH_SIZE,
'lr': 2e-5, 'num_warmup_steps': 6
},
# {'save_as': None, 'epochs': 3, 'batch_size': 8,
# 'lr': 2e-5, 'betas': (0.9, 0.999), 'eps': 1e-8,
# 'weight_decay': .01, 'amsgrad': False,
# 'num_warmup_steps': 3, 'max_grad_norm': 1.}
stages=[1, 2, 3, 1, 2], save_stages=True, load_from=None,
learn_on_padding=False, remove_padding_intent=False,
seed=SEED, start_time=None, keep_embs=False,
rnn_emb_dim=384, cnn_emb_dim=None, cnn_kernels=range(1, 7),
upos_emb_dim=256, emb_bn=True, emb_do=.2,
final_emb_dim=512, pre_bn=True, pre_do=.5,
lstm_layers=1, lstm_do=0, tran_layers=0, tran_heads=8,
post_bn=True, post_do=.4)
res_dev = 'corpora/_dev_' + MODEL_FN.replace('_model', '.conllu')
res_test = 'corpora/_test_' + MODEL_FN.replace('_model', '.conllu')
tagger = LemmaTagger(embs=globals()['tagger'].embs
if 'tagger' in globals() else
None)
tagger.load(MODEL_FN)
junky.clear_tqdm()
_ = tagger.predict(corpus.dev, clone_ds=True, save_to=res_dev)
_ = tagger.evaluate(corpus.dev)
_ = tagger.evaluate(corpus.dev, res_dev)
_ = tagger.predict(corpus.test, save_to=res_test)
_ = tagger.evaluate(corpus.test, clone_ds=True)
_ = tagger.evaluate(corpus.test, res_test)
###Output
Evaluating LEMMA
Load corpus
[=====> 5100
###Markdown
*FastText* Lemmata Tagger. **NB:** For this task, we use the Russian *FastText* embeddings provided by *Facebook*: [cc.ru.300.bin.gz](https://dl.fbaipublicfiles.com/fasttext/vectors-crawl/cc.ru.300.bin.gz). We highly recommend them because they deliver the highest evaluation scores. The embeddings provided by *DeepPavlov* ([ft_native_300_ru_wiki_lenta_nltk_wordpunct_tokenize.bin](http://files.deeppavlov.ai/embeddings/ft_native_300_ru_wiki_lenta_nltk_wordpunct_tokenize/ft_native_300_ru_wiki_lenta_nltk_wordpunct_tokenize.bin)) can also be used; they deliver just slightly worse model performance. One could also try embeddings from *RusVectores*: long ago they were the worst choice because of inappropriate preprocessing, but that now seems to have been corrected. We did not try them, but if you really want a *FastText* model, they are also worth checking. If you want your model to achieve high scores, use embeddings built without lemmatization, punctuation removal or other archaic transformations. Embeddings of words with part-of-speech tags appended at the end are likewise useless, for obvious reasons.
###Code
from corpuscula.corpus_utils import download_ud, UniversalDependencies, \
AdjustedForSpeech
import junky
from mordl import LemmaTagger
FT_MODEL_FN = '../mordl/cc.ru.300.bin'
MODEL_FN = 'lemma-ft_model'
SEED=42
DEVICE='cuda:0'
corpus_name = 'UD_Russian-SynTagRus'
download_ud(corpus_name, overwrite=False)
corpus = UniversalDependencies(corpus_name)
#corpus = AdjustedForSpeech(corpus)
tagger = LemmaTagger()
tagger.load_train_corpus(corpus.train)
tagger.load_test_corpus(corpus.dev)
_ = tagger.train(MODEL_FN, device=DEVICE, control_metric='accuracy',
max_epochs=None, min_epochs=0, bad_epochs=5,
max_grad_norm=None, tags_to_remove=None, word_emb_type='ft',
word_emb_path=FT_MODEL_FN, word_transform_kwargs=None,
# BertDataset.transform() (for BERT-descendant models)
# params:
# {'max_len': 0, 'batch_size': 64, 'hidden_ids': '10',
# 'aggregate_hiddens_op': 'cat',
# 'aggregate_subtokens_op': 'absmax', 'to': junky.CPU,
# 'loglevel': 1}
# WordDataset.transform() (for other models) params:
# {'check_lower': True}
stage1_params=None,
# {'lr': .0001, 'betas': (0.9, 0.999), 'eps': 1e-8,
# 'weight_decay': 0, 'amsgrad': False,
# 'max_epochs': None, 'min_epochs': None,
# 'bad_epochs': None, 'batch_size': None,
# 'max_grad_norm': None}
stage2_params=None,
# {'lr': .001, 'momentum': .9, 'weight_decay': 0,
# 'dampening': 0, 'nesterov': False,
# 'max_epochs': None, 'min_epochs': None,
# 'bad_epochs': None, 'batch_size': None,
# 'max_grad_norm': None}
stage3_params=None,
# {'save_as': None, 'epochs': 3, 'batch_size': 8,
# 'lr': 2e-5, 'betas': (0.9, 0.999), 'eps': 1e-8,
# 'weight_decay': .01, 'amsgrad': False,
# 'num_warmup_steps': 3, 'max_grad_norm': 1.}
stages=[1, 2], save_stages=True, load_from=None,
learn_on_padding=False, remove_padding_intent=False,
seed=SEED, start_time=None, keep_embs=False,
rnn_emb_dim=300, cnn_emb_dim=None, cnn_kernels=range(1, 7),
upos_emb_dim=200, emb_bn=True, emb_do=.2,
final_emb_dim=512, pre_bn=True, pre_do=.5,
lstm_layers=1, lstm_do=0, tran_layers=0, tran_heads=8,
post_bn=True, post_do=.4)
res_dev = 'corpora/_dev_' + MODEL_FN.replace('_model', '.conllu')
res_test = 'corpora/_test_' + MODEL_FN.replace('_model', '.conllu')
tagger = LemmaTagger(embs=globals()['tagger'].embs
if 'tagger' in globals() else
None)
tagger.load(MODEL_FN)
junky.clear_tqdm()
_ = tagger.predict(corpus.dev, clone_ds=True, save_to=res_dev)
_ = tagger.evaluate(corpus.dev)
_ = tagger.evaluate(corpus.dev, res_dev)
_ = tagger.predict(corpus.test, save_to=res_test)
_ = tagger.evaluate(corpus.test, clone_ds=True)
_ = tagger.evaluate(corpus.test, res_test)
###Output
Evaluating LEMMA
Load corpus
[=> 1500
###Markdown
CoNLL18 Validation
###Code
from corpuscula.corpus_utils import download_ud, get_ud_test_path
import junky
from mordl import UposTagger, FeatsTagger, LemmaTagger, conll18_ud_eval
corpus_name = 'UD_Russian-SynTagRus'
download_ud(corpus_name, overwrite=False)
DEVICE = 'cuda:0'
corpus_gold = get_ud_test_path(corpus_name)
corpus_test = 'corpora/_test_tagged.conllu'
del tagger
tagger_u = UposTagger()
tagger_u.load('upos-bert_model', device=DEVICE, dataset_device=DEVICE)
tagger_f = FeatsTagger()
tagger_f.load('feats-bert_model', device=DEVICE, dataset_device=DEVICE)
tagger_l = LemmaTagger()
tagger_l.load('lemma-bert_model', device=DEVICE, dataset_device=DEVICE)
_ = tagger_l.predict(
tagger_f.predict(
tagger_u.predict(corpus_gold)
), save_to=corpus_test
)
del tagger_u, tagger_f, tagger_l
conll18_ud_eval(corpus_gold, corpus_test)
###Output
Metric | Precision | Recall | F1 Score | AligndAcc
-----------+-----------+-----------+-----------+-----------
Tokens | 100.00 | 100.00 | 100.00 |
Sentences | 100.00 | 100.00 | 100.00 |
Words | 100.00 | 100.00 | 100.00 |
UPOS | 99.35 | 99.35 | 99.35 | 99.35
XPOS | 100.00 | 100.00 | 100.00 | 100.00
UFeats | 98.36 | 98.36 | 98.36 | 98.36
AllTags | 98.21 | 98.21 | 98.21 | 98.21
Lemmas | 98.88 | 98.88 | 98.88 | 98.88
UAS | 100.00 | 100.00 | 100.00 | 100.00
LAS | 100.00 | 100.00 | 100.00 | 100.00
CLAS | 100.00 | 100.00 | 100.00 | 100.00
MLAS | 97.22 | 97.22 | 97.22 | 97.22
BLEX | 98.29 | 98.29 | 98.29 | 98.29
###Markdown
MISC:NE. Note: the corpora we used are proprietary (and of low quality); you will have to find other corpora.
###Code
from mordl import UposTagger, FeatsTagger
DEVICE = 'cuda:0'
tagger_u = UposTagger()
tagger_u.load('upos-bert_model', device=DEVICE, dataset_device=DEVICE)
tagger_f = FeatsTagger()
tagger_f.load('feats-bert_model', device=DEVICE, dataset_device=DEVICE)
PREFIX = 'ner-old-'
for corpora in zip([f'corpora/{PREFIX}train.conllu',
f'corpora/{PREFIX}dev.conllu',
f'corpora/{PREFIX}test.conllu'],
[f'corpora/{PREFIX}train_upos_feats.conllu',
f'corpora/{PREFIX}dev_upos_feats.conllu',
f'corpora/{PREFIX}test_upos_feats.conllu']):
tagger_f.predict(
tagger_u.predict(corpora[0]), save_to=corpora[1]
)
del tagger_u, tagger_f
import junky
from mordl import NeTagger
BERT_MODEL_FN = 'xlm-roberta-base'
MODEL_FN = 'misc-ne-bert_model'
SEED=42
BERT_MAX_LEN, BERT_EPOCHS, BERT_BATCH_SIZE = 0, 2, 8
DEVICE='cuda:0'
PREFIX = 'ner-old-'
corpus_train = f'corpora/{PREFIX}train_upos_feats.conllu'
corpus_dev = f'corpora/{PREFIX}dev_upos_feats.conllu'
corpus_test = f'corpora/{PREFIX}test_upos_feats.conllu'
tagger = NeTagger()
tagger.load_train_corpus(corpus_train)
tagger.load_test_corpus(corpus_dev)
_ = tagger.train(MODEL_FN, device=DEVICE, control_metric='accuracy',
max_epochs=None, min_epochs=0, bad_epochs=5,
max_grad_norm=None, tags_to_remove=None, word_emb_type='bert',
word_emb_path=BERT_MODEL_FN, word_transform_kwargs={
'max_len': BERT_MAX_LEN, 'hidden_ids': 10, 'aggregate_subtokens_op': 'absmax'
# BertDataset.transform() (for BERT-descendant models)
# params:
# {'max_len': 0, 'batch_size': 64, 'hidden_ids': '10',
# 'aggregate_hiddens_op': 'cat',
# 'aggregate_subtokens_op': 'absmax', 'to': junky.CPU,
# 'loglevel': 1}
# WordDataset.transform() (for other models) params:
# {'check_lower': True}
},
stage1_params=None,
# {'lr': .0001, 'betas': (0.9, 0.999), 'eps': 1e-8,
# 'weight_decay': 0, 'amsgrad': False,
# 'max_epochs': None, 'min_epochs': None,
# 'bad_epochs': None, 'batch_size': None,
# 'max_grad_norm': None}
stage2_params=None,
# {'lr': .001, 'momentum': .9, 'weight_decay': 0,
# 'dampening': 0, 'nesterov': False,
# 'max_epochs': None, 'min_epochs': None,
# 'bad_epochs': None, 'batch_size': None,
# 'max_grad_norm': None}
stage3_params={
'save_as': MODEL_FN.replace('-bert_model', '_' + BERT_MODEL_FN)
+ f'_len{BERT_MAX_LEN}_ep{BERT_EPOCHS}_bat{BERT_BATCH_SIZE}_seed{SEED}',
'epochs': BERT_EPOCHS,
'batch_size': BERT_BATCH_SIZE,
'lr': 4e-5, 'num_warmup_steps': 1,
# {'save_as': None, 'max_epochs': 3, 'batch_size': 8,
# 'lr': 2e-5, 'betas': (0.9, 0.999), 'eps': 1e-8,
# 'weight_decay': .01, 'amsgrad': False,
# 'num_warmup_steps': 0, 'max_grad_norm': 1.}
},
stages=[1, 2, 3, 1, 2], save_stages=True, load_from=None,
learn_on_padding=True, remove_padding_intent=False,
seed=SEED, start_time=None, keep_embs=False,
rnn_emb_dim=None, cnn_emb_dim=None, cnn_kernels=range(1, 7),
upos_emb_dim=300, emb_bn=True, emb_do=.2,
final_emb_dim=512, pre_bn=True, pre_do=.5,
lstm_layers=1, lstm_do=0, tran_layers=0, tran_heads=8,
post_bn=True, post_do=.4)
res_dev = 'corpora/_dev_' + MODEL_FN.replace('_model', '.conllu')
res_test = 'corpora/_test_' + MODEL_FN.replace('_model', '.conllu')
tagger = NeTagger(embs=globals()['tagger'].embs
if 'tagger' in globals() else
None)
tagger.load(MODEL_FN)
junky.clear_tqdm()
_ = tagger.predict(corpus_dev, clone_ds=True, save_to=res_dev)
_ = tagger.evaluate(corpus_dev)
_ = tagger.evaluate(corpus_dev, res_dev)
_ = tagger.predict(corpus_test, save_to=res_test)
_ = tagger.evaluate(corpus_test, clone_ds=True)
_ = tagger.evaluate(corpus_test, res_test)
corp_gold = list(tagger._get_corpus(corpus_test, asis=True))
corp_test = list(tagger._get_corpus(res_test))
tags = sorted(set(x['MISC'].get('NE')
for x in corp_gold for x in x[0]
if x['MISC'].get('NE')))
for tag in tags:
print('{}: {}'.format(
tag, tagger.evaluate(corp_gold, corp_test,
label=tag, log_file=None)
))
import gc
del tagger
gc.collect()
###Output
_____no_output_____ |
big_data_esencial.ipynb | ###Markdown
Determining code efficiency
###Code
import time
start = time.time()
# Our code
time.sleep(0.5)
end = time.time()
print(end - start)
from datetime import timedelta
start = time.monotonic()
# Our code
time.sleep(1)
end = time.monotonic()
print(timedelta(seconds = end - start))
start = time.time()
# Our code
time.sleep(0.5)
end = time.time()
start2 = time.time()
# Our code
time.sleep(0.501)
end2 = time.time()
print(end - start > end2 - start2)
###Output
False
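###Markdown
The standard library also offers `timeit`, which runs a snippet several times and reports the accumulated time, smoothing out one-off fluctuations like the one seen above. A minimal sketch:
###Code
import timeit
# run the sleep three times and report the total number of seconds
timeit.timeit("time.sleep(0.1)", setup="import time", number=3)
###Output
_____no_output_____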
###Markdown
Pandas
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
**`pd.read_csv`** is the command that opens the database, and **`nrows`** is the number of rows it will load. In this case the file has ```1,048,575``` rows and weighs ```657 MB```, so to make it easier to work with we choose a smaller number. The syntax **`1e6`** indicates that 1 million rows are loaded.
###Code
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv", nrows = 1e6)
###Output
_____no_output_____
###Markdown
**`df.head()`** shows the first rows of the table; inside the parentheses we indicate how many rows we want to see. Called with no argument, `.head()` returns only the first 5 rows.
###Code
df.head()
###Output
_____no_output_____
###Markdown
**`df.tail()`** shows the last rows of the table.
###Code
df.tail()
###Output
_____no_output_____
###Markdown
**`df.sample()`** reshuffles the whole table according to its parameter, without saving the result. To save it we assign it to an object **`df`**, which consumes memory; in the same way, some number of rows can be stored in another `object` for comparisons and other operations. `frac = 1` means we want to use 100% of the rows selected earlier.
###Code
df.sample(frac = 1)
###Output
_____no_output_____
###Markdown
**`df.columns`** shows information about the columns as a list. It is written without parentheses because we are calling one of the data frame's existing attributes, not a function.
###Code
df. columns
###Output
_____no_output_____
###Markdown
If, for example, we want to see the data of a single column, we use the command **`df.DepTime`** (or the name of whichever column we want to see), again without parentheses because it is not a function.
###Code
df.DepTime
###Output
_____no_output_____
###Markdown
**`df.dtypes`** shows the type of variable used in each case.
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
The command **`df.values`** returns the data as an `array`, which makes matrix operations and other kinds of calculations more convenient.
###Code
df.values
###Output
_____no_output_____
###Markdown
Filtering tools. The syntax **`df["ArrDelay"].head()`** shows information for a single specific column.
###Code
df["ArrDelay"].head()
###Output
_____no_output_____
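###Markdown
Several columns can also be selected at once by passing a list of names; a quick sketch on the same `df`, using columns that appear elsewhere in this notebook:
###Code
df[["ArrDelay", "DepDelay", "Origin"]].head()
###Output
_____no_output_____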
###Markdown
The syntax **`df[0:10]`** returns exactly the same as the function `.head(10)`, with the difference that we can specify the interval.
###Code
df[100:110]
###Output
_____no_output_____
###Markdown
To get information about specific values, we state the specific conditions. For example, to find which flights have an arrival delay of less than one hour we use the syntax `df[df["ArrDelay"] < 60].head()`; a double `==` finds those delayed exactly one hour, `df[df["ArrDelay"] == 60].head()`; `!=` finds those that were not delayed exactly one hour; and `>=` means greater than or equal to (a short sketch of these operators follows after the next cell).
###Code
df[df["ArrDelay"] < 60].head()
###Output
_____no_output_____
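###Markdown
A short sketch of the other comparison operators described above, on the same `df`:
###Code
df[df["ArrDelay"] == 60].head()   # exactly one hour of delay
df[df["ArrDelay"] != 60].head()   # everything except exactly one hour
df[df["ArrDelay"] >= 60].head()   # one hour or more
###Output
_____no_output_____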
###Markdown
We can also filter by 'text strings'.
###Code
df[df["Origin"] == "OAK"].head()
###Output
_____no_output_____
###Markdown
To build a compound filter this kind of syntax is used: `&` requires both conditions to hold, while `|` means one or the other, depending on the kind of analysis needed.
###Code
df[(df["Origin"] == "OAK") & (df["ArrDelay"] > 60)].head()
###Output
_____no_output_____
###Markdown
The `isin` function can also be used to require one value or the other, instead of using `|`.
###Code
df[df.Origin.isin(["OAK", "IND"])].head()
###Output
_____no_output_____
###Markdown
If data are missing, the following syntax can be used, calling the `isna` function directly from pandas instead of from the data frame.
###Code
df[pd.isna(df["ArrDelay"])].head()
###Output
_____no_output_____
###Markdown
To know the exact number of those flights (the ones whose `ArrDelay` is missing) we can use:
###Code
len(df[pd.isna(df["ArrDelay"])])
###Output
_____no_output_____
###Markdown
Transformations. The first thing to do is create another column and assign it the values we want to see.
###Code
df["HoursDelay"] = round(df["ArrDelay"]/60)
###Output
_____no_output_____
###Markdown
Although the column has been created, it is not displayed yet; we apply the following code to view it.
###Code
df["HoursDelay"].head()
###Output
_____no_output_____
###Markdown
If instead we want to delete the column we just created (or any other, for whatever reason), we use:
###Code
del(df["HoursDelay"])
###Output
_____no_output_____
###Markdown
It has now been removed, and to check it we do a `.head()` of the database: the column no longer appears at the end. It can be created again with the command `df["HoursDelay"] = round(df["ArrDelay"]/60)`, and the result shows up in the last column.
###Code
df.head()
###Output
_____no_output_____
###Markdown
To delete several columns at once we use a list and the `drop` command; to avoid an error the axis must be specified. However, the change is not saved, which we can check by applying `.head()` again.
###Code
df.drop(["Diverted", "Cancelled", "Year"], axis=1)
df.head()
###Output
_____no_output_____
###Markdown
For the change to be saved it has to be assigned back, which is done like this:
###Code
df = df.drop(["Diverted", "Cancelled", "Year"], axis=1)
###Output
_____no_output_____
###Markdown
Or it can also be done by telling it to operate in place:
###Code
df.drop(["Diverted", "Cancelled", "Year"], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
The `drop` function also serves to delete rows, one at a time or several, by indicating the row or the range; to keep the result you must either assign it back or use in-place mode.
###Code
df.drop(0)
df.drop(range(0,1000))
###Output
_____no_output_____
###Markdown
To add more rows to a table the following syntax is used: objects are created containing only the specified origins.
###Code
dfATL = df[df.Origin == "ATL"]
dfHOU = df[df.Origin == "HOU"]
###Output
_____no_output_____
###Markdown
To combine those two data frames, the Houston rows are appended to the Atlanta ones with `dfATL.append(dfHOU)`; this is possible because both have the same column structure, and to save the result the new object `newdf` is created.
###Code
newdf = dfATL.append(dfHOU)
###Output
_____no_output_____
###Markdown
We check it with the command `newdf.Origin`: first the Atlanta rows appear, followed by the Houston ones.
###Code
newdf.Origin
###Output
_____no_output_____
###Markdown
Groupby. A tool for summarising our data that lets us perform simple mathematical operations grouped by categories. The following example shows how it is applied: to find the longest delay for each day of the week we use the `max()` function, `mean()` gives the average and `min()` the minimum; we can even find out which days have fewer flights using `count()`, and `describe()` shows a statistical summary of what happened on each day of the week (a sketch with `mean()` and `count()` follows after the next cell).
###Code
df.groupby(by = "DayOfWeek")["ArrDelay"].max()
df.groupby(by = "DayOfWeek")["ArrDelay"].describe()
###Output
_____no_output_____
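###Markdown
A minimal sketch of the other aggregations mentioned above (`mean()` and `count()`), on the same `df`:
###Code
df.groupby(by = "DayOfWeek")["ArrDelay"].mean()   # average delay per day of the week
df.groupby(by = "DayOfWeek")["ArrDelay"].count()  # number of flights per day of the week
###Output
_____no_output_____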
###Markdown
Complex comparisons can be made, as long as the columns are of the same type (text with text, decimals with decimals, etc.).
###Code
df.groupby(by = "DayOfWeek")["ArrDelay", "DepDelay"].mean()
###Output
<ipython-input-62-4360c000c3f2>:1: FutureWarning: Indexing with multiple keys (implicitly converted to a tuple of keys) will be deprecated, use a list instead.
df.groupby(by = "DayOfWeek")["ArrDelay", "DepDelay"].mean()
###Markdown
Operations between results can also be performed, for example to determine on which day the spread between maximum and minimum is largest:
###Code
df.groupby(by = "DayOfWeek")["ArrDelay"].max()- df.groupby(by = "DayOfWeek")["ArrDelay"].min()
###Output
_____no_output_____
###Markdown
For a more complex analysis, an alternative data set is created with the following data:
###Code
dfATLHOU = df[df.Origin.isin(["ATL","HOU"])]
###Output
_____no_output_____
###Markdown
With the following instruction we see each day of the week separately, with the flights leaving Atlanta and those leaving Houston and their means. This lets us compare, for example, in which of the two cities flights arrive more delayed and on which day the differences are largest (the order inside `by` matters for reading the information comfortably; try `["Origin", "DayOfWeek"]` to see the difference).
###Code
dfATLHOU.groupby(by = ["DayOfWeek", "Origin"])["ArrDelay"].mean()
###Output
_____no_output_____
###Markdown
To make the work easier, this information can also be stored in an object to which the instructions are applied directly, and it will run faster.
###Code
mygroupby = dfATLHOU.groupby(by = ["DayOfWeek", "Origin"])["ArrDelay"]
mygroupby.min()
###Output
_____no_output_____
###Markdown
Duplicates and missing values. In general, what is done with duplicated data is to remove it.
###Code
dfduplicate = df.append(df)
dfduplicate = dfduplicate.sample(frac=1)
dfclean = dfduplicate.drop_duplicates()
len(dfclean) == len(df)
len(dfclean)
###Output
_____no_output_____
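###Markdown
Before dropping anything, `duplicated()` can be used to count how many rows are exact repeats; a sketch on the `dfduplicate` object created above:
###Code
dfduplicate.duplicated().sum()
###Output
_____no_output_____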
###Markdown
To do something more specific:
###Code
dfclean.drop_duplicates(subset = "DayofMonth")
###Output
_____no_output_____
###Markdown
The `dropna()` function is used to manage missing data.
###Code
df.dropna()
###Output
_____no_output_____
###Markdown
There are parameters that make `dropna()` behave less drastically; this is the case of `thresh` (threshold).
###Code
df.dropna(thresh=25)
df.dropna(thresh=len(df.columns)-2)
###Output
_____no_output_____
###Markdown
To keep only the rows that have no NaN in a chosen column, we use `subset` inside the function and pass it the name of the column to filter on, inside a list.
###Code
df.dropna(subset = ["CancellationCode"])
###Output
_____no_output_____
###Markdown
Numpy. The most popular object in the library is the 'array'.
###Code
import numpy as np
valoraciones = np.array([[8,7,8,5], [2,6,8,1],[8,8,9,5]])
valoraciones
valoraciones[0][1]
valoraciones[0,1]
valoraciones2 = np.array([[[8,7,8,5],[2,5,5,2]],[[2,6,8,4],[8,9,7,4]],[[8,8,9,3],[10,9,10,8]]])
valoraciones2
valoraciones2[0,1,2]
###Output
_____no_output_____
###Markdown
The `zeros()` function creates an object with whatever dimensions we indicate.
###Code
np.zeros((3,2,4,5,6))
###Output
_____no_output_____
###Markdown
Operations can be performed between 'arrays' of the same size.
###Code
valoraciones2 + np.ones((3,2,4))
###Output
_____no_output_____
###Markdown
The overall `mean` can be obtained, or a partial one by specifying the axis.
###Code
np.mean(valoraciones2)
np.mean(valoraciones2,axis = 0)
np.mean(valoraciones2,axis = 1)
np.mean(valoraciones2,axis = 2)
###Output
_____no_output_____
###Markdown
The `reshape` function converts a 'list' into an array of whatever size is indicated, as needed.
###Code
np.reshape([1,2,3,4,5,6,7,8,9,10,11,12], (3,2,2))
###Output
_____no_output_____
###Markdown
The 'median' function can be used on data frame columns, for example if we had loaded a column called "columna1":
###Code
np.median(df["columna1"])
###Output
_____no_output_____
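###Markdown
With the flight data that is actually loaded in this notebook, the same idea can be sketched on a real column, dropping NaN first because `np.median` does not ignore missing values:
###Code
np.median(df["ArrDelay"].dropna())
###Output
_____no_output_____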
###Markdown
The `random` function creates a random 'array' of whatever shape is needed.
###Code
np.random.rand(2,2)
###Output
_____no_output_____
###Markdown
Correlations. First of all, a correlation is a linear relationship between two quantitative variables that takes the expression shown on screen; remember that "correlation does not imply causation". We will be able to detect correlations, but this will not serve to find explanations ('a' is positively correlated with 'b'), only to quantify the relationship. It is interpreted as the quotient between the covariance of the two variables and the product of their standard deviations, and it can take values between `-1 and 1`: the closer the value is to these extremes, the stronger the relationship. Normally values between `0.3 and -0.3` are considered very `low`, and only from around `0.6 or 0.7`, with either sign, do we speak of `strong` correlations.
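As a reminder, the expression referred to above is the Pearson correlation coefficient; written out from the description in this same paragraph:
$$r_{XY} = \frac{\operatorname{cov}(X, Y)}{\sigma_X \, \sigma_Y}$$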
###Code
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv", nrows = 100000)
np.corrcoef(df["ArrDelay"],df["DepDelay"])
###Output
_____no_output_____
###Markdown
This happens because the correlation coefficient does not accept missing values. They have to be either imputed or removed, and the simplest step is to remove them (an imputation sketch with `fillna()` follows after the next cell).
###Code
df.dropna(inplace=True, subset=["ArrDelay", "DepDelay"])
np.corrcoef([df["ArrDelay"],df["DepDelay"],df["DepTime"]])
df.drop(inplace = True, columns = ["Year","Cancelled","Diverted"])
df.corr()
df.drop(inplace = True, columns = ["Month"])
corr = round(df.corr(),3)
corr.style.background_gradient()
###Output
_____no_output_____
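###Markdown
The other option mentioned above, imputing instead of removing, can be sketched with `fillna()`. Columns such as `CarrierDelay` still contain NaN at this point; filling them with 0 on a copy is an assumption made only for illustration:
###Code
df_imputed = df.copy()
# impute instead of drop: replace the remaining NaN delays with 0
df_imputed["CarrierDelay"] = df_imputed["CarrierDelay"].fillna(0)
df_imputed["CarrierDelay"].isna().sum()
###Output
_____no_output_____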
###Markdown
Chi-squared test
###Code
import pandas as pd
import numpy as np
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv")
np.random.seed(0)
df = df[df["Origin"].isin(["HOU", "ATL", "IND"])]
df = df.sample(frac=1)
dg = df[0:10000]
df["BigDelay"] = df["ArrDelay"] > 30
observados = pd.crosstab(index=df['BigDelay'],columns=df['Origin'], margins=True)
observados
###Output
_____no_output_____
###Markdown
If you get an error with the `scipy.stats` module, try installing scipy directly with the command `pip install scipy`.
###Code
from scipy.stats import chi2_contingency
test = chi2_contingency(observados)
test
esperados = pd.DataFrame(test[3])
esperados
esperados_rel = round(esperados.apply(lambda r: r/len(df) *100,axis=1),2)
observados_rel = round(observados.apply(lambda r: r/len(df) *100,axis=1),2)
observados_rel
esperados_rel
test[1]
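# For clarity: chi2_contingency returns a 4-tuple (statistic, p-value, degrees of
# freedom, expected frequencies), so test[3] above is the expected table and test[1]
# is the p-value. The same values, unpacked into named variables:
chi2_stat, p_value, dof, expected_freqs = test
print(chi2_stat, p_value, dof)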
###Output
_____no_output_____
###Markdown
Hypothesis test summary
* If the p-value < 0.05, there are significant differences: there is a relationship between the variables.
* If the p-value > 0.05, there are no significant differences: there is no relationship between the variables.
Analysis of extreme values (outliers)
###Code
import pandas as pd
import numpy as np
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv", nrows=100000)
x = df["ArrDelay"].dropna()
Q1 = np.percentile(x,25)
Q3 = np.percentile(x,75)
rangointer = Q3 - Q1
umbralsuperior = Q3 + 1.5*rangointer
umbralinferior = Q1 - 1.5*rangointer
umbralsuperior
umbralinferior
np.mean(x > umbralsuperior)
np.mean(x < umbralinferior)
from sklearn.covariance import EllipticEnvelope
outliers = EllipticEnvelope(contamination = .01)
var_list = ["DepDelay", "TaxiIn", "TaxiOut", "CarrierDelay", "WeatherDelay", "NASDelay", "SecurityDelay", "LateAircraftDelay"]
x = np.array(df.loc[:,var_list].dropna())
outliers.fit(x)
pred = outliers.predict(x)
pred
elips_outliers = np.where(pred == -1)[0]
elips_outliers
###Output
_____no_output_____
###Markdown
Transforming a dataframe into a relational database
###Code
import pandas as pd
data = [(1,"Joan","Gastón",25,1,"Libreta",1.2,.4,0.8,3,"03-02-2018"),
(1,"Joan","Gastón",25,2,"Esfero",0.4,0.15,0.25,1,"03-02-2018"),
(1,"Joan","Gastón",25,1,"Libreta",1.2,.4,0.8,2,"15-02-2018"),
(2,"Joan","López",33,2,"Esfero",0.4,0.15,0.25,4,"01-02-2018"),
(2,"Joan","López",33,1,"Libreta",1.2,.4,0.8,10,"05-03-2018"),
(3,"María","García",40,1,"Libreta",1.2,.4,0.8,20,"13-04-2018"),
(3,"María","García",40,2,"Esfero",0.4,0.15,0.25,1,"09-02-2018"),
(3,"María","García",40,2,"Esfero",0.4,0.15,0.25,3,"03-04-2018")]
labels = ["Comprador_id","Nombre","Apellido","Edad","Producto_id","Producto","Precio","Coste","Margen","Cantidad","Fecha"]
df = pd.DataFrame.from_records(data, columns = labels)
df
compradores = df.drop_duplicates(subset = "Comprador_id", keep = "first")
compradores
compradores = compradores[["Comprador_id","Nombre","Apellido","Edad"]]
compradores
productos = df.drop_duplicates(subset = "Producto_id", keep = "first")
productos = productos[["Producto_id","Producto","Precio","Coste","Margen"]]
productos
compras = df[["Comprador_id","Producto_id","Fecha","Cantidad"]]
compras
###Output
_____no_output_____
###Markdown
Joins in relational databases
###Code
import pandas as pd
consumidores = [("A","Móvil"),("B","Móvil"),("A","Portátil"),("A","Tablet"),
("B","Tablet"),("C","Portátil"),("D","Smartwatch"),("E","Consola")]
con_labels = ["Consumidor","Producto"]
con_df = pd.DataFrame.from_records(consumidores,columns = con_labels)
productores = [("a","Móvil"),("a","Smartwatch"),("a","Tablet"),("b","Portátil"),
("c","Sobremesa"),("c","Portátil")]
prod_labels = ["Productor","Producto"]
prod_df = pd.DataFrame.from_records(productores,columns = prod_labels)
###Output
_____no_output_____
###Markdown
These tables are just for a basic example; first we display them.
###Code
con_df
prod_df
###Output
_____no_output_____
###Markdown
Joining the tables. The tables are joined with the `merge()` function. Changing the `how` argument changes the result: with **`'outer'`** the complete union is shown, including results that do not appear in both tables, whereas **`'inner'`** omits the results that are not in both tables. In the following two options the order matters: with **`'right'`** the join is made only against the right-hand table (which is why it keeps 'Sobremesa' and drops 'Consola'), and with **`'left'`** the opposite happens (it keeps 'Consola' and drops 'Sobremesa').
###Code
pd.merge(con_df,prod_df,on="Producto",how="outer")
pd.merge(con_df,prod_df,on="Producto",how="inner")
pd.merge(con_df,prod_df,on="Producto",how="right")
pd.merge(con_df,prod_df,on="Producto",how="left")
###Output
_____no_output_____
###Markdown
Parallelising loops in Python
###Code
import pandas as pd
import numpy as np
from joblib import Parallel, delayed
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv", nrows=100000)
df_sub = df[['CarrierDelay','WeatherDelay','NASDelay','SecurityDelay','LateAircraftDelay']]
df_sub.head(10)
def retraso_maximo(fila):
if not np.isnan(fila).any():
names = ['CarrierDelay','WeatherDelay','NASDelay','SecurityDelay','LateAircraftDelay']
return names[fila.index(max(fila))]
else:
return "None"
results = []
for fila in df_sub.values.tolist():
results.append(retraso_maximo(fila))
results
result = Parallel(n_jobs = 2, backend = "multiprocessing")(
map(delayed(retraso_maximo), df_sub.values.tolist()))
result
###Output
_____no_output_____
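###Markdown
To check whether parallelising actually pays off here, both versions can be timed with the same `time.time()` pattern used at the start of the notebook; a sketch whose numbers will of course depend on the machine:
###Code
import time

start = time.time()
serial = [retraso_maximo(fila) for fila in df_sub.values.tolist()]
print("serial:", time.time() - start)

start = time.time()
parallel = Parallel(n_jobs = 2, backend = "multiprocessing")(
    map(delayed(retraso_maximo), df_sub.values.tolist()))
print("parallel:", time.time() - start)
###Output
_____no_output_____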
###Markdown
Matplotlib
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv", nrows=100000)
data = np.unique(df.Cancelled, return_counts = True)
data
plt.pie(x = data[1],
        labels = data[0], # draw a basic pie chart
colors = ["Red","Green"],
shadow = True,
startangle = 90,
radius= 2)
plt.show()
###Output
_____no_output_____
###Markdown
Modifying chart elements in Matplotlib. Bubble chart
###Code
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv", nrows=1000000)
df= df.sample(frac=1).head(100)
plt.scatter(x=df.DayofMonth,y=df.ArrDelay,s=df.Distance)
plt.scatter(x=df.DayofMonth,y=df.ArrDelay,s=df.Distance,alpha=.3,c = df.DayOfWeek.isin([6,7]))
plt.title("Retrasos en EEUU")
plt.xlabel("Día del Mes")
plt.ylabel("Retrasos al Llegar")
plt.ylim([0,150])
plt.xticks([0,15,30])
plt.text(x=28,y=120,s="Mi vuelo")
plt.show()
###Output
_____no_output_____
###Markdown
Labels and legends in Matplotlib
###Code
import pandas as pd
import seaborn as sns
import numpy as np
import matplotlib.pyplot as plt
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv", nrows=100000)
data = np.unique(df.DayOfWeek,return_counts=True)
labs = ["lun","Mar","Mie","Jue","Vie","Sab","Dom"]
data
plt.pie(x = data[1],
labels = data[0],
radius = 1.5,
colors = ["Red","Green","Orange","Blue","Gray","Pink","Black"],
startangle = 90)
plt.show()
plt.pie(x = data[1],
labels = labs,
radius = .7,
colors = sns.color_palette("hls",7),
startangle = 90,
autopct = "%1.1f%%",
explode = (0,0,0,0.2,0,0,0.1))
plt.legend(loc="upper right",labels = labs)
plt.show()
ax = sns.barplot(x = labs, y = data[1])  # avoid shadowing the plt module
ax.set(xlabel = "Día de la semana", ylabel = "Número de vuelos")
###Output
_____no_output_____
###Markdown
Time series plots in Matplotlib
###Code
import pandas as pd
import seaborn as sns
import numpy as np
import datetime
import time
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv")
df2 = df[df["Origin"].isin(["ATL","HOU","IND"])]
df.head(500000)
times = []
for i in np.arange(len(df)):
times.append(datetime.datetime(year = 2008, month = df.loc[i,"Month"], day = df.loc[i,"DayofMonth"]))
times[50000]
df["Time"] = times
data = df.groupby(by=["Time"], as_index=False)[["DepDelay","ArrDelay"]].mean()
data.head()
sns.lineplot(data["Time"], data["DepDelay"])
data = df.groupby(by=["Time"])[["DepDelay","ArrDelay"]].mean()
data.head()
sns.lineplot(data=data)
times = []
for i in df2.index:
times.append(datetime.datetime(year = 2008, month = df2.loc[i,"Month"], day = df2.loc[i,"DayofMonth"]))
df2["Time"] = times
sns.set(rc={'figure.figsize':(10,15)})
sns.lineplot(x="Time",y="ArrDelay",hue="Origin",data=df2)
###Output
_____no_output_____
###Markdown
Histograms and box plots in Matplotlib. These plots help us see where our data are concentrated and draw exploratory conclusions about how they are distributed.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv")
df.dropna(inplace=True, subset = ["ArrDelay", "DepDelay", "Distance"])
sns.distplot(df["Distance"], kde = False, bins = 100)
sns.kdeplot(df["ArrDelay"])
sns.kdeplot(df["DepDelay"])
plt.xlim([-300,300])
df2 = df[df["Origin"].isin(["ATL","HOU","IND"])].sample(frac = 1).head(500)
sns.boxplot(x="DepDelay",y="Origin",data = df2)
plt.xlim([-20,150])
###Output
_____no_output_____
###Markdown
Scatter plots and heat maps in Matplotlib
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("C:/Users/Roger Castillo/Desktop/GitHub/Python/Ciencia_Datos/base_datos_2008.csv")
df.dropna(inplace=True, subset = ["ArrDelay", "DepDelay", "Distance","AirTime"])
sns.set(rc={'figure.figsize':(15,10)}) # Adjust the figure size
df2 =df[df["Origin"].isin(["ATL","HOU","IND"])].sample(frac=1).head(1000)
sns.jointplot(df2["DepDelay"],df2["ArrDelay"])
df3 = df2[np.abs(df2["DepDelay"])<40]
df3 = df3[np.abs(df3["ArrDelay"])<40]
sns.jointplot(df3["DepDelay"],df3["ArrDelay"],kind="hex")
sns.jointplot(df3["DepDelay"],df3["ArrDelay"],kind="kde")
gb_df = pd.DataFrame(df2.groupby(["Origin","Month"],as_index=False)["DepDelay"].mean())
gb_df.head()
data = gb_df.pivot("Month","Origin","DepDelay")
data
sns.set(rc={'figure.figsize':(15,8)})
sns.heatmap(data = data,annot=True,linewidths=.5)
###Output
_____no_output_____ |
notebooks/solutions/session1_solutions.ipynb | ###Markdown
Apache Spark for astronomy: hands-on session 1 ContextWelcome to the series of notebooks on Apache Spark! The main goal of this series is to get familiar with Apache Spark, and in particular its Python API called PySpark in a context of the astronomy. In this first notebook, we will introduce few Apache Spark functionalities of interest (and by no means complete!). Apache Spark[Apache Spark](https://spark.apache.org/) is a cluster computing framework, that is a set of tools to perform computation on a network of many machines. Spark started in 2009 as a research project, and it had a huge success so far in the industry. It is based on the so-called MapReduce cluster computing paradigm, popularized by the Hadoop framework using implicit data parallelism and fault tolerance.The core of Spark is written in Scala which is a general-purpose programming language that has been started in 2004 by Martin Odersky (EPFL). The language is inter-operable with Java and Java-like languages, and Scala executables run on the Java Virtual Machine (JVM). Note that Scala is not a pure functional programming language. It is multi-paradigm, including functional programming, imperative programming, object-oriented programming and concurrent computing.Spark provides many functionalities exposed through Scala/Python/Java/R API (Scala being the most complete one). As far as this workshop is concerned, we will use the Python API (called PySpark) for obvious reasons. But feel free to put your hands on Scala, it's worth it. For those interested, you can have a look at this [tutorial](https://gitlab.in2p3.fr/MaitresNageurs/QuatreNages/Scala) on Scala. Learning objectives- Loading and distributing data with Spark SQL (Apache Spark Data Sources API)- Exploring DataFrame & partitioning- Manipulating Spark SQL built-in functions Apache Spark Data Sources A tour of data formatsThere are many data formats used in the context of Big Data: CSV (1978), XML (1996), JSON (2001), Thrift (2007), Protobuf (2008), Avro & SequenceFile (2009), Parquet (2013), ORC (2016), and the list goes on... Some are _naively_ structured that is using a single type to describe the data (e.g. text) without any internal organisation to access faster the data. Others are more complex and highly optimised for big data treatment (e.g. Parquet). Spark handles most of them by default. Unfortunately those are not the data formats typically chosen by the scientific community. In astronomy for example you would rather store the data in FITS (1981) or HDF5 (1988) format, and in particle physics you would use ROOT (1995). These are multi-purposes data formats: images, histograms, spectra, particle lists, data cubes, or even structured data such as multi-table databases can be efficiently stored and accessed. Connecting to Data SourceThe data source API in Apache Spark belongs to the [Spark SQL module](https://spark.apache.org/sql/). Note that Spark Core has some simple built-in ways to read data from disk (binary or text), but Spark SQL is more complete and give you access to DataFrames directly. 
If you want to connect a specific data source with Apache Spark, you have mostly two ways:- [indirect] Access and distribute your files as binary streams (Spark does it natively), and decode the data on-the-fly within executors using third-party libraries.- [native] Use a built-in or custom connector to access, distribute and decode the data natively.FITS or HDF5 as most of scientific data formats, were not designed for serialisation (distribution of data over machines) originally and they often use compression to reduce the size on disk. Needless to say that default Spark cannot read those natively. First attempts to connect those data formats (see e.g. [1] for FITS) with Spark were using the indirect method above. By reading files as binary streams, the indirect method has the advantage of having access to all FITS functionalities implemented in the underlying user library. This can be an advantage when working with the Python API for example which already contains many great scientific libraries. However this indirect method assumes each Spark mapper will receive and handle one entire file (since the filenames are parallelized and entire file data must be reconstructed from binary once the file has been opened by a Spark mapper). Therefore each single file must fit within the memory of a Spark mapper, hence the indirect method cannot distribute a dataset made of large FITS files (e.g. in [1] they have a 65 GB dataset made of 11,150 files). In addition by assuming each Spark mapper will receive and handle one entire file, the indirect method will have a poor load balancing if the dataset is made of files with not all the same size.Fortunately Apache Spark low-level layers are sufficiently well written to allow extending the framework and write native connectors for any kind of data sources. Recently connectors for FITS, HDF5 and ROOT were made available [2, 3, 4] to the community. With such connectors, there is a guarantee of having a good load balancing regardless the structure of the dataset and the size of the input files is no more a problem (a 1 TB dataset made of thousand 1 GB files or one single 1 TB file will be viewed as almost the same by a native Spark connector). Note however that the Data Source API is in Java/Scala and if there is no library to play with your data source in those languages you must implement it (what has been done in [2]) or interface with another language.Note that the low-level layers dealing with the data sources have been recently updated. Apache Spark 2.3 introduced the Data Source API version 2. While the version 1 is still available and usable for a long time, we expect that all Spark connectors will comply with this v2 in the future.[1] Z. Zhang and K. Barbary and F. A. Nothaft and E. R. Sparks and O. Zahn and M. J. Franklin and D. A. Patterson and S. Perlmutter, Kira: Processing Astronomy Imagery Using Big Data Technology, DOI 10.1109/TBDATA.2016.2599926. [2] Peloton, Julien and Arnault, Christian and Plaszczynski, Stéphane, FITS Data Source for Apache Spark, Computing and Software for Big Science (1804.07501). https://github.com/astrolabsoftware/spark-fits [3] Liu, Jialin and Racah, Evan and Koziol, Quincey and Canon, Richard Shane, H5spark: bridging the I/O gap between Spark and scientific data formats on HPC systems, Cray user group (2016). https://github.com/valiantljk/h5spark [4] Viktor Khristenko, & Jim Pivarski. (2017, October 20). diana-hep/spark-root: Release 0.1.14 (Version v0.1.14). Zenodo. 
http://doi.org/10.5281/zenodo.1034230 Spark SQL and DataFrames DataFrameReaderThe interface to read data from disk is always the same for any kind of built-in and officially supported data format:```pythondf = spark.read\ .format(format: str)\ .option(key: str, value: Any)\ ... .option(key: str, value: Any)\ .load(path: str)``` Note that for most of the data sources, you can use wrappers such as:```pythonspark.read.csv(path, key1=value1, key2=value2, ...)```**Format**: The format can be "csv", "json", "parquet", etc. **Options**: The number of options depends on the underlying data source. Each has its own set of options. In most of the case, no options are needed, but you might want to explore the different possibilities at some point. Surprisingly it is not easy to find documentation and the best remains to read the source code documentation. In pyspark you can easily access it via the wrappers:```python DataFrameReader objectdf_reader = spark.read Doc on reading CSVdf_reader.csv? doc printed Doc on reading Parquetdf_reader.parquet? doc printed```**Path**: The way to specify path is threefold: either a single file (`path/to/folder/myfile.source`), or an entire folder (`path/to/folder`), or a glob pattern (`path/to/folder/*pattern*.source`). Note that you also need to specify the type of file system you are using. Example:``` python Connect to hdfspath = 'hdfs:///path/to/data' Connect to S3path = 's3:///path/to/data' Connect to local file systempath = 'files:///path/to/data'```If nothing is specified (`'/path/to/data'`), it will adapt to your `--master` (e.g. if you launch spark in local mode, you will connect to the local file system by default). Using a custom connectorYou can also connect to custom connector not included in the default Spark distribution. To do so, you will need to specify the dependencies when submitting your job or invoking your shell. If your connector is available through [Maven Central Repository](https://search.maven.org/), you can easily specify it via:``` Direct download from central repositoryspark-submit --packages groupId:artifactId:version ...```Note that this is the same syntax when launching the `pyspark` shell.For example, if you want to read FITS files using the [spark-fits](https://github.com/astrolabsoftware/spark-fits) connector you would add the following:``` Direct download from central repositoryspark-submit --packages com.github.astrolabsoftware:spark-fits_2.11:0.7.1 ...```You can find the spark-fits entry in the Maven Central [here](https://search.maven.org/artifact/com.github.astrolabsoftware/spark-fits_2.11/0.7.1/jar) for reference.Alternatively you can download the source code for a particular connector, compile it and include the `jars`:``` Specify manually the dependencyspark-submit --jars /path/to/lib/spark-fits.jars ...```Note that when you launch `pyspark`, already a numbers of `jars` are included by default (the ones for Spark for example). Loading and distributing dataYou will find test data in the folder `data`.
###Code
from pyspark.sql import SparkSession
# Initialise our Spark session
spark = SparkSession.builder.getOrCreate()
###Output
_____no_output_____
###Markdown
Loading Data: simply structured data (text)You can load CSV data into a DataFrame by simply using:
###Code
# Load simple CSV file
df_csv = spark.read.format("csv")\
.load("../../data/clusters.csv")
df_csv.printSchema()
###Output
root
|-- _c0: string (nullable = true)
|-- _c1: string (nullable = true)
|-- _c2: string (nullable = true)
|-- _c3: string (nullable = true)
###Markdown
Notice that by default the CSV connector interprets all entries as String and gives dummy names to the columns. You can infer the data types and use the first row as column names by specifying options:
###Code
df_csv = spark.read.format("csv")\
.option('inferSchema', True)\
.option('header', True)\
.load("../../data/clusters.csv")
df_csv.printSchema()
# Make a nice representation of our data
df_csv.show(5)
###Output
_____no_output_____
###Markdown
Loading Data: complex structured data (Parquet)More complex data formats can automatically infer the schema and data types. They are also optimised for fast data access and low memory consumption.
###Code
# Same using Parquet - Note that the schema and the data types
# are directly inferred.
df_parquet = spark.read.format("parquet").load("../../data/clusters.parquet")
df_parquet.printSchema()
df_parquet.show(5)
###Output
root
|-- x: double (nullable = true)
|-- y: double (nullable = true)
|-- z: double (nullable = true)
|-- id: integer (nullable = true)
+-------------------+-------------------+------------------+---+
| x| y| z| id|
+-------------------+-------------------+------------------+---+
|-1.4076402686194887| 6.673344773733206| 8.208460943517498| 2|
| 0.6498424376672443| 3.744291410605022|1.0917784706793445| 0|
| 1.3036201950328201|-2.0809475280266656| 4.704460741202294| 1|
|-1.3741641126376476| 4.791424573067701| 2.562770404033503| 0|
| 0.3454761504864363| -2.481008091382492|2.3088066072973583| 1|
+-------------------+-------------------+------------------+---+
only showing top 5 rows
###Markdown
Loading Data: astronomy format (FITS)To read FITS, you will need to specify a custom connector such as [spark-fits](https://github.com/astrolabsoftware/spark-fits) (this is done for you):```PYSPARK_DRIVER_PYTHON=jupyter-notebook pyspark --packages com.github.astrolabsoftware:spark-fits_2.11:0.8.3 ...```
###Code
df_fits = spark.read.format("fits").option("hdu", 1).load("../../data/clusters.fits")
df_fits.printSchema()
df_fits.show(5)
###Output
root
|-- x: double (nullable = true)
|-- y: double (nullable = true)
|-- z: double (nullable = true)
|-- id: integer (nullable = true)
+-------------------+-------------------+------------------+---+
| x| y| z| id|
+-------------------+-------------------+------------------+---+
|-1.4076402686194887| 6.673344773733206| 8.208460943517498| 2|
| 0.6498424376672443| 3.744291410605022|1.0917784706793445| 0|
| 1.3036201950328201|-2.0809475280266656| 4.704460741202294| 1|
|-1.3741641126376476| 4.791424573067701| 2.562770404033503| 0|
| 0.3454761504864363| -2.481008091382492|2.3088066072973583| 1|
+-------------------+-------------------+------------------+---+
only showing top 5 rows
###Markdown
Partitioning

You might have noticed that Spark cuts the dataset into partitions, and for each partition Spark will run one task. Following the principle that moving computation is usually cheaper than moving data, Spark reads file blocks in a performant way: instead of copying file blocks to a central compute node, which can be expensive, the driver sends the computation to worker nodes close to DataNodes where the data reside. Normally, Spark tries to set the number of partitions automatically based on your distributed file system configuration. For example in HDFS, the size of data blocks is typically 128 MB (tunable), therefore the default number of Spark partitions when reading data will be the total number of 128 MB chunks for your dataset.

```How many partitions should I use?```

There is no unique answer to that. You will often hear: `typically you want 2-4 partitions for each CPU in your cluster`, but that implies you can accommodate an infinite number of CPUs at limited partition size. In practice it will mainly depend on:
- the total volume of data you want to distribute,
- the number of CPUs you have access to and their RAM,
- and the kind of task you want to perform.

If you have too few partitions, you will not benefit from all of the cores available in the cluster (time to solution can be longer, and you can run out of memory for intensive tasks). If you have too many partitions, there will be excessive overhead in managing many small tasks. In between, you are generally good.

Note that when you load data, Spark assigns the number of partitions itself, and you can repartition the dataset using:

```python
# numPartitions is arbitrary but this operation will add a shuffle step
df.repartition(numPartitions)
# Using either a number of partitions or column names to repartition by range
df.repartitionByRange(numPartitions, colnames)
# Using one or several columns to repartition
df.orderBy(colnames)
# numPartitions must be lower than the current one, but no shuffle is performed
df.coalesce(numPartitions)
```

You can access the number of partitions in use using:

```python
df.rdd.getNumPartitions()
```

Frequent basic use-cases:
- The standard: You have a lot of data stored in large files and data entries need to be processed independently from each other --> keep the default.
- The multi-files: When reading many small files (each being much smaller than the typical 128 MB data block size), you usually end up with way more partitions than if you were reading the same volume of data but with fewer files --> repartition your dataset with fewer partitions.
- The shuffle: If your tasks involve a lot of data movement and communication between machines (data shuffle) --> it is usually a good idea to keep the number of partitions not too high.
- The heavy filter: sometimes you filter out a lot of data based on some condition, and then you execute some action on the remaining subset. Because of the filtering, you might end up with many empty partitions --> try to see if repartitioning with fewer partitions helps in processing the remainder faster.

**In practice you will end up experimenting a bit with the number of partitions... But always keep in mind that the main reason to repartition is to minimize data movement inside the cluster.**

Basic operations on DataFramesLet's load our data
###Code
df = spark.read.format("parquet").load("../../data/clusters.parquet")
###Output
_____no_output_____
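To see the partitioning discussion above in action, here is a minimal sketch assuming the `df` just loaded and an active `spark` session:
```python
# Number of partitions chosen by Spark when reading the file
print("initial partitions:", df.rdd.getNumPartitions())

# repartition triggers a shuffle; coalesce only merges existing partitions
df_12 = df.repartition(12)
df_2 = df_12.coalesce(2)
print("after repartition:", df_12.rdd.getNumPartitions())
print("after coalesce:", df_2.rdd.getNumPartitions())
```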
###Markdown
Select & filtersThere are powerful methods to select subsets of columns or to filter rows based on values. Note that column selection and row filtering are transformations (in the sense of functional programming) - nothing really happens to the data until you trigger an action.
###Code
# Filtering rows based on entry values
df_x_more_than_one = df.filter("x > 1")
# Same as before, but different syntax
df_x_more_than_one = df.filter(df["x"] > 1)
# Filtering column based on their name
df_y_only = df.select('y')
df_x_and_y = df.select(['x', 'y'])
# You can chain transformations
df_x_cluster_one = df.select('x').filter('id == 1')
# Trigger an action
row_with_x_more_than_one = df_x_more_than_one.count()
print("{} entries with x > 1".format(row_with_x_more_than_one))
###Output
1291 entries with x > 1
###Markdown
Map and mapPartitionsYou can also apply transformations on DataFrame values. The simplest transformation uses the `map` method, which preserves the cardinality of the DataFrame. `mapPartitions` is similar, although the cardinality is not necessarily preserved.
###Code
# Example for map: multiply all elements by 2
def multiply_by_two(row):
"""
"""
new_row = [2*i for i in row]
return new_row
# map is a RDD method (not available for DataFrame in pyspark)
df.rdd.map(multiply_by_two).toDF(df.columns).show(5)
# Example for mapPartitions: count the number of rows per partition
def yield_num_rows(part, param1):
""" Yield the number of rows in the partition
Parameters
----------
part : Iterator
Iterator containing partition data
Yield
----------
length: integer
number of rows inside the partition
"""
partition_data = [*part]
print(param1)
yield len(partition_data)
# Let's repartition our DataFrame in 12 partitions
df_repart = df.repartition(12)
# mapPartitions is a RDD method(not available for DataFrame in pyspark)
print("Number of rows per Spark partitions:")
# df_repart.rdd.mapPartitions(lambda part: yield_num_rows(part)).collect()
param1 = 2
df_repart.rdd.mapPartitions(lambda part: yield_num_rows(part, param1)).collect()
###Output
Number of rows per Spark partitions:
###Markdown
notice the super good load balancing! Be careful though when using `collect`, as data flows from the executors to the (poor and lonely and undersized) driver. Always reduce the data first! **Exercise (££)**: Compute the barycentre of each partition (hint: repartition or re-order according to the `id` column).
###Code
import numpy as np
def yield_barycentre(part):
""" Yield the number of rows in the partition
Parameters
----------
part : Iterator
Iterator containing partition data
Yield
----------
length: integer
number of rows inside the partition
"""
try:
partition_data = [*part]
x, y, z, _ = np.transpose(partition_data)
yield np.mean([x, y, z], axis=1)
except ValueError as e:
# Empty partition
yield [None, None, None]
# Let's repartition our DataFrame according to "id"
df_repart = df.orderBy("id")
# mapPartitions is a RDD method(not available for DataFrame in pyspark)
print("Cluster coordinates:")
df_repart.rdd.mapPartitions(yield_barycentre).collect()
###Output
Cluster coordinates:
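The same barycentres can be cross-checked with the DataFrame API directly, which avoids collecting raw partition data; a sketch assuming the `df` defined above:
```python
from pyspark.sql import functions as F

# Mean position per cluster id, computed with a Spark SQL aggregation
df.groupBy("id") \
  .agg(F.avg("x").alias("x_mean"),
       F.avg("y").alias("y_mean"),
       F.avg("z").alias("z_mean")) \
  .orderBy("id") \
  .show()
```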
###Markdown
StatisticsYou can easily access basic statistics of your DataFrame:
###Code
# The describe method returns a DataFrame
df.describe().show()
###Output
+-------+-------------------+------------------+-------------------+------------------+
|summary| x| y| z| id|
+-------+-------------------+------------------+-------------------+------------------+
| count| 4000| 4000| 4000| 4000|
| mean|0.22461143161189448|3.5005327477749812| 4.746261685611487| 0.99975|
| stddev| 1.4333802737826418| 3.970358011802803| 3.3858958227831852|0.8166496596868619|
| min| -4.320974828599122|-5.207575440768161|-1.4595005976690572| 0|
| max| 4.077800662643146|10.854512466048538| 12.602016902866591| 2|
+-------+-------------------+------------------+-------------------+------------------+
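Beyond `describe()`, approximate quantiles are available without a full sort; a small sketch assuming the same `df`, where the last argument is the allowed relative error:
```python
# 25th, 50th and 75th percentiles of x, with 1% relative error
quantiles = df.approxQuantile("x", [0.25, 0.5, 0.75], 0.01)
print(quantiles)
```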
###Markdown
AggregationApache Spark has built-in methods to perform aggregation. Be careful though - this implies a shuffle (i.e. communication between machines and data transfer), and can be a performance killer!**Exercise (£):** group by `id`, and count the number of elements per `id`
###Code
df.groupBy("id").count().show()
###Output
+---+-----+
| id|count|
+---+-----+
| 1| 1333|
| 2| 1333|
| 0| 1334|
+---+-----+
###Markdown
Directed acyclic graph (DAG)As quickly highlighted above, Spark commands are either transformations (filter, select, ...) or actions (show, take, ...). You can chain transformations, and in the end you trigger the computation with an action. Before running any action, Spark will build a graph of the commands, called a Directed Acyclic Graph, and... it will do some magic for you. **Exercise (£):** Look at the two commands and their output. Do you notice the magic?
###Code
df.groupBy("id").count().filter('id >= 1').explain()
df.filter('id >= 1').groupBy("id").count().explain()
###Output
== Physical Plan ==
*(2) HashAggregate(keys=[id#81], functions=[count(1)])
+- Exchange hashpartitioning(id#81, 200)
+- *(1) HashAggregate(keys=[id#81], functions=[partial_count(1)])
+- *(1) Project [id#81]
+- *(1) Filter (isnotnull(id#81) && (id#81 >= 1))
+- *(1) FileScan parquet [id#81] Batched: true, Format: Parquet, Location: InMemoryFileIndex[file:/home/jovyan/work/data/clusters.parquet], PartitionFilters: [], PushedFilters: [IsNotNull(id), GreaterThanOrEqual(id,1)], ReadSchema: struct<id:int>
|
Python/tensorflow/DeepLearningZeroToAll/Lab09-2-xor-nn.ipynb | ###Markdown
Lab 9 XOR Lab09-2-xor-nn> Solving the XOR data with a neural network (using a sigmoid fn)>> adding a single hidden layer makes accuracy = 1
###Code
import tensorflow as tf
import numpy as np
tf.set_random_seed(777) # for reproducibility
learning_rate = 0.1
x_data = [[0, 0],
[0, 1],
[1, 0],
[1, 1]]
y_data = [[0],
[1],
[1],
[0]]
x_data = np.array(x_data, dtype=np.float32)
y_data = np.array(y_data, dtype=np.float32)
X = tf.placeholder(tf.float32, [None, 2])
Y = tf.placeholder(tf.float32, [None, 1])
# 1st hidden layer
W1 = tf.Variable(tf.random_normal([2, 2]), name='weight1')
b1 = tf.Variable(tf.random_normal([2]), name='bias1')
layer1 = tf.sigmoid(tf.matmul(X, W1) + b1)
# the hidden layer has 2 units!!
W2 = tf.Variable(tf.random_normal([2, 1]), name='weight2')
b2 = tf.Variable(tf.random_normal([1]), name='bias2')
hypothesis = tf.sigmoid(tf.matmul(layer1, W2) + b2)
# cost/loss function
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))
# Optimizer
train = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
# Accuracy computation
# True if hypothesis>0.5 else False
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))
# Launch graph
with tf.Session() as sess:
# Initialize TensorFlow variables
sess.run(tf.global_variables_initializer())
for step in range(10001):
sess.run(train, feed_dict={X: x_data, Y: y_data})
if step % 100 == 0:
print(step, sess.run(cost, feed_dict={
X: x_data, Y: y_data}), sess.run([W1, W2]))
# Accuracy report
h, c, a = sess.run([hypothesis, predicted, accuracy],
feed_dict={X: x_data, Y: y_data})
print("\nHypothesis: ", h, "\nCorrect: ", c, "\nAccuracy: ", a)
'''
Hypothesis: [[ 0.01338218]
[ 0.98166394]
[ 0.98809403]
[ 0.01135799]]
Correct: [[ 0.]
[ 1.]
[ 1.]
[ 0.]]
Accuracy: 1.0
'''
###Output
0 0.753902 [array([[ 0.7988674 , 0.6801188 ],
[-1.21986341, -0.30361032]], dtype=float32), array([[ 1.37522972],
[-0.78823847]], dtype=float32)]
100 0.695844 [array([[ 0.67166901, 0.71368533],
[-1.29171741, -0.24467795]], dtype=float32), array([[ 1.1212678 ],
[-0.90971714]], dtype=float32)]
200 0.694039 [array([[ 0.64527303, 0.70460987],
[-1.31893897, -0.22392064]], dtype=float32), array([[ 1.0992552 ],
[-0.89517188]], dtype=float32)]
300 0.692437 [array([[ 0.64527798, 0.69601011],
[-1.35297382, -0.20636199]], dtype=float32), array([[ 1.09027696],
[-0.88092268]], dtype=float32)]
400 0.690805 [array([[ 0.66653323, 0.68915927],
[-1.39480054, -0.19111179]], dtype=float32), array([[ 1.09085977],
[-0.86969197]], dtype=float32)]
500 0.688979 [array([[ 0.70635301, 0.68438065],
[-1.44506621, -0.17811842]], dtype=float32), array([[ 1.10219097],
[-0.86151737]], dtype=float32)]
600 0.686812 [array([[ 0.76320881, 0.68197995],
[-1.50462615, -0.16744027]], dtype=float32), array([[ 1.12604845],
[-0.85649955]], dtype=float32)]
700 0.684146 [array([[ 0.83649611, 0.682307 ],
[-1.57441962, -0.15923619]], dtype=float32), array([[ 1.16450787],
[-0.85485953]], dtype=float32)]
800 0.680802 [array([[ 0.92629212, 0.68579739],
[-1.65540218, -0.15377551]], dtype=float32), array([[ 1.21975505],
[-0.8569454 ]], dtype=float32)]
900 0.676573 [array([[ 1.03308451, 0.69301593],
[-1.74849081, -0.15145336]], dtype=float32), array([[ 1.29387808],
[-0.86324763]], dtype=float32)]
1000 0.671229 [array([[ 1.15745628, 0.70470023],
[-1.85447979, -0.15281102]], dtype=float32), array([[ 1.38862836],
[-0.87442559]], dtype=float32)]
1100 0.664536 [array([[ 1.29970884, 0.72180182],
[-1.97391415, -0.15856187]], dtype=float32), array([[ 1.50514567],
[-0.89133704]], dtype=float32)]
1200 0.656289 [array([[ 1.45945895, 0.74551833],
[-2.10691452, -0.16962206]], dtype=float32), array([[ 1.64368677],
[-0.91506481]], dtype=float32)]
1300 0.646356 [array([[ 1.63530898, 0.77730578],
[-2.25297427, -0.18714131]], dtype=float32), array([[ 1.80341673],
[-0.94692999]], dtype=float32)]
1400 0.634718 [array([[ 1.82472682, 0.81886631],
[-2.41079426, -0.21252976]], dtype=float32), array([[ 1.98234606],
[-0.98849398]], dtype=float32)]
1500 0.62147 [array([[ 2.02424455, 0.87210923],
[-2.57822776, -0.24747993]], dtype=float32), array([[ 2.17750072],
[-1.04155648]], dtype=float32)]
1600 0.606784 [array([[ 2.22997069, 0.93909889],
[-2.75242734, -0.29400012]], dtype=float32), array([[ 2.38533711],
[-1.10817075]], dtype=float32)]
1700 0.590812 [array([[ 2.43821955, 1.0220114 ],
[-2.93018866, -0.35449192]], dtype=float32), array([[ 2.60232115],
[-1.19066846]], dtype=float32)]
1800 0.573561 [array([[ 2.64602733, 1.12309039],
[-3.10837817, -0.43190041]], dtype=float32), array([[ 2.82550263],
[-1.29170716]], dtype=float32)]
1900 0.55479 [array([[ 2.85138035, 1.24457586],
[-3.28429747, -0.52991807]], dtype=float32), array([[ 3.05288291],
[-1.41430354]], dtype=float32)]
2000 0.533936 [array([[ 3.05309701, 1.38850856],
[-3.45586586, -0.65311372]], dtype=float32), array([[ 3.28347445],
[-1.56185937]], dtype=float32)]
2100 0.510125 [array([[ 3.2504313 , 1.55629778],
[-3.62161446, -0.80667043]], dtype=float32), array([[ 3.51700521],
[-1.73809361]], dtype=float32)]
2200 0.482341 [array([[ 3.44253826, 1.74794185],
[-3.78054547, -0.99522465]], dtype=float32), array([[ 3.75334358],
[-1.94665289]], dtype=float32)]
2300 0.449841 [array([[ 3.62807155, 1.96107233],
[-3.93194413, -1.22049022]], dtype=float32), array([[ 3.99188328],
[-2.18997478]], dtype=float32)]
2400 0.412751 [array([[ 3.80520439, 2.1904552 ],
[-4.07525444, -1.47842371]], dtype=float32), array([[ 4.23122358],
[-2.46730566]], dtype=float32)]
2500 0.372452 [array([[ 3.97205162, 2.4285593 ],
[-4.21003056, -1.75819361]], dtype=float32), array([[ 4.46930647],
[-2.7729063 ]], dtype=float32)]
2600 0.331327 [array([[ 4.12714481, 2.66700411],
[-4.33596134, -2.0447464 ]], dtype=float32), array([[ 4.70376778],
[-3.09626818]], dtype=float32)]
2700 0.291908 [array([[ 4.26968145, 2.89818406],
[-4.45292044, -2.32361126]], dtype=float32), array([[ 4.93228865],
[-3.42478943]], dtype=float32)]
2800 0.25606 [array([[ 4.3995347 , 3.11642528],
[-4.56101608, -2.58455968]], dtype=float32), array([[ 5.15286827],
[-3.74713326]], dtype=float32)]
2900 0.224682 [array([[ 4.51716137, 3.31841516],
[-4.66058874, -2.82245851]], dtype=float32), array([[ 5.36403084],
[-4.05526829]], dtype=float32)]
3000 0.197867 [array([[ 4.62342453, 3.502949 ],
[-4.75216293, -3.03614521]], dtype=float32), array([[ 5.56490612],
[-4.34472513]], dtype=float32)]
3100 0.175237 [array([[ 4.71940899, 3.67031145],
[-4.83638096, -3.22681522]], dtype=float32), array([[ 5.75516653],
[-4.61384392]], dtype=float32)]
3200 0.156222 [array([[ 4.80625057, 3.82165337],
[-4.91391754, -3.39673567]], dtype=float32), array([[ 5.93491077],
[-4.86275816]], dtype=float32)]
3300 0.140232 [array([[ 4.8850565 , 3.95850992],
[-4.98544693, -3.54847312]], dtype=float32), array([[ 6.10451794],
[-5.09259319]], dtype=float32)]
3400 0.126728 [array([[ 4.95684147, 4.08250618],
[-5.05160141, -3.68449736]], dtype=float32), array([[ 6.26453638],
[-5.30491543]], dtype=float32)]
3500 0.115258 [array([[ 5.02249527, 4.19519138],
[-5.11295462, -3.80701661]], dtype=float32), array([[ 6.4156003 ],
[-5.50141096]], dtype=float32)]
3600 0.105449 [array([[ 5.08279848, 4.29797363],
[-5.17001343, -3.91793513]], dtype=float32), array([[ 6.55835581],
[-5.68372202]], dtype=float32)]
3700 0.0970004 [array([[ 5.13840675, 4.39208651],
[-5.22323704, -4.01886606]], dtype=float32), array([[ 6.69344759],
[-5.85336399]], dtype=float32)]
3800 0.0896734 [array([[ 5.18989706, 4.47860336],
[-5.27302361, -4.11116266]], dtype=float32), array([[ 6.8214817 ],
[-6.01169872]], dtype=float32)]
3900 0.0832757 [array([[ 5.23775291, 4.55844879],
[-5.31972647, -4.19596148]], dtype=float32), array([[ 6.94302559],
[-6.15992975]], dtype=float32)]
4000 0.0776533 [array([[ 5.28239202, 4.63241005],
[-5.36364794, -4.27421331]], dtype=float32), array([[ 7.05860233],
[-6.29910755]], dtype=float32)]
4100 0.0726822 [array([[ 5.32416725, 4.70117092],
[-5.40506172, -4.34672022]], dtype=float32), array([[ 7.1686902 ],
[-6.43015718]], dtype=float32)]
4200 0.068262 [array([[ 5.36338091, 4.76529598],
[-5.4441967 , -4.41415644]], dtype=float32), array([[ 7.27372646],
[-6.55388308]], dtype=float32)]
4300 0.0643109 [array([[ 5.4003005 , 4.82529211],
[-5.48126364, -4.47710085]], dtype=float32), array([[ 7.37408733],
[-6.67099094]], dtype=float32)]
4400 0.0607616 [array([[ 5.43515253, 4.88159037],
[-5.51644325, -4.5360384 ]], dtype=float32), array([[ 7.47013569],
[-6.78209686]], dtype=float32)]
4500 0.0575586 [array([[ 5.46813297, 4.93456221],
[-5.54989767, -4.59139109]], dtype=float32), array([[ 7.56219149],
[-6.88773727]], dtype=float32)]
4600 0.0546559 [array([[ 5.49941397, 4.98452997],
[-5.58177042, -4.64352036]], dtype=float32), array([[ 7.6505456 ],
[-6.98838997]], dtype=float32)]
4700 0.0520148 [array([[ 5.52914906, 5.03177595],
[-5.61218834, -4.69273949]], dtype=float32), array([[ 7.73545742],
[-7.0844779 ]], dtype=float32)]
4800 0.0496029 [array([[ 5.55746984, 5.07654762],
[-5.64126539, -4.73932123]], dtype=float32), array([[ 7.81716585],
[-7.17637157]], dtype=float32)]
4900 0.0473928 [array([[ 5.58449221, 5.11906195],
[-5.66910315, -4.78350353]], dtype=float32), array([[ 7.8958869 ],
[-7.26440144]], dtype=float32)]
5000 0.0453608 [array([[ 5.61032104, 5.15951157],
[-5.69579315, -4.82549667]], dtype=float32), array([[ 7.97181797],
[-7.34886265]], dtype=float32)]
5100 0.0434872 [array([[ 5.63504982, 5.19806623],
[-5.72141838, -4.86548471]], dtype=float32), array([[ 8.04513645],
[-7.43001938]], dtype=float32)]
5200 0.0417548 [array([[ 5.65876055, 5.23487663],
[-5.74605036, -4.90363216]], dtype=float32), array([[ 8.11600304],
[-7.50811052]], dtype=float32)]
5300 0.0401485 [array([[ 5.68152809, 5.2700758 ],
[-5.76975775, -4.94008207]], dtype=float32), array([[ 8.18457127],
[-7.58334684]], dtype=float32)]
5400 0.0386556 [array([[ 5.70341969, 5.30378723],
[-5.79260159, -4.97496605]], dtype=float32), array([[ 8.25097561],
[-7.65592194]], dtype=float32)]
5500 0.0372648 [array([[ 5.72449446, 5.33611822],
[-5.81463623, -5.00840139]], dtype=float32), array([[ 8.31534195],
[-7.72600985]], dtype=float32)]
5600 0.0359662 [array([[ 5.74480629, 5.36716652],
[-5.83591318, -5.04048967]], dtype=float32), array([[ 8.37778664],
[-7.79377079]], dtype=float32)]
5700 0.0347513 [array([[ 5.76440477, 5.39701986],
[-5.85647774, -5.07132673]], dtype=float32), array([[ 8.43841457],
[-7.85934782]], dtype=float32)]
5800 0.0336124 [array([[ 5.78333664, 5.42575932],
[-5.87637377, -5.10099745]], dtype=float32), array([[ 8.49732208],
[-7.92287254]], dtype=float32)]
5900 0.0325428 [array([[ 5.80164194, 5.4534564 ],
[-5.89563751, -5.12957811]], dtype=float32), array([[ 8.55460644],
[-7.98446655]], dtype=float32)]
6000 0.0315366 [array([[ 5.81935787, 5.48017645],
[-5.91430616, -5.15713835]], dtype=float32), array([[ 8.61034107],
[-8.04423904]], dtype=float32)]
6100 0.0305883 [array([[ 5.83651781, 5.50598097],
[-5.93241215, -5.183743 ]], dtype=float32), array([[ 8.66460991],
[-8.1022892 ]], dtype=float32)]
6200 0.0296932 [array([[ 5.85315418, 5.53092432],
[-5.94998598, -5.20945024]], dtype=float32), array([[ 8.71748352],
[-8.1587162 ]], dtype=float32)]
6300 0.0288471 [array([[ 5.86929607, 5.55505514],
[-5.96705532, -5.23431301]], dtype=float32), array([[ 8.76903152],
[-8.21360016]], dtype=float32)]
6400 0.0280462 [array([[ 5.88496876, 5.57842398],
[-5.98364687, -5.25838041]], dtype=float32), array([[ 8.81931496],
[-8.26702309]], dtype=float32)]
6500 0.0272869 [array([[ 5.90019941, 5.6010704 ],
[-5.99978304, -5.28169632]], dtype=float32), array([[ 8.86839581],
[-8.31906128]], dtype=float32)]
6600 0.0265663 [array([[ 5.91500998, 5.62303448],
[-6.01548719, -5.30430317]], dtype=float32), array([[ 8.9163208 ],
[-8.36977959]], dtype=float32)]
6700 0.0258815 [array([[ 5.92942047, 5.64435434],
[-6.03078127, -5.32623911]], dtype=float32), array([[ 8.96314621],
[-8.41924191]], dtype=float32)]
6800 0.02523 [array([[ 5.94345284, 5.665061 ],
[-6.04568386, -5.34753942]], dtype=float32), array([[ 9.00891876],
[-8.46750832]], dtype=float32)]
6900 0.0246093 [array([[ 5.95712185, 5.68518734],
[-6.0602107 , -5.36823606]], dtype=float32), array([[ 9.05368328],
[-8.51463509]], dtype=float32)]
7000 0.0240175 [array([[ 5.97044706, 5.70476246],
[-6.07438278, -5.3883605 ]], dtype=float32), array([[ 9.09748268],
[-8.56067181]], dtype=float32)]
7100 0.0234526 [array([[ 5.98344469, 5.72381163],
[-6.08821344, -5.40793991]], dtype=float32), array([[ 9.14035606],
[-8.60566902]], dtype=float32)]
7200 0.0229129 [array([[ 5.99612761, 5.74236059],
[-6.10171843, -5.42700195]], dtype=float32), array([[ 9.18234062],
[-8.64966774]], dtype=float32)]
7300 0.0223966 [array([[ 6.00851202, 5.76043367],
[-6.11491203, -5.44557047]], dtype=float32), array([[ 9.22347069],
[-8.69271469]], dtype=float32)]
7400 0.0219025 [array([[ 6.02060843, 5.77805185],
[-6.12780571, -5.46366692]], dtype=float32), array([[ 9.2637825 ],
[-8.73484612]], dtype=float32)]
7500 0.021429 [array([[ 6.0324316 , 5.79523563],
[-6.14041281, -5.48131418]], dtype=float32), array([[ 9.30330181],
[-8.77610207]], dtype=float32)]
7600 0.020975 [array([[ 6.04399157, 5.81200504],
[-6.15274429, -5.49853134]], dtype=float32), array([[ 9.3420639 ],
[-8.81651688]], dtype=float32)]
7700 0.0205393 [array([[ 6.05530024, 5.82837725],
[-6.16481304, -5.51533747]], dtype=float32), array([[ 9.38009262],
[-8.85612106]], dtype=float32)]
7800 0.0201208 [array([[ 6.06636524, 5.84436989],
[-6.17662716, -5.53175116]], dtype=float32), array([[ 9.41741753],
[-8.89494801]], dtype=float32)]
7900 0.0197185 [array([[ 6.07719707, 5.85999727],
[-6.18819761, -5.54778862]], dtype=float32), array([[ 9.45406151],
[-8.93302822]], dtype=float32)]
8000 0.0193315 [array([[ 6.08780718, 5.87527657],
[-6.19953299, -5.56346369]], dtype=float32), array([[ 9.49004841],
[-8.97038651]], dtype=float32)]
8100 0.0189591 [array([[ 6.09820175, 5.89021969],
[-6.21064234, -5.57879305]], dtype=float32), array([[ 9.52540302],
[-9.00705242]], dtype=float32)]
8200 0.0186004 [array([[ 6.10839033, 5.9048419 ],
[-6.22153378, -5.5937891 ]], dtype=float32), array([[ 9.56014156],
[-9.04304886]], dtype=float32)]
8300 0.0182546 [array([[ 6.11837864, 5.91915369],
[-6.2322154 , -5.60846615]], dtype=float32), array([[ 9.59428978],
[-9.07840252]], dtype=float32)]
8400 0.0179211 [array([[ 6.12817478, 5.93316793],
[-6.24269438, -5.62283421]], dtype=float32), array([[ 9.6278677 ],
[-9.11312866]], dtype=float32)]
8500 0.0175993 [array([[ 6.13778496, 5.94689608],
[-6.25297785, -5.6369071 ]], dtype=float32), array([[ 9.66089153],
[-9.14725494]], dtype=float32)]
8600 0.0172885 [array([[ 6.14721632, 5.96034813],
[-6.26307201, -5.65069532]], dtype=float32), array([[ 9.69337463],
[-9.18080044]], dtype=float32)]
8700 0.0169882 [array([[ 6.15647697, 5.97353363],
[-6.27298355, -5.66420984]], dtype=float32), array([[ 9.7253437 ],
[-9.21378231]], dtype=float32)]
8800 0.016698 [array([[ 6.16556835, 5.98646402],
[-6.28271961, -5.67745829]], dtype=float32), array([[ 9.75680542],
[-9.24622154]], dtype=float32)]
8900 0.0164172 [array([[ 6.17450047, 5.99914646],
[-6.29228354, -5.6904521 ]], dtype=float32), array([[ 9.78778076],
[-9.27813339]], dtype=float32)]
9000 0.0161456 [array([[ 6.18327713, 6.01159048],
[-6.30168438, -5.70319986]], dtype=float32), array([[ 9.81828213],
[-9.30953789]], dtype=float32)]
9100 0.0158825 [array([[ 6.19190168, 6.02380228],
[-6.3109231 , -5.71570969]], dtype=float32), array([[ 9.84832191],
[-9.34044552]], dtype=float32)]
9200 0.0156277 [array([[ 6.2003808 , 6.03579283],
[-6.3200078 , -5.72799015]], dtype=float32), array([[ 9.87791538],
[-9.37087345]], dtype=float32)]
9300 0.0153807 [array([[ 6.20871925, 6.04756784],
[-6.32894278, -5.74004889]], dtype=float32), array([[ 9.90707874],
[-9.40083599]], dtype=float32)]
9400 0.0151413 [array([[ 6.21692038, 6.05913448],
[-6.33773184, -5.75189114]], dtype=float32), array([[ 9.93581772],
[-9.43034935]], dtype=float32)]
9500 0.014909 [array([[ 6.22498751, 6.07049847],
[-6.34637976, -5.76352596]], dtype=float32), array([[ 9.96414757],
[-9.45942593]], dtype=float32)]
9600 0.0146836 [array([[ 6.23292685, 6.08166742],
[-6.35489035, -5.77496052]], dtype=float32), array([[ 9.99207973],
[-9.48807526]], dtype=float32)]
9700 0.0144647 [array([[ 6.24074268, 6.09264851],
[-6.36326933, -5.78619957]], dtype=float32), array([[ 10.01962471],
[ -9.51631165]], dtype=float32)]
9800 0.0142521 [array([[ 6.24843407, 6.10344648],
[-6.37151814, -5.79724932]], dtype=float32), array([[ 10.04679298],
[ -9.54414845]], dtype=float32)]
9900 0.0140456 [array([[ 6.25601053, 6.11406422],
[-6.3796401 , -5.80811596]], dtype=float32), array([[ 10.07359505],
[ -9.57159519]], dtype=float32)]
10000 0.0138448 [array([[ 6.26347113, 6.12451124],
[-6.38764334, -5.81880617]], dtype=float32), array([[ 10.10004139],
[ -9.59866238]], dtype=float32)]
Hypothesis: [[ 0.01338218]
[ 0.98166394]
[ 0.98809403]
[ 0.01135799]]
Correct: [[ 0.]
[ 1.]
[ 1.]
[ 0.]]
Accuracy: 1.0
|
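For comparison, the same two-layer sigmoid network can be written far more compactly with the Keras API; this is a hypothetical sketch assuming TensorFlow 2.x, not part of the original TF1 lab:
```python
import numpy as np
import tensorflow as tf

x_data = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=np.float32)
y_data = np.array([[0], [1], [1], [0]], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(2, activation="sigmoid", input_shape=(2,)),  # hidden layer with 2 units
    tf.keras.layers.Dense(1, activation="sigmoid"),                    # output layer
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
              loss="binary_crossentropy", metrics=["accuracy"])
# Plain SGD on XOR converges slowly; more epochs or Adam may be needed to reach accuracy 1.0
model.fit(x_data, y_data, epochs=10000, verbose=0)
print(model.predict(x_data))
```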
notebooks/target_vol_executor.ipynb | ###Markdown
Parameter Setting----------------------
###Code
def _map_freq(freq):
if freq == '1m':
horizon = 21
elif freq == '1w':
horizon = 4
elif freq == '2w':
horizon = 8
elif freq == '3w':
horizon = 12
elif freq == '1d':
horizon = 0
else:
raise ValueError("Unrecognized freq: {0}".format(freq))
return horizon
alpha_factors = {
'eps': LAST('eps_q'),
'roe': LAST('roe_q'),
'bdto': LAST('BDTO'),
'cfinc1': LAST('CFinc1'),
'chv': LAST('CHV'),
'rvol': LAST('RVOL'),
'val': LAST('VAL'),
'grev': LAST('GREV'),
'droeafternonorecurring': LAST('DROEAfterNonRecurring')
}
engine = SqlEngine()
universe = Universe('custom', ['zz500'])
benchmark_code = 905
neutralize_risk = ['SIZE'] + industry_styles
constraint_risk = ['SIZE'] + industry_styles
start_date = '2012-01-01'
end_date = '2017-11-02'
industry_lower = 1.
industry_upper = 1.
freq = '2w'
batch = 8
data_package = fetch_data_package(engine,
alpha_factors=alpha_factors,
start_date=start_date,
end_date=end_date,
frequency=freq,
universe=universe,
benchmark=benchmark_code,
batch=batch,
neutralized_risk=neutralize_risk,
pre_process=[winsorize_normal],
post_process=[winsorize_normal],
warm_start=batch)
train_x = data_package['train']['x']
train_y = data_package['train']['y']
predict_x = data_package['predict']['x']
predict_y = data_package['predict']['y']
settlement = data_package['settlement']
features = data_package['x_names']
###Output
_____no_output_____
###Markdown
Naive Executor Strategy---------------------------------
###Code
dates = sorted(train_x.keys())
model_df = pd.Series()
horizon = _map_freq(freq)
rets = []
turn_overs = []
executor = NaiveExecutor()
leverags = []
for i, ref_date in enumerate(dates):
# Model Training and Prediction
sample_train_x = train_x[ref_date]
sample_train_y = train_y[ref_date].flatten()
model = LinearRegression()
model.fit(sample_train_x, sample_train_y)
sample_test_x = predict_x[ref_date]
sample_test_y = predict_y[ref_date].flatten()
er = model.predict(sample_test_x)
model_df.loc[ref_date] = model
# Constraints Building #
today_settlement = settlement[settlement.trade_date == ref_date]
codes = today_settlement.code.tolist()
dx_return = None
risk_exp = today_settlement[neutralize_risk].values.astype(float)
industry = today_settlement.industry.values
benchmark_w = today_settlement.weight.values
constraint_exp = today_settlement[constraint_risk].values
risk_exp_expand = np.concatenate((constraint_exp, np.ones((len(risk_exp), 1))), axis=1).astype(float)
risk_names = constraint_risk + ['total']
risk_target = risk_exp_expand.T @ benchmark_w
lbound = np.zeros(len(today_settlement))
ubound = 0.01 + benchmark_w
constraint = Constraints(risk_exp_expand, risk_names)
for i, name in enumerate(risk_names):
if name == 'total' or name == 'SIZE':
constraint.set_constraints(name, lower_bound=risk_target[i], upper_bound=risk_target[i])
else:
constraint.set_constraints(name, lower_bound=risk_target[i]*industry_lower, upper_bound=risk_target[i]*industry_upper)
target_pos, _ = er_portfolio_analysis(er,
industry,
dx_return,
constraint,
False,
benchmark_w)
target_pos['code'] = today_settlement['code'].values
turn_over, executed_pos = executor.execute(target_pos=target_pos)
executed_codes = executed_pos.code.tolist()
dx_retuns = engine.fetch_dx_return(ref_date.strftime('%Y-%m-%d'), executed_codes, horizon=horizon)
result = pd.merge(executed_pos, today_settlement[['code', 'weight']], on=['code'], how='inner')
result = pd.merge(result, dx_retuns, on=['code'])
leverage = result.weight_x.abs().sum()
ret = (result.weight_x - result.weight_y * leverage / result.weight_y.sum()).values @ result.dx.values
rets.append(ret)
executor.set_current(executed_pos)
turn_overs.append(turn_over)
leverags.append(leverage)
print('{0} is finished'.format(ref_date))
ret_df1 = pd.DataFrame({'returns': rets, 'turn_over': turn_overs, 'leverage': leverags}, index=dates)
ret_df1.loc[advanceDateByCalendar('china.sse', dates[-1], freq)] = 0.
ret_df1 = ret_df1.shift(1)
ret_df1.iloc[0] = 0.
ret_df1['tc_cost'] = ret_df1.turn_over * 0.002
ret_df1[['returns', 'tc_cost']].cumsum().plot(figsize=(12, 6), title='Fixed frequency rebalanced: {0}'.format(freq), secondary_y='tc_cost')
ret_atfer_tc = ret_df1.returns - ret_df1.tc_cost
print("sharp: ", ret_atfer_tc.mean() / ret_atfer_tc.std() * np.sqrt(52))
ret_df1[['returns', 'leverage']].rolling(window=60).std().plot(figsize=(12, 6), title='rolling std', secondary_y='leverage')
###Output
_____no_output_____
###Markdown
Threshold Turn Over + Strategy------------------------------------
###Code
freq = '1d'
horizon = _map_freq(freq)
dates = makeSchedule(start_date, end_date, tenor=freq, calendar='china.sse', dateGenerationRule=DateGeneration.Backward)
all_data = engine.fetch_data_range(universe, alpha_factors, dates=dates, benchmark=905)
factor_all_data = all_data['factor']
factor_groups = factor_all_data.groupby('trade_date')
rets = []
turn_overs = []
turn_over_threshold = 0.90
executor = ThresholdExecutor(turn_over_threshold=turn_over_threshold)
execution_pipeline = ExecutionPipeline(executors=[executor])
leverags = []
execution_dates = []
horizon = _map_freq(freq)
for i, value in enumerate(factor_groups):
date = value[0]
data = value[1]
# get the latest model
models = model_df[model_df.index <= date]
if models.empty:
continue
execution_dates.append(date)
model = models[-1]
codes = data.code.tolist()
ref_date = date.strftime('%Y-%m-%d')
total_data = data.dropna()
dx_return = None
risk_exp = total_data[neutralize_risk].values.astype(float)
industry = total_data.industry.values
benchmark_w = total_data.weight.values
constraint_exp = total_data[constraint_risk].values
risk_exp_expand = np.concatenate((constraint_exp, np.ones((len(risk_exp), 1))), axis=1).astype(float)
risk_names = constraint_risk + ['total']
risk_target = risk_exp_expand.T @ benchmark_w
lbound = np.zeros(len(total_data))
ubound = 0.01 + benchmark_w
constraint = Constraints(risk_exp_expand, risk_names)
for i, name in enumerate(risk_names):
if name == 'total' or name == 'SIZE':
constraint.set_constraints(name, lower_bound=risk_target[i], upper_bound=risk_target[i])
else:
constraint.set_constraints(name, lower_bound=risk_target[i]*industry_lower, upper_bound=risk_target[i]*industry_upper)
factors_values = factor_processing(total_data[features].values,
pre_process=[winsorize_normal],
post_process=[winsorize_normal])
er = model.predict(factors_values)
target_pos, _ = er_portfolio_analysis(er,
industry,
dx_return,
constraint,
False,
benchmark_w)
target_pos['code'] = total_data['code'].values
turn_over, executed_pos = execution_pipeline.execute(target_pos=target_pos)
executed_codes = executed_pos.code.tolist()
dx_retuns = engine.fetch_dx_return(date, executed_codes, horizon=horizon)
result = pd.merge(executed_pos, total_data, on=['code'], how='inner')
result = pd.merge(result, dx_retuns, on=['code'])
leverage = result.weight_x.abs().sum()
ret = (result.weight_x - result.weight_y * leverage / result.weight_y.sum()).values @ result.dx.values
rets.append(ret)
leverags.append(executed_pos.weight.abs().sum())
turn_overs.append(turn_over)
print('{0} is finished: {1}'.format(date, turn_over))
ret_df2 = pd.DataFrame({'returns': rets, 'turn_over': turn_overs, 'leverage': leverags}, index=execution_dates)
ret_df2.loc[advanceDateByCalendar('china.sse', dates[-1], freq)] = 0.
ret_df2 = ret_df2.shift(1)
ret_df2.iloc[0] = 0.
ret_df2['tc_cost'] = ret_df2.turn_over * 0.002
ret_df2[['returns', 'tc_cost']].cumsum().plot(figsize=(12, 6),
title='Threshold tc rebalanced: Monitored freq {0}, {1} tc'.format(freq,
turn_over_threshold),
secondary_y='tc_cost')
ret_atfer_tc = ret_df2.returns - ret_df2.tc_cost
print("sharp: ", ret_atfer_tc.mean() / ret_atfer_tc.std() * np.sqrt(252))
ret_df2[['returns', 'leverage']].rolling(window=60).std().plot(figsize=(12, 6), title='rolling std', secondary_y='leverage')
###Output
_____no_output_____
###Markdown
Target Vol + Threshold Turn Over + Strategy------------------------
###Code
rets = []
turn_overs = []
target_vol = 0.002
turn_over_threshold = 0.70
window = 30
executor1 = TargetVolExecutor(window=window, target_vol=target_vol)
executor2 = ThresholdExecutor(turn_over_threshold=turn_over_threshold, is_relative=False)
execution_pipeline = ExecutionPipeline(executors=[executor1, executor2])
leverags = []
execution_dates = []
horizon = _map_freq(freq)
for i, value in enumerate(factor_groups):
date = value[0]
data = value[1]
# get the latest model
models = model_df[model_df.index <= date]
if models.empty:
continue
execution_dates.append(date)
model = models[-1]
codes = data.code.tolist()
ref_date = date.strftime('%Y-%m-%d')
total_data = data.dropna()
dx_return = None
risk_exp = total_data[neutralize_risk].values.astype(float)
industry = total_data.industry.values
benchmark_w = total_data.weight.values
constraint_exp = total_data[constraint_risk].values
risk_exp_expand = np.concatenate((constraint_exp, np.ones((len(risk_exp), 1))), axis=1).astype(float)
risk_names = constraint_risk + ['total']
risk_target = risk_exp_expand.T @ benchmark_w
lbound = np.zeros(len(total_data))
ubound = 0.01 + benchmark_w
constraint = Constraints(risk_exp_expand, risk_names)
for i, name in enumerate(risk_names):
if name == 'total' or name == 'SIZE':
constraint.set_constraints(name, lower_bound=risk_target[i], upper_bound=risk_target[i])
else:
constraint.set_constraints(name, lower_bound=risk_target[i]*industry_lower, upper_bound=risk_target[i]*industry_upper)
factors_values = factor_processing(total_data[features].values,
pre_process=[winsorize_normal],
post_process=[winsorize_normal])
er = model.predict(factors_values)
target_pos, _ = er_portfolio_analysis(er,
industry,
dx_return,
constraint,
False,
benchmark_w)
target_pos['code'] = total_data['code'].values
turn_over, executed_pos = execution_pipeline.execute(target_pos=target_pos)
executed_codes = executed_pos.code.tolist()
dx_retuns = engine.fetch_dx_return(date, executed_codes, horizon=horizon)
result = pd.merge(executed_pos, total_data, on=['code'], how='inner')
result = pd.merge(result, dx_retuns, on=['code'])
leverage = result.weight_x.abs().sum()
ret = (result.weight_x - result.weight_y * leverage / result.weight_y.sum()).values @ result.dx.values
rets.append(ret)
execution_pipeline.update({'return': ret})
turn_overs.append(turn_over)
leverags.append(executed_pos.weight.abs().sum())
print('{0} is finished: turn_over: {1}, levegare: {2}'.format(date, turn_over, leverags[-1]))
ret_df3 = pd.DataFrame({'returns': rets, 'turn_over': turn_overs, 'leverage': leverags}, index=execution_dates)
ret_df3.loc[advanceDateByCalendar('china.sse', dates[-1], freq)] = 0.
ret_df3 = ret_df3.shift(1)
ret_df3.iloc[0] = 0.
ret_df3['tc_cost'] = ret_df3.turn_over * 0.002
ret_df3[['returns', 'tc_cost']].cumsum().plot(figsize=(12, 6),
title='Threshold tc + Target vol rebalanced: Monitored freq {0}, {1} tc, {2} vol target'.format(freq,
turn_over_threshold,
target_vol),
secondary_y='tc_cost')
ret_df3[['returns', 'leverage']].rolling(window=60).std().plot(figsize=(12, 6), title='rolling std', secondary_y='leverage')
ret_atfer_tc = ret_df3.returns - ret_df3.tc_cost
print("sharp: ", ret_atfer_tc.mean() / ret_atfer_tc.std() * np.sqrt(252))
ret_df3.tail()
###Output
_____no_output_____
###Markdown
Target Turn Over + Strategy------------------------
###Code
rets = []
turn_overs = []
turn_over_target_base = 0.04
executor = NaiveExecutor()
execution_pipeline = ExecutionPipeline(executors=[executor])
leverags = []
previous_pos = pd.DataFrame()
execution_dates = []
horizon = _map_freq(freq)
for i, value in enumerate(factor_groups):
date = value[0]
data = value[1]
# get the latest model
models = model_df[model_df.index <= date]
if models.empty:
continue
execution_dates.append(date)
model = models[-1]
codes = data.code.tolist()
ref_date = date.strftime('%Y-%m-%d')
total_data = data.dropna()
dx_return = None
risk_exp = total_data[neutralize_risk].values.astype(float)
industry = total_data.industry.values
benchmark_w = total_data.weight.values
constraint_exp = total_data[constraint_risk].values
risk_exp_expand = np.concatenate((constraint_exp, np.ones((len(risk_exp), 1))), axis=1).astype(float)
risk_names = constraint_risk + ['total']
risk_target = risk_exp_expand.T @ benchmark_w
lbound = np.zeros(len(total_data))
ubound = 0.01 + benchmark_w
constraint = Constraints(risk_exp_expand, risk_names)
for i, name in enumerate(risk_names):
if name == 'total' or name == 'SIZE':
constraint.set_constraints(name, lower_bound=risk_target[i], upper_bound=risk_target[i])
else:
constraint.set_constraints(name, lower_bound=risk_target[i]*industry_lower, upper_bound=risk_target[i]*industry_upper)
factors_values = factor_processing(total_data[features].values,
pre_process=[winsorize_normal],
post_process=[winsorize_normal])
er = model.predict(factors_values)
codes = total_data['code'].values
if previous_pos.empty:
current_position = None
turn_over_target = None
else:
previous_pos.set_index('code', inplace=True)
remained_pos = previous_pos.loc[codes]
remained_pos.fillna(0., inplace=True)
turn_over_target = turn_over_target_base
current_position = remained_pos.weight.values
try:
target_pos, _ = er_portfolio_analysis(er,
industry,
dx_return,
constraint,
False,
benchmark_w,
current_position=current_position,
turn_over_target=turn_over_target)
except ValueError:
print('{0} full rebalance'.format(date))
target_pos, _ = er_portfolio_analysis(er,
industry,
dx_return,
constraint,
False,
benchmark_w)
target_pos['code'] = codes
turn_over, executed_pos = execution_pipeline.execute(target_pos=target_pos)
executed_codes = executed_pos.code.tolist()
dx_retuns = engine.fetch_dx_return(date, executed_codes, horizon=horizon)
result = pd.merge(executed_pos, total_data, on=['code'], how='inner')
result = pd.merge(result, dx_retuns, on=['code'])
leverage = result.weight_x.abs().sum()
ret = (result.weight_x - result.weight_y * leverage / result.weight_y.sum()).values @ result.dx.values
rets.append(ret)
leverags.append(executed_pos.weight.abs().sum())
turn_overs.append(turn_over)
previous_pos = executed_pos
print('{0} is finished: {1}'.format(date, turn_over))
ret_df4 = pd.DataFrame({'returns': rets, 'turn_over': turn_overs, 'leverage': leverags}, index=execution_dates)
ret_df4.loc[advanceDateByCalendar('china.sse', dates[-1], freq)] = 0.
ret_df4 = ret_df4.shift(1)
ret_df4.iloc[0] = 0.
ret_df4['tc_cost'] = ret_df4.turn_over * 0.002
ret_df4[['returns', 'tc_cost']].cumsum().plot(figsize=(12, 6),
title='Target turn over rebalanced: Rebalance freq {0}, {1} turnover_target'.format(freq,
turn_over_target_base),
secondary_y='tc_cost')
ret_atfer_tc = ret_df4.returns - ret_df4.tc_cost
print("sharp: ", ret_atfer_tc.mean() / ret_atfer_tc.std() * np.sqrt(252))
ret_df4[['returns', 'leverage']].rolling(window=60).std().plot(figsize=(12, 6), title='rolling std', secondary_y='leverage')
###Output
_____no_output_____ |
SQLite3_ELLY.ipynb | ###Markdown
From the graph of the top 10 selling genres in the USA, we can conclude that, of the four new genres being added to the store, Punk, Blues, and Pop are likely to be the top sellers.
###Code
sale_agent = '''
with cust_total AS(
select
i.customer_id,
SUM(i.total) total_amount,
c.support_rep_id
from invoice i
inner join customer c ON c.customer_id = i.customer_id
GROUP BY 1
),
employ_sales_agent AS (
select
e.title,
AVG(ct.total_amount)
from employee e
inner join cust_total ct on ct.support_rep_id = e.employee_id
)
select
e.first_name||" "||e.last_name employee_name,
e.hire_date,
SUM(ct.total_amount) total_amount
from employee e
inner join cust_total ct ON e.employee_id = ct.support_rep_id
where e.title = 'Sales Support Agent'
Group by 1;
'''
run_query(sale_agent)
result_sale_agent = run_query(sale_agent)
result_sale_agent.set_index('employee_name', inplace = True, drop = True)
result_sale_agent.sort_values('total_amount', inplace = True)
result_sale_agent.plot.barh(title = 'Sale from Sale Agent', legend = False)
plt.show()
country_sales = '''
WITH customer_invoice AS
(
select
i.customer_id,
COUNT(i.invoice_id) invoice_number_by_customer,
SUM(i.total) invoice_total_by_customer,
c.country country_name
FROM invoice i
INNER JOIN customer c ON c.customer_id = i.customer_id
GROUP BY 1
),
country_sale AS (
select
country_name,
COUNT(customer_id) total_customer_country,
SUM(invoice_number_by_customer) total_invoice_country,
SUM(invoice_total_by_customer) total_sale_country
FROM customer_invoice
GROUP BY 1
),
country_other AS (
select
cs.*,
CASE WHEN cs.total_customer_country = 1 THEN "OTHER"
ELSE cs.country_name
END AS new_country,
CASE WHEN cs.total_customer_country = 1 THEN 0
ELSE 1
END AS sort
FROM country_sale cs
)
select
new_country country,
SUM(total_customer_country) Total_customer,
SUM(total_invoice_country) Total_invoice,
SUM(total_sale_country) Total_sale,
ROUND(SUM(total_sale_country)/SUM(total_customer_country),2) average_value_per_customer,
ROUND(SUM(total_sale_country)/SUM(total_invoice_country),2) average_value_per_order
FROM country_other co
GROUP BY new_country
ORDER BY sort DESC, 4 DESC;
'''
run_query(country_sales)
country_sale_result = run_query(country_sales)
country_sale_result.set_index("country", inplace = True)
fig = plt.figure(figsize=(10,10))
for i, col in enumerate(country_sale_result.columns):
ax_i = fig.add_subplot(2,3,i+1)
ax_i = country_sale_result[col].plot.bar(title = col, legend = False)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Even though the USA has the highest total sales, that is largely because it has the most customers; it does not mean its average value per order is the highest. If we are looking for potential growth, it is better to compare the average value per order across countries. From the graph titled average_value_per_order above, the Czech Republic, the United Kingdom and India stand out as the most promising markets.
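As a quick check of that claim, the query result from the previous cell can be ranked directly in pandas (a sketch that assumes the `country_sale_result` DataFrame defined above; the aggregated "OTHER" row is dropped first):

```python
top_growth = (country_sale_result
              .drop("OTHER", errors="ignore")
              .sort_values("average_value_per_order", ascending=False))
top_growth.head(3)  # per the discussion above: Czech Republic, United Kingdom, India
```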
###Code
combined_invoice_track = '''
select
t.track_id,
t.name track_name,
t.album_id album,
a.title artist_title,
i.invoice_id,
i.invoice_line_id
FROM track t
INNER JOIN invoice_line i ON i.track_id = t.track_id
INNER JOIN album a ON a.album_id = t.album_id;
'''
run_query(combined_invoice_track)
###Output
_____no_output_____ |
12-statistics.ipynb | ###Markdown
StatisticsThe [astropy.stats](http://docs.astropy.org/en/stable/stats/index.html) sub-package includes common statistical tools for astronomy. It is by no means complete and we are open to contributions for more functionality! In this tutorial, we will take a look at some of the existing functionality. ObjectivesUse sigma clipping on arraysUse automatic bin determination for histogramsWork with binomial and Poisson distributions DocumentationThis notebook only shows a subset of the functionality in astropy.stats. For more information about the features presented below as well as other available features, you can read the[astropy.stats documentation](https://docs.astropy.org/en/stable/stats/).
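Before the setup cell, here is a compact sketch of the calls those objectives refer to (function names are from astropy.stats; treat the exact keyword arguments as assumptions that may differ slightly between astropy versions):

```python
import numpy as np
from astropy.stats import (sigma_clip, sigma_clipped_stats, histogram,
                           binom_conf_interval, poisson_conf_interval)

data = np.concatenate([np.random.normal(0, 1, 1000), [15., 20., -18.]])  # add a few outliers
clipped = sigma_clip(data, sigma=3)                     # masked array with outliers masked
mean, median, std = sigma_clipped_stats(data, sigma=3)  # robust statistics in one call
counts, edges = histogram(data, bins='freedman')        # automatic bin determination
b_low, b_high = binom_conf_interval(k=4, n=25)          # binomial confidence interval
p_low, p_high = poisson_conf_interval(16)               # Poisson confidence interval
```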
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.rc('image', origin='lower')
plt.rc('figure', figsize=(10, 6))
###Output
_____no_output_____ |
Assignments/Assignment_2/Q1/q1_Arch4_MNIST.ipynb | ###Markdown
Dropout with L2 Weight Regularization
###Code
import numpy as np
import keras
from keras.datasets import mnist
from keras.models import Sequential
from matplotlib import pyplot as plt
from keras.layers import Dense,Flatten
from keras.layers import Conv2D, MaxPooling2D,BatchNormalization,Dropout
from keras.utils import np_utils
from sklearn.metrics import confusion_matrix, f1_score, precision_score, recall_score, classification_report
from keras import optimizers,regularizers
class AccuracyHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.acc = []
self.loss = []
self.val_f1s = []
self.val_recalls = []
self.val_precisions = []
def on_epoch_end(self, batch, logs={}):
self.acc.append(logs.get('acc'))
self.loss.append(logs.get('loss'))
X_val, y_val = self.validation_data[0], self.validation_data[1]
y_predict = np.asarray(model.predict(X_val))
y_val = np.argmax(y_val, axis=1)
y_predict = np.argmax(y_predict, axis=1)
self.val_recalls.append(recall_score(y_val, y_predict, average=None))
self.val_precisions.append(precision_score(y_val, y_predict, average=None))
self.val_f1s.append(f1_score(y_val,y_predict, average=None))
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# print(X_train.shape)
# reshape to be [samples][pixels][width][height]
X_train = X_train.reshape(X_train.shape[0],28, 28,1).astype('float32')
X_test = X_test.reshape(X_test.shape[0],28, 28,1).astype('float32')
# normalize inputs from 0-255 to 0-1
X_train = X_train / 255
X_test = X_test / 255
# # one hot encode outputs
y_train = np_utils.to_categorical(y_train)
y_test = np_utils.to_categorical(y_test)
print(y_train.shape)
num_classes = y_test.shape[1]
print(num_classes)
input_shape=(28,28,1)
epochs=10
batch_size = 512
history = AccuracyHistory()
def create_deep_model(opt,loss):
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),activation='relu',input_shape=input_shape))
model.add(MaxPooling2D((2, 2)))
model.add(Dropout(0.20))
model.add(Conv2D(64, (3, 3), activation='relu',padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(128, (3, 3), activation='relu',padding='same'))
model.add(Dropout(0.25))
model.add(BatchNormalization())
model.add(Flatten())
model.add(Dense(1024, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(num_classes, activation='softmax',kernel_regularizer=regularizers.l2(0.01)))
model.compile(optimizer=opt,loss=loss,metrics=['accuracy'])
return model
def create_optimizer(opt_name,lr,decay):
if opt_name == "SGD":
opt = optimizers.SGD(lr=lr, decay=decay)
elif opt_name == "Adam":
opt = optimizers.Adam(lr=lr, decay=decay)
elif opt_name == "RMSprop":
opt = optimizers.RMSprop(lr=lr, decay=decay)
elif opt_name == "Adagrad":
opt = optimizers.Adagrad(lr=lr, decay=decay)
return opt
def create_model(filters,filt1_size,conv_stride,pool_size,pool_stride,opt,loss):
model=Sequential()
model.add(Conv2D(filters, kernel_size=(filt1_size, filt1_size), strides=(conv_stride, conv_stride),activation='relu',input_shape=input_shape))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(pool_size, pool_size), strides=(pool_stride,pool_stride), padding='valid'))
model.add(Flatten())
model.add(Dense(1024,activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer=opt,loss=loss,metrics=['accuracy'])
return model
def fit_model(epochs,batch_size):
model.fit(X_train, y_train,batch_size=batch_size,epochs=epochs,validation_split=0.05,callbacks=[history])
score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
y_pred = model.predict_classes(X_test)
cnf_mat = confusion_matrix(np.argmax(y_test,axis=1), y_pred)
return cnf_mat,score,y_pred
lr = 0.001
decay = 1e-6
#decay = 0.0
epochs=10
batch_size = 1024
opt = create_optimizer('Adam',lr,decay)
loss = "categorical_crossentropy"
filters,filt1_size,conv_stride,pool_size,pool_stride = 32,7,1,2,2
model = create_deep_model(opt,loss)
print(model.summary())
cnf_mat,score,y_pred = fit_model(epochs,batch_size)
from keras.models import load_model
model.save('Dropout_model_MNIST.h5')
fscore=f1_score(np.argmax(y_test,axis=1), y_pred,average=None)
recall=recall_score(np.argmax(y_test,axis=1), y_pred,average=None)
prec=precision_score(np.argmax(y_test,axis=1), y_pred,average=None)
def plot(r1,r2,data,Info):
plt.plot(range(r1,r2),data)
plt.xlabel('Epochs')
plt.ylabel(Info)
plt.show()
plot(1,epochs+1,history.acc,'Accuracy')
plot(1,epochs+1,history.loss,'Loss')
plt.plot(recall,label='Recall')
plt.plot(prec,label='Precision')
plt.xlabel('Class')
plt.ylabel('F-score vs Recall vs Precision')
plt.plot(fscore,label='F-score')
plt.legend()
avg_fscore=np.mean(fscore)
print(avg_fscore)
avg_precision=np.mean(prec)
print(avg_precision)
avg_recall=np.mean(recall)
print(avg_recall)
cnf_mat = confusion_matrix(np.argmax(y_test,axis=1), y_pred)
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
conf = cnf_mat
fig, ax = plt.subplots(figsize=(30,30))
im = ax.imshow(conf,alpha=0.5)
# plt.show()
# We want to show all ticks...
ax.set_xticks(np.arange(cnf_mat.shape[0]))
ax.set_yticks(np.arange(cnf_mat.shape[1]))
# ... and label them with the respective list entries
ax.set_xticklabels(np.arange(cnf_mat.shape[0]))
ax.set_yticklabels(np.arange(cnf_mat.shape[1]))
# Rotate the tick labels and set their alignment.
plt.setp(ax.get_xticklabels(), rotation=45, ha="right",
rotation_mode="anchor")
# Loop over data dimensions and create text annotations.
for i in range(cnf_mat.shape[0]):
for j in range(cnf_mat.shape[1]):
text = ax.text(j, i, conf[i, j],
ha="center", va="center",color="black",fontsize=10)
ax.set_title("Confusion matrix",fontsize=20)
fig.tight_layout()
# fig.savefig('plot1_cnf.png')
plt.show()
del model
###Output
_____no_output_____ |
pitch_detection_from_samples.ipynb | ###Markdown
Implementing and comparing several pitch detection methods on sample filesFor simplicity I am using the Anaconda distribution on my Macbook Pro for this notebook. The purpose is to first experiment here with sample WAV files. Each file comes from a database of free samples provided free of rights by the Philharmonia Orchestra at [http://www.philharmonia.co.uk/explore/sound_samples/](http://www.philharmonia.co.uk/explore/sound_samples/).We will use 6 samples representing a long Forte string pick of each of the 6 strings of an accoustic guitar tuned in Standard E.Note: I have converted the sample files myself from their original mp3 format to wav format with 32bit, 44100Hz and mono channel.We will use two different methods for detecting the pitch and compare their results. For reference, here is the list of frequencies of all 6 strings expected for a well tuned guitar:String | Frequency | Scientific pitch notation | Sample--- | --- | --- | ---1 (E) | 329.63 Hz | E4 | [Sample file](samples/guitar_E2_very-long_forte_normal.wav)2 (B) | 246.94 Hz | B3 | [Sample file](samples/guitar_A2_very-long_forte_normal.wav)3 (G) | 196.00 Hz | G3 | [Sample file](samples/guitar_D3_very-long_forte_normal.wav)4 (D) | 146.83 Hz | D3 | [Sample file](samples/guitar_G3_very-long_forte_normal.wav)5 (A) | 110.00 Hz | A2 | [Sample file](samples/guitar_B3_very-long_forte_normal.wav)6 (E) | 82.41 Hz | E2 | [Sample file](samples/guitar_E4_very-long_forte_normal.wav)
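For later comparison against the detected pitches, the table's reference frequencies can be kept in a small dictionary (just a convenience sketch, not part of the original analysis):

```python
# Standard-E tuning reference frequencies in Hz, copied from the table above
expected_hz = {"E2": 82.41, "A2": 110.00, "D3": 146.83,
               "G3": 196.00, "B3": 246.94, "E4": 329.63}
```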
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
/Users/bastien/dev/anaconda2/lib/python2.7/site-packages/matplotlib/font_manager.py:280: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
'Matplotlib is building the font cache using fc-list. '
###Markdown
**We will use scipy from the Anaconda distribution to read the WAV sample files**
###Code
from scipy.io import wavfile
# Let's start with the first sample corresponding to the lower string E2
rate, myrecording = wavfile.read("samples/guitar_E2_very-long_forte_normal.wav")
print(rate, myrecording.size)
###Output
(44100, 208512)
###Markdown
**We define the length we want to record in seconds and the sampling rate to the source file sample rate (44100 Hz)**
###Code
duration = 1 # seconds
fs = rate # samples by second
# Let's restrict our sample to a 2-second window of the recording, starting 0.5 seconds in to avoid the string picking
array = myrecording[int(0.5*fs):int(2.5*fs)]
print(array.size)
###Output
66150
###Markdown
**Let's plot a section of this array to look at it first**We notice a fairly periodic signal with a clear fundamental frequency, which makes sense since a vibrating guitar string produces an almost purely sinusoidal wave
###Code
df = pd.DataFrame(array)
df.loc[25000:35000].plot()
###Output
_____no_output_____
###Markdown
First method: Naive pitch detection using Fast Fourier TransformOne first naive idea would be to "simply" take the (discrete) Fourier transform of the signal to find the fundamental frequency of the recording.Let's try that out and see what result we get. We use numpy to compute the discrete Fourier transform of the signal:
###Code
fourier = np.fft.fft(array)
###Output
_____no_output_____
###Markdown
We can visualise a section of the Fourier transform to notice there is a clear fundamental frequency:
###Code
plt.plot(abs(fourier[:len(fourier)/10]))
###Output
_____no_output_____
###Markdown
We notice already things are not going to be that easy. There are different harmonics picked here, and 2 of the most important ones are comparable in amplitude. We find the frequency corresponding to the maximum of this Fourier transform, and calculate the corresponding real frequency by re-multiplying by the sampling rate
###Code
f_max_index = np.argmax(abs(fourier[:fourier.size/2]))
freqs = np.fft.fftfreq(len(fourier))
freqs[f_max_index]*fs
###Output
_____no_output_____
###Markdown
**This method detects a fundamental frequency of 248Hz, which is wrong.** As suspected from the chart of the Fourier transform, this naive method picks up the 3rd harmonic of the expected fundamental: 248.5 Hz ≈ 3 × 82.41 Hz, where 82.41 Hz is the expected fundamental frequency for this sample of the E2 note. Taking the convolution of the sample and a Hamming window before applying FFT One traditional way to deal with this issue is to first apply a window function to the sample (a point-wise multiplication in the time domain, equivalent to a convolution in the frequency domain), such as the [Hamming window](https://en.wikipedia.org/wiki/Window_function#Hamming_window)
###Code
# Work in progress: coming soon
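# A minimal sketch of the windowed approach (assumes `array` and `fs` from the cells above).
# Applying a Hamming window is a point-wise multiplication in the time domain, which tapers
# the edges of the excerpt and reduces spectral leakage before taking the FFT.
windowed = array * np.hamming(len(array))
fourier_w = np.fft.fft(windowed)
f_max_index_w = np.argmax(abs(fourier_w[:len(fourier_w) // 2]))
freqs_w = np.fft.fftfreq(len(fourier_w))
print(freqs_w[f_max_index_w] * fs)  # note: windowing alone may still favour a strong harmonic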
###Output
_____no_output_____
###Markdown
------- WIP: Using Autocorrelation method for pitch detection
###Code
rec = array
rec = rec[15000:35000]
autocorr = np.correlate(rec, rec, mode='same')
plt.plot(autocorr)
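# Hedged sketch: estimate the fundamental from the first strong autocorrelation peak
# away from lag zero (assumes `autocorr` and `fs` defined above).
mid = len(autocorr) // 2                 # with mode='same', lag 0 sits at the centre
pos = autocorr[mid + 1:]                 # keep positive lags only
start = np.argmax(np.diff(pos) > 0)      # skip the initial decay away from lag 0
peak_lag = start + np.argmax(pos[start:]) + 1
print(fs / float(peak_lag))              # rough pitch estimate in Hz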
###Output
_____no_output_____ |
examples/4. Promoters Transcriptional Regulation and Gene Regulatory Networks.ipynb | ###Markdown
Building Gene Regulatory Networks with BioCRNpyler _William Poole_In this notebook, we will use RegulatedPromoter and CombinatorialPromoter to model Gene Regulation via Transcription Factors._Note: For this notebook to function, you need the default_parameters.txt file in the same folder as the notebook._ Modeling Gene Regulation by a Single Transcription FactorIn biology, it is very common for a transcription factor to turn a promoter on or off. BioCRNpyler has a number of Promoter Components to do this.* RegulatedPromoter: A list of regulators, each binding individually to turn a Promoter ON or OFF (No Combinatorial Logic)* RepressiblePromoter: A single repressor modelled with a hill function* ActivatablePromoter: A single activator modelled with a hill functionIn the next example, we will produce and compare the outputs of these kinds of promoters. Example 1: ActivatablePromoterA very simple Promoter Component modelled with a hill function. However, this class is not able to accurately capture the binding of Machinery like RNAP and shouldn't be used with Mixtures that include machinery.
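As a rough guide to the `hill_parameters` used below (this is an assumption based on the parameter names k, n, K and kleak, not a statement of BioCRNpyler's exact internal rate law), the activatable promoter's transcription rate behaves like a positive Hill function of the activator concentration $A$:

$$k_{tx}(A) \approx k\,\frac{A^{n}}{K^{n}+A^{n}} + k_{leak},$$

where $k$ is the maximal rate, $K$ the half-activation concentration, $n$ the cooperativity, and $k_{leak}$ the basal leak; the RepressiblePromoter in Example 2 uses the mirrored, decreasing form.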
###Code
from biocrnpyler import *
#ActivatablePromoter Example
activator = Species("activator", material_type = "small_molecule")
S_A = Species("A")
#Create a custom set of parameters
hill_parameters = {"k":1.0, "n":4, "K":20, "kleak":.01}
#By Loading custom parameters into the promoter, we override the default parameters of the Mixture
P_activatable = ActivatablePromoter("P_activtable", activator = activator, leak = True, parameters = hill_parameters)
#Create a DNA assembly "reporter" with P_activatable for its promoter
activatable_assembly = DNAassembly(name="activatable_assembly", promoter=P_activatable, rbs="Strong", protein = S_A)
M = SimpleTxTlExtract(name="SimpleTxTl", parameter_file = "default_parameters.txt", components=[activatable_assembly])
CRN = M.compile_crn();
print(CRN.pretty_print(show_rates = True, show_keys = True))
#Titrate the activator and plot the result
try:
%matplotlib inline
from biocrnpyler import *
import numpy as np
import pylab as plt
import pandas as pd
for a_c in np.linspace(0, 50, 5):
x0 = {activatable_assembly.dna:1, activator:a_c}
timepoints = np.linspace(0, 100, 100)
R = CRN.simulate_with_bioscrape_via_sbml(timepoints, initial_condition_dict = x0)
plt.plot(R["time"], R[str(S_A)], label = "[activator]="+str(a_c))
plt.ylabel(f"[{S_A}]")
plt.legend()
except ModuleNotFoundError:
print('please install the plotting libraries: pip install biocrnpyler[all]')
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
###Markdown
Example 2: RepressiblePromoterA very simple Promoter Component modelled with a hill function. However, this class is not able to accurately capture the binding of Machinery like RNAP and shouldn't be used with Mixtures that include machinery.
###Code
#RepressiblePromoter Example
repressor = S_A #defined in the previous example
reporter = Species("reporter", material_type = "protein")
#Create a custom set of parameters
hill_parameters = {"k":1.0, "n":4, "K":20, "kleak":.01}
#By Loading custom parameters into the promoter, we override the default parameters of the Mixture
P_repressible = RepressiblePromoter("P_repressible", repressor = repressor, leak = True, parameters = hill_parameters)
#Create a DNA assembly "reporter" with P_activatable for its promoter
repressible_assembly = DNAassembly(name="reporter", promoter=P_repressible, rbs="Strong", protein = reporter)
M = SimpleTxTlExtract(name="SimpleTxTl", parameter_file = "default_parameters.txt", components=[repressible_assembly])
CRN = M.compile_crn()
print(CRN.pretty_print(show_rates = True, show_keys = True))
#Titrate the repressor and plot the result
try:
import biocrnpyler
import numpy as np
import pylab as plt
import pandas as pd
for r_c in np.linspace(0, 50, 5):
x0 = {repressible_assembly.dna:1, repressor:r_c}
timepoints = np.linspace(0, 100, 100)
R = CRN.simulate_with_bioscrape_via_sbml(timepoints, initial_condition_dict = x0)
plt.plot(R["time"], R[str(reporter)], label = f"[{str(S_A)}]={r_c}")
plt.ylabel("[B]")
plt.legend()
except ModuleNotFoundError:
print('please install the plotting libraries: pip install biocrnpyler[all]')
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
###Markdown
Example 3: A Simple Genetic Regulatory NetworkIn this example, the activatable_assembly will produce a repressor that represses the repressible_assembly. Notice that activatable_assembly already produces the repressor of the RepressiblePromoter...so this is easy!
###Code
M = SimpleTxTlExtract(name="SimpleTxTl", parameter_file = "default_parameters.txt", components=[repressible_assembly, activatable_assembly])
CRN = M.compile_crn()
print(CRN.pretty_print(show_rates = True, show_keys = False))
#Titrate the activator, which in turn will automatically produce the repressor
try:
import biocrnpyler
import numpy as np
import pylab as plt
import pandas as pd
plt.figure(figsize = (10, 6))
ax1, ax2 = plt.subplot(121), plt.subplot(122)#Create two subplots
for a_c in np.linspace(0, 50, 5):
x0 = {activatable_assembly.dna:1, repressible_assembly.dna:1, activator:a_c}
timepoints = np.linspace(0, 100, 100)
R = CRN.simulate_with_bioscrape_via_sbml(timepoints, initial_condition_dict = x0)
plt.sca(ax1)
plt.plot(R["time"], R[str(S_A)], label = "[activator]="+str(a_c))
plt.sca(ax2)
plt.plot(R["time"], R[str(reporter)], label = "[activator]="+str(a_c))
plt.sca(ax1)
plt.ylabel("activatable assembly output: [A]")
plt.legend()
plt.sca(ax2)
plt.ylabel("repressable assembly output: [B]")
plt.legend()
except ModuleNotFoundError:
print('please install the plotting libraries: pip install biocrnpyler[all]')
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
C:\ProgramData\Anaconda3\lib\site-packages\biocrnpyler-1.1.0-py3.7.egg\biocrnpyler\parameter.py:507: UserWarning: parameter file contains no unit column! Please add a column named ['unit', 'units'].
###Markdown
Example 4: RegulatedPromoterIn the below example, a CRN from RegulatedPromoter is generated. This Component models the detailed binding of regulators to the DNA and has seperate transcription rates for each regulator. It is suitable for complex models that include machinery. Regulators do not act Combinatorically.
###Code
#1 Regulated Promoter Needs lots of parameters!
component_parameters = {
#Promoter Activator Binding Parameters. Note the part_id = [promoter_name]_[regulator_name]
ParameterKey(mechanism = 'binding', part_id = 'regulated_promoter_A', name = 'kb'):100, #Promoter - Activator Binding
ParameterKey(mechanism = 'binding', part_id = "regulated_promoter_A", name = 'ku'):5.0, #Unbinding
ParameterKey(mechanism = 'binding', part_id = "regulated_promoter_A", name = 'cooperativity'):4.0, #Cooperativity
#Activated Promoter Transcription. Note the part_id = [promoter_name]_[regulator_name]
#These regulate RNAP binding to an activated promoter and transcription
ParameterKey(mechanism = 'transcription', part_id = 'regulated_promoter_A', name = 'kb'):100, #Promoter - Activator Binding
ParameterKey(mechanism = 'transcription', part_id = "regulated_promoter_A", name = 'ku'):1.0, #Unbinding
ParameterKey(mechanism = 'transcription', part_id = 'regulated_promoter_A', name = "ktx"): 1., #Transcription Rate
#Promoter Repressor Binding Parameters. Note the part_id = [promoter_name]_[regulator_name]
ParameterKey(mechanism = 'binding', part_id = 'regulated_promoter_R', name = 'kb'):100,
ParameterKey(mechanism = 'binding', part_id = "regulated_promoter_R", name = 'ku'):5.0,
ParameterKey(mechanism = 'binding', part_id = "regulated_promoter_R", name = 'cooperativity'):4.0,
#Repressed Promoter Transcription. Note the part_id = [promoter_name]_[regulator_name]
#These regulate RNAP binding to a repressed promoter and transcription
ParameterKey(mechanism = 'transcription', part_id = 'regulated_promoter_R', name = 'kb'):1,
ParameterKey(mechanism = 'transcription', part_id = "regulated_promoter_R", name = 'ku'):100.0,
ParameterKey(mechanism = 'transcription', part_id = 'regulated_promoter_R', name = "ktx"): 1.0, #Transcription Rate
#Leak Parameters for transcription
#These regulate expression of an unbound promoter
ParameterKey(mechanism = 'transcription', part_id = 'regulated_promoter_leak', name = "kb"): 2.,
ParameterKey(mechanism = 'transcription', part_id = 'regulated_promoter_leak', name = "ku"): 100,
ParameterKey(mechanism = 'transcription', part_id = 'regulated_promoter_leak', name = "ktx"): 1.0, #Transcription Rate
}
repressor = Species("R", material_type = "protein")
activator = Species("A", material_type = "protein")
reporter = Species("reporter", material_type = "protein")
#Create a RegulatedPromoter Object named "P_reg" with regulators "activator" and "repressor"
#By Loading custom parameters into the promoter, we override the default parameters of the Mixture
P_reg = RegulatedPromoter("regulated_promoter", regulators=[activator, repressor], leak=True, parameters = component_parameters)
#Create a DNA assembly "reporter" with P_reg for its promoter
reg_reporter = DNAassembly(name="reporter", promoter=P_reg, rbs="Strong", protein = reporter)
#Use a simple TxTl model with dilution
#M = SimpleTxTlDilutionMixture(name="e coli", parameter_file = "default_parameters.txt", components=[reg_reporter])
M = TxTlExtract(name="e coli extract", parameter_file = "default_parameters.txt", components=[reg_reporter])
CRN = M.compile_crn()
print(CRN.pretty_print(show_rates = True, show_keys = False))
#Lets titrate Repressor and Activator - notice the bahvior is not combinatorial
try:
import biocrnpyler
import numpy as np
import pylab as plt
import pandas as pd
plt.figure(figsize = (20, 10))
ax1, ax2, ax3 = plt.subplot(131), plt.subplot(132), plt.subplot(133)
titration_list = [0, .05, .1, .5, 1.0, 5.]
N = len(titration_list)
HM = np.zeros((N, N))
for a_ind, a_c in enumerate(titration_list):
for r_ind, r_c in enumerate(titration_list):
x0 = {reg_reporter.dna:.1, repressor:r_c, activator:a_c}
timepoints = np.linspace(0, 1000, 1000)
R = CRN.simulate_with_bioscrape_via_sbml(timepoints, initial_condition_dict = x0)
if a_ind == 0:
plt.sca(ax1)
plt.plot(R["time"], R[str(reporter)], label = "[A]="+str(a_c) +" [R]="+str(r_c))
if r_ind == 0:
plt.sca(ax2)
plt.plot(R["time"], R[str(reporter)], label = "[A]="+str(a_c) +" [R]="+str(r_c))
HM[a_ind, r_ind] = R[str(reporter)][len(timepoints)-1]
plt.sca(ax1)
plt.title("Repressor Titration")
plt.legend()
plt.sca(ax2)
plt.title("Activator Titration")
plt.legend()
plt.sca(ax3)
plt.title("Endpoint Heatmap (log)")
cb = plt.pcolor(np.log(HM))
plt.colorbar(cb)
plt.xlabel("Repressor")
plt.ylabel("Activator")
plt.xticks(np.arange(.5, N+.5, 1), [str(i) for i in titration_list])
plt.yticks(np.arange(.5, N+.5, 1), [str(i) for i in titration_list])
except ModuleNotFoundError:
print('please install the plotting libraries: pip install biocrnpyler[all]')
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
###Markdown
Example 5: Induction Model of a Ligand which Activates a Transcription FactorIn many biological circuits, small molecules (ligands) can bind to a transcription factor, modulating its functionality. In BioCRNpyler, we will model this by creating a ChemicalComplex Component which consists of a Transcription Factor and a ligand. The ComplexSpecies formed by binding the transcription factor and the ligand will also work as the regulator (activator or repressor) of a regulated promoter. In this example, we will use RepressiblePromoter: in this activating case, the ligand-bound form of the ChemicalComplex is the active regulator (here it acts as the repressor).
###Code
inactive_repressor = Species("A", material_type = "protein")
ligand = Species("L", material_type = "ligand")
#Create a ChemicalComplex to model ligand-inactive_repressor bindning
activatable_repressor = ChemicalComplex([inactive_repressor, ligand])
#Other Promoters could also be used
P_repressible = RepressiblePromoter("P_repressible", repressor = activatable_repressor.get_species(), leak = True, parameters = hill_parameters)
#Create a DNA assembly "reporter" with P_activatable for its promoter
repressible_assembly = DNAassembly(name="reporter", promoter=P_repressible, rbs="Strong", protein = "reporter")
M = ExpressionDilutionMixture(name="ExpressionDilutionMixture", parameter_file = "default_parameters.txt", components=[repressible_assembly, activatable_repressor])
CRN = M.compile_crn();print(CRN.pretty_print(show_rates = True, show_keys = False))
#Lets titrate ligand and repressor
try:
import biocrnpyler
import numpy as np
import pylab as plt
import pandas as pd
plt.figure(figsize = (8, 6))
N = 11 #Number of titrations
max_titration = 100
HM = np.zeros((N, N))
for r_ind, R_c in enumerate(np.linspace(0, max_titration, N)):
for l_ind, L_c in enumerate(np.linspace(0, max_titration, N)):
x0 = {repressible_assembly.dna:1, inactive_repressor:R_c, ligand:L_c}
timepoints = np.linspace(0, 1000, 1000)
R = CRN.simulate_with_bioscrape_via_sbml(timepoints, initial_condition_dict = x0)
HM[r_ind, l_ind] = R["protein_reporter"][len(timepoints)-1]
plt.title("Activatable Repressor vs Ligand Endpoint Heatmap\n Exhibits NOT AND Logic")
cb = plt.pcolor(HM)
plt.colorbar(cb)
plt.xlabel("Ligand")
plt.ylabel("Activatbale Repressor")
plt.xticks(np.arange(.5, N+.5, 1), [str(i) for i in np.linspace(0, max_titration, N)])
plt.yticks(np.arange(.5, N+.5, 1), [str(i) for i in np.linspace(0, max_titration, N)])
except ModuleNotFoundError:
print('please install the plotting libraries: pip install biocrnpyler[all]')
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
###Markdown
Example 6: Induction Models of a Ligand which Deactivates a Transcription FactorIn this deactivating case, the unbound transcription factor is the active form (here, a repressor) and the ligand-bound form is inactive, so adding the ligand relieves repression and induces expression.
###Code
repressor = Species("A", material_type = "protein")
ligand = Species("L", material_type = "ligand")
#Create a ChemicalComplex to model ligand-inactive_repressor bindning
inactive_repressor = ChemicalComplex([repressor, ligand])
#Other Promoters could also be Used
P_repressible = RepressiblePromoter("P_repressible", repressor = repressor, leak = True, parameters = hill_parameters)
#Create a DNA assembly "reporter" with P_activatable for its promoter
repressible_assembly = DNAassembly(name="reporter", promoter=P_repressible, rbs="Strong", protein = "reporter")
M = ExpressionDilutionMixture(name="ExpressionDilutionMixture", parameter_file = "default_parameters.txt", components=[repressible_assembly, activatable_repressor])
CRN = M.compile_crn();print(CRN.pretty_print(show_rates = True, show_keys = False))
#Titration of ligand and repressor
try:
import biocrnpyler
import numpy as np
import pylab as plt
import pandas as pd
plt.figure(figsize = (8, 6))
N = 11 #Number of titrations
max_titration = 100
HM = np.zeros((N, N))
for r_ind, R_c in enumerate(np.linspace(0, max_titration, N)):
for l_ind, L_c in enumerate(np.linspace(0, max_titration, N)):
x0 = {repressible_assembly.dna:1, repressor:R_c, ligand:L_c}
timepoints = np.linspace(0, 1000, 1000)
R = CRN.simulate_with_bioscrape_via_sbml(timepoints, initial_condition_dict = x0)
HM[r_ind, l_ind] = R["protein_reporter"][len(timepoints)-1]
plt.title("Deactivatable Repressor vs Ligand Endpoint Heatmap\nAllows for Tunable Induction")
cb = plt.pcolor(HM)
plt.colorbar(cb)
plt.xlabel("Ligand")
plt.ylabel("Deactivatbale Repressor")
plt.xticks(np.arange(.5, N+.5, 1), [str(i) for i in np.linspace(0, max_titration, N)])
plt.yticks(np.arange(.5, N+.5, 1), [str(i) for i in np.linspace(0, max_titration, N)])
except ModuleNotFoundError:
print('please install the plotting libraries: pip install biocrnpyler[all]')
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
###Markdown
Example 7: Modeling AND, OR, and XOR Promoters with Combinatorial PromoterCombinatorialPromoter is a Component designed to model arbitrary combinatorial logic on a promoter. For example, a promoter with 2 transcription factor binding sites can have 4 different states:* Nothing bound* just factor 1 bound* just factor 2 bound* factors 1 and 2 boundIn general, a promoter with $N$ binding sites has up to $2^N$ possible states (a small enumeration sketch follows below). CombinatorialPromoter enumerates all these states and allows the modeller to decide which are capable of transcription and which are not. For more details on this class, see the CombinatorialPromoter example ipython notebook in the BioCRNpyler examples folder.Below, we will use a Combinatorial Promoter to produce OR, AND, and XOR logic with two species, $A$ and $B$, by passing lists of the transcribable combinations of regulators to the tx_capable_list keyword.
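To make the $2^N$ state count concrete, here is a small, self-contained sketch (plain Python with itertools, not the BioCRNpyler API) that enumerates every bound/unbound combination for two regulators:

```python
from itertools import combinations

regulators = ["A", "B"]
# every subset of bound regulators: {}, {A}, {B}, {A, B}  ->  2**N states in total
states = [set(c) for n in range(len(regulators) + 1) for c in combinations(regulators, n)]
print(len(states), states)
```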
###Code
#AND Logic
A = Species("A") ;B = Species("B") #Inducers
#Create the Combinatorial Promoter
Prom_AND = CombinatorialPromoter("combinatorial_promoter",[A,B], tx_capable_list = [[A,B]], leak = True) #the Combination A and B can be transcribed
AND_assembly = DNAassembly("AND",promoter=Prom_AND,rbs="medium",protein="GFP")
#Use an Expression Mixture to focus on Logic, not Transcription & Translation
M = ExpressionExtract(name="expression", parameter_file = "default_parameters.txt", components=[AND_assembly])
CRN = M.compile_crn(); print(CRN.pretty_print(show_rates = True, show_keys = False))
#Lets titrate A and B
try:
import biocrnpyler
import numpy as np
import pylab as plt
import pandas as pd
plt.figure(figsize = (8, 6))
N = 11 #Number of titrations
max_titration = 10
HM = np.zeros((N, N))
for a_ind, A_c in enumerate(np.linspace(0, max_titration, N)):
for b_ind, B_c in enumerate(np.linspace(0, max_titration, N)):
x0 = {AND_assembly.dna:1, A:A_c, B:B_c}
timepoints = np.linspace(0, 1000, 1000)
R = CRN.simulate_with_bioscrape_via_sbml(timepoints, initial_condition_dict = x0)
HM[a_ind, b_ind] = R["protein_GFP"][len(timepoints)-1]
plt.title("AND Endpoint Heatmap")
cb = plt.pcolor(HM)
plt.colorbar(cb)
plt.xlabel("B")
plt.ylabel("A")
plt.xticks(np.arange(.5, N+.5, 1), [str(i) for i in np.linspace(0, max_titration, N)])
plt.yticks(np.arange(.5, N+.5, 1), [str(i) for i in np.linspace(0, max_titration, N)])
except ModuleNotFoundError:
print('please install the plotting libraries: pip install biocrnpyler[all]')
#Create OR Logic
Prom_OR = CombinatorialPromoter("combinatorial_promoter",[A,B], leak=False,
tx_capable_list = [[A,B], [A], [B]]) #the Combinations A and B or just A or just B be transcribed
ORassembly = DNAassembly("OR",promoter=Prom_OR,rbs="medium",protein="GFP")
print(ORassembly)
#Use an Expression Mixture to focus on Logic, not Transcription & Translation
M = ExpressionExtract(name="expression", parameter_file = "default_parameters.txt", components=[ORassembly])
CRN = M.compile_crn()
print(CRN.pretty_print(show_rates = True, show_keys = False))
#Lets titrate A and B
try:
import biocrnpyler
import numpy as np
import pylab as plt
import pandas as pd
plt.figure(figsize = (6, 6))
N = 11 #Number of titrations
max_titration = 10
HM = np.zeros((N, N))
for a_ind, A_c in enumerate(np.linspace(0, max_titration, N)):
for b_ind, B_c in enumerate(np.linspace(0, max_titration, N)):
x0 = {ORassembly.dna:1, A:A_c, B:B_c}
timepoints = np.linspace(0, 1000, 1000)
R = CRN.simulate_with_bioscrape_via_sbml(timepoints, initial_condition_dict = x0)
HM[a_ind, b_ind] = R["protein_GFP"][len(timepoints)-1]
plt.title("OR Endpoint Heatmap")
cb = plt.pcolor(HM)
plt.colorbar(cb)
plt.xlabel("B")
plt.ylabel("A")
plt.xticks(np.arange(.5, N+.5, 1), [str(i) for i in np.linspace(0, max_titration, N)])
plt.yticks(np.arange(.5, N+.5, 1), [str(i) for i in np.linspace(0, max_titration, N)])
except ModuleNotFoundError:
print('please install the plotting libraries: pip install biocrnpyler[all]')
#Create XOR Logic
Prom_XOR = CombinatorialPromoter("combinatorial_promoter",[A,B], leak=False,
tx_capable_list = [[A], [B]]) #the Combinations just A or just B can be transcribed
XORassembly = DNAassembly("XOR",promoter=Prom_XOR,rbs="medium",protein="GFP")
#Use an Expression Mixture to focus on Logic, not Transcription & Translation
M = ExpressionExtract(name="expression", parameter_file = "default_parameters.txt", components=[XORassembly])
CRN = M.compile_crn()
print(CRN.pretty_print(show_rates = True, show_keys = False))
#Lets titrate A and B
try:
import biocrnpyler
import numpy as np
import pylab as plt
import pandas as pd
plt.figure(figsize = (6, 6))
N = 11 #Number of titrations
max_titration = 10
HM = np.zeros((N, N))
for a_ind, A_c in enumerate(np.linspace(0, max_titration, N)):
for b_ind, B_c in enumerate(np.linspace(0, max_titration, N)):
x0 = {XORassembly.dna:1, A:A_c, B:B_c}
timepoints = np.linspace(0, 1000, 1000)
R = CRN.simulate_with_bioscrape_via_sbml(timepoints, initial_condition_dict = x0)
HM[a_ind, b_ind] = R["protein_GFP"][len(timepoints)-1]
plt.title("XOR Endpoint Heatmap")
cb = plt.pcolor(HM)
plt.colorbar(cb)
plt.xlabel("B")
plt.ylabel("A")
plt.xticks(np.arange(.5, N+.5, 1), [str(i) for i in np.linspace(0, max_titration, N)])
plt.yticks(np.arange(.5, N+.5, 1), [str(i) for i in np.linspace(0, max_titration, N)])
except ModuleNotFoundError:
print('please install the plotting libraries: pip install biocrnpyler[all]')
###Output
C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
C:\ProgramData\Anaconda3\lib\site-packages\biocrnpyler-1.1.0-py3.7.egg\biocrnpyler\parameter.py:507: UserWarning: parameter file contains no unit column! Please add a column named ['unit', 'units'].
|
stock-books/03-Timeseries.ipynb | ###Markdown
**Chapter 03 시계열 모델링**정상정 **(Stationary)** 의 개념을 분석한다
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import warnings
# plt.style.use('seaborn')
# plt.rcParams['figure.dpi'] = 300
warnings.simplefilter(action='ignore', category=FutureWarning)
###Output
_____no_output_____
###Markdown
**1 시계열 데이터 분해** **01 Loading the DataSet**- 'GC=F' : Gold Price Code
###Code
DATA_FILENAME = 'data/stock-amazon.pkl'
# Loading the DataSet
import pandas as pd
import yfinance as yf
try:
data = pd.read_pickle(DATA_FILENAME)
except FileNotFoundError:
data = yf.download('AMZN', start='2010-01-01', end='2022-01-01',
progress=False, auto_adjust=True)
data.to_pickle(DATA_FILENAME)
data.head(3)
###Output
_____no_output_____
###Markdown
**02 Data Pre Processing**
###Code
# Data Pre Processing
data.rename(columns={'Close':'price'}, inplace=True)
df = data[['price']]
df = df.resample('M').last() # 월간 데이터로 변환하기
# 이동평균과 표준편차를 추가 한다
WINDOW_SIZE = 12
df['rolling_mean'] = df.price.rolling(window=WINDOW_SIZE).mean()
df['rolling_std'] = df.price.rolling(window=WINDOW_SIZE).std()
###Output
_____no_output_____
###Markdown
**03 Visualization**```pythonimport cufflinks as cffrom plotly.offline import iplot, init_notebook_modecf.set_config_file(world_readable=True, theme='pearl', offline=True) set up settings (run it once)init_notebook_mode(connected=True) initialize notebook displaydf.iplot(title="Gold Price") Plotly from DataFrame```
###Code
# Visualization
plt.rcParams['figure.figsize'] = [20, 5]
df.plot(title='Gold Price')
plt.show()
###Output
_____no_output_____
###Markdown
**04 승산적 모델을 활용하여 계절성 특징을 찾는다**
###Code
from statsmodels.tsa.seasonal import seasonal_decompose
plt.rcParams['figure.figsize'] = [20, 8]
decomposition_results = seasonal_decompose(df.price, model='multiplicative')
decomposition_results.plot().suptitle('Multiplicative Decomposition', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
**2 ProPhet 을 활용한 시계열 데이터 분해**[SettingWithCopyWarning: DataFrame 에 대해서](https://emilkwak.github.io/pandas-dataframe-settingwithcopywarning)https://github.com/facebook/prophet```r! pip install pystan! pip install fbprophet``` **01 Loading the DataSet**위 아마존 데이터를 시계열 분석한 뒤, 1년 변화를 예측합니다
###Code
df = data[['price']]
df.head(3)
df.reset_index(drop=False, inplace=True)
df.head(3)
df.rename(columns={'Date':'ds', 'price':'y'}, inplace=True)
df.head(3)
###Output
_____no_output_____ |
docs/tutorial/Advanced simulation.ipynb | ###Markdown
Advanced simulation In essence, a Model object is able to change the state of the system given a sample and evaluate certain metrics.  Model objects are able to drastically cut simulation time by sorting the samples to minimize perturbations to the system between simulations. This decreases the number of iterations required to solve recycle systems. The following examples show how Model objects can be used. Create a model object **Model objects are used to evaluate metrics around multiple parameters of a system.** Create a Model object of the lipidcane biorefinery with internal rate of return as a metric:
###Code
from biosteam.biorefineries import lipidcane as lc
import biosteam as bst
solve_IRR = lc.lipidcane_tea.solve_IRR
metrics = bst.Metric('IRR', solve_IRR),
model = bst.Model(lc.lipidcane_sys, metrics)
###Output
_____no_output_____
###Markdown
The Model object begins with no parameters:
###Code
model
###Output
Model: Biorefinery IRR
(No parameters)
###Markdown
Note: Here we defined only one metric, but more metrics are possible. Add design parameters **A design parameter is a Unit attribute that changes design requirements but does not affect mass and energy balances.** Add number of fermentation reactors as a "design" parameter:
###Code
R301 = bst.find.unit.R301 # The Fermentation Unit
@model.parameter(element=R301, kind='design', name='Number of reactors')
def set_N_reactors(N):
R301.N = N
###Output
_____no_output_____
###Markdown
The decorator returns a Parameter object and adds it to the model:
###Code
set_N_reactors
###Output
_____no_output_____
###Markdown
Calling a Parameter object will update the parameter and results:
###Code
set_N_reactors(5)
print('Puchase cost at 5 reactors: ' + str(R301.purchase_cost))
set_N_reactors(8)
print('Puchase cost at 8 reactors: ' + str(R301.purchase_cost))
###Output
Puchase cost at 5 reactors: 2259304.471641673
Puchase cost at 8 reactors: 2716237.9826850006
###Markdown
Add cost parameters **A cost parameter is a Unit attribute that affects cost but does not change design requirements.** Add the fermentation unit base cost as a "cost" parameter:
###Code
@model.parameter(element=R301, kind='cost') # Note: name argument not given this time
def set_base_cost(cost):
R301.cost_items['Reactors'].cost = cost
original = R301.cost_items['Reactors'].cost
set_base_cost(10e6)
print('Purchase cost at 10 million USD: ' + str(R301.purchase_cost))
set_base_cost(844e3)
print('Purchase cost at 844,000 USD: ' + str(R301.purchase_cost))
###Output
Purchase cost at 10 million USD: 21559566.90618983
Purchase cost at 844,000 USD: 2716237.9826850006
###Markdown
If the name was not defined, it defaults to the setter's signature:
###Code
set_base_cost
###Output
_____no_output_____
###Markdown
Add isolated parameters **An isolated parameter should not affect Unit objects in any way.** Add feedstock price as a "isolated" parameter:
###Code
lipid_cane = lc.lipid_cane # The feedstock stream
@model.parameter(element=lipid_cane, kind='isolated')
def set_feed_price(feedstock_price):
lipid_cane.price = feedstock_price
###Output
_____no_output_____
###Markdown
Add coupled parameters **A coupled parameter affects mass and energy balances of the system.** Add lipid fraction as a "coupled" parameter:
###Code
set_lipid_fraction = model.parameter(lc.set_lipid_fraction, element=lipid_cane, kind='coupled')
set_lipid_fraction(0.10)
print('IRR at 10% lipid: ' + str(solve_IRR()))
set_lipid_fraction(0.05)
print('IRR at 5% lipid: ' + str(solve_IRR()))
###Output
IRR at 10% lipid: 0.19480648192918615
IRR at 5% lipid: 0.15526233899905484
###Markdown
Add fermentation efficiency as a "coupled" parameter:
###Code
@model.parameter(element=R301, kind='coupled')
def set_fermentation_efficiency(efficiency):
R301.efficiency = efficiency
###Output
_____no_output_____
###Markdown
Evaluate metric given a sample **The model can be called to evaluate a sample of parameters.** All parameters are stored in the model with highly coupled parameters first:
###Code
model
###Output
Model: Biorefinery IRR
Element: Parameter:
Stream-lipid cane Lipid fraction
Fermentation-R301 Efficiency
Number of reactors
Cost
Stream-lipid cane Feedstock price
###Markdown
Get all parameters as ordered in the model:
###Code
model.get_parameters()
###Output
_____no_output_____
###Markdown
Evaluate sample:
###Code
model([0.05, 0.85, 8, 100000, 0.040])
###Output
_____no_output_____
###Markdown
Evaluate metric across samples Evaluate at give parameter values:
###Code
import numpy as np
samples = np.array([(0.05, 0.85, 8, 100000, 0.040),
(0.05, 0.90, 7, 100000, 0.040),
(0.09, 0.95, 8, 100000, 0.042)])
model.load_samples(samples)
model.evaluate()
model.table # All evaluations are stored as a pandas DataFrame
###Output
_____no_output_____
###Markdown
Note that coupled parameters are on the left most columns, and are ordered from upstream to downstream (e.g. is upstream from ) Evaluate multiple metrics Reset the metrics to include total utility cost:
###Code
def total_utility_cost():
"""Return utility costs in 10^6 USD/yr"""
return lc.lipidcane_tea.utility_cost / 10**6
# This time use detailed names and units for appearance
model.metrics = (bst.Metric('Internal rate of return', lc.lipidcane_tea.solve_IRR, '%'),
bst.Metric('Utility cost', total_utility_cost, 'USD/yr'))
model
model.evaluate()
model.table
###Output
_____no_output_____ |
Applying AI to 2D Medical Imaging Data/Models for Classification of 2D Medical Images/Exercise - Evaluating Your Model/Exercise.ipynb | ###Markdown
Image Augmentation:(no code changes needed)
###Code
IMG_SIZE = (224, 224)
train_idg = ImageDataGenerator(rescale=1. / 255.0,
horizontal_flip = True,
vertical_flip = False,
height_shift_range= 0.1,
width_shift_range=0.1,
rotation_range=20,
shear_range = 0.1,
zoom_range=0.1)
train_gen = train_idg.flow_from_dataframe(dataframe=train_df,
directory=None,
x_col = 'img_path',
y_col = 'class',
class_mode = 'binary',
target_size = IMG_SIZE,
batch_size = 9
)
# Note that the validation data should not be augmented! We only want to do some basic intensity rescaling here
val_idg = ImageDataGenerator(rescale=1. / 255.0
)
val_gen = val_idg.flow_from_dataframe(dataframe=valid_df,
directory=None,
x_col = 'img_path',
y_col = 'class',
class_mode = 'binary',
target_size = IMG_SIZE,
batch_size = 6) ## We've only been provided with 6 validation images
## Pull a single large batch of random validation data for testing after each epoch
testX, testY = val_gen.next()
###Output
_____no_output_____
###Markdown
Load in VGG16 with pre-trained ImageNet weights: (No code changes needed):
###Code
model = VGG16(include_top=True, weights='imagenet')
model.summary()
for indx, layer in enumerate(model.layers):
print(indx, layer.name, layer.output_shape)
transfer_layer = model.get_layer('block5_pool')
vgg_model = Model(inputs=model.input,
outputs=transfer_layer.output)
## Now, choose which layers of VGG16 we actually want to fine-tune (if any)
## Here, we'll freeze all but the last convolutional layer
for layer in vgg_model.layers[0:17]:
layer.trainable = False
new_model = Sequential()
# Add the convolutional part of the VGG16 model from above.
new_model.add(vgg_model)
# Flatten the output of the VGG16 model because it is from a
# convolutional layer.
new_model.add(Flatten())
# Add a dropout-layer which may prevent overfitting and
# improve generalization ability to unseen data e.g. the test-set.
new_model.add(Dropout(0.5))
# Add a dense (aka. fully-connected) layer.
# This is for combining features that the VGG16 model has
# recognized in the image.
new_model.add(Dense(1024, activation='relu'))
# Add a dropout-layer which may prevent overfitting and
# improve generalization ability to unseen data e.g. the test-set.
new_model.add(Dropout(0.5))
# Add a dense (aka. fully-connected) layer.
# This is for combining features that the VGG16 model has
# recognized in the image.
new_model.add(Dense(512, activation='relu'))
# Add a dropout-layer which may prevent overfitting and
# improve generalization ability to unseen data e.g. the test-set.
new_model.add(Dropout(0.5))
# Add a dense (aka. fully-connected) layer.
# This is for combining features that the VGG16 model has
# recognized in the image.
new_model.add(Dense(256, activation='relu'))
# Add a dropout-layer which may prevent overfitting and
# improve generalization ability to unseen data e.g. the test-set.
new_model.add(Dropout(0.5))
# Add a dense (aka. fully-connected) layer.
# This is for combining features that the VGG16 model has
# recognized in the image.
new_model.add(Dense(1, activation='relu'))
new_model.summary()
## Set our optimizer, loss function, and learning rate
optimizer = Adam(lr=1e-4)
loss = 'binary_crossentropy'
metrics = ['binary_accuracy']
new_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
## Run for 10 epochs to see if any learning occurs:
history = new_model.fit_generator(train_gen,
validation_data = (testX, testY),
epochs = 10)
###Output
Epoch 1/10
3/3 [==============================] - 21s 7s/step - loss: 7.8496 - binary_accuracy: 0.3000 - val_loss: 7.7125 - val_binary_accuracy: 0.5000
Epoch 2/10
3/3 [==============================] - 20s 7s/step - loss: 6.6667 - binary_accuracy: 0.3500 - val_loss: 7.7125 - val_binary_accuracy: 0.5000
Epoch 3/10
3/3 [==============================] - 20s 7s/step - loss: 6.5951 - binary_accuracy: 0.5000 - val_loss: 7.7125 - val_binary_accuracy: 0.5000
Epoch 4/10
3/3 [==============================] - 20s 7s/step - loss: 10.0094 - binary_accuracy: 0.4000 - val_loss: 7.7125 - val_binary_accuracy: 0.5000
Epoch 5/10
3/3 [==============================] - 20s 7s/step - loss: 5.1882 - binary_accuracy: 0.4500 - val_loss: 7.7125 - val_binary_accuracy: 0.5000
Epoch 6/10
3/3 [==============================] - 20s 7s/step - loss: 7.7935 - binary_accuracy: 0.3500 - val_loss: 7.7125 - val_binary_accuracy: 0.5000
Epoch 7/10
3/3 [==============================] - 20s 7s/step - loss: 4.3865 - binary_accuracy: 0.4500 - val_loss: 7.7125 - val_binary_accuracy: 0.5000
Epoch 8/10
3/3 [==============================] - 20s 7s/step - loss: 4.9961 - binary_accuracy: 0.5500 - val_loss: 7.7125 - val_binary_accuracy: 0.5000
Epoch 9/10
3/3 [==============================] - 20s 7s/step - loss: 6.4979 - binary_accuracy: 0.5000 - val_loss: 7.7125 - val_binary_accuracy: 0.5000
Epoch 10/10
3/3 [==============================] - 20s 7s/step - loss: 7.1889 - binary_accuracy: 0.4500 - val_loss: 7.7125 - val_binary_accuracy: 0.5000
###Markdown
Write a function below to plot the output of your training that is stored in the 'history' variable from above:
###Code
history.history
# Define a function here that will plot loss, val_loss, binary_accuracy, and val_binary_accuracy over all of
# your epochs:
def plot_history(history):
N = len(history.history['loss'])
plt.style.use('ggplot')
plt.figure()
plt.plot(np.arange(0,N), history.history['loss'], label='Training loss')
plt.plot(np.arange(0,N), history.history['val_loss'], label='Validtion Loss')
plt.plot(np.arange(0,N), history.history['binary_accuracy'], label='Training Accuracy')
plt.plot(np.arange(0,N), history.history['val_binary_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch Number')
plt.ylabel('Loss/Accuracy')
plt.legend(loc='lower left')
plt.show()
### YOUR CODE HERE
plot_history(history)
###Output
_____no_output_____
###Markdown
Try a model with less dropout, same learning rate:
###Code
## COPY AND PASTE THE ARCHITECTURE FROM ABOVE, BUT CHANGE THE AMOUNT OF DROPOUT
new_model = Sequential()
new_model.add(vgg_model)
new_model.add(Flatten())
new_model.add(Dense(1024, activation='relu'))
new_model.add(Dropout(0.25))
new_model.add(Dense(512, activation='relu'))
new_model.add(Dropout(0.20))
new_model.add(Dense(128, activation='relu'))
new_model.add(Dropout(0.15))
new_model.add(Dense(32, activation='relu'))
new_model.add(Dense(1, activation='sigmoid'))
new_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
history = new_model.fit_generator(train_gen,
validation_data = (testX, testY),
epochs = 10)
plot_history(history)
###Output
_____no_output_____
###Markdown
Finally, try a model with the same amount of dropout as you initially had, but a slower learning rate:
###Code
## COPY AND PASTE THE ARCHITECTURE FROM THE FIRST EXAMPLE
new_model = Sequential()
new_model.add(vgg_model)
new_model.add(Flatten())
new_model.add(Dense(1024, activation='relu'))
new_model.add(Dropout(0.25))
new_model.add(Dense(512, activation='relu'))
new_model.add(Dropout(0.20))
new_model.add(Dense(128, activation='relu'))
new_model.add(Dropout(0.15))
new_model.add(Dense(32, activation='relu'))
new_model.add(Dense(1, activation='sigmoid'))
## CHANGE THE LEARNING RATE DEFINED IN Adam():
optimizer = Adam(lr=1e-6)
loss = 'binary_crossentropy'
metrics = ['binary_accuracy']
new_model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
history = new_model.fit_generator(train_gen,
validation_data = (testX, testY),
epochs = 10)
plot_history(history)
###Output
_____no_output_____ |
analysis/inference_human_model_behavior.ipynb | ###Markdown
Statistical inference over human and model behavioral performance **The purpose of this notebook is to:** * Visualize human and model prediction accuracy (proportion correct)* Visualize average-human and model agreement (RMSE)* Visualize human-human and model-human agreement (Cohen's kappa)* Compare performance between models**This notebook depends on:*** Running `./generate_dataframes.py` (INTERNAL USE ONLY)* Running `./upload_results.py` (INTERNAL USE ONLY)* Running `./download_results.py` (PUBLIC USE)* Running `./summarize_human_model_behavior.ipynb` (PUBLIC USE) Load packages
###Code
suppressMessages(suppressWarnings(suppressPackageStartupMessages({library(tidyverse)
library(ggthemes)
library(lme4)
library(lmerTest)
library(brms)
library(broom.mixed)
library(tidyboot)
require('MuMIn')
})))
A = read_csv('../results/csv/summary/model_human_accuracies.csv')
# A = read_csv('../results/csv/models/allModels_results.csv')
AH = read_csv('../results/csv/summary/human_accuracy_by_scenario.csv')
K = read_csv('../results/csv/summary/model_human_CohensK.csv')
## preprocessing
A <- A %>%
dplyr::rename('model_kind'='Model Kind',
'encoder_training_dataset_type'='Encoder Training Dataset Type',
'dynamics_training_dataset_type'='Dynamics Training Dataset Type',
'readout_training_data_type'='Readout Train Data Type',
'readout_type'='Readout Type',
'visual_encoder_architecture' = 'Visual encoder architecture',
'dynamics_model_architecture' = 'Dynamics model architecture') %>%
mutate(dynamics_model_architecture = factor(dynamics_model_architecture)) %>%
# left_join(AH, by='scenario') %>%
group_by(model_kind) %>%
mutate(avg_model_correct = mean(model_correct)) %>%
ungroup()
###Output
[36m──[39m [1m[1mColumn specification[1m[22m [36m────────────────────────────────────────────────────────[39m
cols(
.default = col_character(),
ratio = [32mcol_double()[39m,
diff = [32mcol_double()[39m,
human_correct = [32mcol_double()[39m,
model_correct = [32mcol_double()[39m,
`Encoder Pre-training Seed` = [33mcol_logical()[39m,
`Encoder Training Seed` = [32mcol_double()[39m,
`Dynamics Training Seed` = [32mcol_double()[39m,
ObjectCentric = [33mcol_logical()[39m,
Supervised = [33mcol_logical()[39m,
SelfSupervisedLossSelfSupervisedLoss = [33mcol_logical()[39m
)
[36mℹ[39m Use [30m[47m[30m[47m`spec()`[47m[30m[49m[39m for the full column specifications.
[36m──[39m [1m[1mColumn specification[1m[22m [36m────────────────────────────────────────────────────────[39m
cols(
agent = [31mcol_character()[39m,
scenario = [31mcol_character()[39m,
obs_mean = [32mcol_double()[39m,
boot_mean = [32mcol_double()[39m,
ci_lb = [32mcol_double()[39m,
ci_ub = [32mcol_double()[39m,
pct_2.5 = [32mcol_double()[39m,
pct_97.5 = [32mcol_double()[39m
)
[36m──[39m [1m[1mColumn specification[1m[22m [36m────────────────────────────────────────────────────────[39m
cols(
.default = col_character(),
Cohens_k_lb = [32mcol_double()[39m,
Cohens_k_med = [32mcol_double()[39m,
Cohens_k_ub = [32mcol_double()[39m,
num_datapoints = [32mcol_double()[39m,
`Encoder Pre-training Seed` = [33mcol_logical()[39m,
`Encoder Training Seed` = [32mcol_double()[39m,
`Dynamics Training Seed` = [32mcol_double()[39m,
ObjectCentric = [33mcol_logical()[39m,
Supervised = [33mcol_logical()[39m,
SelfSupervisedLossSelfSupervisedLoss = [33mcol_logical()[39m
)
[36mℹ[39m Use [30m[47m[30m[47m`spec()`[47m[30m[49m[39m for the full column specifications.
###Markdown
Visualize human and model prediction accuracy (proportion correct)
###Code
## human accuracy only
d = read_csv('../results/csv/summary/human_accuracy_by_scenario.csv')
## accuracy bar plot with 95% CIs
d %>%
ggplot(aes(x=reorder(scenario,-obs_mean), y=obs_mean, color=scenario, fill=scenario)) +
geom_bar(stat='identity') +
geom_errorbar(aes(ymin=ci_lb, ymax = ci_ub), width = 0, size = 1.5, color='black') +
theme_few() +
xlab('scenario') +
ylab('accuracy') +
theme(text = element_text(size=18),
element_line(size=1),
element_rect(size=2, color="#000000"),
axis.text.x = element_text(angle=90)) +
theme(legend.position = "none") +
scale_fill_brewer(palette="Spectral") + scale_color_brewer(palette="Spectral")
ggsave('../results/plots/human_accuracy_across_scenarios.pdf', width=12, height = 18, units='cm')
## human model accuracy comparison (MAIN FIGURE)
A %>%
filter(readout_type %in% c('B')) %>%
select(Model, model_correct, dynamics_training_dataset_type, readout_type, scenario) %>%
bind_rows(AH %>% dplyr::rename(Model=agent, model_correct = obs_mean) %>% select(Model, model_correct, scenario)) %>%
ggplot(aes(x=reorder(Model,-model_correct), y=model_correct,
color=Model, fill=Model, shape = factor(scenario))) +
geom_point(stat='identity', position=position_dodge(0.3)) +
# geom_hline(aes(yintercept = human_correct)) +
# geom_rect(aes(ymin = ci_lb, ymax = ci_ub, xmin = -Inf, xmax = Inf), color=NA, fill = 'gray', alpha = 0.05) +
# facet_grid(rows=vars(scenario), cols=vars(dynamics_training_dataset_type)) +
theme_few() +
theme(axis.text.x = element_text(angle=90)) +
xlab('models') +
ylab('accuracy') +
ylim(0,1) +
scale_fill_brewer(palette="Spectral") + scale_color_brewer(palette="Spectral")
ggsave('../results/plots/human_model_accuracy_across_scenarios.pdf', width=36, height = 36, units='cm')
## human model accuracy comparison by dynamics training data
A %>%
filter(readout_type %in% c('A','B','C')) %>%
ggplot(aes(x=reorder(Model,-model_correct), y=model_correct,
color=Model, fill=Model, shape = factor(dynamics_training_dataset_type))) +
geom_point(stat='identity', position=position_dodge(0.3)) +
#geom_hline(aes(yintercept = human_correct)) +
# geom_rect(aes(ymin = ci_lb, ymax = ci_ub, xmin = -Inf, xmax = Inf), color=NA, fill = 'gray', alpha = 0.05) +
facet_grid(rows=vars(scenario), cols=vars(dynamics_training_dataset_type)) +
theme_few() +
theme(axis.text.x = element_text(angle=90)) +
xlab('models') +
ylab('accuracy') +
scale_fill_brewer(palette="Spectral") + scale_color_brewer(palette="Spectral")
ggsave('../results/plots/human_model_accuracy_across_scenarios_by_dynamicsTraining.pdf', width=36, height = 36, units='cm')
## A = "full sequence"
## B = "initial + predicted"
## C = "initial only"
## human model accuracy comparison by readout type
A %>%
filter(readout_type %in% c('A','B','C')) %>%
ggplot(aes(x=reorder(Model,-model_correct), y=model_correct,
color=Model, fill=Model, shape = factor(readout_type))) +
geom_point(stat='identity', position=position_dodge(0.3)) +
#geom_hline(aes(yintercept = human_correct)) +
# geom_rect(aes(ymin = ci_lb, ymax = ci_ub, xmin = -Inf, xmax = Inf), color=NA, fill = 'gray', alpha = 0.05) +
facet_grid(rows=vars(scenario), cols=vars(readout_type)) +
theme_few() +
theme(axis.text.x = element_text(angle=90)) +
xlab('models') +
ylab('accuracy') +
scale_fill_brewer(palette="Spectral") + scale_color_brewer(palette="Spectral")
ggsave('../results/plots/human_model_accuracy_across_scenarios_by_readoutType.pdf', width=36, height = 36, units='cm')
## model accuracy by scenario & save out
Axs = A %>%
select(Model, model_correct, dynamics_training_dataset_type, readout_type, scenario) %>%
bind_rows(AH %>% dplyr::rename(Model=agent, model_correct = obs_mean) %>% select(Model, model_correct, scenario)) %>%
group_by(scenario, Model) %>%
tidyboot_mean(model_correct)
write_csv(Axs, '../results/csv/summary/model_human_accuracy_by_scenario.csv')
###Output
Warning message:
“`cols` is now required when using unnest().
Please use `cols = c(strap)`”
###Markdown
Visualize average-human and model agreement (RMSE)
###Code
## TODO
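## One possible sketch (not run here): a coarse per-scenario proxy for model/average-human
## agreement using the summary tables loaded above -- RMSE between each model's accuracy and
## the average-human accuracy per scenario. Trial-level responses would be needed for the
## full RMSE analysis; the joined column name `human_obs_mean` is introduced only for clarity.
rmse_df <- A %>%
  filter(readout_type %in% c('A','B','C')) %>%
  left_join(AH %>% select(scenario, human_obs_mean = obs_mean), by = 'scenario') %>%
  group_by(Model) %>%
  summarise(rmse = sqrt(mean((model_correct - human_obs_mean)^2, na.rm = TRUE)))
rmse_df %>%
  ggplot(aes(x = reorder(Model, rmse), y = rmse, color = Model, fill = Model)) +
  geom_bar(stat = 'identity') +
  theme_few() +
  xlab('model') +
  ylab('RMSE to average-human accuracy') +
  theme(axis.text.x = element_text(angle = 90), legend.position = 'none')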
###Output
_____no_output_____
###Markdown
Visualize human-human and model-human agreement (Cohen's kappa)
###Code
## TODO
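## One possible sketch (not run here) for the model-human part, using the K table loaded above.
## The Cohens_k_lb/med/ub columns appear in the read_csv spec; the grouping columns
## (`Model`, `scenario`) are assumed to be present in K, analogous to A.
## For reference, Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
## and p_e is the agreement expected by chance.
K %>%
  ggplot(aes(x = reorder(Model, -Cohens_k_med), y = Cohens_k_med, color = Model)) +
  geom_pointrange(aes(ymin = Cohens_k_lb, ymax = Cohens_k_ub), position = position_dodge(0.3)) +
  facet_wrap(~scenario) +
  theme_few() +
  xlab('model') +
  ylab("Cohen's kappa (model-human)") +
  theme(axis.text.x = element_text(angle = 90), legend.position = 'none')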
###Output
_____no_output_____
###Markdown
Comparing performance between models* Question 1: Visual encoder architecture (ConvNet [SVG/VGGFrozenLSTM] vs. transformer [DEITFrozenLSTM] … DEITFrozenMLP vs. SVG/VGGFrozenMLP)* Question 2: Dynamics model RNN vs. MLP (LSTM vs. MLP for above)* Question 3: Among unsupervised models, object-centric vs. non-object-centric * {CSWM, OP3} vs. {SVG}* Question 4: Latent vs. pixel reconstruction loss * CSWM vs. OP3* Question 5: RPIN vs. CSWM/OP3 (“supervised explicit object-centric” vs. “unsupervised implicit object-centric”)* Question 6: DPI vs GNS and GNS-RANSAC (DPI is object-centric, whereas GNS is not? and GNS-RANSAC is somewhere in between.)* Question 7: GNS vs. GNS-RANSAC* Question 8: Particle models (GNS, GNS-RANSAC, DPI) vs. humans - Estimate difference between particle model accuracy and human accuracy ``` model_correct ~ particleOrHuman + (1 | scenario) + (1 | training_data) + (1 | readout_type) ``` - Estimate similarity between model responses and human responses * Question 9: Particle models (GNS, GNS-RANSAC, DPI) vs. remaining vision models* Question 10: Among TDW-trained vision models, are supervised (RPIN) better than unsupervised (SVG, CSWM, OP3)* Question 11: Possibly useful to look at how well visual encoders do alone (e.g. Readout type A vs. humans)* Question 12: For pretrained encoder vision models (frozen), is readout B any better than readout C? If not, then none of the vision models are actually getting anything out of learning dynamics* Question 13: For end2end vision models (CSWM, OP3, SVG, RPIN), is readout B any better than readout C? If not, then none of the vision models are actually getting anything out of learning dynamics* Question 14: Impact of dynamics training data variability on accuracy* Question 15: is the supervised object-centric vision model (TDW training only) better than the best unsupervised? RPIN vs CSWM* Question 16: if possible, same comparison above but on Cohen's kappa? (nice-to-have)* Question 17: Is ImageNet pretraining better than object-centric TDW pretraining, assuming a CNN encoder? (VGGfrozen-MLP,LSTM ) vs (CSWM,RPIN)* Question 18: If the three particle models DPI, GNS, GNS-R aren't distinguishable, I would like to better understand why. Is this because GNS and GNS-R are being pulled way down by a single outlier (Dominoes)? I.e. does the result change if you exclude dominoes?* Question 19: Same as (Q18) but with Cohen's kappa (nice-to-have)* Question 20: Is readout type A (fully observed the movie) significantly better than readout type B or C (initial observations with/without simulation); best to do this separately for the "frozen" models (VGGfrozen, DEITfrozen, etc.) and for the TDW-only trained models (SVG, OP3, CSWM, RPIN)**Dimensions:** * “Visual encoder architecture” : [“ConvNet” “Transformer” “Neither”]* “Dynamics model architecture” : [“LSTM”, “MLP”, “Neither”]* “ObjectCentric”: [TRUE, FALSE, NA]* “Supervised”: [TRUE, FALSE]* “SelfSupervisedLoss”: [“latent”, “pixel”, “NA”] Q1: Visual encoder architecture (ConvNet vs. transformer)
###Code
## Comparison 1: Visual encoder architecture (ConvNet [SVG/VGGFrozenLSTM] vs. transformer [DEITFrozenLSTM] … DEITFrozenMLP vs. SVG/VGGFrozenMLP)
Q1 <- A %>%
filter(visual_encoder_architecture %in% c('ConvNet','Transformer')) %>%
filter(readout_type %in% c('A','B','C'))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q1)
M1 <- lmer(model_correct ~ visual_encoder_architecture + (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q1)
summary(M1)
###Output
_____no_output_____
###Markdown
Transformer architectures outperform convnet architectures.
###Code
## model comparison relative to null model
anova(M0,M1)
###Output
refitting model(s) with ML (instead of REML)
###Markdown
The model containing `visual_encoder_architecture` as a predictor outperforms the null model without it.
###Code
## explained variance
r.squaredGLMM(M1)
###Output
Warning message:
“'r.squaredGLMM' now calculates a revised statistic. See the help page.”
###Markdown
The marginal R squared (variance explained by the fixed effects alone) is 0.14 and the conditional R squared (fixed plus random effects) is 0.72. Transformer architecture outperforms ConvNet architecture: $b=6.670e-02, t(925)=21.2, p<0.001$ Q2: Dynamics model RNN vs. MLP (LSTM vs. MLP for above)
###Code
## Comparison 2
Q2 <- A %>%
filter(dynamics_model_architecture %in% c('LSTM','MLP')) %>%
mutate(dynamics_model_architecture = factor(dynamics_model_architecture)) %>%
filter(readout_type %in% c('A','B','C'))
Q2 %>%
group_by(dynamics_model_architecture, Model) %>%
summarise(mean(model_correct))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q2)
M2 <- lmer(model_correct ~ dynamics_model_architecture + (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q2)
summary(M2)
###Output
_____no_output_____
###Markdown
Recurrence (LSTM) does not outperform MLP:```Fixed effects: Estimate Std. Error df t value(Intercept) 6.310e-01 4.157e-02 2.351e+00 15.181dynamics_model_architectureMLP -8.559e-04 4.563e-03 2.750e+02 -0.188 Pr(>|t|) (Intercept) 0.00212 **dynamics_model_architectureMLP 0.85134 ```
###Code
anova(M0,M2)
## explained variance
r.squaredGLMM(M2)
###Output
_____no_output_____
###Markdown
Q3: Among unsupervised models, object-centric vs. non-object-centric{CSWM, OP3} vs. {SVG}
###Code
## Comparison 3
Q3 <- A %>%
filter(Supervised==FALSE) %>%
filter(ObjectCentric %in% c(TRUE,FALSE)) %>%
filter(readout_type %in% c('A','B','C'))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q3)
M3 <- lmer(model_correct ~ ObjectCentric + (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q3)
summary(M3)
###Output
_____no_output_____
###Markdown
ObjectCentric representations better than non-object centric: $b=-0.0145 ,t(635)=-3.154, p=0.0017$
###Code
anova(M0,M3)
## explained variance
r.squaredGLMM(M3)
###Output
_____no_output_____
###Markdown
Q4: Latent vs. pixel reconstruction lossCSWM vs. OP3
###Code
## Comparison 4
Q4 <- A %>%
filter(Supervised==FALSE) %>%
filter(ObjectCentric %in% c(TRUE,FALSE)) %>%
filter(Model %in% c('CSWM','OP3')) %>%
filter(readout_type %in% c('A','B','C'))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q4)
M4 <- lmer(model_correct ~ Model + (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q4)
summary(M4)
###Output
_____no_output_____
###Markdown
Latent (CSWM) better than pixel reconstruction (OP3) loss: $b= -0.059 ,t(275)=-11.04, p<2e-16$
###Code
anova(M0,M4)
## explained variance
r.squaredGLMM(M4)
###Output
_____no_output_____
###Markdown
Q5: RPIN vs. CSWM/OP3 (“supervised explicit object-centric” vs. “unsupervised implicit object-centric”)
###Code
## Comparison 5
Q5 <- A %>%
filter(Supervised %in% c(TRUE,FALSE)) %>%
filter(Model %in% c('CSWM','OP3', 'RPIN')) %>%
filter(readout_type %in% c('A','B','C'))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q5)
M5 <- lmer(model_correct ~ Supervised + (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q5)
summary(M5)
###Output
_____no_output_____
###Markdown
Supervised better than unsupervised: $b= 6.459e-02 ,t(707)=13.99, p<2e-16$
###Code
anova(M0,M5)
## explained variance
r.squaredGLMM(M5)
###Output
_____no_output_____
###Markdown
Question 6: DPI vs GNS and GNS-RANSAC (DPI is object-centric, whereas GNS is not? and GNS-RANSAC is somewhere in between.)
###Code
Q6 <- A %>%
filter(Model %in% c('DPI','GNS', 'GNS-ransac')) %>%
filter(readout_type %in% c('A','B','C')) %>%
mutate(isDPI = if_else(Model=='DPI', TRUE, FALSE))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q6)
M6 <- lmer(model_correct ~ isDPI + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q6)
summary(M6)
###Output
_____no_output_____
###Markdown
DPI not better than {GNS, GNS-ransac}: $b= 3.982e-04 ,t(61)=0.018, p=0.985$
###Code
anova(M0,M6)
## explained variance
r.squaredGLMM(M6)
###Output
_____no_output_____
###Markdown
Question 7: GNS vs. GNS-RANSAC
###Code
Q7 <- A %>%
filter(Model %in% c('GNS', 'GNS-ransac')) %>%
filter(readout_type %in% c('A','B','C')) %>%
mutate(isRansac = if_else(Model=='GNS-ransac', TRUE, FALSE))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q7)
M7 <- lmer(model_correct ~ isRansac + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q7)
summary(M7)
###Output
_____no_output_____
###Markdown
GNS not different from GNS-ransac: $b=0.007, t(37)=0.296, p=0.769$
###Code
anova(M0,M7)
## explained variance
r.squaredGLMM(M7)
Q7$readout_type
###Output
_____no_output_____
###Markdown
Question 8: Particle models (GNS, GNS-RANSAC, DPI) vs. humansEstimate difference between particle model accuracy and human accuracy```model_correct ~ particleOrHuman + (1 | scenario) + (1 | training_data) + (1 | readout_type)```Estimate similarity between model responses and human responsesQuestion 9: Particle models (GNS, GNS-RANSAC, DPI) vs. remaining vision models
###Code
Q8 <- A %>%
filter(Model %in% c('DPI','GNS', 'GNS-ransac')) %>%
filter(readout_type %in% c('A','B','C')) %>%
select(Model, model_correct, dynamics_training_dataset_type, readout_type, scenario) %>%
bind_rows(AH %>% dplyr::rename(Model=agent, model_correct = obs_mean) %>% select(Model, model_correct, scenario)) %>%
mutate(isHuman = if_else(Model=='human', TRUE, FALSE))
M0 <- lmer(model_correct ~ (1 | scenario) , data=Q8)
M8 <- lmer(model_correct ~ isHuman + (1 | scenario), data=Q8)
summary(M8)
###Output
_____no_output_____
###Markdown
Humans not that much better than particle models: $b=0.040, t(71)=1.228, p=0.223$
###Code
anova(M0,M8)
## explained variance
r.squaredGLMM(M8)
###Output
_____no_output_____
###Markdown
Question 9: Particle models (GNS, GNS-RANSAC, DPI) vs. remaining vision models
###Code
Q9 <- A %>%
mutate(isParticle = if_else(Model %in% c('GNS', 'GNS-ransac', 'DPI'), TRUE, FALSE)) %>%
filter(readout_type %in% c('A','B','C'))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type) , data=Q9)
M9 <- lmer(model_correct ~ isParticle + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q9)
summary(M9)
###Output
_____no_output_____
###Markdown
Particle models better than non-particle models:```Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 5.977e-01 1.187e-02 5.575e+00 50.34 1.26e-08 ***isParticleTRUE 1.094e-01 9.251e-03 1.717e+03 11.83 < 2e-16 ***``` Question 10: Among TDW-trained vision models, are supervised (RPIN) better than unsupervised (SVG, CSWM, OP3)
###Code
Q10 <- A %>%
filter(Model %in% c('SVG', 'CSWM','OP3','RPIN')) %>%
mutate(isSupervised = if_else(Model %in% c('RPIN'), TRUE, FALSE)) %>%
filter(readout_type %in% c('A','B','C'))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type) , data=Q10)
M10 <- lmer(model_correct ~ isSupervised + (1 | scenario) + (1 | readout_type) + (1 | dynamics_training_dataset_type), data=Q10)
summary(M10)
###Output
_____no_output_____
###Markdown
Supervised better than unsupervised among TDW-trained models: ```Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 5.575e-01 1.344e-02 4.396e+00 41.48 7.17e-07 ***isSupervisedTRUE 5.656e-02 4.087e-03 1.069e+03 13.84 < 2e-16 ***``` Question 11: Possibly useful to look at how well visual encoders do alone (e.g. Readout type A vs. humans)
###Code
## TODO
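## One possible sketch (not run here): restrict models to readout type A (the fully observed
## movie, so performance reflects the visual encoder alone) and compare against human accuracy,
## mirroring the Q8 human-vs-model setup above.
Q11 <- A %>%
  filter(readout_type %in% c('A')) %>%
  select(Model, model_correct, scenario) %>%
  bind_rows(AH %>% dplyr::rename(Model = agent, model_correct = obs_mean) %>% select(Model, model_correct, scenario)) %>%
  mutate(isHuman = if_else(Model == 'human', TRUE, FALSE))
M0 <- lmer(model_correct ~ (1 | scenario), data = Q11)
M11 <- lmer(model_correct ~ isHuman + (1 | scenario), data = Q11)
summary(M11)
anova(M0, M11)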
###Output
_____no_output_____
###Markdown
Question 12: For pretrained encoder vision models (frozen), is readout B any better than readout C? If not, then none of the vision models are actually getting anything out of learning dynamics
###Code
Q12 <- A %>%
filter(visual_encoder_architecture %in% c('ConvNet','Transformer')) %>%
filter(readout_type %in% c('B','C'))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type) , data=Q12)
M12 <- lmer(model_correct ~ readout_type + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q12)
summary(M12)
###Output
_____no_output_____
###Markdown
Readout type B not better than C: ```Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 5.687e-01 1.133e-02 8.972e+00 50.215 2.64e-12 ***readout_typeC 4.115e-03 3.524e-03 6.130e+02 1.168 0.243 ``` Question 13: For end2end vision models (CSWM, OP3, SVG, RPIN), is readout B any better than readout C? If not, then none of the vision models are actually getting anything out of learning dynamics
###Code
Q13 <- A %>%
filter(Model %in% c('SVG', 'CSWM','OP3','RPIN')) %>%
filter(readout_type %in% c('B','C'))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type) , data=Q13)
M13 <- lmer(model_correct ~ readout_type + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q13)
summary(M13)
###Output
_____no_output_____
###Markdown
Readout B not better than readout C: ```Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 5.583e-01 1.060e-02 5.512e+00 52.663 1.16e-08 ***readout_typeC 2.664e-03 4.198e-03 7.090e+02 0.635 0.526 ``` Q14: Impact of dynamics training data variability on accuracy
###Code
Q14 <- A %>%
filter(readout_type %in% c('A','B','C')) %>%
dplyr::rename('dynamicsTrainVar'='dynamics_training_dataset_type')
M0 <- lmer(model_correct ~ (1 | scenario) , data=Q14)
M14 <- lmer(model_correct ~ dynamicsTrainVar + (1 | scenario), data=Q14)
summary(M14)
###Output
_____no_output_____
###Markdown
Training on only one (the same) scenario yields higher prediction accuracy than training on all:``` Estimate Std. Error df t value Pr(>|t|)dynamicsTrainVarsame 2.053e-02 4.708e-03 1.718e+03 4.361 1.37e-05``` Training on all-but-this scenario yields somewhat lower prediction accuracy than training on all:``` Estimate Std. Error df t value Pr(>|t|)dynamicsTrainVarall_but_this -8.965e-03 4.708e-03 1.718e+03 -1.904 0.057```
###Code
anova(M0,M14)
###Output
refitting model(s) with ML (instead of REML)
###Markdown
Question 15: is the supervised object-centric vision model (TDW training only) better than the best unsupervised? RPIN vs CSWM
###Code
Q15 <- A %>%
filter(readout_type %in% c('A','B','C')) %>%
filter(Model %in% c('RPIN','CSWM')) %>%
mutate(isRPIN = if_else(Model=='RPIN', TRUE, FALSE))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q15)
M15 <- lmer(model_correct ~ isRPIN + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q15)
summary(M15)
###Output
_____no_output_____
###Markdown
RPIN better than CSWM ``` Estimate Std. Error df t value Pr(>|t|) isRPINTRUE 0.03506 0.00654 565.00000 5.36 1.21e-07 ***```
###Code
anova(M0,M15)
###Output
refitting model(s) with ML (instead of REML)
###Markdown
basic summary stats
###Code
Q15 %>% group_by(Model, dynamics_training_dataset_type) %>% summarise(mean(model_correct))
Q15 %>%
select(Model, scenario, dynamics_training_dataset_type, readout_type, model_correct) %>%
group_by(Model, dynamics_training_dataset_type) %>%
tidyboot_mean(model_correct)
# summarise(mean(model_correct))
###Output
Warning message:
“`as_data_frame()` is deprecated as of tibble 2.0.0.
Please use `as_tibble()` instead.
The signature and semantics have changed, see `?as_tibble`.
[90mThis warning is displayed once every 8 hours.[39m
[90mCall `lifecycle::last_warnings()` to see where this warning was generated.[39m”
Warning message:
“`cols` is now required when using unnest().
Please use `cols = c(strap)`”
###Markdown
Question 16: if possible, same comparison as Q15 but on Cohen's kappa? (nice-to-have)
###Code
## TODO
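## One possible sketch (not run here), assuming K carries `Model` and `scenario` columns
## analogous to A: compare median model-human Cohen's kappa for RPIN vs. CSWM.
Q16 <- K %>%
  filter(Model %in% c('RPIN', 'CSWM')) %>%
  mutate(isRPIN = if_else(Model == 'RPIN', TRUE, FALSE))
M16 <- lmer(Cohens_k_med ~ isRPIN + (1 | scenario), data = Q16)
summary(M16)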
###Output
_____no_output_____
###Markdown
Question 17: Is ImageNet pretraining better than object-centric TDW pretraining, assuming a CNN encoder? (VGGfrozen-MLP,LSTM ) vs (CSWM,RPIN)
###Code
Q17 <- A %>%
filter(readout_type %in% c('A','B','C')) %>%
filter(Model %in% c('RPIN','CSWM', 'VGGFrozenLSTM','VGGFrozenMLP' )) %>%
mutate(isImagenetPretrained = if_else(Model %in% c('VGGFrozenLSTM','VGGFrozenMLP'), TRUE, FALSE))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q17)
M17 <- lmer(model_correct ~ isImagenetPretrained + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q17)
summary(M17)
###Output
_____no_output_____
###Markdown
VGGFrozenMLP / VGGFrozenLSTM better than CSWM/RPIN```Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 6.053e-01 1.406e-02 5.070e+00 43.033 1.07e-07 ***isImagenetPretrainedTRUE 1.463e-02 5.036e-03 8.530e+02 2.905 0.00377 ** ``` Question 18: If the three particle models DPI, GNS, GNS-R aren't distinguishable, I would like to better understand why. Is this because GNS and GNS-R are being pulled way down by a single outlier (Dominoes)? I.e. does the result change if you exclude dominoes?
###Code
Q18 <- A %>%
filter(Model %in% c('GNS', 'GNS-ransac', 'DPI')) %>%
filter(readout_type %in% c('A','B','C')) %>%
filter(!scenario %in% c('dominoes')) ## excluding dominoes
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type) , data=Q18)
M18 <- lmer(model_correct ~ Model + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q18)
summary(M18)
## including dominoes
Q18b <- A %>%
filter(Model %in% c('GNS', 'GNS-ransac', 'DPI')) %>%
filter(readout_type %in% c('A','B','C')) #%>%
# filter(!scenario %in% c('dominoes')) ## excluding dominoes
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type) , data=Q18b)
M18b <- lmer(model_correct ~ Model + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q18b)
summary(M18b)
###Output
_____no_output_____
###Markdown
Lack of ability to distinguish between particle models (GNS, GNS-ransac, DPI) does not depend on dominoes outlier (we are comparing between models within scenario)With dominoes```Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 0.707349 0.035477 9.054228 19.938 8.62e-09 ***ModelGNS -0.003923 0.025232 59.999972 -0.155 0.877 ModelGNS-ransac 0.003126 0.025232 59.999972 0.124 0.902 ```Without dominoes```Fixed effects: Estimate Std. Error df t value Pr(>|t|) (Intercept) 0.70628 0.03097 8.46138 22.809 6.77e-09 ***ModelGNS 0.01133 0.01999 52.00024 0.567 0.573 ModelGNS-ransac 0.02128 0.01999 52.00024 1.064 0.292 ```
###Code
Q18b %>%
select(Model, scenario, dynamics_training_dataset_type, readout_type, model_correct) %>%
group_by(Model, dynamics_training_dataset_type) %>%
tidyboot_mean(model_correct)
# summarise(mean(model_correct))
## including dominoes, only comparing within "all" training regime
Q18c <- A %>%
filter(Model %in% c('GNS', 'GNS-ransac', 'DPI')) %>%
filter(dynamics_training_dataset_type %in% c('all')) ## only "all" training regimens
M0 <- lmer(model_correct ~ (1 | scenario) , data=Q18c)
M18c <- lmer(model_correct ~ Model + (1 | scenario) , data=Q18c)
summary(M18c)
Q18c %>%
select(Model, scenario, dynamics_training_dataset_type, readout_type, model_correct) %>%
group_by(Model) %>%
tidyboot_mean(model_correct)
###Output
Warning message:
“`cols` is now required when using unnest().
Please use `cols = c(strap)`”
###Markdown
Question 19: Same as (Q18) but with Cohen's kappa (nice-to-have)
###Code
## TODO
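## One possible sketch (not run here), with the same caveat about K's `Model`/`scenario`
## columns: compare Cohen's kappa across the three particle models, with and without dominoes.
Q19 <- K %>% filter(Model %in% c('DPI', 'GNS', 'GNS-ransac'))
summary(lmer(Cohens_k_med ~ Model + (1 | scenario), data = Q19))
summary(lmer(Cohens_k_med ~ Model + (1 | scenario), data = Q19 %>% filter(scenario != 'dominoes')))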
###Output
_____no_output_____
###Markdown
Question 20: Is readout type A (fully observed the movie) significantly better than readout type B or C (initial observations with/without simulation); best to do this separately for the "frozen" models (VGGfrozen, DEITfrozen, etc.) and for the TDW-only trained models (SVG, OP3, CSWM, RPIN)
###Code
## frozen pre-trained
Q20a <- A %>%
filter(readout_type %in% c('A','B','C')) %>%
filter(Model %in% c('VGGFrozenLSTM','VGGFrozenMLP', 'DEITFrozenLSTM','DEITFrozenMLP')) %>%
mutate(fullyObserved = if_else(readout_type=='A', TRUE, FALSE))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type) , data=Q20a)
M20a <- lmer(model_correct ~ fullyObserved + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q20a)
summary(M20a)
## TDW-only pre-trained
Q20b <- A %>%
filter(readout_type %in% c('A','B','C')) %>%
filter(Model %in% c('SVG','OP3', 'CSWM','RPIN')) %>%
mutate(fullyObserved = if_else(readout_type=='A', TRUE, FALSE))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type) , data=Q20b)
M20b <- lmer(model_correct ~ fullyObserved + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q20b)
summary(M20b)
## all models pooled (no filter on encoder pre-training)
Q20c <- A %>%
filter(readout_type %in% c('A','B','C')) %>%
mutate(fullyObserved = if_else(readout_type=='A', TRUE, FALSE))
M0 <- lmer(model_correct ~ (1 | scenario) + (1 | dynamics_training_dataset_type) , data=Q20c)
M20c <- lmer(model_correct ~ fullyObserved + (1 | scenario) + (1 | dynamics_training_dataset_type), data=Q20c)
summary(M20c)
###Output
_____no_output_____ |
nbs/2.0_repr.codebert.ipynb | ###Markdown
CodeBERT Adaptation> This module adapts CodeBERT from Microsoft >> By @danaderp 11-12-2020>
###Code
#export
# Imports
import torch
from pathlib import Path  # needed for the model path defined below
from transformers import RobertaTokenizer, RobertaConfig, RobertaModel
#hide
from nbdev.showdoc import *
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base")
model = RobertaModel.from_pretrained("microsoft/codebert-base")
model.to(device)
path = Path('/tf/data/models/JavaBert-v1')
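# Minimal usage sketch (not part of the original module): encode a code snippet with the
# loaded CodeBERT tokenizer/model and inspect its contextual embeddings. The snippet string
# below is a made-up example; outputs[0] is the last hidden state returned by RobertaModel.
code_snippet = "def add(a, b): return a + b"
inputs = tokenizer(code_snippet, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}
with torch.no_grad():
    outputs = model(**inputs)
embeddings = outputs[0]  # shape: (batch, sequence_length, hidden_size)
print(embeddings.shape)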
#Building Library
! nbdev_build_docs
###Output
_____no_output_____ |
.ipynb_checkpoints/Pie Charts-checkpoint.ipynb | ###Markdown
Prepared by Asif Bhat Data Visualization With Matplotlib
###Code
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Pie Charts
###Code
plt.figure(figsize=(9,9))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.pie (tickets , labels= labels , colors= colors , startangle=45)
plt.show()
###Output
_____no_output_____
###Markdown
Display percentage and actual value in Pie Chart
###Code
# Display percentage in Pie Chart using autopct='%1.1f%%'
plt.figure(figsize=(8,8))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#7CB342','#C0CA33','#FFB300','#F57C00']
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , shadow=True, autopct='%1.1f%%', explode=[0,0 , 0 , 0])
plt.show()
plt.figure(figsize=(8,8))
tickets = [48 , 30 , 20 , 15]
total = np.sum(tickets)
labels = ['Low' , 'Medium' , 'High' , 'Critical']
def val_per(x):
return '{:.2f}%\n({:.0f})'.format(x, total*x/100)
colors = ['#7CB342','#C0CA33','#FFB300','#F57C00']
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , shadow=True, autopct=val_per, explode=[0,0 , 0 , 0])
plt.show()
###Output
_____no_output_____
###Markdown
Explode Slice in Pie Chart
###Code
#Explode 4th Slice
plt.figure(figsize=(8,8))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#7CB342','#C0CA33','#FFB300','#F57C00']
# explode = [0,0,0,0.1] will explode the fourth slice
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , autopct='%1.1f%%' , shadow=True, explode=[0,0 , 0 , 0.1])
plt.show()
#Explode 3rd & 4th Slice
plt.figure(figsize=(8,8))
tickets = [48 , 30 , 20 , 15]
label = ['Low' , 'Medium' , 'High' , 'Critical']
color = ['#7CB342','#C0CA33','#FFB300','#F57C00']
# explode = [0,0,0.1,0.1] will explode the 3rd & 4th slice
plt.pie (tickets , labels= label , colors= color , startangle=45 ,autopct='%1.1f%%', shadow=True, explode=[0,0 , 0.1 , 0.1])
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Display multiple pie plots in one figure
###Code
fig = plt.figure(figsize=(20,6))
tickets = [48 , 30 , 20 , 15]
priority = ['Low' , 'Medium' , 'High' , 'Critical']
status = ['Resolved' , 'Cancelled' , 'Pending' , 'Assigned']
company = ['IBM' , 'Microsoft', 'BMC' , 'Apple']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.subplot(1,3,1)
plt.pie (tickets , labels= priority , colors= colors , startangle=45)
plt.subplot(1,3,2)
plt.pie (tickets , labels= status , colors= colors , startangle=45)
plt.subplot(1,3,3)
plt.pie (tickets , labels= company , colors= colors , startangle=45)
plt.show()
fig = plt.figure(figsize=(20,13))
tickets = [48 , 30 , 20 , 15]
priority = ['Low' , 'Medium' , 'High' , 'Critical']
status = ['Resolved' , 'Cancelled' , 'Pending' , 'Assigned']
company = ['IBM' , 'Microsoft', 'BMC' , 'Apple']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.subplot(2,3,1)
plt.pie (tickets , labels= priority , colors= colors , startangle=45 , autopct='%1.1f%%')
plt.subplot(2,3,2)
plt.pie (tickets , labels= status , colors= colors , startangle=45 , autopct='%1.1f%%')
plt.subplot(2,3,3)
plt.pie (tickets , labels= company , colors= colors , startangle=45 , autopct='%1.1f%%')
plt.subplot(2,3,4)
plt.pie (tickets , labels= priority , colors= colors , startangle=45, autopct='%1.1f%%')
plt.subplot(2,3,5)
plt.pie (tickets , labels= status , colors= colors , startangle=45 ,autopct='%1.1f%%')
plt.subplot(2,3,6)
plt.pie (tickets , labels= company , colors= colors , startangle=45, autopct='%1.1f%%')
plt.show()
###Output
_____no_output_____
###Markdown
Donut plot
###Code
plt.figure(figsize=(9,9))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.pie (tickets , labels= labels , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
# Changing background color
fig = plt.figure(figsize=(9,9))
fig.patch.set_facecolor('#DADADA') # Changing background color of donut chart
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.pie (tickets , labels= labels , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='#DADADA') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
###Output
_____no_output_____
###Markdown
Explode Slice in Donut Chart
###Code
plt.figure(figsize=(9,9))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , explode=[0,0 , 0.0 , 0.1])
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
plt.figure(figsize=(9,9))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , explode=[0,0 , 0.1 , 0.1])
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
plt.figure(figsize=(9,9))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , explode=[0.03,0.03 , 0.03 , 0.03])
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
###Output
_____no_output_____
###Markdown
Displaying percentage and actual values in Donut Chart
###Code
plt.figure(figsize=(9,9))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , autopct='%1.1f%%', explode=[0,0 , 0 , 0])
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
plt.figure(figsize=(9,9))
tickets = [48 , 30 , 20 , 15]
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , autopct='%1.1f%%', pctdistance=0.85 ,explode=[0,0 , 0 , 0])
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
plt.figure(figsize=(9,9))
tickets = [48 , 30 , 20 , 15]
total = np.sum(tickets)
def val_per(x):
return '{:.2f}%\n({:.0f})'.format(x, total*x/100)
labels = ['Low' , 'Medium' , 'High' , 'Critical']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.pie (tickets , labels= labels , colors= colors , startangle=45 , autopct=val_per, pctdistance=0.85 ,explode=[0,0 , 0 , 0])
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
###Output
_____no_output_____
###Markdown
Display multiple Donut plots in one figure
###Code
fig = plt.figure(figsize=(20,6))
tickets = [48 , 30 , 20 , 15]
priority = ['Low' , 'Medium' , 'High' , 'Critical']
status = ['Resolved' , 'Cancelled' , 'Pending' , 'Assigned']
company = ['IBM' , 'Microsoft', 'BMC' , 'Apple']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.subplot(1,3,1)
plt.pie (tickets , labels= priority , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.subplot(1,3,2)
plt.pie (tickets , labels= status , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.subplot(1,3,3)
plt.pie (tickets , labels= company , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
fig = plt.figure(figsize=(20,13))
tickets = [48 , 30 , 20 , 15]
priority = ['Low' , 'Medium' , 'High' , 'Critical']
status = ['Resolved' , 'Cancelled' , 'Pending' , 'Assigned']
company = ['IBM' , 'Microsoft', 'BMC' , 'Apple']
colors = ['#8BC34A','#D4E157','#FFB300','#FF7043']
plt.subplot(2,3,1)
plt.pie (tickets , labels= priority , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.subplot(2,3,2)
plt.pie (tickets , labels= status , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.subplot(2,3,3)
plt.pie (tickets , labels= company , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.subplot(2,3,4)
plt.pie (tickets , labels= priority , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.subplot(2,3,5)
plt.pie (tickets , labels= status , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.subplot(2,3,6)
plt.pie (tickets , labels= company , colors= colors , startangle=45)
my_circle=plt.Circle( (0,0), 0.7, color='white') # Adding circle at the centre
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.show()
###Output
_____no_output_____ |
03 Advanced/26-saving-objects-with-pickle.ipynb | ###Markdown
Examples Example 1: Saving A Dict Let's save an employee's information into a file using pickle. Run the code below, then look in the example folder for the output.
###Code
import pandas as pd
import shutil
import glob
import os
import pickle
if 'script_dir' not in globals():
script_dir = os.getcwd()
data_directory = 'data\\'
example_directory = 'PickleExample\\'
target_file_name = 'employee.txt'
target_path = os.path.join(script_dir, data_directory, example_directory, target_file_name)
employee = {"Name": "Sam", "Age":25, "Height": 177, "Country" : "Brazil"}
filename = open(target_path,'wb')
pickle.dump(employee, filename)
filename.close()
#Here is how you will unpickle a saved file.
inputFile = open(target_path,'rb')
loaded_object = pickle.load(inputFile)
inputFile.close()
print(loaded_object)
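# Equivalent sketch using `with` context managers, which close the file handles
# automatically even if an error occurs during the dump/load.
with open(target_path, 'wb') as f:
    pickle.dump(employee, f)
with open(target_path, 'rb') as f:
    print(pickle.load(f))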
###Output
_____no_output_____
###Markdown
Example 2: Saving A List Let's save a list into a file using pickle.
###Code
grades = [34,53,23,56,67,74,3,33,2,6,7,8,83,34,2,34,64,65]
target_file_name = 'grades.txt'
target_path = os.path.join(script_dir, data_directory, example_directory, target_file_name)
filename = open(target_path,'wb')
pickle.dump(grades,filename)
filename.close()
#Here is how you will unpickle a saved file.
inputFile = open(target_path,'rb')
loaded_object = pickle.load(inputFile)
inputFile.close()
print(loaded_object)
###Output
_____no_output_____ |
dmu1/dmu1_ml_EGS/1.6_CANDELS-EGS.ipynb | ###Markdown
EGS master catalogue Preparation of CANDELS-EGS data CANDELS-EGS catalogue: the catalogue comes from `dmu0_CANDELS-EGS`. In the catalogue, we keep:- The identifier (it's unique in the catalogue);- The position;- The stellarity;- The magnitude for each band in a 2 arcsec aperture (aperture 10).- The Kron magnitude to be used as the total magnitude (no “auto” magnitude is provided).We don't know when the maps were observed, so we use the year of the reference paper.
###Code
from herschelhelp_internal import git_version
print("This notebook was run with herschelhelp_internal version: \n{}".format(git_version()))
import datetime
print("This notebook was executed on: \n{}".format(datetime.datetime.now()))
%matplotlib inline
#%config InlineBackend.figure_format = 'svg'
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(10, 6))
from collections import OrderedDict
import os
from astropy import units as u
from astropy.coordinates import SkyCoord
from astropy.table import Column, Table
import numpy as np
from herschelhelp_internal.flagging import gaia_flag_column
from herschelhelp_internal.masterlist import nb_astcor_diag_plot, remove_duplicates
from herschelhelp_internal.utils import astrometric_correction, flux_to_mag
OUT_DIR = os.environ.get('TMP_DIR', "./data_tmp")
try:
os.makedirs(OUT_DIR)
except FileExistsError:
pass
RA_COL = "candels-egs_ra"
DEC_COL = "candels-egs_dec"
###Output
_____no_output_____
###Markdown
I - Column selection
###Code
imported_columns = OrderedDict({
'ID': "candels-egs_id",
'RA': "candels-egs_ra",
'DEC': "candels-egs_dec",
'CLASS_STAR': "candels-egs_stellarity",
#HST data
'FLUX_APER_10_F606W': "f_ap_acs_f606w",
'FLUXERR_APER_10_F606W': "ferr_ap_acs_f606w",
'FLUX_AUTO_F606W': "f_acs_f606w",
'FLUXERR_AUTO_F606W': "ferr_acs_f606w",
'FLUX_APER_10_F814W': "f_ap_acs_f814w",
'FLUXERR_APER_10_F814W': "ferr_ap_acs_f814w",
'FLUX_AUTO_F814W': "f_acs_f814w",
'FLUXERR_AUTO_F814W': "ferr_acs_f814w",
'FLUX_APER_10_F125W': "f_ap_wfc3_f125w",
'FLUXERR_APER_10_F125W': "ferr_ap_wfc3_f125w",
'FLUX_AUTO_F125W': "f_wfc3_f125w",
'FLUXERR_AUTO_F125W': "ferr_wfc3_f125w",
'FLUX_APER_10_F140W': "f_ap_wfc3_f140w",
'FLUXERR_APER_10_F140W': "ferr_ap_wfc3_f140w",
'FLUX_AUTO_F140W': "f_wfc3_f140w",
'FLUXERR_AUTO_F140W': "ferr_wfc3_f140w",
'FLUX_APER_10_F160W': "f_ap_wfc3_f160w",
'FLUXERR_APER_10_F160W': "ferr_ap_wfc3_f160w",
'FLUX_AUTO_F160W': "f_wfc3_f160w",
'FLUXERR_AUTO_F160W': "ferr_wfc3_f160w",
#CFHT Megacam
'CFHT_u_FLUX': "f_candels-megacam_u", # 9 CFHT_u_FLUX Flux density (in μJy) in the u*-band (CFHT/MegaCam) (3)
'CFHT_u_FLUXERR': "ferr_candels-megacam_u",# 10 CFHT_u_FLUXERR Flux uncertainty (in μJy) in the u*-band (CFHT/MegaCam) (3)
'CFHT_g_FLUX': "f_candels-megacam_g",# 11 CFHT_g_FLUX Flux density (in μJy) in the g'-band (CFHT/MegaCam) (3)
'CFHT_g_FLUXERR': "ferr_candels-megacam_g",# 12 CFHT_g_FLUXERR Flux uncertainty (in μJy) in the g'-band (CFHT/MegaCam) (3)
'CFHT_r_FLUX': "f_candels-megacam_r",# 13 CFHT_r_FLUX Flux density (in μJy) in the r'-band (CFHT/MegaCam) (3)
'CFHT_r_FLUXERR': "ferr_candels-megacam_r",# 14 CFHT_r_FLUXERR Flux uncertainty (in μJy) in the r'-band (CFHT/MegaCam) (3)
'CFHT_i_FLUX': "f_candels-megacam_i",# 15 CFHT_i_FLUX Flux density (in μJy) in the i'-band (CFHT/MegaCam) (3)
'CFHT_i_FLUXERR': "ferr_candels-megacam_i",# 16 CFHT_i_FLUXERR Flux uncertainty (in μJy) in the i'-band (CFHT/MegaCam) (3)
'CFHT_z_FLUX': "f_candels-megacam_z",# 17 CFHT_z_FLUX Flux density (in μJy) in the z'-band (CFHT/MegaCam) (3)
'CFHT_z_FLUXERR': "ferr_candels-megacam_z",# 18 CFHT_z_FLUXERR
#CFHT WIRCAM
'WIRCAM_J_FLUX': "f_candels-wircam_j",# 29 WIRCAM_J_FLUX Flux density (in μJy) in the J-band (CFHT/WIRCam) (3)
'WIRCAM_J_FLUXERR': "ferr_candels-wircam_j",# 30 WIRCAM_J_FLUXERR Flux uncertainty (in μJy) in the J-band (CFHT/WIRCam) (3)
'WIRCAM_H_FLUX': "f_candels-wircam_h",# 31 WIRCAM_H_FLUX Flux density (in μJy) in the H-band (CFHT/WIRCam) (3)
'WIRCAM_H_FLUXERR': "ferr_candels-wircam_h",# 32 WIRCAM_H_FLUXERR Flux uncertainty (in μJy) in the H-band (CFHT/WIRCam) (3)
'WIRCAM_K_FLUX': "f_candels-wircam_k",# 33 WIRCAM_K_FLUX Flux density (in μJy) in the Ks-band (CFHT/WIRCam) (3)
'WIRCAM_K_FLUXERR': "ferr_candels-wircam_k",# 34 WIRCAM_K_FLUXERR
#Mayall/Newfirm
'NEWFIRM_J1_FLUX': "f_candels-newfirm_j1",# 35 NEWFIRM_J1_FLUX Flux density (in μJy) in the J1-band (Mayall/NEWFIRM) (3)
'NEWFIRM_J1_FLUXERR': "ferr_candels-newfirm_j1",# 36 NEWFIRM_J1_FLUXERR Flux uncertainty (in μJy) in the J1-band (Mayall/NEWFIRM) (3)
'NEWFIRM_J2_FLUX': "f_candels-newfirm_j2",# 37 NEWFIRM_J2_FLUX Flux density (in μJy) in the J2-band (Mayall/NEWFIRM) (3)
'NEWFIRM_J2_FLUXERR': "ferr_candels-newfirm_j2",# 38 NEWFIRM_J2_FLUXERR Flux uncertainty (in μJy) in the J2-band (Mayall/NEWFIRM) (3)
'NEWFIRM_J3_FLUX': "f_candels-newfirm_j3",# 39 NEWFIRM_J3_FLUX Flux density (in μJy) in the J3-band (Mayall/NEWFIRM) (3)
'NEWFIRM_J3_FLUXERR': "ferr_candels-newfirm_j3",# 40 NEWFIRM_J3_FLUXERR Flux uncertainty (in μJy) in the J3-band (Mayall/NEWFIRM) (3)
'NEWFIRM_H1_FLUX': "f_candels-newfirm_h1",# 41 NEWFIRM_H1_FLUX Flux density (in μJy) in the H1-band (Mayall/NEWFIRM) (3)
'NEWFIRM_H1_FLUXERR': "ferr_candels-newfirm_h1",# 42 NEWFIRM_H1_FLUXERR Flux uncertainty (in μJy) in the H1-band (Mayall/NEWFIRM) (3)
'NEWFIRM_H2_FLUX': "f_candels-newfirm_h2",# 43 NEWFIRM_H2_FLUX Flux density (in μJy) in the H2-band (Mayall/NEWFIRM) (3)
'NEWFIRM_H2_FLUXERR': "ferr_candels-newfirm_h2",# 44 NEWFIRM_H2_FLUXERR Flux uncertainty (in μJy) in the H2-band (Mayall/NEWFIRM) (3)
'NEWFIRM_K_FLUX': "f_candels-newfirm_k",# 45 NEWFIRM_K_FLUX Flux density (in μJy) in the K-band (Mayall/NEWFIRM) (3)
'NEWFIRM_K_FLUXERR': "ferr_candels-newfirm_k",# 46 NEWFIRM_K_FLUXERR
#Spitzer/IRAC
'IRAC_CH1_FLUX': "f_candels-irac_i1",# 47 IRAC_CH1_FLUX Flux density (in μJy) in the 3.6μm-band (Spitzer/IRAC) (3)
'IRAC_CH1_FLUXERR': "ferr_candels-irac_i1",# 48 IRAC_CH1_FLUXERR Flux uncertainty (in μJy) in the 3.6μm-band (Spitzer/IRAC) (3)
'IRAC_CH2_FLUX': "f_candels-irac_i2",# 49 IRAC_CH2_FLUX Flux density (in μJy) in the 4.5μm-band (Spitzer/IRAC) (3)
'IRAC_CH2_FLUXERR': "ferr_candels-irac_i2",# 50 IRAC_CH2_FLUXERR Flux uncertainty (in μJy) in the 4.5μm-band (Spitzer/IRAC) (3)
'IRAC_CH3_FLUX': "f_candels-irac_i3",# 51 IRAC_CH3_FLUX Flux density (in μJy) in the 5.8μm-band (Spitzer/IRAC) (3)
'IRAC_CH3_FLUXERR': "ferr_candels-irac_i3",# 52 IRAC_CH3_FLUXERR Flux uncertainty (in μJy) in the 5.8μm-band (Spitzer/IRAC) (3)
'IRAC_CH4_FLUX': "f_candels-irac_i4",# 53 IRAC_CH4_FLUX Flux density (in μJy) in the 8.0μm-band (Spitzer/IRAC) (3)
'IRAC_CH4_FLUXERR': "ferr_candels-irac_i4"# 54 IRAC_CH4_FLUXERR
})
catalogue = Table.read("../../dmu0/dmu0_CANDELS-EGS/data/hlsp_candels_hst_wfc3_egs-tot-multiband_f160w_v1_cat.fits")[list(imported_columns)]
for column in imported_columns:
catalogue[column].name = imported_columns[column]
epoch = 2011
# Clean table metadata
catalogue.meta = None
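# Note (assumption, for readability): flux_to_mag is taken to implement the AB magnitude
# relation m_AB = -2.5 * log10(f / 3631 Jy); the catalogue fluxes are tabulated in µJy,
# hence the 1e-6 factor below converting them to Jy before the call.
# Sanity check: a 1 µJy source gives m_AB = -2.5*np.log10(1.e-6/3631.) ~ 23.9.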
# Adding flux and band-flag columns
for col in catalogue.colnames:
if col.startswith('f_'):
errcol = "ferr{}".format(col[1:])
# Some object have a magnitude to 0, we suppose this means missing value
#catalogue[col][catalogue[col] <= 0] = np.nan
#catalogue[errcol][catalogue[errcol] <= 0] = np.nan
mag, error = flux_to_mag(np.array(catalogue[col])*1.e-6, np.array(catalogue[errcol])*1.e-6)
# Fluxes are added in µJy
catalogue.add_column(Column(mag, name="m{}".format(col[1:])))
catalogue.add_column(Column(error, name="m{}".format(errcol[1:])))
# Add nan col for aperture fluxes
if ('wfc' not in col) & ('acs' not in col):
catalogue.add_column(Column(np.full(len(catalogue), np.nan), name="m_ap{}".format(col[1:])))
catalogue.add_column(Column(np.full(len(catalogue), np.nan), name="merr_ap{}".format(col[1:])))
catalogue.add_column(Column(np.full(len(catalogue), np.nan), name="f_ap{}".format(col[1:])))
catalogue.add_column(Column(np.full(len(catalogue), np.nan), name="ferr_ap{}".format(col[1:])))
# Band-flag column
if "ap" not in col:
catalogue.add_column(Column(np.zeros(len(catalogue), dtype=bool), name="flag{}".format(col[1:])))
# TODO: Set to True the flag columns for fluxes that should not be used for SED fitting.
catalogue[:10].show_in_notebook()
###Output
_____no_output_____
###Markdown
II - Removal of duplicated sources We remove duplicated objects from the input catalogues.
###Code
SORT_COLS = ['ferr_ap_acs_f606w', 'ferr_ap_acs_f814w', 'ferr_ap_wfc3_f125w', 'ferr_ap_wfc3_f140w', 'ferr_ap_wfc3_f160w']
FLAG_NAME = 'candels-egs_flag_cleaned'
nb_orig_sources = len(catalogue)
catalogue = remove_duplicates(catalogue, RA_COL, DEC_COL, sort_col=SORT_COLS,flag_name=FLAG_NAME)
nb_sources = len(catalogue)
print("The initial catalogue had {} sources.".format(nb_orig_sources))
print("The cleaned catalogue has {} sources ({} removed).".format(nb_sources, nb_orig_sources - nb_sources))
print("The cleaned catalogue has {} sources flagged as having been cleaned".format(np.sum(catalogue[FLAG_NAME])))
###Output
The initial catalogue had 41457 sources.
The cleaned catalogue has 41449 sources (8 removed).
The cleaned catalogue has 8 sources flagged as having been cleaned
###Markdown
III - Astrometry correction We match the astrometry to the Gaia one. We limit the Gaia catalogue to sources with a g band flux between the 30th and the 70th percentile. Some quick tests show that this gives the lowest dispersion in the results.
###Code
gaia = Table.read("../../dmu0/dmu0_GAIA/data/GAIA_EGS.fits")
gaia_coords = SkyCoord(gaia['ra'], gaia['dec'])
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec)
delta_ra, delta_dec = astrometric_correction(
SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]),
gaia_coords
)
print("RA correction: {}".format(delta_ra))
print("Dec correction: {}".format(delta_dec))
catalogue[RA_COL].unit = u.deg
catalogue[DEC_COL].unit = u.deg
catalogue[RA_COL] = catalogue[RA_COL] + delta_ra.to(u.deg)
catalogue[DEC_COL] = catalogue[DEC_COL] + delta_dec.to(u.deg)
nb_astcor_diag_plot(catalogue[RA_COL], catalogue[DEC_COL],
gaia_coords.ra, gaia_coords.dec)
###Output
_____no_output_____
###Markdown
IV - Flagging Gaia objects
###Code
catalogue.add_column(
gaia_flag_column(SkyCoord(catalogue[RA_COL], catalogue[DEC_COL]), epoch, gaia)
)
GAIA_FLAG_NAME = "candels-egs_flag_gaia"
catalogue['flag_gaia'].name = GAIA_FLAG_NAME
print("{} sources flagged.".format(np.sum(catalogue[GAIA_FLAG_NAME] > 0)))
###Output
160 sources flagged.
###Markdown
V - Saving to disk
###Code
catalogue.write("{}/CANDELS-EGS.fits".format(OUT_DIR), overwrite=True)
###Output
_____no_output_____ |
iLOCM.ipynb | ###Markdown
Learn the domain model in PDDL using iLOCM**interactive-LOCM**This code combines the LOCM1 and LOCM2 algorithms and is the last part of the pipeline that I use in my thesis to generate PDDL models from instructional texts.- Step 0: Preprocess: lemmatize, resolve coreferences, action override rename, and replace empty parameters.- Step 1: Find classes and make transition graphs.- Step 2: Get transition sets from the LOCM2 algorithm.- Step 3: Create FSMs.- Step 4: Perform Zero Analysis and add a new FSM if necessary.- Step 5: Create and test hypotheses for state parameters.- Step 6: Create and merge state parameters.- Step 7: Remove parameter flaws.- Step 8: Extract static preconditions.- Step 9: Form action schemas.
###Code
from collections import defaultdict
import itertools
import os
from tabulate import tabulate
from pprint import pprint
import matplotlib.pyplot as plt
%matplotlib inline
import networkx as nx
import pandas as pd
pd.options.display.max_columns = 100
from IPython.display import display, Markdown
from ipycytoscape import *
import string
###Output
_____no_output_____
###Markdown
Read input sequences
###Code
input_seqs = u'''
board-truck(driver1, truck2, s4), disembark-truck(driver1, truck2, s4), board-truck(driver1, truck2, s4), load-truck(package5, truck2, s4), drive-truck(truck2, s4, s1, driver1), drive-truck(truck2, s1, s3, driver1), unload-truck(package5, truck2, s3), drive-truck(truck2, s3, s5, driver1), drive-truck(truck2, s5, s1, driver1), disembark-truck(driver1, truck2, s1), board-truck(driver2, truck1, s0), load-truck(package2, truck1, s0), drive-truck(truck1, s0, s1, driver2), unload-truck(package2, truck1, s1), drive-truck(truck1, s1, s0, driver2), disembark-truck(driver2, truck1, s0), board-truck(driver1, truck2, s1), load-truck(package1, truck2, s1), drive-truck(truck2, s1, s4, driver1), drive-truck(truck2, s4, s5, driver1), unload-truck(package1, truck2, s5), drive-truck(truck2, s5, s1, driver1), disembark-truck(driver1, truck2, s1), board-truck(driver1, truck2, s1), drive-truck(truck2, s1, s4, driver1), load-truck(package4, truck2, s4), drive-truck(truck2, s4, s1, driver1), unload-truck(package4, truck2, s1), disembark-truck(driver1, truck2, s1), board-truck(driver1, truck2, s1), drive-truck(truck2, s1, s4, driver1), load-truck(package3, truck2, s4), drive-truck(truck2, s4, s5, driver1), unload-truck(package3, truck2, s5), drive-truck(truck2, s5, s1, driver1), disembark-truck(driver1, truck2, s1)
'''
def read_input(input_seqs):
'''
Read the input data and return list of action sequences.
Each sequence is a list of action-argumentlist tuples.
'''
sequences = []
for seq in input_seqs.split('\n'):
actions = []
arguments = []
if seq and not seq.isspace() and len(seq)>1:
sequence = seq.rstrip("\n\r").lstrip("\n\r").lower()
action_defs = sequence.split("),")
for action_def in action_defs:
action = action_def.split('(')[0].strip(")\n\r").strip()
argument = action_def.split('(')[1].strip(")\n\r")
actions.append(action.translate(str.maketrans('', '', string.punctuation)))
argument_list = argument.split(',')
argument_list = [x.strip() for x in argument_list]
#argument_list.insert(0,'zero')
arguments.append(argument_list)
actarg_tuples = zip(actions,arguments)
sequences.append(list(actarg_tuples))
return sequences
def print_sequences(sequences):
for seq in sequences:
for index,action in enumerate(seq):
print(str(index) + ": " + str(action))
print()
sequences = read_input(input_seqs)
print_sequences(sequences)
domain_name = 'driverlog' #specify domain name to be used in PDDL here.
###Output
0: ('boardtruck', ['driver1', 'truck2', 's4'])
1: ('disembarktruck', ['driver1', 'truck2', 's4'])
2: ('boardtruck', ['driver1', 'truck2', 's4'])
3: ('loadtruck', ['package5', 'truck2', 's4'])
4: ('drivetruck', ['truck2', 's4', 's1', 'driver1'])
5: ('drivetruck', ['truck2', 's1', 's3', 'driver1'])
6: ('unloadtruck', ['package5', 'truck2', 's3'])
7: ('drivetruck', ['truck2', 's3', 's5', 'driver1'])
8: ('drivetruck', ['truck2', 's5', 's1', 'driver1'])
9: ('disembarktruck', ['driver1', 'truck2', 's1'])
10: ('boardtruck', ['driver2', 'truck1', 's0'])
11: ('loadtruck', ['package2', 'truck1', 's0'])
12: ('drivetruck', ['truck1', 's0', 's1', 'driver2'])
13: ('unloadtruck', ['package2', 'truck1', 's1'])
14: ('drivetruck', ['truck1', 's1', 's0', 'driver2'])
15: ('disembarktruck', ['driver2', 'truck1', 's0'])
16: ('boardtruck', ['driver1', 'truck2', 's1'])
17: ('loadtruck', ['package1', 'truck2', 's1'])
18: ('drivetruck', ['truck2', 's1', 's4', 'driver1'])
19: ('drivetruck', ['truck2', 's4', 's5', 'driver1'])
20: ('unloadtruck', ['package1', 'truck2', 's5'])
21: ('drivetruck', ['truck2', 's5', 's1', 'driver1'])
22: ('disembarktruck', ['driver1', 'truck2', 's1'])
23: ('boardtruck', ['driver1', 'truck2', 's1'])
24: ('drivetruck', ['truck2', 's1', 's4', 'driver1'])
25: ('loadtruck', ['package4', 'truck2', 's4'])
26: ('drivetruck', ['truck2', 's4', 's1', 'driver1'])
27: ('unloadtruck', ['package4', 'truck2', 's1'])
28: ('disembarktruck', ['driver1', 'truck2', 's1'])
29: ('boardtruck', ['driver1', 'truck2', 's1'])
30: ('drivetruck', ['truck2', 's1', 's4', 'driver1'])
31: ('loadtruck', ['package3', 'truck2', 's4'])
32: ('drivetruck', ['truck2', 's4', 's5', 'driver1'])
33: ('unloadtruck', ['package3', 'truck2', 's5'])
34: ('drivetruck', ['truck2', 's5', 's1', 'driver1'])
35: ('disembarktruck', ['driver1', 'truck2', 's1'])
###Markdown
Step 1.1: Find classes
###Code
transitions = set() # A transition is denoted by action_name + argument position
arguments = set()
actions = set()
for seq in sequences:
for actarg_tuple in seq:
actions.add(actarg_tuple[0])
for j, arg in enumerate(actarg_tuple[1]):
transitions.add(actarg_tuple[0]+"."+str(j))
arguments.add(arg)
print("\nActions")
print(actions)
# print("\nTransitions")
# print(transitions)
print("\nArguments/Objects")
print(arguments)
def get_actarg_dictionary(sequences):
d = defaultdict(list)
for seq in sequences:
for actarg_tuple in seq:
d[actarg_tuple[0]].append(actarg_tuple[1])
return d
d = get_actarg_dictionary(sequences)
# class util functions.
def get_classes(d):
# TODO incorporate word similarity in get classes.
c = defaultdict(set)
for k,v in d.items():
for arg_list in v:
for i,object in enumerate(arg_list):
c[k,i].add(object)
sets = c.values()
classes = []
# remove duplicate classes
for s in sets:
if s not in classes:
classes.append(s)
# now do pairwise intersections of all values. If intersection, combine them; then return the final sets.
classes_copy = list(classes)
while True:
combinations = list(itertools.combinations(classes_copy,2))
intersections_count = 0
for combination in combinations:
if combination[0].intersection(combination[1]):
intersections_count +=1
if combination[0] in classes_copy:
classes_copy.remove(combination[0])
if combination[1] in classes_copy:
classes_copy.remove(combination[1])
classes_copy.append(combination[0].union(combination[1]))
if intersections_count==0:
# print("no intersections left")
break
return classes_copy
# TODO: Can use better approach here. NER might help.
def get_class_names(classes):
    # Name the class after the first object found (the commented-out line below would strip digits from it)
class_names = []
for c in classes:
for object in c:
# object = ''.join([i for i in object if not i.isdigit()])
class_names.append(object)
break
return class_names
def get_class_index(arg,classes):
for class_index, c in enumerate(classes):
if arg in c:
return class_index #it is like breaking out of the loop
print("Error:class index not found") #this statement is only executed if class index is not returned.
classes = get_classes(d) #sorts of object
print("\nSorts/Classes")
print(classes)
class_names = get_class_names(classes)
print("\nExtracted class names")
print(class_names)
###Output
Sorts/Classes
[{'driver2', 'driver1'}, {'truck2', 'truck1'}, {'package4', 'package2', 'package3', 'package1', 'package5'}, {'s3', 's5', 's0', 's4', 's1'}]
Extracted class names
['driver2', 'truck2', 'package4', 's3']
###Markdown
USER INPUT 1: Enter Correct Class names

Editing the extracted class names to more readable object classes will make the final PDDL model more readable.
###Code
############ (Optional) User Input ############
# Give user an option to change class names.
# class_names[0] = 'rocket'
#tyre
# class_names[0] = 'Jack'
# class_names[1] = 'Boot'
# class_names[2] = 'Wheel'
# class_names[3] = 'Hub'
# class_names[4] = 'Wrench'
# class_names[5] = 'Nut'
#driverlog
class_names[0] = 'Driver'
class_names[1] = 'Truck'
class_names[2] = 'Package'
class_names[3] = 'Location'
# #blocksworld
# class_names[0] = 'Block'
# class_names[1] = 'Gripper'
print("\nRenamed class names")
print(class_names)
###Output
Renamed class names
['Driver', 'Truck', 'Package', 'Location']
###Markdown
**Assumptions of LOCM2**

- Each object of the same class undergoes similar kinds of transitions.
- Objects of the same class appearing in the same action undergo similar kinds of transitions.
###Code
# change transitions to be more meaningful by incorporating class_names.
full_transitions = set()
for seq in sequences:
for actarg_tuple in seq:
actions.add(actarg_tuple[0])
for j, arg in enumerate(actarg_tuple[1]):
full_transitions.add(actarg_tuple[0]+"."+class_names[get_class_index(arg,classes)]+'.'+str(j))
arguments.add(arg)
print("\nActions")
print(actions)
print("\nTransitions")
print(full_transitions)
print("\nArguments/Objects")
print(arguments)
print("\nNumber of Actions: {},\nNumber of unique transitions: {},\nNumber of unique objects (arguments): {},\nNumber of classes/sorts: {}".format(len(actions), len(transitions), len(arguments), len(classes)))
###Output
Number of Actions: 5,
Number of unique transitions: 16,
Number of unique objects (arguments): 14,
Number of classes/sorts: 4
###Markdown
Building Transition graphs Utils
###Code
def empty_directory(folder):
for the_file in os.listdir(folder):
file_path = os.path.join(folder, the_file)
try:
if os.path.isfile(file_path):
os.unlink(file_path)
# elif os.path.isdir(file_path): shutil.rmtree(file_path)
except Exception as e:
print(e)
def findsubsets(S,m):
return set(itertools.combinations(S, m))
def print_table(matrix):
display(tabulate(matrix, headers='keys', tablefmt='html'))
def printmd(string):
display(Markdown(string))
###Output
_____no_output_____
###Markdown
Save graphs in graphml format (used in cytoscape)
###Code
def save(graphs, domain_name):
adjacency_matrix_list = [] # list of adjacency matrices per class
for index, G in enumerate(graphs):
nx.write_graphml(G, "output/"+ domain_name + "/" + class_names[index] + ".graphml")
df = nx.to_pandas_adjacency(G, nodelist=G.nodes(), dtype=int)
adjacency_matrix_list.append(df)
# print_table(df)
return adjacency_matrix_list
def plot_cytographs(graphs, domain_name, aml):
cytoscapeobs = []
for index, G in enumerate(graphs):
cytoscapeobj = CytoscapeWidget()
cytoscapeobj.graph.add_graph_from_networkx(G)
edge_list = list()
for source, target, data in G.edges(data=True):
edge_instance = Edge()
edge_instance.data['source'] = source
edge_instance.data['target'] = target
for k, v in data.items():
cyto_attrs = ['group', 'removed', 'selected', 'selectable',
'locked', 'grabbed', 'grabbable', 'classes', 'position', 'data']
if k in cyto_attrs:
setattr(edge_instance, k, v)
else:
edge_instance.data[k] = v
edge_list.append(edge_instance)
cytoscapeobj.graph.edges = edge_list
# cytoscapeobj.graph.add_graph_from_df(aml[index],aml[index].columns.tolist())
cytoscapeobs.append(cytoscapeobj)
# print(cytoscapeobj)
printmd('## class **'+class_names[index]+'**')
print_table(aml[index])
# print("Nodes:{}".format(G.nodes()))
# print("Edges:{}".format(G.edges()))
cytoscapeobj.set_style([{
'width':400,
'height':400,
'selector': 'node',
'style': {
'label': 'data(id)',
'font-family': 'helvetica',
'font-size': '8px',
'background-color': '#11479e',
'height':'10px',
'width':'10px',
}
},
{
'selector': 'node:parent',
'css': {
'background-opacity': 0.333,
'background-color': '#bbb'
}
},
{
'selector': '$node > node',
'css': {
'padding-top': '10px',
'padding-left': '10px',
'padding-bottom': '10px',
'padding-right': '10px',
'text-valign': 'top',
'text-halign': 'center',
'background-color': '#bbb'
}
},
{
'selector': 'edge',
'style': {
'label':'data(weight)',
'width': 1,
'line-color': '#9dbaea',
'target-arrow-shape': 'triangle',
'target-arrow-color': '#9dbaea',
'arrow-scale': 0.5,
'curve-style': 'bezier',
'font-family': 'helvetica',
'font-size': '8px',
'text-valign': 'top',
'text-halign':'center'
}
},
])
cytoscapeobj.max_zoom = 4.0
cytoscapeobj.min_zoom = 0.5
display(cytoscapeobj)
return cytoscapeobs
###Output
_____no_output_____
###Markdown
Build transition graphs and call the save function
###Code
def build_and_save_transition_graphs(classes, domain_name, class_names):
# There should be a graph for each class of objects.
graphs = []
# Initialize all graphs empty
for sort in classes:
graphs.append(nx.DiGraph())
consecutive_transition_lists = [] #list of consecutive transitions per object instance per sequence.
for m, arg in enumerate(arguments): # for all arguments (objects found in sequences)
for n, seq in enumerate(sequences): # for all sequences
consecutive_transition_list = list() # consecutive transition list for a sequence and an object (arg)
for i, actarg_tuple in enumerate(seq):
for j, arg_prime in enumerate(actarg_tuple[1]): # for all arguments in actarg tuples
if arg == arg_prime: # if argument matches arg
node = actarg_tuple[0] + "." + str(j)
# node = actarg_tuple[0] + "." + class_names[get_class_index(arg,classes)] + "." + str(j) # name the node of graph which represents a transition
consecutive_transition_list.append(node) # add node to the cons_transition for sequence and argument
# for each class append the nodes to the graph of that class
class_index = get_class_index(arg_prime, classes) # get index of class to which the object belongs to
graphs[class_index].add_node(node) # add node to the graph of that class
consecutive_transition_lists.append([n, arg, consecutive_transition_list])
# print(consecutive_transition_lists)
# for all consecutive transitions add edges to the appropriate graphs.
for cons_trans_list in consecutive_transition_lists:
# print(cons_trans_list)
seq_no = cons_trans_list[0] # get sequence number
arg = cons_trans_list[1] # get argument
class_index = get_class_index(arg, classes) # get index of class
# add directed edges to graph of that class
for i in range(0, len(cons_trans_list[2]) - 1):
if graphs[class_index].has_edge(cons_trans_list[2][i], cons_trans_list[2][i + 1]):
graphs[class_index][cons_trans_list[2][i]][cons_trans_list[2][i + 1]]['weight'] += 1
else:
graphs[class_index].add_edge(cons_trans_list[2][i], cons_trans_list[2][i + 1], weight=1)
# make directory if doesn't exist
dirName = "output/"+ domain_name
if not os.path.exists(dirName):
os.makedirs(dirName)
print("Directory ", dirName, " Created ")
else:
print("Directory ", dirName, " already exists")
empty_directory(dirName)
# save all the graphs
adjacency_matrix_list = save(graphs, domain_name) # list of adjacency matrices per class
# plot cytoscape interactive graphs
cytoscapeobs = plot_cytographs(graphs,domain_name, adjacency_matrix_list)
return adjacency_matrix_list, graphs, cytoscapeobs
###Output
_____no_output_____
###Markdown
Transition Graphs
###Code
#### Build weighted directed graphs for transitions.
printmd("## "+ domain_name.upper())
adjacency_matrix_list, graphs, cytoscapeobjs = build_and_save_transition_graphs(classes, domain_name, class_names)
###Output
_____no_output_____
###Markdown
USER INPUT 2: Edit transition graphs

For meaningful LOCM models, one can edit the transition graphs here to make them accurate. However, in the paper we don't do that, in order to estimate what kind of models are learned automatically from natural language data.

Option 1. **You can add or delete nodes/edges in the transition graphs using methods like add_node and delete_edges shown in the following library.** https://github.com/QuantStack/ipycytoscape/blob/master/ipycytoscape/cytoscape.py

Option 2. **Alternatively, you can use the saved .graphml file: open it up in Cytoscape, edit it within the GUI, and load that graph back into the graphs list.**

Step 2: Get Transition Sets from LOCM2

**Algorithm**: LOCM2

**Input**:
- T_all: set of observed transitions for a sort/class
- H: set of holes; each hole is a set of two transitions.
- P: set of pairs, i.e. consecutive transitions.
- E: set of example sequences of actions.

**Output**:
- S: set of transition sets.

Finding holes

Holes are pairs of transitions that LOCM1 would implicitly assume to be valid, due to its flaw of overgeneralizing a single state machine per sort.
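As a concrete illustration (toy transition names, not taken from the driverlog traces), the sketch below builds a tiny hand-made adjacency matrix in which rows `a.0` and `b.0` share a successor but only `a.0` is ever followed by `c.0`. Run after the next cell defines `get_adjacency_matrix_with_holes`, it marks the missing pair as a hole:

```python
import pandas as pd

# Toy adjacency matrix over three hypothetical transitions.
cols = ['a.0', 'b.0', 'c.0']
toy = pd.DataFrame([[0, 1, 1],   # a.0 is followed by b.0 and by c.0
                    [0, 1, 0],   # b.0 is followed by b.0 only
                    [0, 0, 0]],
                   index=cols, columns=cols)

# a.0 and b.0 share the successor b.0, but (b.0, c.0) was never observed,
# so a single LOCM1 machine would overgeneralise; that cell becomes a 'hole'.
toy_with_holes = get_adjacency_matrix_with_holes([toy])[0]
assert toy_with_holes.loc['b.0', 'c.0'] == 'hole'
```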
###Code
def get_adjacency_matrix_with_holes(adjacency_matrix_list):
adjacency_matrix_list_with_holes = []
for index,adjacency_matrix in enumerate(adjacency_matrix_list):
# print("\n ROWS ===========")
df = adjacency_matrix.copy()
df1 = adjacency_matrix.copy()
# for particular adjacency matrix's copy, loop over all pairs of rows
for i in range(df.shape[0] - 1):
for j in range(i+1, df.shape[0]):
idx1, idx2 = i, j
row1, row2 = df.iloc[idx1,:], df.iloc[idx2, :] #we have now all pairs of rows
common_values_flag = False #for each two rows we have a common_values_flag
# if there is a common value between two rows, turn common value flag to true
for col in range(row1.shape[0]):
if row1.iloc[col] > 0 and row2.iloc[col] > 0:
common_values_flag = True
break
# now if two rows have common values, we need to check for holes.
if common_values_flag:
for col in range(row1.shape[0]):
if row1.iloc[col] > 0 and row2.iloc[col] == 0:
df1.iloc[idx2,col] = 'hole'
elif row1.iloc[col] == 0 and row2.iloc[col] > 0:
df1.iloc[idx1, col] = 'hole'
adjacency_matrix_list_with_holes.append(df1)
return adjacency_matrix_list_with_holes
adjacency_matrix_list_with_holes = get_adjacency_matrix_with_holes(adjacency_matrix_list)
# Printing FSM matrices with and without holes
for index,adjacency_matrix in enumerate(adjacency_matrix_list):
printmd("\n#### " + class_names[index] )
print_table(adjacency_matrix)
printmd("\n#### HOLES: " + class_names[index])
print_table(adjacency_matrix_list_with_holes[index])
# Create list of set of holes per class (H)
holes_per_class = []
for index,df in enumerate(adjacency_matrix_list_with_holes):
holes = set()
for i in range(df.shape[0]):
for j in range(df.shape[1]):
if df.iloc[i,j] == 'hole':
holes.add(frozenset({df.index[i] , df.columns[j]}))
holes_per_class.append(holes)
for i, hole in enumerate(holes_per_class):
print("#holes in class " + class_names[i]+":" + str(len(hole)))
# for h in hole:
# print(list(h))
###Output
#holes in class Driver:0
#holes in class Truck:5
#holes in class Package:0
#holes in class Location:5
###Markdown
T_all - Set of observed transitions for a class.
###Code
# List of transitions per class (T_all). It is just a set of transitions that occur for a class.
transitions_per_class = []
for index, df in enumerate(adjacency_matrix_list_with_holes):
transitions_per_class.append(df.columns.values)
# for i, transition in enumerate(transitions_per_class):
# print('{}:{}'.format(class_names[i], transition))
###Output
_____no_output_____
###Markdown
P - set of pairs (consecutive transitions)
###Code
def get_consecutive_transitions_per_class(adjacency_matrix_list_with_holes):
consecutive_transitions_per_class = []
for index, df in enumerate(adjacency_matrix_list_with_holes):
consecutive_transitions = set() # for a class
for i in range(df.shape[0]):
for j in range(df.shape[1]):
if df.iloc[i, j] != 'hole':
if df.iloc[i, j] > 0:
# print("(" + df.index[i] + "," + df.columns[j] + ")")
consecutive_transitions.add((df.index[i], df.columns[j]))
consecutive_transitions_per_class.append(consecutive_transitions)
return consecutive_transitions_per_class
# Create list of consecutive transitions per class (P). If value is not null, ordered pair i,j would be consecutive transitions per class
consecutive_transitions_per_class = get_consecutive_transitions_per_class(adjacency_matrix_list_with_holes)
# printmd("### Consecutive transitions per class")
# for i, transition in enumerate(consecutive_transitions_per_class):
# printmd("#### "+class_names[i]+":")
# for x in list(transition):
# print(x)
# # print('{}:{}'.format(class_names[i], transition))
# print()
###Output
_____no_output_____
###Markdown
Check Well Formed
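Informally, a candidate sub-matrix is well-formed when it is non-empty and contains no holes. A minimal sketch of the check on hand-made matrices (hypothetical transition names; run after the next cell defines `check_well_formed`):

```python
import pandas as pd

cols = ['a.0', 'b.0']
# Both rows allow exactly the same successors: no holes, so well-formed.
ok = pd.DataFrame([[1, 1], [1, 1]], index=cols, columns=cols)
# The rows share successor a.0 but disagree on b.0: that mismatch is a hole.
bad = pd.DataFrame([[1, 1], [1, 0]], index=cols, columns=cols)

assert check_well_formed(ok)
assert not check_well_formed(bad)
```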
###Code
def check_well_formed(subset_df):
# got the adjacency matrix subset
df = subset_df.copy()
well_formed_flag = True
if (df == 0).all(axis=None): # all elements are zero
well_formed_flag = False
# for particular adjacency matrix's copy, loop over all pairs of rows
for i in range(0, df.shape[0]-1):
for j in range(i + 1, df.shape[0]):
            # print(i, j)  # debug output disabled
idx1, idx2 = i, j
row1, row2 = df.iloc[idx1, :], df.iloc[idx2, :] # we have now all pairs of rows
common_values_flag = False # for each two rows we have a common_values_flag
# if there is a common value between two rows, turn common value flag to true
for col in range(row1.shape[0]):
if row1.iloc[col] > 0 and row2.iloc[col] > 0:
common_values_flag = True
break
if common_values_flag:
for col in range(row1.shape[0]): # check for holes if common value
if row1.iloc[col] > 0 and row2.iloc[col] == 0:
well_formed_flag = False
elif row1.iloc[col] == 0 and row2.iloc[col] > 0:
well_formed_flag = False
if not well_formed_flag:
return False
elif well_formed_flag:
return True
###Output
_____no_output_____
###Markdown
Check Valid Transitions
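`check_valid` keeps a well-formed subset only if every positive entry corresponds to an ordered pair that was actually observed somewhere in `consecutive_transitions_per_class`. A small illustration with toy names (run after the next cell defines the function):

```python
import pandas as pd

cols = ['a.0', 'b.0']
subset = pd.DataFrame([[0, 1], [0, 0]], index=cols, columns=cols)  # claims a.0 -> b.0

assert check_valid(subset, [{('a.0', 'b.0')}])      # the pair was observed somewhere
assert not check_valid(subset, [{('b.0', 'a.0')}])  # never observed, so rejected
```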
###Code
def check_valid(subset_df,consecutive_transitions_per_class):
# Note: Essentially we check validity against P instead of E.
# In the paper of LOCM2, it isn't mentioned how to check against E.
# Reasoning: If we check against all consecutive transitions per class,
# we essentially check against all example sequences.
# check the candidate set which is well-formed (subset df against all consecutive transitions)
# got the adjacency matrix subset
df = subset_df.copy()
# for particular adjacency matrix's copy, loop over all pairs of rows
for i in range(df.shape[0]):
for j in range(df.shape[0]):
if df.iloc[i,j] > 0:
valid_val_flag = False
ordered_pair = (df.index[i], df.columns[j])
for ct_list in consecutive_transitions_per_class:
for ct in ct_list:
if ordered_pair == ct:
valid_val_flag=True
# if after all iteration ordered pair is not found, mark the subset as invalid.
if not valid_val_flag:
return False
# return True if all ordered pairs found.
return True
###Output
_____no_output_____
###Markdown
LOCM2 transition sets
###Code
def locm2_get_transition_sets_per_class(holes_per_class, transitions_per_class, consecutive_transitions_per_class):
"""LOCM 2 Algorithm in the original LOCM2 paper"""
# contains Solution Set S for each class.
transition_sets_per_class = []
# for each hole for a class/sort
for index, holes in enumerate(holes_per_class):
class_name = class_names[index]
printmd("### "+ class_name)
# S
transition_set_list = [] #transition_sets_of_a_class, # intially it's empty
if len(holes)==0:
print("no holes") # S will contain just T_all
if len(holes) > 0: # if there are any holes for a class
print(str(len(holes)) + " holes")
for ind, hole in enumerate(holes):
printmd("#### Hole " + str(ind + 1) + ": " + str(set(hole)))
is_hole_already_covered_flag = False
if len(transition_set_list)>0:
for s_prime in transition_set_list:
if hole.issubset(s_prime):
printmd("Hole "+ str(set(hole)) + " is already covered.")
is_hole_already_covered_flag = True
break
# discover a set which includes hole and is well-formed and valid against test data.
# if hole is not covered, do BFS with sets of increasing sizes starting with s=hole
if not is_hole_already_covered_flag:
h = hole.copy()
candidate_sets = []
# all subsets of T_all starting from hole's len +1 to T_all-1.
for i in range(len(h)+1,len(transitions_per_class[index])):
subsets = findsubsets(transitions_per_class[index],i) # all subsets of length i
for s in subsets:
if h.issubset(s): # if is subset of s
candidate_sets.append(set(s))
s_well_formed_and_valid = False
for s in candidate_sets:
if len(s)>=i:
printmd("Checking candidate set *" + str(s) + "* of class **" + class_name + "** for well formedness and Validity")
subset_df = adjacency_matrix_list[index].loc[list(s),list(s)]
print_table(subset_df)
# checking for well-formedness
well_formed_flag = False
well_formed_flag = check_well_formed(subset_df)
if not well_formed_flag:
print("This subset is NOT well-formed")
elif well_formed_flag:
print("This subset is well-formed.")
# if well-formed validate across the data E
# to remove inappropriate dead-ends
valid_against_data_flag = False
valid_against_data_flag = check_valid(subset_df, consecutive_transitions_per_class)
if not valid_against_data_flag:
print("This subset is well-formed but invalid against example data")
if valid_against_data_flag:
print("This subset is valid.")
print("Adding this subset " + str(s) +" to the locm2 transition set.")
if s not in transition_set_list: # do not allow copies.
transition_set_list.append(s)
print("Hole that is covered now:")
print(list(h))
s_well_formed_and_valid = True
break
if s_well_formed_and_valid:
break
print(transition_set_list)
#step 7 : remove redundant sets S - {s1}
ts_copy = transition_set_list.copy()
for i in range(len(ts_copy)):
for j in range(len(ts_copy)):
if ts_copy[i] < ts_copy[j]: #if subset
if ts_copy[i] in transition_set_list:
transition_set_list.remove(ts_copy[i])
elif ts_copy[i] > ts_copy[j]:
if ts_copy[j] in transition_set_list:
transition_set_list.remove(ts_copy[j])
print("\nRemoved redundancy transition set list")
print(transition_set_list)
#step-8: include all-transitions machine, even if it is not well-formed.
transition_set_list.append(set(transitions_per_class[index])) #fallback
printmd("#### Final transition set list")
print(transition_set_list)
transition_sets_per_class.append(transition_set_list)
return transition_sets_per_class
############ LOCM2 #################
#### Input ready for LOCM2, Starting LOCM2 algorithm now
#### Step 8: selecting transition sets (TS) [Main LOCM2 Algorithm]
printmd("### Getting transitions sets for each class using LOCM2")
transition_sets_per_class = locm2_get_transition_sets_per_class(holes_per_class, transitions_per_class, consecutive_transitions_per_class)
###Output
_____no_output_____
###Markdown
Step 3: Algorithm For Induction of State Machines

- Input: Action training sequence of length N
- Output: Transition Set TS, Object states OS.

We already have transition set TS per class.
###Code
def plot_cytographs_fsm(graph, domain_name):
cytoscapeobj = CytoscapeWidget()
cytoscapeobj.graph.add_graph_from_networkx(graph)
edge_list = list()
for source, target, data in graph.edges(data=True):
edge_instance = Edge()
edge_instance.data['source'] = source
edge_instance.data['target'] = target
for k, v in data.items():
cyto_attrs = ['group', 'removed', 'selected', 'selectable',
'locked', 'grabbed', 'grabbable', 'classes', 'position', 'data']
if k in cyto_attrs:
setattr(edge_instance, k, v)
else:
edge_instance.data[k] = v
edge_list.append(edge_instance)
cytoscapeobj.graph.edges = edge_list
# print("Nodes:{}".format(graph.nodes()))
# print("Edges:{}".format(graph.edges()))
cytoscapeobj.set_style([{
'width':400,
'height':500,
'selector': 'node',
'style': {
'label': 'data(id)',
'font-family': 'helvetica',
'font-size': '8px',
'background-color': '#11479e',
'height':'10px',
'width':'10px',
}
},
{
'selector': 'node:parent',
'css': {
'background-opacity': 0.333,
'background-color': '#bbb'
}
},
{
'selector': '$node > node',
'css': {
'padding-top': '10px',
'padding-left': '10px',
'padding-bottom': '10px',
'padding-right': '10px',
'text-valign': 'top',
'text-halign': 'center',
'background-color': '#bbb'
}
},
{
'selector': 'edge',
'style': {
'label':'data(weight)',
'width': 1,
'line-color': '#9dbaea',
'target-arrow-shape': 'triangle',
'target-arrow-color': '#9dbaea',
'arrow-scale': 0.5,
'curve-style': 'bezier',
'font-family': 'helvetica',
'font-size': '8px',
'text-valign': 'top',
'text-halign':'center'
}
},
])
cytoscapeobj.max_zoom = 2.0
cytoscapeobj.min_zoom = 0.5
display(cytoscapeobj)
###Output
_____no_output_____
###Markdown
First, make start(t) and end(t) states for each transition t in the FSM.
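The next cell then merges e(t1) with s(t2) whenever t1 and t2 were observed consecutively, using `networkx.contracted_nodes`. A minimal sketch of that merging step on two hypothetical transitions (toy names, not the driverlog data):

```python
import networkx as nx

G = nx.DiGraph()
G.add_edge('s(a.0)', 'e(a.0)', weight='a.0')
G.add_edge('s(b.0)', 'e(b.0)', weight='b.0')

# a.0 was observed immediately before b.0, so end(a.0) and start(b.0)
# describe the same object state: contract the two nodes into one.
G = nx.contracted_nodes(G, 'e(a.0)', 's(b.0)', self_loops=True)
G = nx.relabel_nodes(G, {'e(a.0)': 'e(a.0)|s(b.0)'})
print(sorted(G.nodes()))   # ['e(a.0)|s(b.0)', 'e(b.0)', 's(a.0)']
```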
###Code
state_machines_overall_list = []
for index, ts_class in enumerate(transition_sets_per_class):
fsms_per_class = []
printmd("### "+ class_names[index])
num_fsms = len(ts_class)
print("Number of FSMS:" + str(num_fsms))
for fsm_no, ts in enumerate(ts_class):
fsm_graph = nx.DiGraph()
printmd("#### FSM " + str(fsm_no))
for t in ts:
source = "s(" + str(t) + ")"
target = "e(" + str(t) + ")"
fsm_graph.add_edge(source,target,weight=t)
t_df = adjacency_matrix_list[index].loc[list(ts), list(ts)] #transition df for this fsm
print_table(t_df)
# merge end(t1) = start(t2) from transition df
edge_t_list = [] # edge transition list
for i in range(t_df.shape[0]):
for j in range(t_df.shape[1]):
if t_df.iloc[i, j] != 'hole':
if t_df.iloc[i, j] > 0:
for node in fsm_graph.nodes():
if "e("+t_df.index[i]+")" in node:
merge_node1 = node
if "s("+t_df.index[j]+")" in node:
merge_node2 = node
fsm_graph = nx.contracted_nodes(fsm_graph, merge_node1, merge_node2 , self_loops=True)
if merge_node1 != merge_node2:
mapping = {merge_node1: merge_node1 + "|" + merge_node2}
fsm_graph = nx.relabel_nodes(fsm_graph, mapping)
# we need to complete the list of transitions
# that can happen on self-loop nodes
# as these have been overwritten (as graph is not MultiDiGraph)
sl_state_list = list(nx.nodes_with_selfloops(fsm_graph)) # self looping states.
# if state is self-looping
t_list = []
if len(sl_state_list)>0:
            # if s(T1) and e(T1) both appear in the same merged node, transition T1 can occur as a self-loop on that node.
for s in sl_state_list:
for sub_s in s.split('|'):
if sub_s[0] == 'e':
if ('s' + sub_s[1:]) in s.split('|'):
t_list.append(sub_s[2:-1])
fsm_graph[s][s]['weight'] = '|'.join(t_list)
plot_cytographs_fsm(fsm_graph,domain_name)
df = nx.to_pandas_adjacency(fsm_graph, nodelist=fsm_graph.nodes(), weight = 1)
print_table(df)
fsms_per_class.append(fsm_graph)
state_machines_overall_list.append(fsms_per_class)
###Output
_____no_output_____
###Markdown
USER INPUT 3: Rename States

As states are shown in terms of the end and start of transitions, the user can rename them for easier readability later on. If states are renamed, certain hardcoded aspects of the code won't work; it is advisable to create a separate state dictionary and use it after step 9 (formation of the PDDL model) to replace states in the PDDL code, as sketched below. This also makes it easier to specify problem statements.

Automatic creation: rename states as integers 0, 1, 2, etc. for each FSM.
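For the manual state dictionary mentioned above, a possible sketch (an assumption-laden illustration: it keeps the auto-generated `Class_fsmN_stateM` names produced in step 9 and maintains a separate mapping; the readable names below are invented, not derived from the data):

```python
# Hypothetical readable names for a few auto-generated states;
# extend this mapping after inspecting the FSM plots below.
readable_state_names = {
    'Driver_fsm0_state0': 'driver_outside_truck',
    'Driver_fsm0_state1': 'driver_in_truck',
}

def rename_states_in_pddl(pddl_text, mapping):
    """Replace auto-generated state names with human-readable ones."""
    for auto_name, nice_name in mapping.items():
        pddl_text = pddl_text.replace(auto_name, nice_name)
    return pddl_text

# Usage (after step 9 has written the domain file):
# with open("output/" + domain_name + "/" + domain_name + ".pddl") as f:
#     print(rename_states_in_pddl(f.read(), readable_state_names))
```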
###Code
# An Automatic state dictionary is added here where states are
# renamed as 0, 1, 2 etc. for a specific FSM
state_mappings_class = []
state_machines_overall_list_2 = []
for index, fsm_graphs in enumerate(state_machines_overall_list):
state_mappings_fsm = []
fsms_per_class_2 = []
printmd("### "+ class_names[index])
num_fsms = len(fsm_graphs)
print("Number of FSMS:" + str(num_fsms))
for fsm_no, G in enumerate(fsm_graphs):
state_mapping = {k: v for v, k in enumerate(G.nodes())}
G_copy = nx.relabel_nodes(G, state_mapping)
plot_cytographs_fsm(G, domain_name)
plot_cytographs_fsm(G_copy, domain_name)
printmd("Fsm "+ str(fsm_no))
fsms_per_class_2.append(G_copy)
state_mappings_fsm.append(state_mapping)
state_machines_overall_list_2.append(fsms_per_class_2)
state_mappings_class.append(state_mappings_fsm)
###Output
_____no_output_____
###Markdown
Looking at the graphs, the user can specify states here
###Code
# User can specify states here.
# assign your states in state dictionary called state_mapping
# e.g. state_mapping['e(removewheel.2)|s(putonwheel.2)'] = 'jack_free_to_use'
###Output
_____no_output_____
###Markdown
Step 5: Induction of parameterized state machines

Create and test hypotheses for state parameters.

Form hypotheses for HS (the hypothesis set)
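Each hypothesis built below is stored as a 9-tuple. The reading of its fields given here is inferred from the construction in the next cell, and the example instance is one that actually appears (paired with binding 'v1') in the Driver output further down:

```python
# (state, B, k, k', C, l, l', G, G'), following the LOCM paper's notation:
#   state : frozenset {e(B.k), s(C.l)}, the FSM state the two transitions share
#   B, k  : the earlier action and the position of its sort-G argument
#   k'    : the position in B of another argument, of sort G'
#   C, l  : the later action and the position of its sort-G argument
#   l'    : the position in C of another argument, of sort G'
#   G     : the sort/class whose FSM this hypothesis belongs to
#   G'    : the sort of the hypothesised state parameter
example_hypothesis = (frozenset({'e(boardtruck.0)', 's(disembarktruck.0)'}),
                      'boardtruck', 0, 1, 'disembarktruck', 0, 1,
                      'Driver', 'Truck')
```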
###Code
HS_list = []
ct_list = []
# for transition set of each class
for index, ts_class in enumerate(transition_sets_per_class):
printmd("### "+ class_names[index])
ct_per_class = []
HS_per_class = []
# for transition set of each fsm in a class
for fsm_no, ts in enumerate(ts_class):
printmd("#### FSM: " + str(fsm_no) + " Hypothesis Set")
# transition matrix for the ts
t_df = adjacency_matrix_list[index].loc[list(ts), list(ts)]
ct_in_fsm = set() # find consecutive transition set for a state machine in a class.
for i in range(t_df.shape[0]):
for j in range(t_df.shape[1]):
if t_df.iloc[i, j] != 'hole':
if t_df.iloc[i, j] > 0:
ct_in_fsm.add((t_df.index[i], t_df.columns[j]))
ct_per_class.append(ct_in_fsm)
# add to hypothesis set
HS = set()
# for each pair B.k and C.l in TS s.t. e(B.k) = S = s(C.l)
for ct in ct_in_fsm:
B = ct[0].split('.')[0] # action name of T1
k = int(ct[0].split('.')[1]) # argument index of T1
C = ct[1].split('.')[0] # action name of T2
l = int(ct[1].split('.')[1]) # argument index of T2
# When both actions B and C contain another argument of the same sort G' in position k' and l' respectively,
# we hypothesise that there may be a relation between sorts G and G'.
for seq in sequences:
for actarg_tuple in seq:
arglist1 = []
arglist2 = []
if actarg_tuple[0] == B: #if action name is same as B
arglist1 = actarg_tuple[1].copy()
# arglist1.remove(actarg_tuple[1][k]) # remove k from arglist
for actarg_tuple_prime in seq: #loop through seq again.
if actarg_tuple_prime[0] == C:
arglist2 = actarg_tuple_prime[1].copy()
# arglist2.remove(actarg_tuple_prime[1][l]) # remove l from arglist
# for arg lists of actions B and C, if class is same add a hypothesis set.
for i in range(len(arglist1)): # if len is 0, we don't go in
for j in range(len(arglist2)):
class1 = get_class_index(arglist1[i], classes)
class2 = get_class_index(arglist2[j], classes)
if class1 == class2: # if object at same position have same classes
# add hypothesis to hypothesis set.
if (k!=i) and (l!=j):
HS.add((frozenset({"e("+B+"."+ str(k)+")", "s("+C+"."+str(l)+")"}),B,k,i,C,l,j,class_names[index],class_names[class1]))
print(str(len(HS))+ " hypothesis created")
# for h in HS:
# print(h)
HS_per_class.append(HS)
HS_list.append(HS_per_class)
ct_list.append(ct_per_class)
###Output
_____no_output_____
###Markdown
Test the hypotheses against E (the example sequences)
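The test below walks every object's trace and drops a hypothesis as soon as one consecutive pair contradicts it. A toy illustration of that contradiction check for a single pair (entirely hypothetical action names and arguments, not the driverlog data):

```python
# Toy hypothesis: argument k' of action A must equal argument l' of the following action C.
step_i  = ('A', ['obj1', 'locX'])   # matched B.k at this step
step_i1 = ('C', ['obj1', 'locY'])   # matched C.l at the next step
k_prime, l_prime = 1, 1

contradicted = step_i[1][k_prime] != step_i1[1][l_prime]
assert contradicted   # 'locX' != 'locY', so this hypothesis would be removed
```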
###Code
HS_list_retained = []
for index, HS_class in enumerate(HS_list):
printmd("### "+ class_names[index])
HS_per_class_retained = []
for fsm_no, HS in enumerate(HS_class):
printmd("#### FSM: " + str(fsm_no) + " Hypothesis Set")
count=0
HS_copy = HS.copy()
HS_copy2 = HS.copy()
        # for each object O occurring in O_u (the set of observed objects)
for O in arguments:
# for each pair of transitions Ap.m and Aq.n consecutive for O in seq
ct = []
for seq in sequences:
for actarg_tuple in seq:
act = actarg_tuple[0]
for j, arg in enumerate(actarg_tuple[1]):
if arg == O:
ct.append((act + '.' + str(j), actarg_tuple[1]))
for i in range(len(ct)-1):
A_p = ct[i][0].split('.')[0]
m = int(ct[i][0].split('.')[1])
A_q = ct[i+1][0].split('.')[0]
n = int(ct[i+1][0].split('.')[1])
# for each hypothesis H s.t. A_p = B, m = k, A_q = C, n = l
for H in HS_copy2:
if A_p == H[1] and m == H[2] and A_q == H[4] and n == H[5]:
k_prime = H[3]
l_prime = H[6]
                    # if O_p,k' differs from O_q,l', the hypothesis is contradicted and removed
if ct[i][1][k_prime] != ct[i+1][1][l_prime]:
if H in HS_copy:
HS_copy.remove(H)
count += 1
print(str(len(HS_copy))+ " hypothesis retained")
# state machine
# if len(HS_copy)>0:
# plot_cytographs_fsm(state_machines_overall_list[index][fsm_no],domain_name)
# for H in HS_copy:
# print(H)
HS_per_class_retained.append(HS_copy)
HS_list_retained.append(HS_per_class_retained)
###Output
_____no_output_____
###Markdown
Step 6: Creation and merging of state parameters
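The merging performed below collapses overlapping pairs of equal parameters into a single representative, using a cartesian-product union pass over the pair list. A standalone sketch of the same collapse on toy binding names (same idea, separate from the full loop in the next cell):

```python
import itertools

merge_pl = [['v1', 'v2'], ['v2', 'v3'], ['v5', 'v7']]
groups = [set(x) for x in merge_pl]
for a, b in itertools.product(groups, groups):
    if a & b:                       # overlapping groups share a parameter
        a.update(b)
        b.update(a)                 # merge them in place
groups = sorted(sorted(g) for g in groups)
merged = [g for g, _ in itertools.groupby(groups)]   # drop duplicates
print(merged)   # [['v1', 'v2', 'v3'], ['v5', 'v7']]
```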
###Code
# Each hypothesis refers to an incoming and outgoing transition
# through a particular state of an FSM
# and matching associated transitions can be considered
# to set and read parameters of a state.
# Since there may be multiple transitions through a given state,
# it is possible for the same parameter to have multiple
# pairwise occurrences.
print("Step 6: creating and merging state params")
param_bindings_list_overall = []
for classindex, HS_per_class in enumerate(HS_list_retained):
param_bind_per_class = []
for fsm_no, HS_per_fsm in enumerate(HS_per_class):
param_binding_list = []
# fsm in consideration
G = state_machines_overall_list[classindex][fsm_no]
state_list = G.nodes()
# creation
for index,h in enumerate(HS_per_fsm):
param_binding_list.append((h,"v"+str(index)))
merge_pl = [] # parameter to merge list
if len(param_binding_list)>1:
# merging
pairs = findsubsets(param_binding_list, 2)
for pair in pairs:
h_1 = pair[0][0]
h_2 = pair[1][0]
# equate states
state_eq_flag = False
for s_index, state in enumerate(state_list):
# if both hyp states appear in single state in fsm
if list(h_1[0])[0] in state:
                        if list(h_2[0])[0] in state: # fixed: the original repeated the h_1 check; both hypotheses' states must lie in this FSM state
state_eq_flag =True
if ((state_eq_flag and h_1[1] == h_2[1] and h_1[2] == h_2[2] and h_1[3] == h_2[3]) or (state_eq_flag and h_1[4] == h_2[4] and h_1[5] == h_2[5] and h_1[6] == h_2[6])):
merge_pl.append(list([pair[0][1], pair[1][1]]))
#inner lists to sets (to list of sets)
l=[set(x) for x in merge_pl]
#cartesian product merging elements if some element in common
for a,b in itertools.product(l,l):
if a.intersection( b ):
a.update(b)
b.update(a)
#back to list of lists
l = sorted( [sorted(list(x)) for x in l])
#remove dups
merge_pl = list(l for l,_ in itertools.groupby(l))
# sort
for pos, l in enumerate(merge_pl):
merge_pl[pos] = sorted(l, key = lambda x: int(x[1:]))
print(merge_pl) # equal params appear in a list in this list.
for z,pb in enumerate(param_binding_list):
for l in merge_pl:
if pb[1] in l:
# update pb
param_binding_list[z] = (param_binding_list[z][0], l[0])
param_bind_per_class.append(param_binding_list)
print(class_names[classindex])
# set of params per class
param = set()
for pb in param_binding_list:
# print(pb)
param.add(pb[1])
# num of params per class
printmd("No. of params earlier:" + str(len(param_binding_list)))
printmd("No. of params after merging:" + str(len(param)))
param_bindings_list_overall.append(param_bind_per_class)
###Output
Step 6: creating and merging state params
[['v1', 'v7', 'v8', 'v9'], ['v2', 'v3', 'v4', 'v5']]
Driver
###Markdown
Step 7: Remove Parameter Flaws
###Code
# Removing State Params.
# A flaw occurs when an object can reach state S with parameter P having an indeterminate value:
# there is a transition such that end(B.k) = S,
# but there is no hypothesis h = <S,B,k,k',C,l,l',G,G'> with <h,P> in the bindings.
para_bind_overall_fault_removed = []
for classindex, fsm_per_class in enumerate(state_machines_overall_list):
print(class_names[classindex])
pb_per_class_fault_removed = []
for fsm_no, G in enumerate(fsm_per_class):
pb_per_fsm_fault_removed = []
# G is fsm in consideration
faulty_pb = []
for state in G.nodes():
inedges = G.in_edges(state, data=True)
for ie in inedges:
tr = ie[2]['weight']
t_list = tr.split('|')
for t in t_list:
B = t.split('.')[0]
k = t.split('.')[1]
S = 'e(' + t + ')'
flaw = True
for pb in param_bindings_list_overall[classindex][fsm_no]:
H = pb[0]
v = pb[1]
if (S in set(H[0])) and (B==H[1]) and (int(k)==H[2]) :
# this pb is okay
flaw=False
# print(flaw)
if flaw:
for pb in param_bindings_list_overall[classindex][fsm_no]:
H = pb[0]
H_states = list(H[0])
for h_state in H_states:
if h_state in state:
if pb not in faulty_pb:
faulty_pb.append(pb) # no duplicates
for pb in param_bindings_list_overall[classindex][fsm_no]:
if pb not in faulty_pb:
pb_per_fsm_fault_removed.append(pb)
print(str(len(pb_per_fsm_fault_removed)) + "/" + str(len(param_bindings_list_overall[classindex][fsm_no])) + " param retained")
for pb in pb_per_fsm_fault_removed:
print(pb)
pb_per_class_fault_removed.append(pb_per_fsm_fault_removed)
para_bind_overall_fault_removed.append(pb_per_class_fault_removed)
###Output
Driver
10/10 param retained
((frozenset({'e(disembarktruck.0)', 's(boardtruck.0)'}), 'disembarktruck', 0, 2, 'boardtruck', 0, 2, 'Driver', 'Location'), 'v0')
((frozenset({'e(boardtruck.0)', 's(disembarktruck.0)'}), 'boardtruck', 0, 1, 'disembarktruck', 0, 1, 'Driver', 'Truck'), 'v1')
((frozenset({'e(drivetruck.3)', 's(drivetruck.3)'}), 'drivetruck', 3, 2, 'drivetruck', 3, 1, 'Driver', 'Location'), 'v2')
((frozenset({'e(drivetruck.3)', 's(disembarktruck.0)'}), 'drivetruck', 3, 2, 'disembarktruck', 0, 2, 'Driver', 'Location'), 'v2')
((frozenset({'e(boardtruck.0)', 's(drivetruck.3)'}), 'boardtruck', 0, 2, 'drivetruck', 3, 1, 'Driver', 'Location'), 'v2')
((frozenset({'e(boardtruck.0)', 's(disembarktruck.0)'}), 'boardtruck', 0, 2, 'disembarktruck', 0, 2, 'Driver', 'Location'), 'v2')
((frozenset({'e(disembarktruck.0)', 's(boardtruck.0)'}), 'disembarktruck', 0, 1, 'boardtruck', 0, 1, 'Driver', 'Truck'), 'v6')
((frozenset({'e(boardtruck.0)', 's(drivetruck.3)'}), 'boardtruck', 0, 1, 'drivetruck', 3, 0, 'Driver', 'Truck'), 'v1')
((frozenset({'e(drivetruck.3)', 's(drivetruck.3)'}), 'drivetruck', 3, 0, 'drivetruck', 3, 0, 'Driver', 'Truck'), 'v1')
((frozenset({'e(drivetruck.3)', 's(disembarktruck.0)'}), 'drivetruck', 3, 0, 'disembarktruck', 0, 1, 'Driver', 'Truck'), 'v1')
Truck
1/1 param retained
((frozenset({'e(boardtruck.1)', 's(loadtruck.1)'}), 'boardtruck', 1, 2, 'loadtruck', 1, 2, 'Truck', 'Location'), 'v0')
5/5 param retained
((frozenset({'e(boardtruck.1)', 's(disembarktruck.1)'}), 'boardtruck', 1, 2, 'disembarktruck', 1, 2, 'Truck', 'Location'), 'v0')
((frozenset({'s(disembarktruck.1)', 'e(unloadtruck.1)'}), 'unloadtruck', 1, 2, 'disembarktruck', 1, 2, 'Truck', 'Location'), 'v0')
((frozenset({'s(boardtruck.1)', 'e(disembarktruck.1)'}), 'disembarktruck', 1, 0, 'boardtruck', 1, 0, 'Truck', 'Driver'), 'v2')
((frozenset({'e(boardtruck.1)', 's(disembarktruck.1)'}), 'boardtruck', 1, 0, 'disembarktruck', 1, 0, 'Truck', 'Driver'), 'v3')
((frozenset({'s(boardtruck.1)', 'e(disembarktruck.1)'}), 'disembarktruck', 1, 2, 'boardtruck', 1, 2, 'Truck', 'Location'), 'v4')
1/1 param retained
((frozenset({'s(disembarktruck.1)', 'e(unloadtruck.1)'}), 'unloadtruck', 1, 2, 'disembarktruck', 1, 2, 'Truck', 'Location'), 'v0')
16/16 param retained
((frozenset({'s(drivetruck.0)', 'e(boardtruck.1)'}), 'boardtruck', 1, 2, 'drivetruck', 0, 1, 'Truck', 'Location'), 'v0')
((frozenset({'s(boardtruck.1)', 'e(disembarktruck.1)'}), 'disembarktruck', 1, 2, 'boardtruck', 1, 2, 'Truck', 'Location'), 'v1')
((frozenset({'e(boardtruck.1)', 's(disembarktruck.1)'}), 'boardtruck', 1, 0, 'disembarktruck', 1, 0, 'Truck', 'Driver'), 'v2')
((frozenset({'e(boardtruck.1)', 's(disembarktruck.1)'}), 'boardtruck', 1, 2, 'disembarktruck', 1, 2, 'Truck', 'Location'), 'v0')
((frozenset({'s(disembarktruck.1)', 'e(unloadtruck.1)'}), 'unloadtruck', 1, 2, 'disembarktruck', 1, 2, 'Truck', 'Location'), 'v0')
((frozenset({'s(drivetruck.0)', 'e(loadtruck.1)'}), 'loadtruck', 1, 2, 'drivetruck', 0, 1, 'Truck', 'Location'), 'v0')
((frozenset({'s(boardtruck.1)', 'e(disembarktruck.1)'}), 'disembarktruck', 1, 0, 'boardtruck', 1, 0, 'Truck', 'Driver'), 'v6')
((frozenset({'e(drivetruck.0)', 's(drivetruck.0)'}), 'drivetruck', 0, 2, 'drivetruck', 0, 1, 'Truck', 'Location'), 'v0')
((frozenset({'e(drivetruck.0)', 's(disembarktruck.1)'}), 'drivetruck', 0, 2, 'disembarktruck', 1, 2, 'Truck', 'Location'), 'v0')
((frozenset({'s(drivetruck.0)', 'e(unloadtruck.1)'}), 'unloadtruck', 1, 2, 'drivetruck', 0, 1, 'Truck', 'Location'), 'v0')
((frozenset({'e(boardtruck.1)', 's(loadtruck.1)'}), 'boardtruck', 1, 2, 'loadtruck', 1, 2, 'Truck', 'Location'), 'v0')
((frozenset({'e(drivetruck.0)', 's(unloadtruck.1)'}), 'drivetruck', 0, 2, 'unloadtruck', 1, 2, 'Truck', 'Location'), 'v0')
((frozenset({'e(drivetruck.0)', 's(disembarktruck.1)'}), 'drivetruck', 0, 3, 'disembarktruck', 1, 0, 'Truck', 'Driver'), 'v2')
((frozenset({'s(drivetruck.0)', 'e(boardtruck.1)'}), 'boardtruck', 1, 0, 'drivetruck', 0, 3, 'Truck', 'Driver'), 'v2')
((frozenset({'e(drivetruck.0)', 's(loadtruck.1)'}), 'drivetruck', 0, 2, 'loadtruck', 1, 2, 'Truck', 'Location'), 'v0')
((frozenset({'e(drivetruck.0)', 's(drivetruck.0)'}), 'drivetruck', 0, 3, 'drivetruck', 0, 3, 'Truck', 'Driver'), 'v2')
Package
1/1 param retained
((frozenset({'s(unloadtruck.0)', 'e(loadtruck.0)'}), 'loadtruck', 0, 1, 'unloadtruck', 0, 1, 'Package', 'Truck'), 'v0')
Location
2/2 param retained
((frozenset({'e(drivetruck.2)', 's(unloadtruck.2)'}), 'drivetruck', 2, 0, 'unloadtruck', 2, 1, 'Location', 'Truck'), 'v0')
((frozenset({'e(drivetruck.2)', 's(loadtruck.2)'}), 'drivetruck', 2, 0, 'loadtruck', 2, 1, 'Location', 'Truck'), 'v0')
1/1 param retained
((frozenset({'e(loadtruck.2)', 's(drivetruck.1)'}), 'loadtruck', 2, 1, 'drivetruck', 1, 0, 'Location', 'Truck'), 'v0')
3/3 param retained
((frozenset({'e(boardtruck.2)', 's(drivetruck.1)'}), 'boardtruck', 2, 1, 'drivetruck', 1, 0, 'Location', 'Truck'), 'v0')
((frozenset({'e(boardtruck.2)', 's(drivetruck.1)'}), 'boardtruck', 2, 0, 'drivetruck', 1, 3, 'Location', 'Driver'), 'v1')
((frozenset({'e(unloadtruck.2)', 's(drivetruck.1)'}), 'unloadtruck', 2, 1, 'drivetruck', 1, 0, 'Location', 'Truck'), 'v0')
18/18 param retained
((frozenset({'e(drivetruck.2)', 's(drivetruck.1)'}), 'drivetruck', 2, 3, 'drivetruck', 1, 3, 'Location', 'Driver'), 'v0')
((frozenset({'e(loadtruck.2)', 's(drivetruck.1)'}), 'loadtruck', 2, 1, 'drivetruck', 1, 0, 'Location', 'Truck'), 'v1')
((frozenset({'e(disembarktruck.2)', 's(boardtruck.2)'}), 'disembarktruck', 2, 0, 'boardtruck', 2, 0, 'Location', 'Driver'), 'v2')
((frozenset({'e(drivetruck.2)', 's(disembarktruck.2)'}), 'drivetruck', 2, 0, 'disembarktruck', 2, 1, 'Location', 'Truck'), 'v1')
((frozenset({'e(disembarktruck.2)', 's(boardtruck.2)'}), 'disembarktruck', 2, 1, 'boardtruck', 2, 1, 'Location', 'Truck'), 'v4')
((frozenset({'s(disembarktruck.2)', 'e(boardtruck.2)'}), 'boardtruck', 2, 0, 'disembarktruck', 2, 0, 'Location', 'Driver'), 'v0')
((frozenset({'e(drivetruck.2)', 's(unloadtruck.2)'}), 'drivetruck', 2, 0, 'unloadtruck', 2, 1, 'Location', 'Truck'), 'v1')
((frozenset({'s(disembarktruck.2)', 'e(boardtruck.2)'}), 'boardtruck', 2, 1, 'disembarktruck', 2, 1, 'Location', 'Truck'), 'v1')
((frozenset({'s(disembarktruck.2)', 'e(unloadtruck.2)'}), 'unloadtruck', 2, 1, 'disembarktruck', 2, 1, 'Location', 'Truck'), 'v1')
((frozenset({'e(boardtruck.2)', 's(drivetruck.1)'}), 'boardtruck', 2, 1, 'drivetruck', 1, 0, 'Location', 'Truck'), 'v1')
((frozenset({'e(drivetruck.1)', 's(drivetruck.2)'}), 'drivetruck', 1, 3, 'drivetruck', 2, 3, 'Location', 'Driver'), 'v10')
((frozenset({'e(drivetruck.2)', 's(drivetruck.1)'}), 'drivetruck', 2, 0, 'drivetruck', 1, 0, 'Location', 'Truck'), 'v1')
((frozenset({'s(loadtruck.2)', 'e(boardtruck.2)'}), 'boardtruck', 2, 1, 'loadtruck', 2, 1, 'Location', 'Truck'), 'v1')
((frozenset({'e(boardtruck.2)', 's(drivetruck.1)'}), 'boardtruck', 2, 0, 'drivetruck', 1, 3, 'Location', 'Driver'), 'v0')
((frozenset({'e(unloadtruck.2)', 's(drivetruck.1)'}), 'unloadtruck', 2, 1, 'drivetruck', 1, 0, 'Location', 'Truck'), 'v1')
((frozenset({'e(drivetruck.2)', 's(disembarktruck.2)'}), 'drivetruck', 2, 3, 'disembarktruck', 2, 0, 'Location', 'Driver'), 'v0')
((frozenset({'e(drivetruck.2)', 's(loadtruck.2)'}), 'drivetruck', 2, 0, 'loadtruck', 2, 1, 'Location', 'Truck'), 'v1')
((frozenset({'e(drivetruck.1)', 's(drivetruck.2)'}), 'drivetruck', 1, 0, 'drivetruck', 2, 0, 'Location', 'Truck'), 'v17')
###Markdown
Step 8: (TODO) Static Preconditions via LOP

As a further enhancement, one can add step 8 from the LOCM paper: extraction of static preconditions. However, the LOP algorithm is a better version of that step. Insert [LOP](https://www.aaai.org/ocs/index.php/ICAPS/ICAPS15/paper/viewFile/10621/10401) here for finding static preconditions.

Step 9: Formation of PDDL Schema
###Code
# get action schema
print(";;********************Learned PDDL domain******************")
output_file = "output/"+ domain_name + "/" + domain_name + ".pddl"
write_file = open(output_file, 'w')
write_line = "(define"
write_line += " (domain "+ domain_name+")\n"
write_line += " (:requirements :typing)\n"
write_line += " (:types"
for class_name in class_names:
write_line += " " + class_name
write_line += ")\n"
write_line += " (:predicates\n"
# one predicate to represent each object state
predicates = []
for class_index, pb_per_class in enumerate(para_bind_overall_fault_removed):
for fsm_no, pbs_per_fsm in enumerate(pb_per_class):
for state_index, state in enumerate(state_machines_overall_list[class_index][fsm_no].nodes()):
state_set = set(state.split('|'))
predicate = ""
write_line += " (" + class_names[class_index] + "_fsm" + str(fsm_no) + "_" + state
predicate += " (" + class_names[class_index] + "_fsm" + str(fsm_no) + "_" + state
for pb in pbs_per_fsm:
if set(pb[0][0]) <= state_set:
if " ?"+pb[1] + " - " + str(pb[0][8]) not in predicate:
write_line += " ?"+pb[1] + " - " + str(pb[0][8])
predicate += " ?"+pb[1] + " - " + str(pb[0][8])
write_line += ")\n"
predicate += ")"
predicates.append(predicate)
write_line += " )\n"
for action_index, action in enumerate(actions):
write_line += "\n"
write_line += " (:action"
write_line += " " + action + " "
write_line += " :parameters"
write_line += " ("
arg_already_written_flag = False
params_per_action = []
args_per_action = []
for seq in sequences:
for actarg_tuple in seq:
if not arg_already_written_flag:
if actarg_tuple[0] == action:
arglist = []
for arg in actarg_tuple[1]:
write_line += "?"+arg + " - " + class_names[get_class_index(arg,classes)] + " "
arglist.append(arg)
args_per_action.append(arglist)
params_per_action.append(actarg_tuple[1])
arg_already_written_flag = True
write_line += ")\n"
# need to use FSMS to get preconditions and effects.
# Start-state = precondition. End state= Effect
preconditions = []
effects = []
for arglist in params_per_action:
for arg in arglist:
current_class_index = get_class_index(arg, classes)
for fsm_no, G in enumerate(state_machines_overall_list[current_class_index]):
#
for start, end, weight in G.edges(data='weight'):
_actions = weight.split('|')
for _action in _actions:
if _action.split('.')[0] == action:
for predicate in predicates:
pred = predicate.split()[0].lstrip("(")
clss = pred.split('_')[0]
fsm = pred.split('_')[1]
state = set(pred.split('_')[2].replace('))',')').split('|'))
if clss == class_names[current_class_index]:
if fsm == "fsm" + str(fsm_no):
if state == set(start.split('|')):
if predicate not in preconditions:
preconditions.append(predicate)
if state == set(end.split('|')):
if predicate not in effects:
effects.append(predicate)
break
write_line += " :precondition"
write_line += " (and\n"
for precondition in preconditions:
# precondition = precondition.replace(?)
write_line += " "+precondition+"\n"
write_line += " )\n"
write_line += " :effect"
write_line += " (and\n"
for effect in effects:
write_line += " " + effect + "\n"
write_line += " )"
write_line += ")\n"
write_line += ")\n" #domain ending bracket
print(write_line)
write_file.write(write_line)
write_file.close()
###Output
;;********************Learned PDDL domain******************
(define (domain driverlog)
(:requirements :typing)
(:types Driver Truck Package Location)
(:predicates
(Driver_fsm0_e(disembarktruck.0)|s(boardtruck.0) ?v0 - Location ?v6 - Truck)
(Driver_fsm0_e(boardtruck.0)|e(drivetruck.3)|s(disembarktruck.0)|s(drivetruck.3) ?v1 - Truck ?v2 - Location)
(Truck_fsm0_s(boardtruck.1))
(Truck_fsm0_e(boardtruck.1)|s(loadtruck.1) ?v0 - Location)
(Truck_fsm0_e(loadtruck.1))
(Truck_fsm1_e(disembarktruck.1)|s(boardtruck.1) ?v2 - Driver ?v4 - Location)
(Truck_fsm1_s(unloadtruck.1))
(Truck_fsm1_e(boardtruck.1)|e(unloadtruck.1)|s(disembarktruck.1) ?v0 - Location ?v3 - Driver)
(Truck_fsm2_e(disembarktruck.1))
(Truck_fsm2_s(unloadtruck.1))
(Truck_fsm2_e(unloadtruck.1)|s(disembarktruck.1) ?v0 - Location)
(Truck_fsm2_s(loadtruck.1))
(Truck_fsm2_e(loadtruck.1))
(Truck_fsm3_e(disembarktruck.1)|s(boardtruck.1) ?v1 - Location ?v6 - Driver)
(Truck_fsm3_e(loadtruck.1)|e(unloadtruck.1)|e(drivetruck.0)|e(boardtruck.1)|s(drivetruck.0)|s(disembarktruck.1)|s(loadtruck.1)|s(unloadtruck.1) ?v0 - Location ?v2 - Driver)
(Package_fsm0_s(loadtruck.0))
(Package_fsm0_e(loadtruck.0)|s(unloadtruck.0) ?v0 - Truck)
(Package_fsm0_e(unloadtruck.0))
(Location_fsm0_e(unloadtruck.2))
(Location_fsm0_e(loadtruck.2))
(Location_fsm0_s(drivetruck.2))
(Location_fsm0_e(drivetruck.2)|s(unloadtruck.2)|s(loadtruck.2) ?v0 - Truck)
(Location_fsm1_s(loadtruck.2))
(Location_fsm1_e(loadtruck.2)|s(drivetruck.1) ?v0 - Truck)
(Location_fsm1_s(disembarktruck.2))
(Location_fsm1_e(disembarktruck.2))
(Location_fsm1_e(drivetruck.1))
(Location_fsm2_s(unloadtruck.2))
(Location_fsm2_e(unloadtruck.2)|e(boardtruck.2)|s(drivetruck.1) ?v0 - Truck ?v1 - Driver)
(Location_fsm2_e(drivetruck.1)|s(boardtruck.2))
(Location_fsm3_e(disembarktruck.2)|e(drivetruck.1)|s(boardtruck.2)|s(drivetruck.2) ?v2 - Driver ?v4 - Truck ?v10 - Driver ?v17 - Truck)
(Location_fsm3_e(drivetruck.2)|s(unloadtruck.2)|e(boardtruck.2)|e(loadtruck.2)|e(unloadtruck.2)|s(drivetruck.1)|s(disembarktruck.2)|s(loadtruck.2) ?v0 - Driver ?v1 - Truck)
)
(:action disembarktruck :parameters (?driver1 - Driver ?truck2 - Truck ?s4 - Location )
:precondition (and
(Driver_fsm0_e(boardtruck.0)|e(drivetruck.3)|s(disembarktruck.0)|s(drivetruck.3) ?v1 - Truck ?v2 - Location)
(Truck_fsm1_e(boardtruck.1)|e(unloadtruck.1)|s(disembarktruck.1) ?v0 - Location ?v3 - Driver)
(Truck_fsm2_e(unloadtruck.1)|s(disembarktruck.1) ?v0 - Location)
(Truck_fsm3_e(loadtruck.1)|e(unloadtruck.1)|e(drivetruck.0)|e(boardtruck.1)|s(drivetruck.0)|s(disembarktruck.1)|s(loadtruck.1)|s(unloadtruck.1) ?v0 - Location ?v2 - Driver)
(Location_fsm1_s(disembarktruck.2))
)
:effect (and
(Driver_fsm0_e(disembarktruck.0)|s(boardtruck.0) ?v0 - Location ?v6 - Truck)
(Truck_fsm1_e(disembarktruck.1)|s(boardtruck.1) ?v2 - Driver ?v4 - Location)
(Truck_fsm2_e(disembarktruck.1))
(Truck_fsm3_e(disembarktruck.1)|s(boardtruck.1) ?v1 - Location ?v6 - Driver)
(Location_fsm1_e(disembarktruck.2))
))
(:action boardtruck :parameters (?driver1 - Driver ?truck2 - Truck ?s4 - Location )
:precondition (and
(Driver_fsm0_e(disembarktruck.0)|s(boardtruck.0) ?v0 - Location ?v6 - Truck)
(Truck_fsm0_s(boardtruck.1))
(Truck_fsm1_e(disembarktruck.1)|s(boardtruck.1) ?v2 - Driver ?v4 - Location)
(Truck_fsm3_e(disembarktruck.1)|s(boardtruck.1) ?v1 - Location ?v6 - Driver)
(Location_fsm2_e(drivetruck.1)|s(boardtruck.2))
(Location_fsm3_e(disembarktruck.2)|e(drivetruck.1)|s(boardtruck.2)|s(drivetruck.2) ?v2 - Driver ?v4 - Truck ?v10 - Driver ?v17 - Truck)
)
:effect (and
(Driver_fsm0_e(boardtruck.0)|e(drivetruck.3)|s(disembarktruck.0)|s(drivetruck.3) ?v1 - Truck ?v2 - Location)
(Truck_fsm0_e(boardtruck.1)|s(loadtruck.1) ?v0 - Location)
(Truck_fsm1_e(boardtruck.1)|e(unloadtruck.1)|s(disembarktruck.1) ?v0 - Location ?v3 - Driver)
(Truck_fsm3_e(loadtruck.1)|e(unloadtruck.1)|e(drivetruck.0)|e(boardtruck.1)|s(drivetruck.0)|s(disembarktruck.1)|s(loadtruck.1)|s(unloadtruck.1) ?v0 - Location ?v2 - Driver)
(Location_fsm2_e(unloadtruck.2)|e(boardtruck.2)|s(drivetruck.1) ?v0 - Truck ?v1 - Driver)
(Location_fsm3_e(drivetruck.2)|s(unloadtruck.2)|e(boardtruck.2)|e(loadtruck.2)|e(unloadtruck.2)|s(drivetruck.1)|s(disembarktruck.2)|s(loadtruck.2) ?v0 - Driver ?v1 - Truck)
))
(:action unloadtruck :parameters (?package5 - Package ?truck2 - Truck ?s3 - Location )
:precondition (and
(Package_fsm0_e(loadtruck.0)|s(unloadtruck.0) ?v0 - Truck)
(Truck_fsm1_s(unloadtruck.1))
(Truck_fsm2_s(unloadtruck.1))
(Truck_fsm3_e(loadtruck.1)|e(unloadtruck.1)|e(drivetruck.0)|e(boardtruck.1)|s(drivetruck.0)|s(disembarktruck.1)|s(loadtruck.1)|s(unloadtruck.1) ?v0 - Location ?v2 - Driver)
(Location_fsm0_e(drivetruck.2)|s(unloadtruck.2)|s(loadtruck.2) ?v0 - Truck)
(Location_fsm2_s(unloadtruck.2))
(Location_fsm3_e(drivetruck.2)|s(unloadtruck.2)|e(boardtruck.2)|e(loadtruck.2)|e(unloadtruck.2)|s(drivetruck.1)|s(disembarktruck.2)|s(loadtruck.2) ?v0 - Driver ?v1 - Truck)
)
:effect (and
(Package_fsm0_e(unloadtruck.0))
(Truck_fsm1_e(boardtruck.1)|e(unloadtruck.1)|s(disembarktruck.1) ?v0 - Location ?v3 - Driver)
(Truck_fsm2_e(unloadtruck.1)|s(disembarktruck.1) ?v0 - Location)
(Truck_fsm3_e(loadtruck.1)|e(unloadtruck.1)|e(drivetruck.0)|e(boardtruck.1)|s(drivetruck.0)|s(disembarktruck.1)|s(loadtruck.1)|s(unloadtruck.1) ?v0 - Location ?v2 - Driver)
(Location_fsm0_e(unloadtruck.2))
(Location_fsm2_e(unloadtruck.2)|e(boardtruck.2)|s(drivetruck.1) ?v0 - Truck ?v1 - Driver)
(Location_fsm3_e(drivetruck.2)|s(unloadtruck.2)|e(boardtruck.2)|e(loadtruck.2)|e(unloadtruck.2)|s(drivetruck.1)|s(disembarktruck.2)|s(loadtruck.2) ?v0 - Driver ?v1 - Truck)
))
(:action loadtruck :parameters (?package5 - Package ?truck2 - Truck ?s4 - Location )
:precondition (and
(Package_fsm0_s(loadtruck.0))
(Truck_fsm0_e(boardtruck.1)|s(loadtruck.1) ?v0 - Location)
(Truck_fsm2_s(loadtruck.1))
(Truck_fsm3_e(loadtruck.1)|e(unloadtruck.1)|e(drivetruck.0)|e(boardtruck.1)|s(drivetruck.0)|s(disembarktruck.1)|s(loadtruck.1)|s(unloadtruck.1) ?v0 - Location ?v2 - Driver)
(Location_fsm0_e(drivetruck.2)|s(unloadtruck.2)|s(loadtruck.2) ?v0 - Truck)
(Location_fsm1_s(loadtruck.2))
(Location_fsm3_e(drivetruck.2)|s(unloadtruck.2)|e(boardtruck.2)|e(loadtruck.2)|e(unloadtruck.2)|s(drivetruck.1)|s(disembarktruck.2)|s(loadtruck.2) ?v0 - Driver ?v1 - Truck)
)
:effect (and
(Package_fsm0_e(loadtruck.0)|s(unloadtruck.0) ?v0 - Truck)
(Truck_fsm0_e(loadtruck.1))
(Truck_fsm2_e(loadtruck.1))
(Truck_fsm3_e(loadtruck.1)|e(unloadtruck.1)|e(drivetruck.0)|e(boardtruck.1)|s(drivetruck.0)|s(disembarktruck.1)|s(loadtruck.1)|s(unloadtruck.1) ?v0 - Location ?v2 - Driver)
(Location_fsm0_e(loadtruck.2))
(Location_fsm1_e(loadtruck.2)|s(drivetruck.1) ?v0 - Truck)
(Location_fsm3_e(drivetruck.2)|s(unloadtruck.2)|e(boardtruck.2)|e(loadtruck.2)|e(unloadtruck.2)|s(drivetruck.1)|s(disembarktruck.2)|s(loadtruck.2) ?v0 - Driver ?v1 - Truck)
))
(:action drivetruck :parameters (?truck2 - Truck ?s4 - Location ?s1 - Location ?driver1 - Driver )
:precondition (and
(Truck_fsm3_e(loadtruck.1)|e(unloadtruck.1)|e(drivetruck.0)|e(boardtruck.1)|s(drivetruck.0)|s(disembarktruck.1)|s(loadtruck.1)|s(unloadtruck.1) ?v0 - Location ?v2 - Driver)
(Location_fsm0_s(drivetruck.2))
(Location_fsm1_e(loadtruck.2)|s(drivetruck.1) ?v0 - Truck)
(Location_fsm2_e(unloadtruck.2)|e(boardtruck.2)|s(drivetruck.1) ?v0 - Truck ?v1 - Driver)
(Location_fsm3_e(drivetruck.2)|s(unloadtruck.2)|e(boardtruck.2)|e(loadtruck.2)|e(unloadtruck.2)|s(drivetruck.1)|s(disembarktruck.2)|s(loadtruck.2) ?v0 - Driver ?v1 - Truck)
(Driver_fsm0_e(boardtruck.0)|e(drivetruck.3)|s(disembarktruck.0)|s(drivetruck.3) ?v1 - Truck ?v2 - Location)
)
:effect (and
(Truck_fsm3_e(loadtruck.1)|e(unloadtruck.1)|e(drivetruck.0)|e(boardtruck.1)|s(drivetruck.0)|s(disembarktruck.1)|s(loadtruck.1)|s(unloadtruck.1) ?v0 - Location ?v2 - Driver)
(Location_fsm0_e(drivetruck.2)|s(unloadtruck.2)|s(loadtruck.2) ?v0 - Truck)
(Location_fsm1_e(drivetruck.1))
(Location_fsm2_e(drivetruck.1)|s(boardtruck.2))
(Location_fsm3_e(disembarktruck.2)|e(drivetruck.1)|s(boardtruck.2)|s(drivetruck.2) ?v2 - Driver ?v4 - Truck ?v10 - Driver ?v17 - Truck)
(Driver_fsm0_e(boardtruck.0)|e(drivetruck.3)|s(disembarktruck.0)|s(drivetruck.3) ?v1 - Truck ?v2 - Location)
))
)
###Markdown
Validating PDDL -- Fixing Syntax by replacing predicates with state dictionary values

This is required because PDDL syntax doesn't support the extra parentheses () that occur in the state names (transitions occurring in states as 'start(t1)' or 'end(t1)').
###Code
# get action schema
print(";;********************Learned PDDL domain******************")
output_file = "output/"+ domain_name + "/" + domain_name + ".pddl"
write_file = open(output_file, 'w')
write_line = "(define"
write_line += " (domain "+ domain_name+")\n"
write_line += " (:requirements :typing)\n"
write_line += " (:types"
for class_name in class_names:
write_line += " " + class_name
write_line += ")\n"
write_line += " (:predicates\n"
# one predicate to represent each object state
predicates = []
for class_index, pb_per_class in enumerate(para_bind_overall_fault_removed):
for fsm_no, pbs_per_fsm in enumerate(pb_per_class):
state_mapping = state_mappings_class[class_index][fsm_no]
for state_index, state in enumerate(state_machines_overall_list[class_index][fsm_no].nodes()):
state_set = set(state.split('|'))
predicate = ""
write_line += " (" + class_names[class_index] + "_fsm" + str(fsm_no) + "_state" + str(state_mapping[state])
predicate += " (" + class_names[class_index] + "_fsm" + str(fsm_no) + "_state" + str(state_mapping[state])
for pb in pbs_per_fsm:
if set(pb[0][0]) <= state_set:
if " ?"+pb[1] + " - " + str(pb[0][8]) not in predicate:
write_line += " ?"+pb[1] + " - " + str(pb[0][8])
predicate += " ?"+pb[1] + " - " + str(pb[0][8])
write_line += ")\n"
predicate += ")"
predicates.append(predicate)
write_line += " )\n"
for action_index, action in enumerate(actions):
write_line += " (:action"
write_line += " " + action + " "
write_line += " :parameters"
write_line += " ("
arg_already_written_flag = False
params_per_action = []
args_per_action = []
for seq in sequences:
for actarg_tuple in seq:
if not arg_already_written_flag:
if actarg_tuple[0] == action:
arglist = []
for arg in actarg_tuple[1]:
write_line += "?"+arg + " - " + class_names[get_class_index(arg,classes)] + " "
arglist.append(arg)
args_per_action.append(arglist)
params_per_action.append(actarg_tuple[1])
arg_already_written_flag = True
write_line += ")\n"
# need to use FSMS to get preconditions and effects.
# Start-state = precondition. End state= Effect
preconditions = []
effects = []
for arglist in params_per_action:
for arg in arglist:
current_class_index = get_class_index(arg, classes)
for fsm_no, G in enumerate(state_machines_overall_list[current_class_index]):
G_int = state_machines_overall_list_2[current_class_index][fsm_no]
state_mapping = state_mappings_class[current_class_index][fsm_no]
for start, end, weight in G_int.edges(data='weight'):
_actions = weight.split('|')
for _action in _actions:
if _action.split('.')[0] == action:
for predicate in predicates:
pred = predicate.split()[0].lstrip("(")
clss = pred.split('_')[0]
fsm = pred.split('_')[1]
state_ind = pred.split('_')[2].rstrip(")")[-1]
if clss == class_names[current_class_index]:
if fsm == "fsm" + str(fsm_no):
if int(state_ind) == int(start):
if predicate not in preconditions:
preconditions.append(predicate)
if int(state_ind) == int(end):
if predicate not in effects:
effects.append(predicate)
break
write_line += " :precondition"
write_line += " (and\n"
for precondition in preconditions:
write_line += " "+precondition+"\n"
write_line += " )\n"
write_line += " :effect"
write_line += " (and\n"
for effect in effects:
write_line += " " + effect + "\n"
write_line += " )"
write_line += ")\n\n"
write_line += ")\n" #domain ending bracket
print(write_line)
write_file.write(write_line)
write_file.close()
###Output
;;********************Learned PDDL domain******************
(define (domain driverlog)
(:requirements :typing)
(:types Driver Truck Package Location)
(:predicates
(Driver_fsm0_state0 ?v0 - Location ?v6 - Truck)
(Driver_fsm0_state1 ?v1 - Truck ?v2 - Location)
(Truck_fsm0_state0)
(Truck_fsm0_state1 ?v0 - Location)
(Truck_fsm0_state2)
(Truck_fsm1_state0 ?v2 - Driver ?v4 - Location)
(Truck_fsm1_state1)
(Truck_fsm1_state2 ?v0 - Location ?v3 - Driver)
(Truck_fsm2_state0)
(Truck_fsm2_state1)
(Truck_fsm2_state2 ?v0 - Location)
(Truck_fsm2_state3)
(Truck_fsm2_state4)
(Truck_fsm3_state0 ?v1 - Location ?v6 - Driver)
(Truck_fsm3_state1 ?v0 - Location ?v2 - Driver)
(Package_fsm0_state0)
(Package_fsm0_state1 ?v0 - Truck)
(Package_fsm0_state2)
(Location_fsm0_state0)
(Location_fsm0_state1)
(Location_fsm0_state2)
(Location_fsm0_state3 ?v0 - Truck)
(Location_fsm1_state0)
(Location_fsm1_state1 ?v0 - Truck)
(Location_fsm1_state2)
(Location_fsm1_state3)
(Location_fsm1_state4)
(Location_fsm2_state0)
(Location_fsm2_state1 ?v0 - Truck ?v1 - Driver)
(Location_fsm2_state2)
(Location_fsm3_state0 ?v2 - Driver ?v4 - Truck ?v10 - Driver ?v17 - Truck)
(Location_fsm3_state1 ?v0 - Driver ?v1 - Truck)
)
(:action disembarktruck :parameters (?driver1 - Driver ?truck2 - Truck ?s4 - Location )
:precondition (and
(Driver_fsm0_state1 ?v1 - Truck ?v2 - Location)
(Truck_fsm1_state2 ?v0 - Location ?v3 - Driver)
(Truck_fsm2_state2 ?v0 - Location)
(Truck_fsm3_state1 ?v0 - Location ?v2 - Driver)
(Location_fsm1_state2)
)
:effect (and
(Driver_fsm0_state0 ?v0 - Location ?v6 - Truck)
(Truck_fsm1_state0 ?v2 - Driver ?v4 - Location)
(Truck_fsm2_state0)
(Truck_fsm3_state0 ?v1 - Location ?v6 - Driver)
(Location_fsm1_state3)
))
(:action boardtruck :parameters (?driver1 - Driver ?truck2 - Truck ?s4 - Location )
:precondition (and
(Driver_fsm0_state0 ?v0 - Location ?v6 - Truck)
(Truck_fsm0_state0)
(Truck_fsm1_state0 ?v2 - Driver ?v4 - Location)
(Truck_fsm3_state0 ?v1 - Location ?v6 - Driver)
(Location_fsm2_state2)
(Location_fsm3_state0 ?v2 - Driver ?v4 - Truck ?v10 - Driver ?v17 - Truck)
)
:effect (and
(Driver_fsm0_state1 ?v1 - Truck ?v2 - Location)
(Truck_fsm0_state1 ?v0 - Location)
(Truck_fsm1_state2 ?v0 - Location ?v3 - Driver)
(Truck_fsm3_state1 ?v0 - Location ?v2 - Driver)
(Location_fsm2_state1 ?v0 - Truck ?v1 - Driver)
(Location_fsm3_state1 ?v0 - Driver ?v1 - Truck)
))
(:action unloadtruck :parameters (?package5 - Package ?truck2 - Truck ?s3 - Location )
:precondition (and
(Package_fsm0_state1 ?v0 - Truck)
(Truck_fsm1_state1)
(Truck_fsm2_state1)
(Truck_fsm3_state1 ?v0 - Location ?v2 - Driver)
(Location_fsm0_state3 ?v0 - Truck)
(Location_fsm2_state0)
(Location_fsm3_state1 ?v0 - Driver ?v1 - Truck)
)
:effect (and
(Package_fsm0_state2)
(Truck_fsm1_state2 ?v0 - Location ?v3 - Driver)
(Truck_fsm2_state2 ?v0 - Location)
(Truck_fsm3_state1 ?v0 - Location ?v2 - Driver)
(Location_fsm0_state0)
(Location_fsm2_state1 ?v0 - Truck ?v1 - Driver)
(Location_fsm3_state1 ?v0 - Driver ?v1 - Truck)
))
(:action loadtruck :parameters (?package5 - Package ?truck2 - Truck ?s4 - Location )
:precondition (and
(Package_fsm0_state0)
(Truck_fsm0_state1 ?v0 - Location)
(Truck_fsm2_state3)
(Truck_fsm3_state1 ?v0 - Location ?v2 - Driver)
(Location_fsm0_state3 ?v0 - Truck)
(Location_fsm1_state0)
(Location_fsm3_state1 ?v0 - Driver ?v1 - Truck)
)
:effect (and
(Package_fsm0_state1 ?v0 - Truck)
(Truck_fsm0_state2)
(Truck_fsm2_state4)
(Truck_fsm3_state1 ?v0 - Location ?v2 - Driver)
(Location_fsm0_state1)
(Location_fsm1_state1 ?v0 - Truck)
(Location_fsm3_state1 ?v0 - Driver ?v1 - Truck)
))
(:action drivetruck :parameters (?truck2 - Truck ?s4 - Location ?s1 - Location ?driver1 - Driver )
:precondition (and
(Truck_fsm3_state1 ?v0 - Location ?v2 - Driver)
(Location_fsm0_state2)
(Location_fsm1_state1 ?v0 - Truck)
(Location_fsm2_state1 ?v0 - Truck ?v1 - Driver)
(Location_fsm3_state1 ?v0 - Driver ?v1 - Truck)
(Driver_fsm0_state1 ?v1 - Truck ?v2 - Location)
)
:effect (and
(Truck_fsm3_state1 ?v0 - Location ?v2 - Driver)
(Location_fsm0_state3 ?v0 - Truck)
(Location_fsm1_state4)
(Location_fsm2_state2)
(Location_fsm3_state0 ?v2 - Driver ?v4 - Truck ?v10 - Driver ?v17 - Truck)
(Driver_fsm0_state1 ?v1 - Truck ?v2 - Location)
))
)
###Markdown
State Mapping: What are these states?
###Code
# To see what these states are, look at the following graphs
for index, fsm_graphs in enumerate(state_machines_overall_list):
printmd("## Class " + str(index))
printmd("### "+ class_names[index])
print("Number of FSMS:" + str(num_fsms))
for fsm_no, G in enumerate(fsm_graphs):
printmd("Fsm "+ str(fsm_no))
plot_cytographs_fsm(state_machines_overall_list_2[index][fsm_no], domain_name)
plot_cytographs_fsm(G, domain_name)
###Output
_____no_output_____
###Markdown
State Mappings: Text format
###Code
for index, sm_fsm in enumerate(state_mappings_class):
printmd("## Class " + str(index))
printmd("### "+ class_names[index])
for fsm_no, mapping in enumerate(sm_fsm):
printmd("Fsm "+ str(fsm_no))
pprint(mapping)
###Output
_____no_output_____ |
2020/Day 05.ipynb | ###Markdown
Binary by any other name

* https://adventofcode.com/2020/day/5

Today's task is to decode binary. The seat ID, with its row and column numbers, is a typical pack-two-values-into-one-binary-number scheme, with the number here consisting of 10 digits; the 3 least significant encode the column, the other 7 the row. Eric Wastl's description is just reading out the binary number from the most significant bit to the least.

In this scheme, the 0s and 1s have been replaced by letters: `F` and `B` for the row number, and `L` and `R` for the column number. If you use the [`str.translate()` function](https://docs.python.org/3/library/stdtypes.html#str.translate) this is trivially turned back into plain binary notation. Don't worry about separating out the two values again; that's just a [right-shift operation](https://en.wikipedia.org/wiki/Bitwise_operation#Arithmetic_shift) and a [bit mask](https://en.wikipedia.org/wiki/Bitwise_operation#AND).

As so often with AoC puzzles, I've used a dataclass for this (I'm rather fond of the module) so I can easily inspect the puzzle results and have a nice representation. It also made it trivially easy to make the objects orderable, to find the max seat ID. I've given the implementation a [`__slots__` attribute](https://docs.python.org/3/reference/datamodel.html#slots) almost out of habit; it is a very simple immutable data object, and we don't _need_ to store arbitrary additional attributes, so why pay the memory price? And by making the class immutable (`frozen=True`) we get hashability for free, which came in handy in part 2.
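As a quick standalone illustration of that translate / shift / mask idea (this cell is a sketch added for clarity, not part of the original solution):
###Code
# decode one boarding pass: translate the letters to bits, parse as base 2, then shift and mask
decode_map = str.maketrans('FBLR', '0101')
seat_id = int("FBFBBFFRLR".translate(decode_map), 2)
print(seat_id, seat_id >> 3, seat_id & 7)  # 357 44 5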
###Code
from dataclasses import dataclass
from typing import FrozenSet, Mapping, Sequence, Tuple
from itertools import product
_from_pass = str.maketrans('FBLR', '0101')
_to_row = str.maketrans('01', 'FB')
_to_col = str.maketrans('01', 'LR')
@dataclass(order=True, frozen=True)
class SeatId:
__slots__ = ('id',)
id: int
@property
def row(self) -> int:
return self.id >> 3
@property
def rowid(self, _tm=_to_row) -> str:
return format(self.row, '07b').translate(_tm)
@property
def col(self) -> int:
return self.id & 7
@property
def colid(self, _tm=_to_col) -> str:
return format(self.col, '03b').translate(_tm)
def __repr__(self) -> str:
return f"<SeatID {self.id} {self.rowid}-{self.colid}>"
@classmethod
def from_boardingpass(cls, pass_: str, _tm=_from_pass) -> 'SeatId':
return cls(int(pass_.translate(_tm), 2))
tests: Mapping[str, Tuple[int, int, int]] = {
"FBFBBFFRLR": (44, 5, 357),
"BFFFBBFRRR": (70, 7, 567),
"FFFBBBFRRR": (14, 7, 119),
"BBFFBBFRLL": (102, 4, 820),
}
for pass_, (row, col, id_) in tests.items():
seatid = SeatId.from_boardingpass(pass_)
assert (seatid.row, seatid.col, seatid.id) == (row, col, id_)
assert max(map(SeatId.from_boardingpass, tests)).id == 820
import aocd
seatids: Sequence[SeatId] = [
SeatId.from_boardingpass(pass_)
for pass_ in aocd.get_data(day=5, year=2020).splitlines()
]
print("Part 1:", max(seatids).id)
###Output
Part 1: 866
###Markdown
Finding your seat

Now we get to apply a little logic. From the instructions we know that not all possible row numbers _exist_; the plane is missing row numbers at the front and back, but the plane is also *full*, so we can assume that there is someone sitting in every _existing_ row. So the possible row IDs are simply the range from the minimum to the maximum existing row ID in the input. We also know we are not sitting in the first or last row.

This is not a large problem; even if there were no seats missing, there are _at most_ 1008 possible candidate seats (8 columns times 126 rows). Since we are looking for a numeric seat ID where the IDs before and after do exist (and so are occupied), we only need to generate all possible seat ID numbers (from `min(rowids) << 3 & 7` through to `max(rowids) << 3`), iterate over these with a *sliding window*, and find the one case where a missing seat is flanked by two occupied seats.

A sliding window is an iterator over an input iterable that gives you the first `n` elements as a tuple as the first element it produces. The next element it gives you is a tuple with the first element dropped and another element from the input iterable added to the end. E.g. if your iterator starts with `'foo'`, `'bar'`, `'baz'`, `'spam'`, `'ham'`, then a sliding window with `n` set to `3` would first produce `('foo', 'bar', 'baz')`, then `('bar', 'baz', 'spam')`, and then `('baz', 'spam', 'ham')`. To find our own seat, a sliding window with 3 elements gives us the preceding ID, our possible boarding pass, and the subsequent ID. I built my sliding window using [`itertools.tee()`](https://docs.python.org/3/library/itertools.html#itertools.tee) and [`itertools.islice()`](https://docs.python.org/3/library/itertools.html#itertools.islice); the `tee()` object handles the buffering for us, and with only 3 elements the warnings in the documentation about *significant auxiliary storage* don't apply.

Note that we generate IDs for the seat just before the first possible seat, and for the one just after. Say row 5 is the first possible row; then our seat would have, at minimum, ID `(6 * 8) + 0 == 48`. But then the boarding pass with ID 47 would have to exist (as well as boarding pass 49), so we generate all numbers starting at `(8 * 6) - 1`. Ditto for the last possible seat; we need not just `(max(rowids) - 1) * 8 + 7`, but want to generate `(max(rowids) * 8 + 0)` too, just so we can iterate with a sliding window that includes those two IDs at the start and end.
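A tiny standalone demonstration of that sliding window, using the same `tee()`/`islice()` construction as the real solution below (the list of strings is just the example from the text):
###Code
# sketch: a 3-wide sliding window built from tee() and islice()
from itertools import islice, tee

values = ['foo', 'bar', 'baz', 'spam', 'ham']
windowed = zip(*(islice(it, start, None) for start, it in enumerate(tee(values, 3))))
print(list(windowed))  # [('foo', 'bar', 'baz'), ('bar', 'baz', 'spam'), ('baz', 'spam', 'ham')]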
###Code
from typing import Set
from itertools import tee, islice
def find_empty(seatids: Sequence[SeatId]) -> SeatId:
occupied = set(seatids)
# not the first, and not the last row, but include the seat ids before and after
candidates = map(SeatId, range(min(s.row for s in occupied) << 3 & 7, max(s.row for s in occupied) << 3))
# sliding window, every 3 seatids, the one before, the candidate seat, and the one after.
windowed = zip(*(islice(it, start, None) for start, it in enumerate(tee(candidates, 3))))
# b, s, a => before, seat, after. Yes, I fell for the lure of the single line expression.
return next(s for b, s, a in windowed if s not in occupied and len({a, b} & occupied) == 2)
print("Part 2:", find_empty(seatids).id)
###Output
Part 2: 583
|
hw7-fastSLAM/.ipynb_checkpoints/fastSLAM ground robot-checkpoint.ipynb | ###Markdown
ME 595r - Autonomous Systems Extended Kalman Filter Dynamic Model

This filter will estimate the states of a ground robot with velocity inputs and a sensor that measures range and bearing to landmarks. The state is parameterized as
$$ x = \begin{bmatrix}x \\ y \\ \theta \end{bmatrix} $$
The commanded input is
$$ \hat{u} = \begin{bmatrix} \hat{v} \\ \hat{\omega} \end{bmatrix} $$
The true input to the system is equal to the commanded input corrupted by noise
$$ u = \hat{u} + \xi_u $$
where $ \xi_u $ is a zero-mean multivariate random variable with covariance
$$ \Sigma_{\xi_u} = \begin{bmatrix} \alpha_1 v_t^2 + \alpha_2 \omega_t^2 & 0 \\ 0 & \alpha_3 v_t^2 + \alpha_4 \omega_t^2 \end{bmatrix} $$
The state evolves as
$$ \bar{x}_t = f(x, u) = x_{t-1} + \begin{bmatrix} -\tfrac{v_t}{\omega_t}\sin(\theta_{t-1}) + \tfrac{v_t}{\omega_t}\sin(\theta_{t-1} + \omega_t \Delta t) \\ \tfrac{v_t}{\omega_t}\cos(\theta_{t-1}) - \tfrac{v_t}{\omega_t}\cos(\theta_{t-1} + \omega_t \Delta t) \\ \omega_t \Delta t\end{bmatrix} $$
For the Extended Kalman filter, we need to linearize the dynamic model about our state and our input:
$$ A_d = \frac{\partial f}{\partial x} = \begin{bmatrix}1 & 0 & -\tfrac{v_t}{\omega_t}\cos(\theta_{t-1}) + \tfrac{v_t}{\omega_t}\cos(\theta_{t-1} + \omega_t \Delta t) \\ 0 & 1 & -\tfrac{v_t}{\omega_t}\sin(\theta_{t-1}) + \tfrac{v_t}{\omega_t}\sin(\theta_{t-1} + \omega_t \Delta t) \\ 0 & 0 & 1\end{bmatrix} $$

Measurements and Noise

We will measure the range and bearing to landmarks.

Implementation
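A minimal standalone sketch of the propagation equation above (the function name `propagate_demo` and the numbers are made up for illustration; the notebook's own `f(x, u, dt)` defined further down is what the filter actually uses):
###Code
# sketch: one step of the unicycle motion model defined above
import numpy as np

def propagate_demo(x, u, dt):
    px, py, theta = x
    v, w = u
    return np.array([px - v/w*np.sin(theta) + v/w*np.sin(theta + w*dt),
                     py + v/w*np.cos(theta) - v/w*np.cos(theta + w*dt),
                     theta + w*dt])

print(propagate_demo(np.array([0., 0., 0.]), np.array([1.0, 0.5]), 0.1))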
###Code
from __future__ import division
from __future__ import print_function
import numpy as np
import matplotlib.pyplot as plt
#####
# Enable this to be able to zoom plots, but it kills patches
# %matplotlib inline
# import mpld3
# mpld3.enable_notebook()
#####
from matplotlib import animation, rc
from IPython.display import HTML
from tqdm import tqdm, tqdm_notebook
import copy
#import plotly.plotly as pl
plt.rc('text', usetex=True)
plt.rc('font', family='serif')
figWidth = 11
figHeight = 8
from scipy.stats import multivariate_normal as mvn
def wrap_each(x):
for i, y in enumerate(x):
x[i] = wrap(y)
return x
def wrap(x):
while x < -np.pi:
x += 2*np.pi
while x > np.pi:
x -= 2*np.pi
return x
class Particle(object):
def __init__(self, x0, num_landmarks, g, del_g_x, R, Ts):
self.g = g
self.del_g_x = del_g_x
self.n = len(x0) # state dimension
self.l = num_landmarks
self.R = 1*R
self.x = x0
self.lx = np.zeros((2, num_landmarks))
# self.P = np.array([1e10*np.eye(2*num_landmarks) for i in xrange(num_landmarks)])
self.P = 1e10*np.eye(2)[:, :, None] + np.zeros((2, 2, num_landmarks))
# self.P = self.R[:, :, None] + np.zeros((2, 2, num_landmarks))
self.Ts = Ts
def update(self, z, landmark_idx):
# landmark_idx is a list of indices of landmarks that correspond to the z measurements
# landmark_idx should be the same length as the second dimension of z
# for any landmarks that haven't been initialized
for i, idx in enumerate(landmark_idx):
if self.lx[0, idx] == 0.:
self.lx[:, idx] = self.x[:2] + np.array([z[0, i]*np.cos(z[1, i] + self.x[2]),
z[0, i]*np.sin(z[1, i] + self.x[2])])
# self.P[:, :, idx] = np.copy(self.R)
C = self.del_g_x(self.x, self.lx[:, landmark_idx])
# use Einstein summation notation to do some crazy linalg
# for example np.einsum('mnr,ndr->mdr', A, B)
# does matrix multiply on first two dimensions, broadcasting the operation along the third
# S = C.dot(self.P.dot(C.T)) + self.R
# C_T = np.einsum('ijk->jik', C)
# print(C.shape)
# print(self.P[:, :, landmark_idx].shape)
# similar to P.dot(C.T)
S1 = np.einsum('mnr,dnr->mdr', self.P[:, :, landmark_idx], C)
# print(S1.shape)
S = np.einsum('mnr,ndr->mdr', C, S1) + self.R[:, :, None]
S_inv = np.zeros_like(S)
for i in xrange(S.shape[-1]):
S_inv[:, :, i] = np.linalg.inv(S[:, :, i])
# now do some Einstein stuff for the rest
# self.K = self.P.dot(C.T).dot(np.linalg.inv(S))
K1 = np.einsum('mnr,dnr->mdr', self.P[:, :, landmark_idx], C)
K = np.einsum('mnr,ndr->mdr', K1, S_inv)
z_hat = self.g(self.x, self.lx[:, landmark_idx])
res = z - z_hat
res[1, :] = wrap_each(res[1, :])
# self.lx[:, landmark_idx] = self.lx[:, landmark_idx] + self.K.dot(res)
# Q1 = np.einsum('nmr,ndr->mdr', C, self.P[:, :, landmark_idx])
# Q = np.einsum('mnr,ndr->mdr', Q1, C) + self.R[:, :, None]
w = 0;
for i in xrange(S.shape[-1]):
w += mvn.logpdf(res[:, i], mean=(0, 0), cov=S[:, :, i])
# print("z: {}".format(z))
# print("zHat: {}".format(z_hat[1]))
# print("x: {}".format(self.x))
# print("res: {}".format(res[1]))
# update the estimates
self.lx[:, landmark_idx] = self.lx[:, landmark_idx] + np.einsum('mnr,nr->mr', K, res)
# self.P = (np.eye(self.n + 2*self.l) - self.K.dot(C_aug)).dot(self.P)
# update the covariances
P1 = np.eye(2)[:, :, None] - np.einsum('mnr,ndr->mdr', K, C)
self.P[:, :, landmark_idx] = np.einsum('mnr,ndr->mdr', P1, self.P[:, :, landmark_idx])
return w
from scipy.stats import multivariate_normal as mvn
import copy
class FastSLAM(object):
def __init__(self, x0, num_particles, state_dim, input_dim, num_landmarks, f, g, del_g_x, R, Ts, Q=None, Qu=None):
self.f = f
self.g = g
self.n = state_dim
self.m = input_dim # input dimension
self.num_particles = num_particles
self.num_landmarks = num_landmarks
self.Qu = Qu
self.Q = Q
self.X = []
P0 = 0.0*np.eye(3)
for i in xrange(self.num_particles):
x0_p = np.random.multivariate_normal(x0, P0)
self.X.append(Particle(x0_p, num_landmarks, g, del_g_x, R, Ts))
self.best = self.X[0]
self.Ts = Ts
def lowVarSample(self, w):
Xbar = []
M = self.num_particles
r = np.random.uniform(0, 1/M)
c = w[0]
i = 0
last_i = i
unique = 1
for m in xrange(M):
u = r + m/M
while u > c:
i += 1
c = c + w[i]
Xbar.append(copy.deepcopy(self.X[i]))
if last_i != i:
unique += 1
last_i = i
self.X = Xbar
return unique
def predict(self, u):
self.u = u
# input noise case
# uHat = u[:, np.newaxis] + np.zeros((self.m, self.num_particles))
uHat = u
# propagate the particles
# pdb.set_trace()
for particle in self.X:
if self.Qu is not None:
uHat += np.random.multivariate_normal(np.zeros(self.m), self.Qu(u))
particle.x = self.f(particle.x, uHat, self.Ts)
if self.Q is not None:
particle.x += np.random.multivariate_normal(np.zeros(self.n), self.Q)
# self.X = self.f(self.X, uHat, dt)
# self.x = np.mean(self.X, axis=1)[:, np.newaxis]
# self.P = np.cov(self.X, rowvar=True)
# print(self.X.shape)
# print(self.P.shape)
# print(self.x)
def update(self, z, landmark_idx):
w = np.zeros(self.num_particles)
for i, x in enumerate(self.X):
# wi = 0.9*mvn.pdf(zHat[:, i, :].T, mean=z[:, i], cov=self.R).T
# # add in a 1% mixture of uniform over range measurements between 1m and 11m
# wi += 0.1*0.1
# w += np.log(wi)
w[i] = x.update(z, landmark_idx)
# print(w)
# logsumexp
# print("log w: {}".format(w))
max_w = np.max(w)
w = np.exp(w-max_w)
# for code simplicity, normalize the weights here
w = w/np.sum(w)
self.best_idx = np.argmax(w)
best = self.X[self.best_idx]
# print("w: {}".format(w))
unique = self.lowVarSample(w)
# print(unique)
# add some noise to account for sparsity in particles
# if unique/self.num_particles < 0.5:
# Q = self.P/((self.num_particles*unique)**(1/self.n))
# self.X += np.random.multivariate_normal(np.zeros(self.n), Q, size=self.num_particles).T
# grab the most likely particle before resampling instead
# self.x = np.mean(self.X, axis=1)[:, np.newaxis]
# self.P = np.cov(self.X, rowvar=True)
self.best = best
# initialize inputs and state truth
Ts = 0.1
Tend = 30
num_particles = 10
num_landmarks = 5
t = np.arange(start=Ts, stop=Tend+Ts, step = Ts)
alpha = np.array([0.1, 0.01, 0.01, 0.1])
v_c = 1 + 0.5*np.cos(2*np.pi*0.2*t)
omega_c = -0.2 + 2*np.cos(2*np.pi*0.6*t)
v = v_c + np.random.normal(0, alpha[0]*np.square(v_c) + alpha[1]*np.square(omega_c))
omega = omega_c + np.random.normal(0, alpha[2]*np.square(v_c) + alpha[3]*np.square(omega_c))
u_c = np.vstack((v_c, omega_c))
u = np.vstack((v, omega))
# print(u.shape)
state_dim = 3
x = np.zeros((state_dim, len(t)))
# x[:, 0] = np.array([-5, -3, np.pi/2])
x[:, 0] = np.array([0, 0, 0])
#landmarks = np.array([[6, -7, 6], [4, 8, -4]])
# num_landmarks = 40
# np.random.seed(4)
np.random.seed(5)
landmarks = np.random.uniform(low=-10., high=10., size=(2, num_landmarks))
# # define the model
# def f(x, u, dt):
# v = u.flatten()[0]
# w = u.flatten()[1]
# theta = x.flatten()[2]
# dx = np.array([-v/w*np.sin(theta) + v/w*np.sin(theta + w*dt),
# v/w*np.cos(theta) - v/w*np.cos(theta + w*dt),
# w*dt])
# x_next = x.flatten() + dx
# #print(x_next)
# return x_next
# define the model
def f(x, u, dt):
v = u[0]
w = u[1]
if np.abs(w) < 10*np.finfo(np.float32).eps:
w = 10*np.finfo(np.float32).eps
theta = x[2]
dx = np.array([-v/w*np.sin(theta) + v/w*np.sin(theta + w*dt),
v/w*np.cos(theta) - v/w*np.cos(theta + w*dt),
w*dt])
x_next = x + dx
#print(x_next)
return x_next
def f_parallel(x, u, dt):
v = u[0, :]
w = u[1, :]
w[np.abs(w) < 10*np.finfo(np.float32).eps] = 10*np.finfo(np.float32).eps
theta = x[2, :]
dx = np.array([-v/w*np.sin(theta) + v/w*np.sin(theta + w*dt),
v/w*np.cos(theta) - v/w*np.cos(theta + w*dt),
w*dt])
x_next = x + dx
#print(x_next)
return x_next
def g(x, landmark):
q = (landmark[0] - x[0])**2 + (landmark[1] - x[1])**2
theta = np.arctan2(landmark[1] - x[1], landmark[0] - x[0]) - x[2]
return np.array([np.sqrt(q),
wrap(theta)])
def g_parallel(x, landmark):
q = (landmark[0, :] - x[0])**2 + (landmark[1, :] - x[1])**2
theta = np.arctan2(landmark[1, :] - x[1], landmark[0, :] - x[0]) - x[2]
# theta = ( theta + np.pi) % (2 * np.pi ) - np.pi
theta = wrap_each(theta)
return np.concatenate((np.sqrt(q)[None, :], theta[None, :]), axis=0)
def del_g_x(x, landmark):
lx = landmark[0, :]
ly = landmark[1, :]
dx = lx - x[0]
dy = ly - x[1]
q = (dx)**2 + (dy)**2
sq = np.sqrt(q)
zero = np.zeros_like(dx)
one = np.ones_like(dx)
# C = np.array([[-dx/sq, -dy/sq, zero, dx/sq, dy/sq],
# [dy/q, -dx/q, -one, -dy/q, dx/q]])
C = np.array([[dx/sq, dy/sq],
[-dy/q, dx/q]])
# Ca = np.copy(C)
# # try numeric differentiation
# delta = 0.0000001
# for i in xrange(len(x)):
# C[:, i] = (g(x + delta*np.eye(1, len(x), i).flatten(), landmark) - g(x, landmark))/delta
# print(C - Ca)
# print(C.shape)
return C
def Qu(u):
v = u[0]
w = u[1]
return np.array([[alpha[0]*v**2 + alpha[1]*w**2, 0],
[0, alpha[2]*v**2 + alpha[3]*w**2]])
sigma_r = 0.1
sigma_phi = 0.05
R = np.array([[sigma_r**2, 0],
[0, sigma_phi**2]])
# P = np.array([[1, 0, 0],
# [0, 1, 0],
# [0, 0, 0.1]])
P = np.array([[0, 0, 0],
[0, 0, 0],
[0, 0, 0]])
# for landmark in landmarks.T:
# print(landmark)
# generate truth data
for i in tqdm(xrange(1, len(t)), desc="Generating Truth", ncols=110):
x[:, i:i+1] = f(x[:, i-1:i], u[:, i:i+1], Ts)
xHat = np.zeros_like(x)
xHat[:, 0] = x[:, 0]
best_idx = np.zeros(len(t), dtype=np.int32)
sig = np.zeros_like(x)
sig[:, 0] = np.sqrt(P.diagonal())
landmark_P = np.zeros((2, 2, num_landmarks, len(t)))
K = np.zeros((3, 2, len(t)-1))
landmarksHat = np.zeros((2, num_landmarks, len(t)))
input_dim = u.shape[0]
X = np.zeros((3, num_particles, len(t)))
pf = FastSLAM(xHat[:, 0], num_particles, state_dim, input_dim, num_landmarks, f, g_parallel, del_g_x, R, Ts, Qu=Qu)
zHat = np.zeros((2, len(t)))
for i in tqdm_notebook(xrange(1, len(t)), desc="Estimating"):
uHat = u[:, i] + np.random.multivariate_normal([0, 0], Qu(u[:, i]))
pf.predict(uHat)
z_all = []
landmark_idx = []
for j, landmark in enumerate(landmarks.T):
z = g(x[:, i], landmark) + np.random.multivariate_normal([0, 0], R)
z[1] = wrap(z[1])
if abs(z[1]) < np.pi/4:
z_all.append(z)
landmark_idx.append(j)
# print("z_all: {}".format(z_all))
# print("x: {}".format(x[:, i]))
pf.update(np.array(z_all).T, landmark_idx)
xHat[:, i] = pf.best.x
best_idx[i] = pf.best_idx
# sig[:, i] = np.sqrt(pf.P.diagonal())
for j in xrange(num_landmarks):
landmarksHat[:, j, i] = pf.best.lx[:, j]
# idx = 3+2*j
landmark_P[:, :, j, i] = pf.best.P[:, :, j]
for j in xrange(num_particles):
X[:, j, i] = pf.X[j].x
# e = np.sqrt(((x[0, :] - xHat[0, :])**2 + (x[1, :] - xHat[1, :])**2))
# print("Error norm = {}".format(np.linalg.norm(e)))
from matplotlib.patches import Ellipse
def plot_ellipse(loc, P):
U, s, _ = np.linalg.svd(P)
s = np.sqrt(5.991)*np.sqrt(s)
alpha = np.arctan2(U[1, 0], U[0, 0])
ellipse = Ellipse(loc, s[0], s[1], alpha*180/np.pi, ec='r', fill=False)
return ellipse
def update_ellipse(ellipse, loc, P):
U, s, _ = np.linalg.svd(P)
s = np.sqrt(5.991)*np.sqrt(s)
alpha = np.arctan2(U[1, 0], U[0, 0])
ellipse.center = loc
ellipse.width = s[0]
ellipse.height = s[1]
ellipse.angle = alpha*180/np.pi
plt.close('all')
env = plt.figure(figsize=(6, 6))
ax = env.add_subplot(1, 1, 1)
ax.set_xlim((-10, 10))
ax.set_ylim((-10, 10))
ax.set_title("Robot Environment",fontsize=20)
ax.set_xlabel("X position (m)", fontsize=16)
ax.set_ylabel("Y position (m)", fontsize=16)
robot = plt.Circle((x[0, -1], x[1, -1]), 0.5, fill=False, linestyle=":")
robotHat = plt.Circle((xHat[0, -1], xHat[1, -1]), 0.5, fill=False)
ax.add_artist(robot)
ax.add_artist(robotHat)
direction = np.array([[0, np.cos(x[2, -1])], [0, np.sin(x[2, -1])]])/2
line, = ax.plot(x[0, -1] + direction[0, :], x[1, -1] + direction[1, :], 'k:')
directionHat = np.array([[0, np.cos(xHat[2, -1])], [0, np.sin(xHat[2, -1])]])/2
lineHat, = ax.plot(xHat[0, -1] + directionHat[0, :], xHat[1, -1] + directionHat[1, :], 'k')
features, = ax.plot(landmarks[0, :], landmarks[1, :], 'b*', markersize=6)
# featuresHat, = ax.plot(landmarksHat[0, :, -1], landmarksHat[1, :, -1], 'r*', markersize=10)
particles, = ax.plot(X[0, :, -1], X[1, :, -1], 'go', markersize=1.5, markeredgewidth=0.0)
ellipses = []
for j in xrange(num_landmarks):
ell = plot_ellipse(landmarksHat[:, j, -1], landmark_P[:, :, j, -1])
ell2 = plot_ellipse(landmarksHat[:, j, -1] - X[:2, best_idx[-1], -1] + x[:2, -1], landmark_P[:, :, j, -1])
ax.add_artist(ell)
ellipses.append(ell)
truth, = ax.plot(x[0, :], x[1, :], 'b:')
# estimate, = ax.plot(xHat[0, :], xHat[1, :], 'r')
estimate, = ax.plot(X[0, best_idx[-1], :], X[1, best_idx[-1], :], 'r')
plt.show()
plt.close('all')
env = plt.figure(figsize=(6, 6))
ax = env.add_subplot(1, 1, 1)
ax.set_xlim((-10, 10))
ax.set_ylim((-10, 10))
ax.set_title("Robot Environment",fontsize=20)
ax.set_xlabel("X position (m)", fontsize=16)
ax.set_ylabel("Y position (m)", fontsize=16)
robot = plt.Circle((x[0, 0], x[1, 0]), 0.5, fill=False, linestyle=":")
robotHat = plt.Circle((xHat[0, 0], xHat[1, 0]), 0.5, fill=False)
ax.add_artist(robot)
ax.add_artist(robotHat)
direction = np.array([[0, np.cos(x[2, 0])], [0, np.sin(x[2, 0])]])/2
line, = ax.plot(x[0, 0] + direction[0, :], x[1, 0] + direction[1, :], 'k:')
directionHat = np.array([[0, np.cos(xHat[2, 0])], [0, np.sin(xHat[2, 0])]])/2
lineHat, = ax.plot(xHat[0, 0] + directionHat[0, :], xHat[1, 0] + directionHat[1, :], 'k')
features, = ax.plot(landmarks[0, :], landmarks[1, :], 'b*', markersize=5)
particles, = ax.plot(X[0, :, -1], X[1, :, -1], 'go', markersize=1.5, markeredgewidth=0.0)
# featuresHat, = ax.plot(landmarksHat[0, :, 0], landmarksHat[1, :, 0], 'r*', markersize=5)
ellipses = []
for j in xrange(num_landmarks):
ell = plot_ellipse(landmarksHat[:, j, 0], landmark_P[:, :, j, 0])
ax.add_artist(ell)
ellipses.append(ell)
truth, = ax.plot(x[0, 0], x[1, 0], 'b:')
# estimate, = ax.plot(xHat[0, 0], xHat[1, 0], 'r')
estimate, = ax.plot(X[0, best_idx[0], :], X[1, best_idx[0], :], 'r')
# cart = np.array([zHat[0, 0]*np.cos(zHat[1, 0]+xHat[2, 0]), zHat[0, 0]*np.sin(zHat[1, 0]+xHat[2, 0])])
# measurement, = ax.plot([xHat[0, 0], xHat[0, 0] + cart[0]], [xHat[1, 0], xHat[1, 0] + cart[1]], 'y--')
# animation function. This is called sequentially
def animate(i):
direction = np.array([[0, np.cos(x[2, i])], [0, np.sin(x[2, i])]])/2
line.set_data(x[0, i] + direction[0, :], x[1, i] + direction[1, :])
robot.center = x[0, i], x[1, i]
directionHat = np.array([[0, np.cos(xHat[2, i])], [0, np.sin(xHat[2, i])]])/2
lineHat.set_data(xHat[0, i] + directionHat[0, :], xHat[1, i] + directionHat[1, :])
robotHat.center = xHat[0, i], xHat[1, i]
truth.set_data(x[0, :i], x[1, :i])
# estimate.set_data(xHat[0, :i], xHat[1, :i])
estimate.set_data(X[0, best_idx[i], :i], X[1, best_idx[i], :i])
particles.set_data(X[0, :, i], X[1, :, i])
# featuresHat.set_data(landmarksHat[0, :, i], landmarksHat[1, :, i])
for j in xrange(num_landmarks):
if landmark_P[0, 0, j, i] != 1e10:
update_ellipse(ellipses[j], landmarksHat[:, j, i], landmark_P[:, :, j, i])
# measurement to first landmark
# cart = np.array([zHat[0, i]*np.cos(zHat[1, i]+xHat[2, i]), zHat[0, i]*np.sin(zHat[1, i]+xHat[2, i])])
# measurement.set_data([xHat[0, i], xHat[0, i] + cart[0]], [xHat[1, i], xHat[1, i] + cart[1]])
return (line,)
# call the animator. blit=True means only re-draw the parts that have changed.
speedup = 1
anim = animation.FuncAnimation(env, animate, frames=len(t), interval=Ts*1000/speedup, blit=True)
# anim = animation.FuncAnimation(env, animate, frames=20, interval=Ts*1000/speedup, blit=True)
#print(animation.writers.list())
HTML(anim.to_html5_video())
fig = plt.figure(figsize=(14,16))
fig.clear()
ax1 = fig.add_subplot(4, 1, 1)
ax1.plot(t, x[0, :] - xHat[0, :])
ax1.plot(t, 2*sig[0, :], 'r:')
ax1.plot(t, -2*sig[0, :], 'r:')
ax1.set_title("Error",fontsize=20)
ax1.legend(["error", "2 sigma bound"])
ax1.set_xlabel("Time (s)", fontsize=16)
ax1.set_ylabel("X Error (m)", fontsize=16)
# ax1.set_ylim([-0.5, 0.5])
ax1 = fig.add_subplot(4, 1, 2)
ax1.plot(t, x[1, :] - xHat[1, :])
ax1.plot(t, 2*sig[1, :], 'r:')
ax1.plot(t, -2*sig[1, :], 'r:')
#ax1.set_title("Error",fontsize=20)
ax1.legend(["error", "2 sigma bound"])
ax1.set_xlabel("Time (s)", fontsize=16)
ax1.set_ylabel("Y Error (m)", fontsize=16)
# ax1.set_ylim([-0.5, 0.5])
ax1 = fig.add_subplot(4, 1, 3)
ax1.plot(t, x[2, :] - xHat[2, :])
ax1.plot(t, 2*sig[2, :], 'r:')
ax1.plot(t, -2*sig[2, :], 'r:')
#ax1.set_title("Error",fontsize=20)
ax1.legend(["error", "2 sigma bound"])
ax1.set_xlabel("Time (s)", fontsize=16)
ax1.set_ylabel("Heading Error (rad)", fontsize=16)
# ax1.set_ylim([-0.2, 0.2])
ax1 = fig.add_subplot(4, 1, 4)
e = np.sqrt(((x[0, :] - xHat[0, :])**2 + (x[1, :] - xHat[1, :])**2))
ax1.plot(t, e)
ax1.set_title("Total Distance Error",fontsize=20)
ax1.legend(["error"])
ax1.set_xlabel("Time (s)", fontsize=16)
ax1.set_ylabel("Error (m)", fontsize=16)
print("Error norm = {}".format(np.linalg.norm(e)))
plt.tight_layout()
plt.show()
###Output
Error norm = 4.76634378895
###Markdown
Questions

* Q: How does the system behave with poor initial conditions?
* A: The system converges within a few time steps, even with very poor initial conditions.
* Q: How does the system behave with changes in process/measurement noise covariances?
* A: Increasing measurement noise increases estimation error and decreases the Kalman gains. Increasing process noise increases noise in the truth data, but marginally decreases estimation error.
* Q: What happens to the quality of your estimates if you reduce the number of landmarks? If you increase it?
* A: Fewer landmarks degrade the estimate. More landmarks marginally improve the localization, unless the robot gets too close to a landmark, which can cause the filter to diverge.
###Code
from tqdm import trange
Ts = 1
Tend = 20
t = np.arange(start=Ts, stop=Tend+Ts, step = Ts)
alpha = np.array([0.1, 0.01, 0.01, 0.1])
v_c = 1 + 0.5*np.cos(2*np.pi*0.2*t)
omega_c = -0.2 + 2*np.cos(2*np.pi*0.6*t)
v = v_c + np.random.normal(0, alpha[0]*np.square(v_c) + alpha[1]*np.square(omega_c))
omega = omega_c + np.random.normal(0, alpha[2]*np.square(v_c) + alpha[3]*np.square(omega_c))
u_c = np.vstack((v_c, omega_c))
u = np.vstack((v, omega))
# print(u.shape)
x = np.zeros((3, len(t)))
x[:, 0] = np.array([-5, -3, np.pi/2])
N = 100
e = np.zeros(N)
for j in trange(N):
# generate truth data
for i in xrange(1, len(t)):
x[:, i] = f(x[:, i-1], u[:, i], Ts)
xHat = np.zeros_like(x)
xHat[:, 0] = x[:, 0]
sig = np.zeros_like(x)
sig[:, 0] = np.sqrt(P.diagonal())
K = np.zeros((3, 2, len(t)-1))
input_dim = u.shape[0]
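    # NOTE: EKF, del_f_x and del_f_u are not defined in this notebook; this cell assumes they
    # are already available in the session (presumably from the earlier EKF homework this
    # repeated-run experiment was adapted from).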
ekf = EKF(xHat[:, 0], input_dim, f, g, del_f_x, del_g_x, R, P, Ts, del_f_u=del_f_u, Qu=Qu)
zHat = np.zeros((2, len(t)))
for i in xrange(1, len(t)):
uHat = u[:, i] + np.random.multivariate_normal([0, 0], Qu(u[:, i]))
ekf.predict(uHat)
for landmark in landmarks.T:
z = g(x[:, i], landmark) + np.random.multivariate_normal([0, 0], R)
# zdeg = z - x[2, i]
# zdeg[1] = zdeg[1]*180/np.pi
# print(zdeg)
zHat[:, i] = z
ekf.update(z, landmark)
# landmark = landmarks[:, 0]
# z = g(x[:, i], landmark) + np.random.multivariate_normal([0, 0], R)
# ekf.update(z, landmark)
xHat[:, i] = ekf.x
K[:, :, i-1] = ekf.K
sig[:, i] = np.sqrt(ekf.P.diagonal())
e[j] = np.linalg.norm(np.sqrt(((x[0, :] - xHat[0, :])**2 + (x[1, :] - xHat[1, :])**2)))
print("Over {} runs:".format(N))
print("Mean error norm = {}".format(np.mean(e*Ts)))
print("Standard deviation of error norm = {}".format(np.std(e*Ts)))
1/6.66
###Output
_____no_output_____ |
notebooks/data-prep/MIT-BIH Supraventricular Arrhythmia DB.ipynb | ###Markdown
MIT-BIH Supraventricular Arrhythmia Database (_svdb_)

Part of the ECG Database Collection:

| Short Name | Long Name |
| :--- | :--- |
| _mitdb_ | MIT-BIH Arrhythmia Database |
| _svdb_ | MIT-BIH Supraventricular Arrhythmia Database |
| _ltdb_ | MIT-BIH Long-Term ECG Database |

[Docu](https://wfdb.readthedocs.io/en/latest) of the `wfdb`-package.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import wfdb
import os
from typing import Final
from collections.abc import Callable
from config import data_raw_folder, data_processed_folder
from timeeval import Datasets
import matplotlib.pyplot as plt
from IPython.display import display, Markdown, Latex
dataset_collection_name = "SVDB"
source_folder = os.path.join(data_raw_folder, "MIT-BIH Supraventricular Arrhythmia Database")
target_folder = data_processed_folder
from pathlib import Path
print(f"Looking for source datasets in {Path(source_folder).absolute()} and\nsaving processed datasets in {Path(target_folder).absolute()}")
def load_dataset_names() -> list[str]:
with open(os.path.join(source_folder, "RECORDS"), 'r') as f:
records = [l.rstrip('\n') for l in f]
return records
def transform_and_label(source_file: str, target: str) -> int:
print(f"Transforming {os.path.basename(source_file)}")
# load dataset
record = wfdb.rdrecord(source_file)
df_record = pd.DataFrame(record.p_signal, columns=record.sig_name)
print(f" record {record.file_name[0]} loaded")
# load annotation file
atr = wfdb.rdann(source_file, "atr")
assert record.fs == atr.fs, "Sample frequency of records and annotations does not match!"
df_annotation = pd.DataFrame(atr.symbol, index=atr.sample, columns=["Label"])
df_annotation = df_annotation.reset_index()
df_annotation.columns = ["position", "label"]
print(f" {atr.ann_len} beat annotations for {source_file} loaded")
# calculate normal beat length
print(" preparing windows for labeling...")
df_normal_beat = df_annotation.copy()
df_normal_beat["prev_position"] = df_annotation["position"].shift()
df_normal_beat["prev_label"] = df_annotation["label"].shift()
df_normal_beat = df_normal_beat[(df_normal_beat["label"] == "N") & (df_normal_beat["prev_label"] == "N")]
s_normal_beat_lengths = df_normal_beat["position"] - df_normal_beat["prev_position"]
print(f" normal beat distance samples = {len(s_normal_beat_lengths)}")
normal_beat_length = s_normal_beat_lengths.median()
if (normal_beat_length % 2) == 0:
normal_beat_length += 1
beat_window_size = int(normal_beat_length)
beat_window_margin = (beat_window_size - 1)//2
print(f" window size = {beat_window_size}")
print(f" window margins (left and right) = {beat_window_margin}")
# calculate beat windows
## for external anomalies
df_ext = df_annotation[(df_annotation["label"] == "|") | (df_annotation["label"] == "Q")].copy()
df_ext["window_start"] = df_ext["position"]-beat_window_margin
df_ext["window_end"] = df_ext["position"]+beat_window_margin
df_ext = df_ext[["position", "window_start", "window_end"]]
print(f" {len(df_ext)} windows for external anomalies")
## for anomalous beats
df_svf = df_annotation[(df_annotation["label"] != "|") & (df_annotation["label"] != "~") & (df_annotation["label"] != "+")].copy()
df_svf["position_next"] = df_svf["position"].shift(-1)
df_svf["position_prev"] = df_svf["position"].shift(1)
#df_svf = df_svf[(df_svf["position_prev"].notnull()) & (df_svf["position_next"].notnull())]
df_svf = df_svf[(df_svf["label"] != "Q") & (df_svf["label"] != "N")]
df_svf["window_start"] = np.minimum(df_svf["position"].values-beat_window_margin, df_svf["position_prev"].values+beat_window_margin)
df_svf["window_end"] = np.maximum(df_svf["position"].values+beat_window_margin, df_svf["position_next"].values-beat_window_margin)
df_svf = df_svf[["position", "window_start", "window_end"]]
print(f" {len(df_svf)} windows for anomalous beats")
## merge
df_windows = pd.concat([df_ext, df_svf])
print(f" ...done.")
# add labels based on anomaly windows
print(" labeling")
df_record["is_anomaly"] = 0
for _, (_, t1, t2) in df_windows.iterrows():
tmp = df_record[df_record.index >= t1]
tmp = tmp[tmp.index <= t2]
df_record["is_anomaly"].values[tmp.index] = 1
# reconstruct timestamps and set as index
print(" reconstructing timestamps")
df_record["timestamp"] = pd.to_datetime(df_record.index.values * 1e+9/record.fs, unit='ns')
df_record = df_record.set_index("timestamp")
df_record.to_csv(target)
print(f"Dataset {os.path.basename(source_file)} transformed and saved!")
# return dataset length
return record.sig_len
# shared by all datasets
dataset_type = "real"
input_type = "multivariate"
datetime_index = True
train_type = "unsupervised"
train_is_normal = False
# create target directory
dataset_subfolder = os.path.join(input_type, dataset_collection_name)
target_subfolder = os.path.join(target_folder, dataset_subfolder)
try:
os.makedirs(target_subfolder)
print(f"Created directories {target_subfolder}")
except FileExistsError:
print(f"Directories {target_subfolder} already exist")
pass
dm = Datasets(target_folder)
# dataset transformation
transform_file: Callable[[str, str], int] = transform_and_label
for dataset_name in load_dataset_names():
# intentionally no file suffix (.dat)
source_file = os.path.join(source_folder, dataset_name)
filename = f"{dataset_name}.test.csv"
path = os.path.join(dataset_subfolder, filename)
target_filepath = os.path.join(target_subfolder, filename)
# transform file and label it
dataset_length = transform_file(source_file, target_filepath)
print(f"Processed source dataset {source_file} -> {target_filepath}")
# save metadata
dm.add_dataset((dataset_collection_name, dataset_name),
train_path = None,
test_path = path,
dataset_type = dataset_type,
datetime_index = datetime_index,
split_at = None,
train_type = train_type,
train_is_normal = train_is_normal,
input_type = input_type,
dataset_length = dataset_length
)
# save metadata of benchmark
dm.save()
dm.refresh()
dm.df().loc[(slice(dataset_collection_name,dataset_collection_name), slice(None))]
###Output
_____no_output_____
###Markdown
Dataset transformation walk-through
###Code
def print_obj_attr(obj, name="Object"):
print(name)
tmp = vars(obj)
for key in tmp:
print(key, tmp[key])
print("")
records = load_dataset_names()
###Output
_____no_output_____
###Markdown
Load and parse dataset
###Code
# dataset
record = wfdb.rdrecord(os.path.join(source_folder, records[51]))
#print_obj_attr(record, "Record object")
df_record = pd.DataFrame(record.p_signal, columns=record.sig_name)
df_record
###Output
_____no_output_____
###Markdown
Add timestamp information based on sample interval ($$[fs] = samples/second$$):
###Code
display(Latex(f"Samples per second: $$fs = {record.fs} \\frac{{1}}{{s}}$$"))
display(Markdown(f"This gives a sample interval of {1e+9/record.fs} nanoseconds"))
df_record["timestamp"] = pd.to_datetime(df_record.index.values * 1e+9/record.fs, unit='ns')
df_record
# find all annotations
records = load_dataset_names()
annotations = {}
for r in records:
atr = wfdb.rdann(os.path.join(source_folder, r), "atr")
df_annotation = pd.DataFrame(atr.symbol, index=atr.sample, columns=["Label"])
for an in df_annotation["Label"].unique():
if an not in annotations:
annotations[an] = set()
annotations[an].add(atr.record_name)
for an in annotations:
annotations[an] = ", ".join(annotations[an])
annotations
###Output
_____no_output_____
###Markdown
Annotations

| Annotation | Description |
| :--------- | :---------- |
| | **Considered normal** |
| `N` | Normal beat |
| | **Anomalous beats** (use double-window labeling) |
| `F` | Fusion of ventricular and normal beat |
| `S` | Supraventricular premature or ectopic beat |
| `a` | Aberrated atrial premature beat |
| `V` | Premature ventricular contraction |
| `J` | Nodal (junctional) premature beat |
| `B` | Bundle branch block beat (unspecified) |
| | **External anomalies** (single-window labeling) |
| `Q` | Unclassifiable beat |
| `\|` | Isolated QRS-like artifact |
| | **Ignored, because hard to parse and to label** |
| `+` | Rhythm change |
| `~` | Change in signal quality (usually noise level changes) |

Load and parse annotation
###Code
atr = wfdb.rdann(os.path.join(source_folder, records[51]), "atr")
#print_obj_attr(atr, "Annotation object")
assert record.fs == atr.fs, "Sample frequency of records and annotations does not match!"
df_annotation = pd.DataFrame(atr.symbol, index=atr.sample, columns=["Label"])
df_annotation = df_annotation.reset_index()
df_annotation.columns = ["position", "label"]
df_annotation.groupby("label").count()
###Output
_____no_output_____
###Markdown
Calculate beat window

We assume that the normal beats (annotated with `N`) occur at a regular interval and that the expert annotations (from the dataset) sit directly in the middle of a beat window. A beat window is a fixed-length subsequence of the time series that shows a heart beat in its direct (local) context. We calculate the beat window length for each dataset based on the median distance between normal beats (`N`). The index (auto-incrementing integers) serves as the measurement unit.
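For intuition, a quick preview of that window-size calculation with a made-up median distance (the real median is computed from the data a couple of cells further down):
###Code
# preview with a hypothetical median distance of 180 samples between neighbouring normal beats
median_distance = 180
if median_distance % 2 == 0:
    median_distance += 1                      # force an odd length so the window can be centered
beat_window_size_demo = int(median_distance)  # -> 181 samples
beat_window_margin_demo = (beat_window_size_demo - 1) // 2  # -> 90 samples on each side
print(beat_window_size_demo, beat_window_margin_demo)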
###Code
df_beat = df_annotation[["position", "label"]]
df_beat
###Output
_____no_output_____
###Markdown
Shifted-by-one self-join, filtering out all beat pairs that contain anomalous beats. We want to calculate the beat windows based only on the normal beats. We then calculate the distance between two neighboring heart beats:
###Code
df_normal_beat = df_beat.copy()
df_normal_beat["prev_position"] = df_beat["position"].shift()
df_normal_beat["prev_label"] = df_beat["label"].shift()
df_normal_beat = df_normal_beat[(df_normal_beat["label"] == "N") & (df_normal_beat["prev_label"] == "N")]
df_normal_beat = df_normal_beat.drop(columns=["label", "prev_label"])
df_normal_beat["length"] = df_normal_beat["position"] - df_normal_beat["prev_position"]
df_normal_beat.describe()
###Output
_____no_output_____
###Markdown
The median of all normal beat lengths is the beat window size. We require the beat window size to be odd. This allows us to center the window at the beat annotation.
###Code
normal_beat_length = df_normal_beat["length"].median()
if (normal_beat_length%2) == 0:
normal_beat_length += 1
beat_window_size = int(normal_beat_length)
beat_window_margin = (beat_window_size - 1)//2
print(f"window size = {beat_window_size}\nwindow margins (left and right) = {beat_window_margin}")
###Output
_____no_output_____
###Markdown
Calculate anomalous windows

The experts from PhysioNet annotated only the beats themselves with a label, but the actual anomaly also comprises the beat's surroundings. We assume that anomalous beats (such as `V` or `F`; see table above) require treating a window around the actual beat as anomalous. External anomalies (such as `|`; see table above) also mark a window around them as anomalous, because those artefacts comprise multiple points. We completely ignore `~` and `+` annotations, which indicate signal quality or rhythm changes, because they are not relevant for our analysis.

We automatically label a variable-sized window around an annotated beat as an anomalous subsequence using the following technique:

1. For anomalous annotations (`S`, `V`, `a`, `J`, `B`, and `F` annotations):
   - Remove `~`, `+`, and `|` annotations.
   - Calculate the anomaly window using `beat_window_size`, aligned with its center on the beat annotation.
   - Calculate the end of the previous beat window _e_ and the beginning of the next beat window _b_. Use _e_ as the beginning and _b_ as the end of a second anomaly window.
   - Mark the union of both anomaly windows' points as anomalous.
2. For `|` and `Q` annotations, mark all points of an anomaly window centered on the annotation as anomalous.
3. Mark all other points as normal.

> **Explain why we used the combined windows for anomalous beats!!**
>
> - the pattern/shape of the signal may be OK
> - but we also consider the distance to other beats
> - if the spacing is too narrow or the beat is too far away, it is also anomalous

The figure shows an anomalous beat with its anomaly window (in red) and the windows of its previous and subsequent normal beats (in green). We mark all points in the interval $$[\min(W_{end}, X_{start}), \max(X_{end}, Y_{start})]$$
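A scalar sketch of that combined-window rule (all numbers here are made-up sample positions, not values from the record; the real computation with pandas follows below):
###Code
# sketch of the combined window for one anomalous beat X between its neighbouring beats W and Y
margin = 64                     # hypothetical beat_window_margin
W, X, Y = 1000, 1100, 1260      # hypothetical annotation positions (sample indices)
window_start = min(X - margin, W + margin)   # min(W_end, X_start)
window_end = max(X + margin, Y - margin)     # max(X_end, Y_start)
print(window_start, window_end)  # -> 1036 1196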
###Code
# reverse lookup from timestamp to annotation index in df_beat
p = df_record[df_record["timestamp"] == "1970-01-01 00:11:03.000"].index.values[0]
df_beat[df_beat["position"] >= p].index[0]
def plot_window(pos, color="blue", **kvs):
start = pos - beat_window_margin
end = pos + beat_window_margin
plt.axvspan(start, end, color=color, alpha=0.5, **kvs)
index = 798
beat_n = df_beat.loc[index, "position"]
print("Selected beat is annotated as", df_beat.loc[index, "label"])
print("with timestamp", df_record.loc[beat_n, "timestamp"])
ax = df_record.iloc[beat_n-500:beat_n+500].plot(kind='line', y=['ECG1', 'ECG2'], use_index=True, figsize=(20,10))
plot_window(df_beat.loc[index-1, "position"], label="$W$")
plot_window(beat_n, color="orange", label="$X$")
plot_window(df_beat.loc[index+1, "position"], label="$Y$")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Windows for external anomalies
###Code
df_pipe = df_beat.copy()
df_pipe = df_pipe[(df_pipe["label"] == "|") | (df_pipe["label"] == "Q")]
df_pipe["window_start"] = df_pipe["position"]-beat_window_margin
df_pipe["window_end"] = df_pipe["position"]+beat_window_margin
df_pipe = df_pipe[["position", "window_start", "window_end"]]
df_pipe.head()
###Output
_____no_output_____
###Markdown
Windows for anomalous beats
###Code
df_tmp = df_beat.copy()
df_tmp = df_tmp[(df_tmp["label"] != "|") & (df_tmp["label"] != "~") & (df_tmp["label"] != "+")]
df_tmp["position_next"] = df_tmp["position"].shift(-1)
df_tmp["position_prev"] = df_tmp["position"].shift(1)
#df_tmp = df_tmp[(df_tmp["position_prev"].notnull()) & (df_tmp["position_next"].notnull())]
df_tmp = df_tmp[(df_tmp["label"] != "Q") & (df_tmp["label"] != "N")]
df_tmp["window_start"] = np.minimum(df_tmp["position"].values-beat_window_margin, df_tmp["position_prev"].values+beat_window_margin)
df_tmp["window_end"] = np.maximum(df_tmp["position"].values+beat_window_margin, df_tmp["position_next"].values-beat_window_margin)
df_svf = df_tmp[["position", "window_start", "window_end"]]
df_tmp.groupby("label").count()
###Output
_____no_output_____
###Markdown
Merge everything together
###Code
df_windows = pd.concat([df_pipe, df_svf])
df_windows.head()
index = 798
beat = df_windows.loc[index, "position"]
start = df_windows.loc[index, "window_start"]
end = df_windows.loc[index, "window_end"]
print("Selected beat is annotated as", df_beat.loc[index, "label"])
print("with timestamp", df_record.loc[beat, "timestamp"])
ax = df_record.iloc[beat-500:beat+500].plot(kind='line', y=['ECG1', 'ECG2'], use_index=True, figsize=(20,10))
plt.axvspan(beat-500, start-1, color="green", alpha=0.5, label="normal region 1", ymin=.5)
plt.axvspan(start, end, color="red", alpha=0.5, label="anomalous region", ymin=.5)
plt.axvspan(end+1, beat+500, color="green", alpha=0.5, label="normal region 2", ymin=.5)
plot_window(df_beat.loc[index-1, "position"], label="$W$", ymax=.5)
plot_window(beat_n, color="orange", label="$X$", ymax=.5)
plot_window(df_beat.loc[index+1, "position"], label="$Y$", ymax=.5)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Add labels
###Code
df = df_record.copy()
df["is_anomaly"] = 0
for _, (_, t1, t2) in df_windows.iterrows():
tmp = df[df.index >= t1]
tmp = tmp[tmp.index <= t2]
df["is_anomaly"].values[tmp.index] = 1
#df = df.set_index("timestamp")
df[df["is_anomaly"] == 1]
df_beat[(df_beat["label"] == "|")]
start = 21700
end = 22500
df_show = df.loc[start:end]
df_show.plot(kind='line', y=['ECG1', 'ECG2', 'is_anomaly'], use_index=True, figsize=(20,10))
labels = df_beat[(df_beat["position"] > start) & (df_beat["position"] < end)]
for i, (position, label) in labels.iterrows():
plt.text(position, -2.5, label)
plt.show()
###Output
_____no_output_____
###Markdown
Experimentation
###Code
df = pd.merge(df_record, df_annotation, left_index=True, right_index=True, how="outer")
#df = df.fillna(value={"Label": ".", "is_anomaly": 0})
df.groupby(["is_anomaly"]).count()
df[df["Label"].notna()]
import matplotlib.pyplot as plt
df_show = df.loc[27000:28000]
df_show.plot(kind='line', y=['ECG1', 'ECG2', 'is_anomaly'], use_index=True, figsize=(20,10))
plt.show()
df = pd.read_csv(os.path.join(dataset_subfolder, "800.test.csv"), index_col="timestamp")
df.loc["1970-01-01 00:21:20":"1970-01-01 00:21:40"].plot(figsize=(20,10))
plt.show()
###Output
_____no_output_____ |
Sucess1.ipynb | ###Markdown
Mask R-CNN - Inspect Custom Trained Model

Code and visualizations to test, debug, and evaluate the Mask R-CNN model.
###Code
import os
import cv2
import sys
import random
import math
import re
import time
import numpy as np
import tensorflow as tf
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import skimage
import glob
# Root directory of the project
ROOT_DIR = os.getcwd()
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
import mrcnn.model as modellib
from mrcnn.model import log
import food
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
FOOD_WEIGHTS_PATH = r"C:\FOOD\logs\food20191001T0029\mask_rcnn_food_0010.h5" # TODO: update this path (raw string so backslashes like \f are not treated as escape characters)
IMAGE_DIR = os.path.join(ROOT_DIR, "images")
###Output
Using TensorFlow backend.
###Markdown
Configurations
###Code
config = food.foodConfig()
FOOD_DIR = os.path.join(ROOT_DIR, "food/dataset")
# Override the training configurations with a few
# changes for inferencing.
class InferenceConfig(config.__class__):
# Run detection on one image at a time
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
###Output
Configurations:
BACKBONE resnet101
BACKBONE_STRIDES [4, 8, 16, 32, 64]
BATCH_SIZE 1
BBOX_STD_DEV [0.1 0.1 0.2 0.2]
COMPUTE_BACKBONE_SHAPE None
DETECTION_MAX_INSTANCES 100
DETECTION_MIN_CONFIDENCE 0.9
DETECTION_NMS_THRESHOLD 0.3
FPN_CLASSIF_FC_LAYERS_SIZE 1024
GPU_COUNT 1
GRADIENT_CLIP_NORM 5.0
IMAGES_PER_GPU 1
IMAGE_CHANNEL_COUNT 3
IMAGE_MAX_DIM 1024
IMAGE_META_SIZE 16
IMAGE_MIN_DIM 800
IMAGE_MIN_SCALE 0
IMAGE_RESIZE_MODE square
IMAGE_SHAPE [1024 1024 3]
LEARNING_MOMENTUM 0.9
LEARNING_RATE 0.001
LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
MASK_POOL_SIZE 14
MASK_SHAPE [28, 28]
MAX_GT_INSTANCES 100
MEAN_PIXEL [123.7 116.8 103.9]
MINI_MASK_SHAPE (56, 56)
NAME food
NUM_CLASSES 4
POOL_SIZE 7
POST_NMS_ROIS_INFERENCE 1000
POST_NMS_ROIS_TRAINING 2000
PRE_NMS_LIMIT 6000
ROI_POSITIVE_RATIO 0.33
RPN_ANCHOR_RATIOS [0.5, 1, 2]
RPN_ANCHOR_SCALES (32, 64, 128, 256, 512)
RPN_ANCHOR_STRIDE 1
RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD 0.7
RPN_TRAIN_ANCHORS_PER_IMAGE 256
STEPS_PER_EPOCH 10
TOP_DOWN_PYRAMID_SIZE 256
TRAIN_BN False
TRAIN_ROIS_PER_IMAGE 200
USE_MINI_MASK True
USE_RPN_ROIS True
VALIDATION_STEPS 50
WEIGHT_DECAY 0.0001
###Markdown
Notebook Preferences
###Code
# Device to load the neural network on.
# Useful if you're training a model on the same
# machine, in which case use CPU and leave the
# GPU for training.
DEVICE = "/cpu:0" # /cpu:0 or /gpu:0
# Inspect the model in training or inference modes
# values: 'inference' or 'training'
# TODO: code for 'training' test mode not ready yet
TEST_MODE = "inference"
def get_ax(rows=1, cols=1, size=16):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Adjust the size attribute to control how big to render images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
###Output
_____no_output_____
###Markdown
Load Validation Dataset
###Code
# Load validation dataset
dataset = food.foodDataset()
dataset.load_food(FOOD_DIR, "val")
# Must call before using the dataset
dataset.prepare()
print("Images: {}\nClasses: {}".format(len(dataset.image_ids), dataset.class_names))
###Output
_____no_output_____
###Markdown
Load Model
###Code
# Create model in inference mode
with tf.device(DEVICE):
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR,
config=config)
# load the last model you trained
# weights_path = model.find_last()[1]
# Load weights
print("Loading weights ", FOOD_WEIGHTS_PATH)
model.load_weights(FOOD_WEIGHTS_PATH, by_name=True)  # FOOD_WEIGHTS_PATH is defined in the setup cell above
from importlib import reload # was constantly changing the visualization, so I decided to reload it instead of restarting the notebook
reload(visualize)
###Output
_____no_output_____
###Markdown
Run Detection on Images
###Code
image_id = random.choice(dataset.image_ids)
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset, config, image_id, use_mini_mask=False)
info = dataset.image_info[image_id]
print("image ID: {}.{} ({}) {}".format(info["source"], info["id"], image_id,
dataset.image_reference(image_id)))
# Run object detection
results = model.detect([image], verbose=1)
# Display results
ax = get_ax(1)
r = results[0]
visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
dataset.class_names, r['scores'], ax=ax,
title="Predictions")
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
###Output
image ID: food.images (22).jpg (21) C:\FOOD\samples\food/dataset\val\images (22).jpg
Processing 1 images
image shape: (1024, 1024, 3) min: 0.00000 max: 255.00000 uint8
molded_images shape: (1, 1024, 1024, 3) min: -123.70000 max: 151.10000 float64
image_metas shape: (1, 25) min: 0.00000 max: 1024.00000 int32
anchors shape: (1, 261888, 4) min: -0.35390 max: 1.29134 float32
gt_class_id shape: (1,) min: 1.00000 max: 1.00000 int32
gt_bbox shape: (1, 4) min: 257.00000 max: 756.00000 int32
gt_mask shape: (1024, 1024, 1) min: 0.00000 max: 1.00000 bool
|
reviews/Panel/WMTS_Example.ipynb | ###Markdown
This notebook demonstrates how to use Quest to download imagery from a Web Map Tile Service (WMTS).

In addition to quest, the following packages need to be installed to use this notebook:
* holoviews
* geoviews
* param
* paramnb
* xarray

They can be installed with the following command:
```
conda install -c conda-forge -c pyviz/label/dev holoviews geoviews param paramnb xarray
```
###Code
import param
import quest
import geoviews as gv
import holoviews as hv
import xarray as xr
from cartopy import crs as ccrs
from holoviews.streams import BoxEdit
from parambokeh import Widgets
hv.extension('bokeh')
quest_service = 'svc://wmts:seamless_imagery'
tile_service_options = quest.api.get_download_options(quest_service, fmt='param')[quest_service]
tile_service_options.params()['bbox'].precedence = -1 # hide bbox input
Widgets(tile_service_options)
tiles = gv.WMTS(tile_service_options.url).options(width=950, height=600, global_extent=True)
boxes = gv.Polygons([]).options(fill_alpha=0.4, line_width=2)
box_stream = BoxEdit(source=boxes, num_objects=1)
tiles * boxes
if box_stream.element:
data = box_stream.data
bbox = [data['x0'][0], data['y0'][0], data['x1'][0], data['y1'][0]]
else:
bbox = [-72.43925984610391, 45.8471360126193, -68.81252476472281, 47.856449699679516]
# bbox = [-83.28613682601174, 24.206059963486737, -81.93264005405752, 30.251169660148314]
# bbox = [-81.82408198648336, 25.227665888548458, -80.86355086047537, 31.548730116206755]
print(bbox)
tile_service_options.bbox = bbox
arr = quest.api.get_data(
service_uri=quest_service,
search_filters=None,
download_options=tile_service_options,
collection_name='examples',
use_cache=False,
as_open_datasets=True,
)[0]
image = gv.RGB((arr.x, arr.y, arr[0].values,
arr[1].values, arr[2].values),
vdims=['R', 'G', 'B']).options(width=950, height=600, alpha=0.7)
gv.tile_sources.Wikipedia * image
###Output
_____no_output_____ |
Exercise 03 - Find Minimum.ipynb | ###Markdown
t1 = t0 - a*dy(t0)
###Code
t0 = 84  # starting point
t1 = 0
a = 0.01
iteracija = 0
provera = 0
preciznost = 1/1000000
plot = True
iteracijaMaks = 10000  # maximum number of iterations before giving up
divergencijaMaks = 50  # parameter that guards against divergence
while True:
t1 = t0 - a*N(dy.subs(x, t0)).evalf()
    # dy.subs substitutes t0 into the derivative so we can evaluate dy(t0)
    iteracija += 1  # increase the iteration count
    # if we hit too many iterations, the parameters are probably not OK
if iteracija>iteracijaMaks:
print("Previse iteracija")
break
    # now check whether t0 > t1; if not (the step diverged), we allow
    # at most 50 divergent steps
if t0<t1:
print("t0 divergira")
provera+=1
if provera>divergencijaMaks:
print("Previse iteracija (%s), t0 divergira"%divergencijaMaks)
print("Manje a ili proveriti da li fja konvekna")
plot = False
break
    # now the condition for concluding that t0 has converged:
    # in fact |t0 - t1| < preciznost, at which point we exit the loop
if abs(t0-t1)<preciznost:
break
    # update the value for the next iteration
t0=t1
if plot:
print("Broj iteracija",iteracija,"t1=",t1)
plt.plot(t0,N(y.subs(x,t0)).evalf(),marker='o',color='r')
plotF()
###Output
Broj iteracija 709 t1= 3.00004872733784
|
experiments/Exons_MarkovModel_KMER3_v0.ipynb | ###Markdown
Setup
###Code
!pip install git+https://github.com/ML-Bioinfo-CEITEC/rbp
import pandas as pd
import numpy as np
from rbp.random.markov_model import MarkovModel
from tqdm import tqdm, notebook
notebook.tqdm().pandas()
# Mount your Google Drive so that files in your Drive location can be accessed
from google.colab import drive
drive.mount('/content/drive')
###Output
Go to this URL in a browser: https://accounts.google.com/o/oauth2/auth?client_id=947318989803-6bn6qk8qdgf4n4g3pfee6491hc0brc4i.apps.googleusercontent.com&redirect_uri=urn%3aietf%3awg%3aoauth%3a2.0%3aoob&response_type=code&scope=email%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdocs.test%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive%20https%3a%2f%2fwww.googleapis.com%2fauth%2fdrive.photos.readonly%20https%3a%2f%2fwww.googleapis.com%2fauth%2fpeopleapi.readonly
Enter your authorization code:
··········
Mounted at /content/drive
###Markdown
Data
###Code
dt = pd.read_csv("/content/drive/My Drive/data/random/random_exons.csv")
dt.columns = ['chr', 'start', 'end', 'seq']
dt = dt[~dt.seq.str.contains("N")] # just to be sure (drop sequences containing undetermined bases)
train = dt[dt.chr!="1"]
test = dt[dt.chr=="1"]
print(dt.shape, train.shape, test.shape)
dt.head()
###Output
(50000, 4) (45336, 4) (4664, 4)
###Markdown
Markov Model
###Code
mm = MarkovModel(k=3)
mm.fit(train.seq)
def predict3(prev3):
tmp = mm.kmer_pairs[mm.kmer_pairs.index.get_level_values('prev') == prev3].reset_index().drop(columns='prev').sort_values(0, ascending=False)
return tmp.curr.iloc[0]
predict3("ACG")
next3 = {x: predict3(x) for x in mm.kmer_pairs.index.get_level_values('prev')}
next3['ACG']
def compare3(x, y):
return int(x[0] == y[0]), int(x[1] == y[1]), int(x[2] == y[2])
compare3('GGG', 'AGG')
def stat3(s):
sdf = pd.DataFrame({'prev': [s[i:i+3] for i in range(50,200-3,3)], 'next': [s[i:i+3] for i in range(53,200,3)]})
sdf['nextpred'] = sdf.prev.apply(predict3)
sdf[['c1', 'c2', 'c3']] = pd.DataFrame.from_records(sdf.apply(lambda row: compare3(row['next'], row['nextpred']), axis=1), columns=['c1', 'c2', 'c3'])
return sdf[['c1', 'c2', 'c3']].mean(axis=0)
stats = test.seq.progress_apply(stat3)
stats.mean(axis=0)
###Output
_____no_output_____ |
Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization/week1/Regularization+-+v2.ipynb | ###Markdown
RegularizationWelcome to the second assignment of this week. Deep Learning models have so much flexibility and capacity that **overfitting can be a serious problem**, if the training dataset is not big enough. Sure it does well on the training set, but the learned network **doesn't generalize to new examples** that it has never seen!**You will learn to:** Use regularization in your deep learning models.Let's first import the packages you are going to use.
###Code
# import packages
import numpy as np
import matplotlib.pyplot as plt
from reg_utils import sigmoid, relu, plot_decision_boundary, initialize_parameters, load_2D_dataset, predict_dec
from reg_utils import compute_cost, predict, forward_propagation, backward_propagation, update_parameters
import sklearn
import sklearn.datasets
import scipy.io
from testCases import *
%matplotlib inline
plt.rcParams['figure.figsize'] = (7.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
###Output
_____no_output_____
###Markdown
**Problem Statement**: You have just been hired as an AI expert by the French Football Corporation. They would like you to recommend positions where France's goal keeper should kick the ball so that the French team's players can then hit it with their head. **Figure 1** : **Football field** The goal keeper kicks the ball in the air, the players of each team are fighting to hit the ball with their head They give you the following 2D dataset from France's past 10 games.
###Code
train_X, train_Y, test_X, test_Y = load_2D_dataset()
###Output
_____no_output_____
###Markdown
Each dot corresponds to a position on the football field where a football player has hit the ball with his/her head after the French goal keeper has shot the ball from the left side of the football field.- If the dot is blue, it means the French player managed to hit the ball with his/her head- If the dot is red, it means the other team's player hit the ball with their head**Your goal**: Use a deep learning model to find the positions on the field where the goalkeeper should kick the ball. **Analysis of the dataset**: This dataset is a little noisy, but it looks like a diagonal line separating the upper left half (blue) from the lower right half (red) would work well. You will first try a non-regularized model. Then you'll learn how to regularize it and decide which model you will choose to solve the French Football Corporation's problem. 1 - Non-regularized modelYou will use the following neural network (already implemented for you below). This model can be used:- in *regularization mode* -- by setting the `lambd` input to a non-zero value. We use "`lambd`" instead of "`lambda`" because "`lambda`" is a reserved keyword in Python. - in *dropout mode* -- by setting the `keep_prob` to a value less than oneYou will first try the model without any regularization. Then, you will implement:- *L2 regularization* -- functions: "`compute_cost_with_regularization()`" and "`backward_propagation_with_regularization()`"- *Dropout* -- functions: "`forward_propagation_with_dropout()`" and "`backward_propagation_with_dropout()`"In each part, you will run this model with the correct inputs so that it calls the functions you've implemented. Take a look at the code below to familiarize yourself with the model.
###Code
def model(X, Y, learning_rate = 0.3, num_iterations = 30000, print_cost = True, lambd = 0, keep_prob = 1):
"""
Implements a three-layer neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (input size, number of examples)
Y -- true "label" vector (1 for blue dot / 0 for red dot), of shape (output size, number of examples)
learning_rate -- learning rate of the optimization
num_iterations -- number of iterations of the optimization loop
print_cost -- If True, print the cost every 10000 iterations
lambd -- regularization hyperparameter, scalar
keep_prob - probability of keeping a neuron active during drop-out, scalar.
Returns:
parameters -- parameters learned by the model. They can then be used to predict.
"""
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
layers_dims = [X.shape[0], 20, 3, 1]
# Initialize parameters dictionary.
parameters = initialize_parameters(layers_dims)
# Loop (gradient descent)
for i in range(0, num_iterations):
# Forward propagation: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID.
if keep_prob == 1:
a3, cache = forward_propagation(X, parameters)
elif keep_prob < 1:
a3, cache = forward_propagation_with_dropout(X, parameters, keep_prob)
# Cost function
if lambd == 0:
cost = compute_cost(a3, Y)
else:
cost = compute_cost_with_regularization(a3, Y, parameters, lambd)
# Backward propagation.
assert(lambd==0 or keep_prob==1) # it is possible to use both L2 regularization and dropout,
# but this assignment will only explore one at a time
if lambd == 0 and keep_prob == 1:
grads = backward_propagation(X, Y, cache)
elif lambd != 0:
grads = backward_propagation_with_regularization(X, Y, cache, lambd)
elif keep_prob < 1:
grads = backward_propagation_with_dropout(X, Y, cache, keep_prob)
# Update parameters.
parameters = update_parameters(parameters, grads, learning_rate)
# Print the loss every 10000 iterations
if print_cost and i % 10000 == 0:
print("Cost after iteration {}: {}".format(i, cost))
if print_cost and i % 1000 == 0:
costs.append(cost)
# plot the cost
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (x1,000)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
return parameters
###Output
_____no_output_____
###Markdown
Let's train the model without any regularization, and observe the accuracy on the train/test sets.
###Code
parameters = model(train_X, train_Y)
print ("On the training set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6557412523481002
Cost after iteration 10000: 0.16329987525724216
Cost after iteration 20000: 0.13851642423255986
###Markdown
The train accuracy is 94.8% while the test accuracy is 91.5%. This is the **baseline model** (you will observe the impact of regularization on this model). Run the following code to plot the decision boundary of your model.
###Code
plt.title("Model without regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
The non-regularized model is obviously overfitting the training set. It is fitting the noisy points! Lets now look at two techniques to reduce overfitting. 2 - L2 RegularizationThe standard way to avoid overfitting is called **L2 regularization**. It consists of appropriately modifying your cost function, from:$$J = -\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} \tag{1}$$To:$$J_{regularized} = \small \underbrace{-\frac{1}{m} \sum\limits_{i = 1}^{m} \large{(}\small y^{(i)}\log\left(a^{[L](i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right) \large{)} }_\text{cross-entropy cost} + \underbrace{\frac{1}{m} \frac{\lambda}{2} \sum\limits_l\sum\limits_k\sum\limits_j W_{k,j}^{[l]2} }_\text{L2 regularization cost} \tag{2}$$Let's modify your cost and observe the consequences.**Exercise**: Implement `compute_cost_with_regularization()` which computes the cost given by formula (2). To calculate $\sum\limits_k\sum\limits_j W_{k,j}^{[l]2}$ , use :```pythonnp.sum(np.square(Wl))```Note that you have to do this for $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$, then sum the three terms and multiply by $ \frac{1}{m} \frac{\lambda}{2} $.
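As a quick sanity check of formula (2) before filling in the graded function below, here is a toy evaluation of the penalty term with two made-up $2\times 2$ matrices (the exercise itself sums over $W^{[1]}$, $W^{[2]}$ and $W^{[3]}$):

```python
# toy illustration only -- made-up weights, not the model's parameters
import numpy as np
lambd, m = 0.1, 4
W1 = np.array([[1., -2.], [0.5, 0.]])
W2 = np.array([[0.1, 0.3], [-0.2, 1.0]])
L2_regularization_cost = (lambd / (2 * m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)))
print(L2_regularization_cost)  # 0.0125 * (5.25 + 1.14) ≈ 0.079875
```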
###Code
# GRADED FUNCTION: compute_cost_with_regularization
def compute_cost_with_regularization(A3, Y, parameters, lambd):
"""
Implement the cost function with L2 regularization. See formula (2) above.
Arguments:
A3 -- post-activation, output of forward propagation, of shape (output size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
parameters -- python dictionary containing parameters of the model
Returns:
cost - value of the regularized loss function (formula (2))
"""
m = Y.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
W3 = parameters["W3"]
cross_entropy_cost = compute_cost(A3, Y) # This gives you the cross-entropy part of the cost
### START CODE HERE ### (approx. 1 line)
L2_regularization_cost = (lambd/(2*m)) * (np.sum(np.square(W1)) + np.sum(np.square(W2)) + np.sum(np.square(W3)))
### END CODE HERE ###
cost = cross_entropy_cost + L2_regularization_cost
return cost
A3, Y_assess, parameters = compute_cost_with_regularization_test_case()
print("cost = " + str(compute_cost_with_regularization(A3, Y_assess, parameters, lambd = 0.1)))
###Output
cost = 1.78648594516
###Markdown
**Expected Output**: **cost** 1.78648594516 Of course, because you changed the cost, you have to change backward propagation as well! All the gradients have to be computed with respect to this new cost. **Exercise**: Implement the changes needed in backward propagation to take into account regularization. The changes only concern dW1, dW2 and dW3. For each, you have to add the regularization term's gradient ($\frac{d}{dW} ( \frac{1}{2}\frac{\lambda}{m} W^2) = \frac{\lambda}{m} W$).
###Code
# GRADED FUNCTION: backward_propagation_with_regularization
def backward_propagation_with_regularization(X, Y, cache, lambd):
"""
Implements the backward propagation of our baseline model to which we added an L2 regularization.
Arguments:
X -- input dataset, of shape (input size, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation()
lambd -- regularization hyperparameter, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
### START CODE HERE ### (approx. 1 line)
dW3 = 1./m * np.dot(dZ3, A2.T) + (lambd/m) * W3
### END CODE HERE ###
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
### START CODE HERE ### (approx. 1 line)
dW2 = 1./m * np.dot(dZ2, A1.T) + (lambd/m) * W2
### END CODE HERE ###
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
### START CODE HERE ### (approx. 1 line)
dW1 = 1./m * np.dot(dZ1, X.T) + (lambd/m) * W1
### END CODE HERE ###
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_regularization_test_case()
grads = backward_propagation_with_regularization(X_assess, Y_assess, cache, lambd = 0.7)
print ("dW1 = "+ str(grads["dW1"]))
print ("dW2 = "+ str(grads["dW2"]))
print ("dW3 = "+ str(grads["dW3"]))
###Output
dW1 = [[-0.25604646 0.12298827 -0.28297129]
[-0.17706303 0.34536094 -0.4410571 ]]
dW2 = [[ 0.79276486 0.85133918]
[-0.0957219 -0.01720463]
[-0.13100772 -0.03750433]]
dW3 = [[-1.77691347 -0.11832879 -0.09397446]]
###Markdown
**Expected Output**: **dW1** [[-0.25604646 0.12298827 -0.28297129] [-0.17706303 0.34536094 -0.4410571 ]] **dW2** [[ 0.79276486 0.85133918] [-0.0957219 -0.01720463] [-0.13100772 -0.03750433]] **dW3** [[-1.77691347 -0.11832879 -0.09397446]] Let's now run the model with L2 regularization $(\lambda = 0.7)$. The `model()` function will call: - `compute_cost_with_regularization` instead of `compute_cost`- `backward_propagation_with_regularization` instead of `backward_propagation`
###Code
parameters = model(train_X, train_Y, lambd = 0.7)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6974484493131264
Cost after iteration 10000: 0.2684918873282239
Cost after iteration 20000: 0.2680916337127301
###Markdown
Congrats, the test set accuracy increased to 93%. You have saved the French football team!You are not overfitting the training data anymore. Let's plot the decision boundary.
###Code
plt.title("Model with L2-regularization")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____
###Markdown
**Observations**:- The value of $\lambda$ is a hyperparameter that you can tune using a dev set.- L2 regularization makes your decision boundary smoother. If $\lambda$ is too large, it is also possible to "oversmooth", resulting in a model with high bias.**What is L2-regularization actually doing?**:L2-regularization relies on the assumption that a model with small weights is simpler than a model with large weights. Thus, by penalizing the square values of the weights in the cost function you drive all the weights to smaller values. It becomes too costly for the cost to have large weights! This leads to a smoother model in which the output changes more slowly as the input changes. **What you should remember** -- the implications of L2-regularization on:- The cost computation: - A regularization term is added to the cost- The backpropagation function: - There are extra terms in the gradients with respect to weight matrices- Weights end up smaller ("weight decay"): - Weights are pushed to smaller values. 3 - DropoutFinally, **dropout** is a widely used regularization technique that is specific to deep learning. **It randomly shuts down some neurons in each iteration.** Watch these two videos to see what this means!<!--To understand drop-out, consider this conversation with a friend:- Friend: "Why do you need all these neurons to train your network and classify images?". - You: "Because each neuron contains a weight and can learn specific features/details/shape of an image. The more neurons I have, the more featurse my model learns!"- Friend: "I see, but are you sure that your neurons are learning different features and not all the same features?"- You: "Good point... Neurons in the same layer actually don't talk to each other. It should be definitly possible that they learn the same image features/shapes/forms/details... which would be redundant. There should be a solution."!--> Figure 2 : Drop-out on the second hidden layer. At each iteration, you shut down (= set to zero) each neuron of a layer with probability $1 - keep\_prob$ or keep it with probability $keep\_prob$ (50% here). The dropped neurons don't contribute to the training in both the forward and backward propagations of the iteration. Figure 3 : Drop-out on the first and third hidden layers. $1^{st}$ layer: we shut down on average 40% of the neurons. $3^{rd}$ layer: we shut down on average 20% of the neurons. When you shut some neurons down, you actually modify your model. The idea behind drop-out is that at each iteration, you train a different model that uses only a subset of your neurons. With dropout, your neurons thus become less sensitive to the activation of one other specific neuron, because that other neuron might be shut down at any time. 3.1 - Forward propagation with dropout**Exercise**: Implement the forward propagation with dropout. You are using a 3 layer neural network, and will add dropout to the first and second hidden layers. We will not apply dropout to the input layer or output layer. **Instructions**:You would like to shut down some neurons in the first and second layers. To do that, you are going to carry out 4 Steps:1. In lecture, we dicussed creating a variable $d^{[1]}$ with the same shape as $a^{[1]}$ using `np.random.rand()` to randomly get numbers between 0 and 1. Here, you will use a vectorized implementation, so create a random matrix $D^{[1]} = [d^{[1](1)} d^{[1](2)} ... d^{[1](m)}] $ of the same dimension as $A^{[1]}$.2. 
Set each entry of $D^{[1]}$ to be 0 with probability (`1-keep_prob`) or 1 with probability (`keep_prob`), by thresholding values in $D^{[1]}$ appropriately. Hint: to set all the entries of a matrix X to 0 (if entry is less than 0.5) or 1 (if entry is more than 0.5) you would do: `X = (X < 0.5)`. Note that 0 and 1 are respectively equivalent to False and True.3. Set $A^{[1]}$ to $A^{[1]} * D^{[1]}$. (You are shutting down some neurons). You can think of $D^{[1]}$ as a mask, so that when it is multiplied with another matrix, it shuts down some of the values.4. Divide $A^{[1]}$ by `keep_prob`. By doing this you are assuring that the result of the cost will still have the same expected value as without drop-out. (This technique is also called inverted dropout.)
###Code
# GRADED FUNCTION: forward_propagation_with_dropout
def forward_propagation_with_dropout(X, parameters, keep_prob = 0.5):
"""
Implements the forward propagation: LINEAR -> RELU + DROPOUT -> LINEAR -> RELU + DROPOUT -> LINEAR -> SIGMOID.
Arguments:
X -- input dataset, of shape (2, number of examples)
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (20, 2)
b1 -- bias vector of shape (20, 1)
W2 -- weight matrix of shape (3, 20)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
A3 -- last activation value, output of the forward propagation, of shape (1,1)
cache -- tuple, information stored for computing the backward propagation
"""
np.random.seed(1)
# retrieve parameters
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
### START CODE HERE ### (approx. 4 lines) # Steps 1-4 below correspond to the Steps 1-4 described above.
D1 = np.random.rand(A1.shape[0],A1.shape[1]) # Step 1: initialize matrix D1 = np.random.rand(..., ...)
D1 = D1<keep_prob # Step 2: convert entries of D1 to 0 or 1 (using keep_prob as the threshold)
A1 = A1*D1 # Step 3: shut down some neurons of A1
A1 = A1/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
### START CODE HERE ### (approx. 4 lines)
D2 = np.random.rand(A2.shape[0],A2.shape[1]) # Step 1: initialize matrix D2 = np.random.rand(..., ...)
D2 = D2<keep_prob # Step 2: convert entries of D2 to 0 or 1 (using keep_prob as the threshold)
A2 = A2*D2 # Step 3: shut down some neurons of A2
A2 = A2/keep_prob # Step 4: scale the value of neurons that haven't been shut down
### END CODE HERE ###
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
cache = (Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3)
return A3, cache
X_assess, parameters = forward_propagation_with_dropout_test_case()
A3, cache = forward_propagation_with_dropout(X_assess, parameters, keep_prob = 0.7)
print ("A3 = " + str(A3))
###Output
A3 = [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]]
###Markdown
**Expected Output**: **A3** [[ 0.36974721 0.00305176 0.04565099 0.49683389 0.36974721]] 3.2 - Backward propagation with dropout**Exercise**: Implement the backward propagation with dropout. As before, you are training a 3 layer network. Add dropout to the first and second hidden layers, using the masks $D^{[1]}$ and $D^{[2]}$ stored in the cache. **Instruction**:Backpropagation with dropout is actually quite easy. You will have to carry out 2 Steps:1. You had previously shut down some neurons during forward propagation, by applying a mask $D^{[1]}$ to `A1`. In backpropagation, you will have to shut down the same neurons, by reapplying the same mask $D^{[1]}$ to `dA1`. 2. During forward propagation, you had divided `A1` by `keep_prob`. In backpropagation, you'll therefore have to divide `dA1` by `keep_prob` again (the calculus interpretation is that if $A^{[1]}$ is scaled by `keep_prob`, then its derivative $dA^{[1]}$ is also scaled by the same `keep_prob`).
###Code
# GRADED FUNCTION: backward_propagation_with_dropout
def backward_propagation_with_dropout(X, Y, cache, keep_prob):
"""
Implements the backward propagation of our baseline model to which we added dropout.
Arguments:
X -- input dataset, of shape (2, number of examples)
Y -- "true" labels vector, of shape (output size, number of examples)
cache -- cache output from forward_propagation_with_dropout()
keep_prob - probability of keeping a neuron active during drop-out, scalar
Returns:
gradients -- A dictionary with the gradients with respect to each parameter, activation and pre-activation variables
"""
m = X.shape[1]
(Z1, D1, A1, W1, b1, Z2, D2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
### START CODE HERE ### (≈ 2 lines of code)
dA2 = dA2*D2 # Step 1: Apply mask D2 to shut down the same neurons as during the forward propagation
dA2 = dA2/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
### START CODE HERE ### (≈ 2 lines of code)
dA1 = dA1*D1 # Step 1: Apply mask D1 to shut down the same neurons as during the forward propagation
dA1 = dA1/keep_prob # Step 2: Scale the value of neurons that haven't been shut down
### END CODE HERE ###
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,"dA2": dA2,
"dZ2": dZ2, "dW2": dW2, "db2": db2, "dA1": dA1,
"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
X_assess, Y_assess, cache = backward_propagation_with_dropout_test_case()
gradients = backward_propagation_with_dropout(X_assess, Y_assess, cache, keep_prob = 0.8)
print ("dA1 = " + str(gradients["dA1"]))
print ("dA2 = " + str(gradients["dA2"]))
###Output
dA1 = [[ 0.36544439 0. -0.00188233 0. -0.17408748]
[ 0.65515713 0. -0.00337459 0. -0. ]]
dA2 = [[ 0.58180856 0. -0.00299679 0. -0.27715731]
[ 0. 0.53159854 -0. 0.53159854 -0.34089673]
[ 0. 0. -0.00292733 0. -0. ]]
###Markdown
**Expected Output**: **dA1** [[ 0.36544439 0. -0.00188233 0. -0.17408748] [ 0.65515713 0. -0.00337459 0. -0. ]] **dA2** [[ 0.58180856 0. -0.00299679 0. -0.27715731] [ 0. 0.53159854 -0. 0.53159854 -0.34089673] [ 0. 0. -0.00292733 0. -0. ]] Let's now run the model with dropout (`keep_prob = 0.86`). It means at every iteration you shut down each neurons of layer 1 and 2 with 14% probability. The function `model()` will now call:- `forward_propagation_with_dropout` instead of `forward_propagation`.- `backward_propagation_with_dropout` instead of `backward_propagation`.
###Code
parameters = model(train_X, train_Y, keep_prob = 0.86, learning_rate = 0.3)
print ("On the train set:")
predictions_train = predict(train_X, train_Y, parameters)
print ("On the test set:")
predictions_test = predict(test_X, test_Y, parameters)
###Output
Cost after iteration 0: 0.6543912405149825
###Markdown
Dropout works great! The test accuracy has increased again (to 95%)! Your model is not overfitting the training set and does a great job on the test set. The French football team will be forever grateful to you! Run the code below to plot the decision boundary.
###Code
plt.title("Model with dropout")
axes = plt.gca()
axes.set_xlim([-0.75,0.40])
axes.set_ylim([-0.75,0.65])
plot_decision_boundary(lambda x: predict_dec(parameters, x.T), train_X, train_Y)
###Output
_____no_output_____ |
src/03_ai_wiki/09_score_viz.ipynb | ###Markdown
Score Distribution
###Code
import pandas as pd
import statistics #calculate mean and others
import matplotlib.pyplot as plt
from scipy import stats
from scipy.stats import skew
import numpy as np
#import data
abstracts_all = pd.read_csv(r'/home/zz3hs/git/dspg21RnD/data/dspg21RnD/abstracts_embedding_score_stats.csv')
abstracts_all
###Output
_____no_output_____
###Markdown
Distribution of number of sentences per abstract
###Code
hist = abstracts_all.num_sentences.hist(bins=30)
print(abstracts_all.num_sentences.describe())
## Distribution of sentence-score skewness
hist = abstracts_all.skewness_sentence_score.hist(bins=30)
print(abstracts_all.skewness_sentence_score.describe())
###Output
count 690807.000000
mean 0.100119
std 0.582395
min -2.869451
25% -0.266175
50% 0.079460
75% 0.465173
max 3.357550
Name: skewness_sentence_score, dtype: float64
###Markdown
Abstract score distribution
###Code
## Distribution of sentences average
plt.figure(figsize=[15,8])
hist = abstracts_all.mean_abstract_score.hist(bins=30)
hist.axvline(x=np.mean(abstracts_all.mean_abstract_score)+2*np.std(abstracts_all.mean_abstract_score), ls = "-", color='#F18015', alpha=1) # alpha must lie in [0, 1]
plt.xlabel('Abstract Cosine-similarity Score',fontsize=15)
plt.ylabel('Frequency',fontsize=15)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.title('Distribution of Abstract Cosine Similarity Score' ,fontsize=20)
plt.show()
print(abstracts_all.mean_abstract_score.describe())
###Output
_____no_output_____
###Markdown
All abstract: mean sentence score vs max sentence score
###Code
#Distribution of abstract average score
fig, axes = plt.subplots(1, 2)
abstracts_all.mean_abstract_score.hist(bins=80,ax=axes[0])
abstracts_all.max_sentence_score.hist(bins=80, ax=axes[1])
print(abstracts_all.mean_abstract_score.describe())
print(abstracts_all.max_sentence_score.describe())
###Output
count 690807.000000
mean 0.302218
std 0.048859
min 0.081673
25% 0.266460
50% 0.298573
75% 0.334063
max 0.576933
Name: mean_abstract_score, dtype: float64
count 690807.000000
mean 0.390864
std 0.055314
min 0.081673
25% 0.354213
50% 0.392803
75% 0.428260
max 0.718298
Name: max_sentence_score, dtype: float64
###Markdown
Comparing the mean embedding score between AI and non-AI abstracts
###Code
#Distribution of abstract average score
abstracts_not_ai = abstracts_all[abstracts_all["IS_AI"] == False]
abstracts_ai = abstracts_all[abstracts_all["IS_AI"] == True]
fig, axes = plt.subplots(1, 2)
abstracts_not_ai.abstract_score.hist(bins=80,ax=axes[0])
abstracts_ai.abstract_score.hist(bins=20, ax=axes[1])
print(abstracts_not_ai.mean_abstract_score.describe())
print(abstracts_ai.mean_abstract_score.describe())
abstracts_not_ai = abstracts_all.query('IS_AI == False')['abstract_score']
abstracts_ai = abstracts_all.query('IS_AI == True')['abstract_score']
res = stats.ttest_ind(abstracts_ai, abstracts_not_ai, equal_var=True)
display(res)
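# The two groups are very different in size (roughly 685k vs. 5.7k abstracts), so the
# equal-variance assumption above may be questionable; Welch's t-test (equal_var=False)
# is a common, more conservative alternative.
res_welch = stats.ttest_ind(abstracts_ai, abstracts_not_ai, equal_var=False)
display(res_welch)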
###Output
_____no_output_____
###Markdown
Comparing the median embedding score between AI and non-AI abstracts
###Code
#Distribution of abstract average score
abstracts_not_ai = abstracts_all[abstracts_all["IS_AI"] == False]
abstracts_ai = abstracts_all[abstracts_all["IS_AI"] == True]
fig, axes = plt.subplots(1, 2)
abstracts_not_ai.median_sentence_score.hist(bins=30,ax=axes[0])
abstracts_ai.median_sentence_score.hist(bins=20, ax=axes[1])
print(abstracts_not_ai.median_sentence_score.describe())
print(abstracts_ai.median_sentence_score.describe())
###Output
count 685060.000000
mean 0.300391
std 0.051816
min 0.081673
25% 0.262005
50% 0.297113
75% 0.334778
max 0.585794
Name: median_sentence_score, dtype: float64
count 5747.000000
mean 0.375073
std 0.055289
min 0.200108
25% 0.337349
50% 0.376024
75% 0.413810
max 0.585170
Name: median_sentence_score, dtype: float64
###Markdown
Comparing the max embedding score between AI and non-AI abstracts
###Code
#Distribution of abstract average score
abstracts_not_ai = abstracts_all[abstracts_all["IS_AI"] == False]
abstracts_ai = abstracts_all[abstracts_all["IS_AI"] == True]
fig, axes = plt.subplots(1, 2)
abstracts_not_ai.max_sentence_score.hist(bins=30,ax=axes[0])
abstracts_ai.max_sentence_score.hist(bins=20, ax=axes[1])
print(abstracts_not_ai.max_sentence_score.describe())
print(abstracts_ai.max_sentence_score.describe())
###Output
count 685060.000000
mean 0.390211
std 0.054847
min 0.081673
25% 0.353858
50% 0.392310
75% 0.427536
max 0.718298
Name: max_sentence_score, dtype: float64
count 5747.000000
mean 0.468651
std 0.055617
min 0.257805
25% 0.431931
50% 0.468303
75% 0.506304
max 0.667100
Name: max_sentence_score, dtype: float64
###Markdown
Distribution of the difference between max and min sentence score per abstract
###Code
hist = abstracts_all.range_sentence_score.hist(bins=100)
print(abstracts_all.range_sentence_score.describe())
###Output
count 690773.000000
mean 0.175742
std 0.062373
min 0.000000
25% 0.136448
50% 0.175998
75% 0.217013
max 0.506595
Name: diff_sentence_score, dtype: float64
###Markdown
Choose a cutoff
###Code
sd = abstracts_all.mean_abstract_score.std()
mean = abstracts_all.mean_abstract_score.mean()
cutoff = mean + 2.5*sd
cutoff
abstracts_ai = abstracts_all[(abstracts_all["mean_abstract_score"] > cutoff)]
abstracts_ai = abstracts_ai[["PROJECT_ID_x", "ABSTRACT_x", "final_frqwds_removed_x", "PROJECT_TITLE_x", "mean_abstract_score"]]
abstracts_ai = abstracts_ai.rename(columns={
"PROJECT_ID_x":"PROJECT_ID",
"ABSTRACT_x":"ABSTRACT",
"final_frqwds_removed_x": "final_frqwds_removed",
"PROJECT_TITLE_x": "PROJECT_TITLE",
"mean_abstract_score": "cosine_similarity_score"})
abstracts_ai["IS_AI_BERT"] = True
abstracts_ai["PROJECT_ID"] = abstracts_ai["PROJECT_ID"].astype(int)
abstracts_ai.info()
print("Results: {:.2f}% (N={}) of the projects are classified as AI related.".format(len(abstracts_ai)/len(abstracts_all)*100, len(abstracts_ai)))
#abstracts_ai.to_csv(r'/home/zz3hs/git/dspg21RnD/data/dspg21RnD/bert_ai_abstracts_2.csv', index = False)
###Output
_____no_output_____ |
Lec 5 Case Study 1/Dataset/CaseStudy_main.ipynb | ###Markdown
Stack Overflow Case Study
###Code
import pandas as pd
# question 1: Find the display name and no. of posts created by the user who has got maximum reputation.
user_data = pd.read_csv("Users.csv")
post_data = pd.read_csv("post.csv")
comment_data = pd.read_csv("comments.csv")
PostType_data = pd.read_csv("posttypes.csv")
#user_data.max('reputation')
user_data['reputation'].max()
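# The question also asks for the display name and number of posts of that user.
# A sketch of one way to finish it is below; the column names ('id' and 'display_name'
# in Users.csv, 'owner_user_id' in post.csv) are assumptions about the schema and may
# need to be adjusted to the actual CSV headers.
top_user = user_data.loc[user_data['reputation'].idxmax()]
num_posts = (post_data['owner_user_id'] == top_user['id']).sum()
print(top_user['display_name'], num_posts)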
###Output
_____no_output_____ |
vui_notebook_spectrogram.ipynb | ###Markdown
Artificial Intelligence Nanodegree Voice User Interfaces Project: Speech Recognition with Neural Networks---In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with **'(IMPLEMENTATION)'** in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully! > **Note**: Once you have completed all of the code implementations, you need to finalize your work by exporting the Jupyter Notebook as an HTML document. Before exporting the notebook to html, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to \n", "**File -> Download as -> HTML (.html)**. Include the finished document along with this notebook as your submission.In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.>**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.The rubric contains _optional_ "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. If you decide to pursue the "Stand Out Suggestions", you should include the code in this Jupyter notebook.--- Introduction In this notebook, you will build a deep neural network that functions as part of an end-to-end automatic speech recognition (ASR) pipeline! Your completed pipeline will accept raw audio as input and return a predicted transcription of the spoken language. The full pipeline is summarized in the figure below.- **STEP 1** is a pre-processing step that converts raw audio to one of two feature representations that are commonly used for ASR. - **STEP 2** is an acoustic model which accepts audio features as input and returns a probability distribution over all potential transcriptions. After learning about the basic types of neural networks that are often used for acoustic modeling, you will engage in your own investigations, to design your own acoustic model!- **STEP 3** in the pipeline takes the output from the acoustic model and returns a predicted transcription. Feel free to use the links below to navigate the notebook:- [The Data](thedata)- [**STEP 1**](step1): Acoustic Features for Speech Recognition- [**STEP 2**](step2): Deep Neural Networks for Acoustic Modeling - [Model 0](model0): RNN - [Model 1](model1): RNN + TimeDistributed Dense - [Model 2](model2): CNN + RNN + TimeDistributed Dense - [Model 3](model3): Deeper RNN + TimeDistributed Dense - [Model 4](model4): Bidirectional RNN + TimeDistributed Dense - [Models 5+](model5) - [Compare the Models](compare) - [Final Model](final)- [**STEP 3**](step3): Obtain Predictions The DataWe begin by investigating the dataset that will be used to train and evaluate your pipeline. 
[LibriSpeech](http://www.danielpovey.com/files/2015_icassp_librispeech.pdf) is a large corpus of English-read speech, designed for training and evaluating models for ASR. The dataset contains 1000 hours of speech derived from audiobooks. We will work with a small subset in this project, since larger-scale data would take a long while to train. However, after completing this project, if you are interested in exploring further, you are encouraged to work with more of the data that is provided [online](http://www.openslr.org/12/).In the code cells below, you will use the `vis_train_features` module to visualize a training example. The supplied argument `index=0` tells the module to extract the first example in the training set. (You are welcome to change `index=0` to point to a different training example, if you like, but please **DO NOT** amend any other code in the cell.) The returned variables are:- `vis_text` - transcribed text (label) for the training example.- `vis_raw_audio` - raw audio waveform for the training example.- `vis_mfcc_feature` - mel-frequency cepstral coefficients (MFCCs) for the training example.- `vis_spectrogram_feature` - spectrogram for the training example. - `vis_audio_path` - the file path to the training example.
###Code
from data_generator import vis_train_features
# extract label and audio features for a single training example
vis_text, vis_raw_audio, vis_mfcc_feature, vis_spectrogram_feature, vis_audio_path = vis_train_features()
###Output
There are 2023 total training examples.
###Markdown
The following code cell visualizes the audio waveform for your chosen example, along with the corresponding transcript. You also have the option to play the audio in the notebook!
###Code
from IPython.display import Markdown, display
from data_generator import vis_train_features, plot_raw_audio
from IPython.display import Audio
%matplotlib inline
# plot audio signal
plot_raw_audio(vis_raw_audio)
# print length of audio signal
display(Markdown('**Shape of Audio Signal** : ' + str(vis_raw_audio.shape)))
# print transcript corresponding to audio clip
display(Markdown('**Transcript** : ' + str(vis_text)))
# play the audio file
Audio(vis_audio_path)
###Output
_____no_output_____
###Markdown
STEP 1: Acoustic Features for Speech RecognitionFor this project, you won't use the raw audio waveform as input to your model. Instead, we provide code that first performs a pre-processing step to convert the raw audio to a feature representation that has historically proven successful for ASR models. Your acoustic model will accept the feature representation as input.In this project, you will explore two possible feature representations. _After completing the project_, if you'd like to read more about deep learning architectures that can accept raw audio input, you are encouraged to explore this [research paper](https://pdfs.semanticscholar.org/a566/cd4a8623d661a4931814d9dffc72ecbf63c4.pdf). SpectrogramsThe first option for an audio feature representation is the [spectrogram](https://www.youtube.com/watch?v=_FatxGN3vAM). In order to complete this project, you will **not** need to dig deeply into the details of how a spectrogram is calculated; but, if you are curious, the code for calculating the spectrogram was borrowed from [this repository](https://github.com/baidu-research/ba-dls-deepspeech). The implementation appears in the `utils.py` file in your repository.The code that we give you returns the spectrogram as a 2D tensor, where the first (_vertical_) dimension indexes time, and the second (_horizontal_) dimension indexes frequency. To speed the convergence of your algorithm, we have also normalized the spectrogram. (You can see this quickly in the visualization below by noting that the mean value hovers around zero, and most entries in the tensor assume values close to zero.)
###Code
from data_generator import plot_spectrogram_feature
# plot normalized spectrogram
plot_spectrogram_feature(vis_spectrogram_feature)
# print shape of spectrogram
display(Markdown('**Shape of Spectrogram** : ' + str(vis_spectrogram_feature.shape)))
###Output
_____no_output_____
###Markdown
Mel-Frequency Cepstral Coefficients (MFCCs)The second option for an audio feature representation is [MFCCs](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum). You do **not** need to dig deeply into the details of how MFCCs are calculated, but if you would like more information, you are welcome to peruse the [documentation](https://github.com/jameslyons/python_speech_features) of the `python_speech_features` Python package. Just as with the spectrogram features, the MFCCs are normalized in the supplied code.The main idea behind MFCC features is the same as spectrogram features: at each time window, the MFCC feature yields a feature vector that characterizes the sound within the window. Note that the MFCC feature is much lower-dimensional than the spectrogram feature, which could help an acoustic model to avoid overfitting to the training dataset.
###Code
from data_generator import plot_mfcc_feature
# plot normalized MFCC
plot_mfcc_feature(vis_mfcc_feature)
# print shape of MFCC
display(Markdown('**Shape of MFCC** : ' + str(vis_mfcc_feature.shape)))
###Output
_____no_output_____
###Markdown
When you construct your pipeline, you will be able to choose to use either spectrogram or MFCC features. If you would like to see different implementations that make use of MFCCs and/or spectrograms, please check out the links below:- This [repository](https://github.com/baidu-research/ba-dls-deepspeech) uses spectrograms.- This [repository](https://github.com/mozilla/DeepSpeech) uses MFCCs.- This [repository](https://github.com/buriburisuri/speech-to-text-wavenet) also uses MFCCs.- This [repository](https://github.com/pannous/tensorflow-speech-recognition/blob/master/speech_data.py) experiments with raw audio, spectrograms, and MFCCs as features. STEP 2: Deep Neural Networks for Acoustic ModelingIn this section, you will experiment with various neural network architectures for acoustic modeling. You will begin by training five relatively simple architectures. **Model 0** is provided for you. You will write code to implement **Models 1**, **2**, **3**, and **4**. If you would like to experiment further, you are welcome to create and train more models under the **Models 5+** heading. All models will be specified in the `sample_models.py` file. After importing the `sample_models` module, you will train your architectures in the notebook.After experimenting with the five simple architectures, you will have the opportunity to compare their performance. Based on your findings, you will construct a deeper architecture that is designed to outperform all of the shallow models.For your convenience, we have designed the notebook so that each model can be specified and trained on separate occasions. That is, say you decide to take a break from the notebook after training **Model 1**. Then, you need not re-execute all prior code cells in the notebook before training **Model 2**. You need only re-execute the code cell below, that is marked with **`RUN THIS CODE CELL IF YOU ARE RESUMING THE NOTEBOOK AFTER A BREAK`**, before transitioning to the code cells corresponding to **Model 2**.
###Code
#####################################################################
# RUN THIS CODE CELL IF YOU ARE RESUMING THE NOTEBOOK AFTER A BREAK #
#####################################################################
# allocate 50% of GPU memory (if you like, feel free to change this)
from keras.backend.tensorflow_backend import set_session
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
set_session(tf.Session(config=config))
# watch for any changes in the sample_models module, and reload it automatically
%load_ext autoreload
%autoreload 2
# import NN architectures for speech recognition
from sample_models import *
# import function for training acoustic model
from train_utils import train_model
# for random seed.
import numpy as np
import random as python_random
seed = 345
np.random.seed(seed)
python_random.seed(seed)
tf.set_random_seed(seed)
# for wake_up while long training
from workspace_utils import active_session
###Output
_____no_output_____
###Markdown
Model 0: RNNGiven their effectiveness in modeling sequential data, the first acoustic model you will use is an RNN. As shown in the figure below, the RNN we supply to you will take the time sequence of audio features as input.At each time step, the speaker pronounces one of 28 possible characters, including each of the 26 letters in the English alphabet, along with a space character (" "), and an apostrophe (').The output of the RNN at each time step is a vector of probabilities with 29 entries, where the $i$-th entry encodes the probability that the $i$-th character is spoken in the time sequence. (The extra 29th character is an empty "character" used to pad training examples within batches containing uneven lengths.) If you would like to peek under the hood at how characters are mapped to indices in the probability vector, look at the `char_map.py` file in the repository. The figure below shows an equivalent, rolled depiction of the RNN that shows the output layer in greater detail. The model has already been specified for you in Keras. To import it, you need only run the code cell below.
###Code
model_0 = simple_rnn_model(input_dim=161) # change to 13 if you would like to use MFCC features
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
rnn (GRU) (None, None, 29) 16617
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 16,617
Trainable params: 16,617
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
As explored in the lesson, you will train the acoustic model with the [CTC loss](http://www.cs.toronto.edu/~graves/icml_2006.pdf) criterion. Custom loss functions take a bit of hacking in Keras, and so we have implemented the CTC loss function for you, so that you can focus on trying out as many deep learning architectures as possible :). If you'd like to peek at the implementation details, look at the `add_ctc_loss` function within the `train_utils.py` file in the repository.To train your architecture, you will use the `train_model` function within the `train_utils` module; it has already been imported in one of the above code cells. The `train_model` function takes three **required** arguments:- `input_to_softmax` - a Keras model instance.- `pickle_path` - the name of the pickle file where the loss history will be saved.- `save_model_path` - the name of the HDF5 file where the model will be saved.If we have already supplied values for `input_to_softmax`, `pickle_path`, and `save_model_path`, please **DO NOT** modify these values. There are several **optional** arguments that allow you to have more control over the training process. You are welcome to, but not required to, supply your own values for these arguments.- `minibatch_size` - the size of the minibatches that are generated while training the model (default: `20`).- `spectrogram` - Boolean value dictating whether spectrogram (`True`) or MFCC (`False`) features are used for training (default: `True`).- `mfcc_dim` - the size of the feature dimension to use when generating MFCC features (default: `13`).- `optimizer` - the Keras optimizer used to train the model (default: `SGD(lr=0.02, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5)`). - `epochs` - the number of epochs to use to train the model (default: `20`). If you choose to modify this parameter, make sure that it is *at least* 20.- `verbose` - controls the verbosity of the training output in the `model.fit_generator` method (default: `1`).- `sort_by_duration` - Boolean value dictating whether the training and validation sets are sorted by (increasing) duration before the start of the first epoch (default: `False`).The `train_model` function defaults to using spectrogram features; if you choose to use these features, note that the acoustic model in `simple_rnn_model` should have `input_dim=161`. Otherwise, if you choose to use MFCC features, the acoustic model should have `input_dim=13`.We have chosen to use `GRU` units in the supplied RNN. If you would like to experiment with `LSTM` or `SimpleRNN` cells, feel free to do so here. If you change the `GRU` units to `SimpleRNN` cells in `simple_rnn_model`, you may notice that the loss quickly becomes undefined (`nan`) - you are strongly encouraged to check this for yourself! This is due to the [exploding gradients problem](http://www.wildml.com/2015/10/recurrent-neural-networks-tutorial-part-3-backpropagation-through-time-and-vanishing-gradients/). We have already implemented [gradient clipping](https://arxiv.org/pdf/1211.5063.pdf) in your optimizer to help you avoid this issue.__IMPORTANT NOTE:__ If you notice that your gradient has exploded in any of the models below, feel free to explore more with gradient clipping (the `clipnorm` argument in your optimizer) or swap out any `SimpleRNN` cells for `LSTM` or `GRU` cells. You can also try restarting the kernel to restart the training process.
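For reference, a call that spells out the optional arguments (using the default values listed above) would look like the sketch below; it is illustrative only, and the pickle/HDF5 file names are placeholders rather than files used elsewhere in this notebook.

```python
# illustrative only -- not meant to be run in addition to the training cells below
from keras.optimizers import SGD
train_model(input_to_softmax=simple_rnn_model(input_dim=161),
            pickle_path='model_0_alt.pickle',   # placeholder file name
            save_model_path='model_0_alt.h5',   # placeholder file name
            minibatch_size=20,
            spectrogram=True,
            mfcc_dim=13,
            optimizer=SGD(lr=0.02, decay=1e-6, momentum=0.9, nesterov=True, clipnorm=5),
            epochs=20,
            verbose=1,
            sort_by_duration=False)
```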
###Code
train_model(input_to_softmax=model_0,
pickle_path='model_0.pickle',
save_model_path='model_0.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
101/101 [==============================] - 202s 2s/step - loss: 858.7424 - val_loss: 758.3793
Epoch 2/20
101/101 [==============================] - 204s 2s/step - loss: 779.6717 - val_loss: 758.2544
Epoch 3/20
101/101 [==============================] - 204s 2s/step - loss: 780.0187 - val_loss: 753.3230
Epoch 4/20
101/101 [==============================] - 200s 2s/step - loss: 779.4973 - val_loss: 760.3230
Epoch 5/20
101/101 [==============================] - 200s 2s/step - loss: 779.3189 - val_loss: 757.1243
Epoch 6/20
101/101 [==============================] - 199s 2s/step - loss: 778.5486 - val_loss: 756.9253
Epoch 7/20
101/101 [==============================] - 199s 2s/step - loss: 779.4043 - val_loss: 758.1969
Epoch 8/20
101/101 [==============================] - 199s 2s/step - loss: 778.9669 - val_loss: 758.7745
Epoch 9/20
101/101 [==============================] - 201s 2s/step - loss: 779.4619 - val_loss: 758.1130
Epoch 10/20
101/101 [==============================] - 199s 2s/step - loss: 779.4053 - val_loss: 755.8315
Epoch 11/20
101/101 [==============================] - 200s 2s/step - loss: 779.5066 - val_loss: 756.8292
Epoch 12/20
101/101 [==============================] - 202s 2s/step - loss: 779.3254 - val_loss: 760.4322
Epoch 13/20
101/101 [==============================] - 199s 2s/step - loss: 778.9393 - val_loss: 762.6774
Epoch 14/20
101/101 [==============================] - 200s 2s/step - loss: 779.1311 - val_loss: 756.8228
Epoch 15/20
101/101 [==============================] - 198s 2s/step - loss: 778.7422 - val_loss: 755.1023
Epoch 16/20
101/101 [==============================] - 199s 2s/step - loss: 779.2149 - val_loss: 749.6814
Epoch 17/20
101/101 [==============================] - 200s 2s/step - loss: 779.5027 - val_loss: 760.2210
Epoch 18/20
101/101 [==============================] - 199s 2s/step - loss: 779.0870 - val_loss: 759.4421
Epoch 19/20
101/101 [==============================] - 199s 2s/step - loss: 778.2054 - val_loss: 755.4136
Epoch 20/20
101/101 [==============================] - 199s 2s/step - loss: 778.0067 - val_loss: 758.3084
###Markdown
(IMPLEMENTATION) Model 1: RNN + TimeDistributed DenseRead about the [TimeDistributed](https://keras.io/layers/wrappers/) wrapper and the [BatchNormalization](https://keras.io/layers/normalization/) layer in the Keras documentation. For your next architecture, you will add [batch normalization](https://arxiv.org/pdf/1510.01378.pdf) to the recurrent layer to reduce training times. The `TimeDistributed` layer will be used to find more complex patterns in the dataset. The unrolled snapshot of the architecture is depicted below.The next figure shows an equivalent, rolled depiction of the RNN that shows the (`TimeDistrbuted`) dense and output layers in greater detail. Use your research to complete the `rnn_model` function within the `sample_models.py` file. The function should specify an architecture that satisfies the following requirements:- The first layer of the neural network should be an RNN (`SimpleRNN`, `LSTM`, or `GRU`) that takes the time sequence of audio features as input. We have added `GRU` units for you, but feel free to change `GRU` to `SimpleRNN` or `LSTM`, if you like!- Whereas the architecture in `simple_rnn_model` treated the RNN output as the final layer of the model, you will use the output of your RNN as a hidden layer. Use `TimeDistributed` to apply a `Dense` layer to each of the time steps in the RNN output. Ensure that each `Dense` layer has `output_dim` units.Use the code cell below to load your model into the `model_1` variable. Use a value for `input_dim` that matches your chosen audio features, and feel free to change the values for `units` and `activation` to tweak the behavior of your recurrent layer.
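As a reference point, here is a minimal sketch (not necessarily the exact code in `sample_models.py`) of an architecture that satisfies these requirements and reproduces the layer summary printed below; the `output_length` attribute is an assumption about what `train_utils` expects when wiring up the CTC loss.

```python
# sketch only -- the graded implementation belongs in sample_models.py
from keras.models import Model
from keras.layers import Input, GRU, BatchNormalization, TimeDistributed, Dense, Activation

def rnn_model_sketch(input_dim, units, activation, output_dim=29):
    # variable-length sequence of audio feature vectors
    input_data = Input(name='the_input', shape=(None, input_dim))
    # recurrent layer, returning the full output sequence
    rnn = GRU(units, activation=activation, return_sequences=True, name='rnn')(input_data)
    # batch normalization on the recurrent output to speed up training
    bn_rnn = BatchNormalization()(rnn)
    # TimeDistributed Dense maps every time step to the 29 characters
    time_dense = TimeDistributed(Dense(output_dim))(bn_rnn)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    # assumption: train_utils uses this to map input length to output length for CTC;
    # a plain RNN leaves the sequence length unchanged
    model.output_length = lambda x: x
    print(model.summary())
    return model
```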
###Code
model_1 = rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=200,
activation='relu')
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
rnn (GRU) (None, None, 200) 217200
_________________________________________________________________
batch_normalization_1 (Batch (None, None, 200) 800
_________________________________________________________________
time_distributed_1 (TimeDist (None, None, 29) 5829
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 223,829
Trainable params: 223,429
Non-trainable params: 400
_________________________________________________________________
None
###Markdown
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_1.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_1.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
###Code
with active_session():
train_model(input_to_softmax=model_1,
pickle_path='model_1.pickle',
save_model_path='model_1.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
101/101 [==============================] - 197s 2s/step - loss: 336.2018 - val_loss: 380.6678
Epoch 2/20
101/101 [==============================] - 200s 2s/step - loss: 228.6369 - val_loss: 212.3645
Epoch 3/20
101/101 [==============================] - 200s 2s/step - loss: 193.3269 - val_loss: 187.7139
Epoch 4/20
101/101 [==============================] - 199s 2s/step - loss: 172.0708 - val_loss: 174.9101
Epoch 5/20
101/101 [==============================] - 200s 2s/step - loss: 160.3862 - val_loss: 160.4344
Epoch 6/20
101/101 [==============================] - 199s 2s/step - loss: 152.9146 - val_loss: 155.3362
Epoch 7/20
101/101 [==============================] - 198s 2s/step - loss: 146.3518 - val_loss: 151.9829
Epoch 8/20
101/101 [==============================] - 199s 2s/step - loss: 141.4301 - val_loss: 150.5347
Epoch 9/20
101/101 [==============================] - 198s 2s/step - loss: 137.4764 - val_loss: 148.3308
Epoch 10/20
101/101 [==============================] - 197s 2s/step - loss: 134.7255 - val_loss: 146.5818
Epoch 11/20
101/101 [==============================] - 197s 2s/step - loss: 131.0521 - val_loss: 147.0272
Epoch 12/20
101/101 [==============================] - 197s 2s/step - loss: 128.9655 - val_loss: 141.9102
Epoch 13/20
101/101 [==============================] - 197s 2s/step - loss: 126.1449 - val_loss: 138.6537
Epoch 14/20
101/101 [==============================] - 198s 2s/step - loss: 124.5688 - val_loss: 139.4451
Epoch 15/20
101/101 [==============================] - 197s 2s/step - loss: 122.5694 - val_loss: 141.0313
Epoch 16/20
101/101 [==============================] - 196s 2s/step - loss: 120.5637 - val_loss: 141.2753
Epoch 17/20
101/101 [==============================] - 197s 2s/step - loss: 119.7514 - val_loss: 140.7500
Epoch 18/20
101/101 [==============================] - 196s 2s/step - loss: 117.5524 - val_loss: 136.4533
Epoch 19/20
101/101 [==============================] - 198s 2s/step - loss: 117.4131 - val_loss: 136.2045
Epoch 20/20
101/101 [==============================] - 196s 2s/step - loss: 117.1511 - val_loss: 136.9958
###Markdown
(IMPLEMENTATION) Model 2: CNN + RNN + TimeDistributed DenseThe architecture in `cnn_rnn_model` adds an additional level of complexity, by introducing a [1D convolution layer](https://keras.io/layers/convolutional/conv1d). This layer incorporates many arguments that can be (optionally) tuned when calling the `cnn_rnn_model` module. We provide sample starting parameters, which you might find useful if you choose to use spectrogram audio features. If you instead want to use MFCC features, these arguments will have to be tuned. Note that the current architecture only supports values of `'same'` or `'valid'` for the `conv_border_mode` argument.When tuning the parameters, be careful not to choose settings that make the convolutional layer overly small. If the temporal length of the CNN layer is shorter than the length of the transcribed text label, your code will throw an error.Before running the code cell below, you must modify the `cnn_rnn_model` function in `sample_models.py`. Please add batch normalization to the recurrent layer, and provide the same `TimeDistributed` layer as before.
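As with Model 1, here is a minimal sketch (not necessarily the exact code in `sample_models.py`) of a `cnn_rnn_model`-style architecture with batch normalization after both the convolutional and recurrent layers; the `output_length` mapping is an assumption about what `train_utils` expects, since the strided convolution shortens the time dimension.

```python
# sketch only -- the graded implementation belongs in sample_models.py
from keras.models import Model
from keras.layers import (Input, Conv1D, BatchNormalization, GRU,
                          TimeDistributed, Dense, Activation)

def cnn_rnn_model_sketch(input_dim, filters, kernel_size, conv_stride,
                         conv_border_mode, units, output_dim=29):
    input_data = Input(name='the_input', shape=(None, input_dim))
    # 1D convolution over the time axis of the audio features
    conv_1d = Conv1D(filters, kernel_size, strides=conv_stride,
                     padding=conv_border_mode, activation='relu',
                     name='conv1d')(input_data)
    bn_cnn = BatchNormalization(name='bn_conv_1d')(conv_1d)
    # recurrent layer on top of the convolutional features
    rnn = GRU(units, activation='relu', return_sequences=True, name='rnn')(bn_cnn)
    bn_rnn = BatchNormalization()(rnn)
    time_dense = TimeDistributed(Dense(output_dim))(bn_rnn)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)

    def conv_output_length(input_length):
        # 'valid' trims (kernel_size - 1) steps; both modes then divide by the stride
        if input_length is None:
            return None
        length = input_length if conv_border_mode == 'same' else input_length - kernel_size + 1
        return (length + conv_stride - 1) // conv_stride

    model.output_length = conv_output_length
    print(model.summary())
    return model
```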
###Code
model_2 = cnn_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
filters=200,
kernel_size=11,
conv_stride=2,
conv_border_mode='valid',
units=200)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
conv1d (Conv1D) (None, None, 200) 354400
_________________________________________________________________
bn_conv_1d (BatchNormalizati (None, None, 200) 800
_________________________________________________________________
rnn (GRU) (None, None, 200) 240600
_________________________________________________________________
batch_normalization_2 (Batch (None, None, 200) 800
_________________________________________________________________
time_distributed_2 (TimeDist (None, None, 29) 5829
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 602,429
Trainable params: 601,629
Non-trainable params: 800
_________________________________________________________________
None
###Markdown
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_2.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_2.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
###Code
with active_session():
train_model(input_to_softmax=model_2,
pickle_path='model_2.pickle',
save_model_path='model_2.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
101/101 [==============================] - 108s 1s/step - loss: 258.8556 - val_loss: 241.0477
Epoch 2/20
101/101 [==============================] - 107s 1s/step - loss: 204.1177 - val_loss: 180.1264
Epoch 3/20
101/101 [==============================] - 106s 1s/step - loss: 168.8613 - val_loss: 161.4778
Epoch 4/20
101/101 [==============================] - 107s 1s/step - loss: 153.1842 - val_loss: 152.2105
Epoch 5/20
101/101 [==============================] - 107s 1s/step - loss: 142.4154 - val_loss: 145.8282
Epoch 6/20
101/101 [==============================] - 106s 1s/step - loss: 134.1486 - val_loss: 144.7146
Epoch 7/20
101/101 [==============================] - 106s 1s/step - loss: 127.5103 - val_loss: 139.6551
Epoch 8/20
101/101 [==============================] - 106s 1s/step - loss: 121.6361 - val_loss: 140.1253
Epoch 9/20
101/101 [==============================] - 106s 1s/step - loss: 116.3218 - val_loss: 140.4265
Epoch 10/20
101/101 [==============================] - 106s 1s/step - loss: 111.2732 - val_loss: 139.5150
Epoch 11/20
101/101 [==============================] - 106s 1s/step - loss: 106.8357 - val_loss: 137.2623
Epoch 12/20
101/101 [==============================] - 106s 1s/step - loss: 102.2337 - val_loss: 140.0873
Epoch 13/20
101/101 [==============================] - 106s 1s/step - loss: 97.8085 - val_loss: 141.1025
Epoch 14/20
101/101 [==============================] - 107s 1s/step - loss: 93.3791 - val_loss: 142.1605
Epoch 15/20
101/101 [==============================] - 106s 1s/step - loss: 88.8957 - val_loss: 142.6381
Epoch 16/20
101/101 [==============================] - 106s 1s/step - loss: 84.9113 - val_loss: 147.5348
Epoch 17/20
101/101 [==============================] - 105s 1s/step - loss: 80.8978 - val_loss: 148.4056
Epoch 18/20
101/101 [==============================] - 105s 1s/step - loss: 76.6982 - val_loss: 155.7585
Epoch 19/20
101/101 [==============================] - 106s 1s/step - loss: 72.7583 - val_loss: 157.0546
Epoch 20/20
101/101 [==============================] - 107s 1s/step - loss: 68.9831 - val_loss: 161.0130
###Markdown
(IMPLEMENTATION) Model 3: Deeper RNN + TimeDistributed DenseReview the code in `rnn_model`, which makes use of a single recurrent layer. Now, specify an architecture in `deep_rnn_model` that utilizes a variable number `recur_layers` of recurrent layers. The figure below shows the architecture that should be returned if `recur_layers=2`. In the figure, the output sequence of the first recurrent layer is used as input for the next recurrent layer.Feel free to change the supplied values of `units` to whatever you think performs best. You can change the value of `recur_layers`, as long as your final value is greater than 1. (As a quick check that you have implemented the additional functionality in `deep_rnn_model` correctly, make sure that the architecture that you specify here is identical to `rnn_model` if `recur_layers=1`.)
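For orientation, a sketch of one way `deep_rnn_model` could stack a variable number of batch-normalized GRU layers is shown below; treat it as an illustration under the stated assumptions (GRU units, layer names), not as the required implementation.
```python
from keras.models import Model
from keras.layers import (Input, GRU, BatchNormalization,
                          TimeDistributed, Dense, Activation)

def deep_rnn_model(input_dim, units, recur_layers, output_dim=29):
    """Stack `recur_layers` GRU layers, each followed by batch normalization."""
    input_data = Input(name='the_input', shape=(None, input_dim))
    layer = input_data
    for i in range(recur_layers):
        # each recurrent layer feeds its full output sequence to the next one
        layer = GRU(units, activation='relu', return_sequences=True,
                    name='rnn_{}'.format(i))(layer)
        layer = BatchNormalization(name='bn_{}'.format(i))(layer)
    time_dense = TimeDistributed(Dense(output_dim))(layer)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    # purely recurrent layers do not change the temporal length
    model.output_length = lambda x: x
    print(model.summary())
    return model
```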
###Code
model_3 = deep_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=200,
recur_layers=2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
rnn_0 (GRU) (None, None, 200) 217200
_________________________________________________________________
bn_0 (BatchNormalization) (None, None, 200) 800
_________________________________________________________________
rnn_1 (GRU) (None, None, 200) 240600
_________________________________________________________________
bn_1 (BatchNormalization) (None, None, 200) 800
_________________________________________________________________
time_distributed_3 (TimeDist (None, None, 29) 5829
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 465,229
Trainable params: 464,429
Non-trainable params: 800
_________________________________________________________________
None
###Markdown
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_3.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_3.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
###Code
with active_session():
train_model(input_to_softmax=model_3,
pickle_path='model_3.pickle',
save_model_path='model_3.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
101/101 [==============================] - 358s 4s/step - loss: 295.9175 - val_loss: 238.1677
Epoch 2/20
101/101 [==============================] - 364s 4s/step - loss: 240.3753 - val_loss: 222.7296
Epoch 3/20
101/101 [==============================] - 364s 4s/step - loss: 226.2026 - val_loss: 211.4369
Epoch 4/20
101/101 [==============================] - 364s 4s/step - loss: 204.9017 - val_loss: 190.3486
Epoch 5/20
101/101 [==============================] - 363s 4s/step - loss: 180.4207 - val_loss: 172.4951
Epoch 6/20
101/101 [==============================] - 365s 4s/step - loss: 164.6859 - val_loss: 161.6273
Epoch 7/20
101/101 [==============================] - 362s 4s/step - loss: 153.6807 - val_loss: 152.6831
Epoch 8/20
101/101 [==============================] - 364s 4s/step - loss: 145.7942 - val_loss: 148.4268
Epoch 9/20
101/101 [==============================] - 364s 4s/step - loss: 138.3407 - val_loss: 141.0202
Epoch 10/20
101/101 [==============================] - 365s 4s/step - loss: 132.8073 - val_loss: 141.4218
Epoch 11/20
101/101 [==============================] - 365s 4s/step - loss: 127.6570 - val_loss: 135.2657
Epoch 12/20
101/101 [==============================] - 365s 4s/step - loss: 123.1933 - val_loss: 138.6469
Epoch 13/20
101/101 [==============================] - 365s 4s/step - loss: 119.1319 - val_loss: 134.4311
Epoch 14/20
101/101 [==============================] - 365s 4s/step - loss: 115.0217 - val_loss: 131.9549
Epoch 15/20
101/101 [==============================] - 368s 4s/step - loss: 111.5545 - val_loss: 136.3133
Epoch 16/20
101/101 [==============================] - 365s 4s/step - loss: 108.7070 - val_loss: 129.7552
Epoch 17/20
101/101 [==============================] - 367s 4s/step - loss: 105.8040 - val_loss: 133.2860
Epoch 18/20
101/101 [==============================] - 366s 4s/step - loss: 102.5849 - val_loss: 130.5720
Epoch 19/20
101/101 [==============================] - 364s 4s/step - loss: 99.5056 - val_loss: 131.1141
Epoch 20/20
101/101 [==============================] - 366s 4s/step - loss: 96.6930 - val_loss: 129.6807
###Markdown
(IMPLEMENTATION) Model 4: Bidirectional RNN + TimeDistributed DenseRead about the [Bidirectional](https://keras.io/layers/wrappers/) wrapper in the Keras documentation. For your next architecture, you will specify an architecture that uses a single bidirectional RNN layer, before a (`TimeDistributed`) dense layer. The added value of a bidirectional RNN is described well in [this paper](http://www.cs.toronto.edu/~hinton/absps/DRNN_speech.pdf).> One shortcoming of conventional RNNs is that they are only able to make use of previous context. In speech recognition, where whole utterances are transcribed at once, there is no reason not to exploit future context as well. Bidirectional RNNs (BRNNs) do this by processing the data in both directions with two separate hidden layers which are then fed forwards to the same output layer.Before running the code cell below, you must complete the `bidirectional_rnn_model` function in `sample_models.py`. Feel free to use `SimpleRNN`, `LSTM`, or `GRU` units. When specifying the `Bidirectional` wrapper, use `merge_mode='concat'`.
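A minimal sketch of `bidirectional_rnn_model` using a GRU inside the `Bidirectional` wrapper (with `merge_mode='concat'`) might look like the following; it is illustrative only, and you could swap in `SimpleRNN` or `LSTM` units instead.
```python
from keras.models import Model
from keras.layers import (Input, GRU, Bidirectional,
                          TimeDistributed, Dense, Activation)

def bidirectional_rnn_model(input_dim, units, output_dim=29):
    """A single bidirectional GRU followed by a TimeDistributed dense layer."""
    input_data = Input(name='the_input', shape=(None, input_dim))
    # process the sequence in both directions and concatenate the outputs
    bidir_rnn = Bidirectional(GRU(units, return_sequences=True),
                              merge_mode='concat')(input_data)
    time_dense = TimeDistributed(Dense(output_dim))(bidir_rnn)
    y_pred = Activation('softmax', name='softmax')(time_dense)
    model = Model(inputs=input_data, outputs=y_pred)
    model.output_length = lambda x: x
    print(model.summary())
    return model
```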
###Code
model_4 = bidirectional_rnn_model(input_dim=161, # change to 13 if you would like to use MFCC features
units=200)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
bidirectional_1 (Bidirection (None, None, 400) 434400
_________________________________________________________________
time_distributed_4 (TimeDist (None, None, 29) 11629
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 446,029
Trainable params: 446,029
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_4.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_4.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
###Code
with active_session():
train_model(input_to_softmax=model_4,
pickle_path='model_4.pickle',
save_model_path='model_4.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
101/101 [==============================] - 339s 3s/step - loss: 304.1116 - val_loss: 236.2367
Epoch 2/20
101/101 [==============================] - 344s 3s/step - loss: 236.2877 - val_loss: 225.5000
Epoch 3/20
101/101 [==============================] - 344s 3s/step - loss: 218.8700 - val_loss: 198.7864
Epoch 4/20
101/101 [==============================] - 343s 3s/step - loss: 196.7006 - val_loss: 180.8866
Epoch 5/20
101/101 [==============================] - 344s 3s/step - loss: 180.4375 - val_loss: 173.0114
Epoch 6/20
101/101 [==============================] - 344s 3s/step - loss: 168.1183 - val_loss: 159.5462
Epoch 7/20
101/101 [==============================] - 345s 3s/step - loss: 158.8454 - val_loss: 159.5195
Epoch 8/20
101/101 [==============================] - 344s 3s/step - loss: 151.1906 - val_loss: 150.0958
Epoch 9/20
101/101 [==============================] - 343s 3s/step - loss: 144.8633 - val_loss: 147.8683
Epoch 10/20
101/101 [==============================] - 342s 3s/step - loss: 139.3913 - val_loss: 144.4031
Epoch 11/20
101/101 [==============================] - 345s 3s/step - loss: 134.8313 - val_loss: 143.2869
Epoch 12/20
101/101 [==============================] - 345s 3s/step - loss: 130.4882 - val_loss: 140.1201
Epoch 13/20
101/101 [==============================] - 346s 3s/step - loss: 126.5693 - val_loss: 140.3012
Epoch 14/20
101/101 [==============================] - 346s 3s/step - loss: 123.1083 - val_loss: 138.3673
Epoch 15/20
101/101 [==============================] - 346s 3s/step - loss: 119.7010 - val_loss: 137.6217
Epoch 16/20
101/101 [==============================] - 342s 3s/step - loss: 116.4917 - val_loss: 137.2787
Epoch 17/20
101/101 [==============================] - 345s 3s/step - loss: 113.7822 - val_loss: 135.1154
Epoch 18/20
101/101 [==============================] - 345s 3s/step - loss: 110.8309 - val_loss: 134.5141
Epoch 19/20
101/101 [==============================] - 342s 3s/step - loss: 107.9680 - val_loss: 135.6813
Epoch 20/20
101/101 [==============================] - 345s 3s/step - loss: 105.2320 - val_loss: 136.3337
###Markdown
(OPTIONAL IMPLEMENTATION) Models 5+If you would like to try out more architectures than the ones above, please use the code cell below. Please continue to follow the same convention for saving the models; for the $i$-th sample model, please save the loss at **`model_i.pickle`** and saving the trained model at **`model_i.h5`**.
###Code
## (Optional) TODO: Try out some more models!
### Feel free to use as many code cells as needed.
model_5 = final_model(input_dim=161,
filters=200,
kernel_size=11,
conv_stride=2,
conv_border_mode='valid',
units=200,
recur_layers=3)
with active_session():
train_model(input_to_softmax=model_5,
pickle_path='model_5.pickle',
save_model_path='model_5.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
Epoch 1/20
101/101 [==============================] - 472s 5s/step - loss: 254.1992 - val_loss: 250.0440
Epoch 2/20
101/101 [==============================] - 479s 5s/step - loss: 198.5832 - val_loss: 186.9320
Epoch 3/20
101/101 [==============================] - 477s 5s/step - loss: 163.7157 - val_loss: 156.1267
Epoch 4/20
101/101 [==============================] - 480s 5s/step - loss: 143.1842 - val_loss: 140.8628
Epoch 5/20
101/101 [==============================] - 479s 5s/step - loss: 129.4621 - val_loss: 134.7253
Epoch 6/20
101/101 [==============================] - 476s 5s/step - loss: 117.8062 - val_loss: 128.0800
Epoch 7/20
101/101 [==============================] - 475s 5s/step - loss: 108.7173 - val_loss: 128.2791
Epoch 8/20
101/101 [==============================] - 479s 5s/step - loss: 100.1878 - val_loss: 124.3406
Epoch 9/20
101/101 [==============================] - 474s 5s/step - loss: 92.5292 - val_loss: 126.6611
Epoch 10/20
101/101 [==============================] - 477s 5s/step - loss: 84.9840 - val_loss: 122.9594
Epoch 11/20
101/101 [==============================] - 478s 5s/step - loss: 77.9299 - val_loss: 125.5228
Epoch 12/20
101/101 [==============================] - 478s 5s/step - loss: 70.7663 - val_loss: 128.9322
Epoch 13/20
101/101 [==============================] - 476s 5s/step - loss: 64.0748 - val_loss: 131.3930
Epoch 14/20
101/101 [==============================] - 476s 5s/step - loss: 57.7354 - val_loss: 133.9233
Epoch 15/20
101/101 [==============================] - 479s 5s/step - loss: 51.6576 - val_loss: 140.5048
Epoch 16/20
101/101 [==============================] - 476s 5s/step - loss: 45.7165 - val_loss: 147.7190
Epoch 17/20
101/101 [==============================] - 476s 5s/step - loss: 40.8136 - val_loss: 152.4066
Epoch 18/20
101/101 [==============================] - 481s 5s/step - loss: 36.2807 - val_loss: 160.1687
Epoch 19/20
101/101 [==============================] - 481s 5s/step - loss: 31.5830 - val_loss: 164.0648
Epoch 20/20
101/101 [==============================] - 476s 5s/step - loss: 28.4361 - val_loss: 167.0342
###Markdown
Compare the ModelsExecute the code cell below to evaluate the performance of the drafted deep learning models. The training and validation loss are plotted for each model.
###Code
from glob import glob
import numpy as np
import _pickle as pickle
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.set_style(style='white')
# obtain the paths for the saved model history
all_pickles = sorted(glob("results/*.pickle"))
# extract the name of each model
model_names = [item[8:-7] for item in all_pickles]
# extract the loss history for each model
valid_loss = [pickle.load( open( i, "rb" ) )['val_loss'] for i in all_pickles]
train_loss = [pickle.load( open( i, "rb" ) )['loss'] for i in all_pickles]
# save the number of epochs used to train each model
num_epochs = [len(valid_loss[i]) for i in range(len(valid_loss))]
fig = plt.figure(figsize=(16,5))
# plot the training loss vs. epoch for each model
ax1 = fig.add_subplot(121)
for i in range(len(all_pickles)):
ax1.plot(np.linspace(1, num_epochs[i], num_epochs[i]),
train_loss[i], label=model_names[i])
# clean up the plot
ax1.legend()
ax1.set_xlim([1, max(num_epochs)])
plt.xlabel('Epoch')
plt.ylabel('Training Loss')
# plot the validation loss vs. epoch for each model
ax2 = fig.add_subplot(122)
for i in range(len(all_pickles)):
ax2.plot(np.linspace(1, num_epochs[i], num_epochs[i]),
valid_loss[i], label=model_names[i])
# clean up the plot
ax2.legend()
ax2.set_xlim([1, max(num_epochs)])
plt.xlabel('Epoch')
plt.ylabel('Validation Loss')
plt.show()
###Output
_____no_output_____
###Markdown
__Question 1:__ Use the plot above to analyze the performance of each of the attempted architectures. Which performs best? Provide an explanation regarding why you think some models perform better than others. __Answer:__ (IMPLEMENTATION) Final ModelNow that you've tried out many sample models, use what you've learned to draft your own architecture! While your final acoustic model should not be identical to any of the architectures explored above, you are welcome to merely combine the explored layers above into a deeper architecture. It is **NOT** necessary to include new layer types that were not explored in the notebook.However, if you would like some ideas for even more layer types, check out these ideas for some additional, optional extensions to your model:- If you notice your model is overfitting to the training dataset, consider adding **dropout**! To add dropout to [recurrent layers](https://faroit.github.io/keras-docs/1.0.2/layers/recurrent/), pay special attention to the `dropout_W` and `dropout_U` arguments. This [paper](http://arxiv.org/abs/1512.05287) may also provide some interesting theoretical background.- If you choose to include a convolutional layer in your model, you may get better results by working with **dilated convolutions**. If you choose to use dilated convolutions, make sure that you are able to accurately calculate the length of the acoustic model's output in the `model.output_length` lambda function. You can read more about dilated convolutions in Google's [WaveNet paper](https://arxiv.org/abs/1609.03499). For an example of a speech-to-text system that makes use of dilated convolutions, check out this GitHub [repository](https://github.com/buriburisuri/speech-to-text-wavenet). You can work with dilated convolutions [in Keras](https://keras.io/layers/convolutional/) by paying special attention to the `padding` argument when you specify a convolutional layer.- If your model makes use of convolutional layers, why not also experiment with adding **max pooling**? Check out [this paper](https://arxiv.org/pdf/1701.02720.pdf) for example architecture that makes use of max pooling in an acoustic model.- So far, you have experimented with a single bidirectional RNN layer. Consider stacking the bidirectional layers, to produce a [deep bidirectional RNN](https://www.cs.toronto.edu/~graves/asru_2013.pdf)!All models that you specify in this repository should have `output_length` defined as an attribute. This attribute is a lambda function that maps the (temporal) length of the input acoustic features to the (temporal) length of the output softmax layer. This function is used in the computation of CTC loss; to see this, look at the `add_ctc_loss` function in `train_utils.py`. To see where the `output_length` attribute is defined for the models in the code, take a look at the `sample_models.py` file. You will notice this line of code within most models:```model.output_length = lambda x: x```The acoustic model that incorporates a convolutional layer (`cnn_rnn_model`) has a line that is a bit different:```model.output_length = lambda x: cnn_output_length( x, kernel_size, conv_border_mode, conv_stride)```In the case of models that use purely recurrent layers, the lambda function is the identity function, as the recurrent layers do not modify the (temporal) length of their input tensors. 
However, convolutional layers are more complicated and require a specialized function (`cnn_output_length` in `sample_models.py`) to determine the temporal length of their output.You will have to add the `output_length` attribute to your final model before running the code cell below. Feel free to use the `cnn_output_length` function, if it suits your model.
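For reference, the temporal-length calculation performed by such a helper usually follows the standard 1D-convolution formula sketched below; the actual `cnn_output_length` in `sample_models.py` may differ in details (for example, dilation support), so treat this only as a guide.
```python
def cnn_output_length(input_length, filter_size, border_mode, stride, dilation=1):
    """Temporal length of a 1D convolution's output for 'same' or 'valid' padding."""
    if input_length is None:
        return None
    dilated_filter_size = filter_size + (filter_size - 1) * (dilation - 1)
    if border_mode == 'same':
        output_length = input_length
    elif border_mode == 'valid':
        output_length = input_length - dilated_filter_size + 1
    else:
        raise ValueError("border_mode must be 'same' or 'valid'")
    # ceiling division by the stride
    return (output_length + stride - 1) // stride
```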
###Code
# specify the model
model_end = final_model(input_dim=161,
filters=200,
kernel_size=11,
conv_stride=2,
conv_border_mode='valid',
units=200,
recur_layers=2,
dropout=0.2)
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
conv1d (Conv1D) (None, None, 200) 354400
_________________________________________________________________
bn_conv_1d (BatchNormalizati (None, None, 200) 800
_________________________________________________________________
bd_rnn_0 (Bidirectional) (None, None, 400) 481200
_________________________________________________________________
bn_0 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
bd_rnn_1 (Bidirectional) (None, None, 400) 721200
_________________________________________________________________
bn_1 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
time_distributed_2 (TimeDist (None, None, 29) 11629
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 1,572,429
Trainable params: 1,570,429
Non-trainable params: 2,000
_________________________________________________________________
None
###Markdown
Please execute the code cell below to train the neural network you specified in `input_to_softmax`. After the model has finished training, the model is [saved](https://keras.io/getting-started/faq/how-can-i-save-a-keras-model) in the HDF5 file `model_end.h5`. The loss history is [saved](https://wiki.python.org/moin/UsingPickle) in `model_end.pickle`. You are welcome to tweak any of the optional parameters while calling the `train_model` function, but this is not required.
###Code
with active_session():
train_model(input_to_softmax=model_end,
pickle_path='model_end.pickle',
save_model_path='model_end.h5',
spectrogram=True) # change to False if you would like to use MFCC features
###Output
_____no_output_____
###Markdown
__Question 2:__ Describe your final model architecture and your reasoning at each step. __Answer:__ STEP 3: Obtain PredictionsWe have written a function for you to decode the predictions of your acoustic model. To use the function, please execute the code cell below.
###Code
import numpy as np
from data_generator import AudioGenerator
from keras import backend as K
from utils import int_sequence_to_text
from IPython.display import Audio
def get_predictions(index, partition, input_to_softmax, model_path):
""" Print a model's decoded predictions
Params:
index (int): The example you would like to visualize
partition (str): One of 'train' or 'validation'
input_to_softmax (Model): The acoustic model
model_path (str): Path to saved acoustic model's weights
"""
# load the train and test data
data_gen = AudioGenerator()
data_gen.load_train_data()
data_gen.load_validation_data()
# obtain the true transcription and the audio features
if partition == 'validation':
transcr = data_gen.valid_texts[index]
audio_path = data_gen.valid_audio_paths[index]
data_point = data_gen.normalize(data_gen.featurize(audio_path))
elif partition == 'train':
transcr = data_gen.train_texts[index]
audio_path = data_gen.train_audio_paths[index]
data_point = data_gen.normalize(data_gen.featurize(audio_path))
else:
raise Exception('Invalid partition! Must be "train" or "validation"')
# obtain and decode the acoustic model's predictions
input_to_softmax.load_weights(model_path)
prediction = input_to_softmax.predict(np.expand_dims(data_point, axis=0))
output_length = [input_to_softmax.output_length(data_point.shape[0])]
pred_ints = (K.eval(K.ctc_decode(
prediction, output_length)[0][0])+1).flatten().tolist()
# play the audio file, and display the true and predicted transcriptions
print('-'*80)
Audio(audio_path)
print('True transcription:\n' + '\n' + transcr)
print('-'*80)
print('Predicted transcription:\n' + '\n' + ''.join(int_sequence_to_text(pred_ints)))
print('-'*80)
###Output
Using TensorFlow backend.
###Markdown
Use the code cell below to obtain the transcription predicted by your final model for the first example in the training dataset.
###Code
get_predictions(index=0,
partition='train',
input_to_softmax=final_model(
input_dim=161,
filters=200,
kernel_size=11,
conv_stride=2,
conv_border_mode='valid',
units=200,
recur_layers=5),
model_path='results_spectrogram/model_end.h5')
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
conv1d (Conv1D) (None, None, 200) 354400
_________________________________________________________________
bn_conv_1d (BatchNormalizati (None, None, 200) 800
_________________________________________________________________
bd_rnn_0 (Bidirectional) (None, None, 400) 481200
_________________________________________________________________
bn_0 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
bd_rnn_1 (Bidirectional) (None, None, 400) 721200
_________________________________________________________________
bn_1 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
bd_rnn_2 (Bidirectional) (None, None, 400) 721200
_________________________________________________________________
bn_2 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
bd_rnn_3 (Bidirectional) (None, None, 400) 721200
_________________________________________________________________
bn_3 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
bd_rnn_4 (Bidirectional) (None, None, 400) 721200
_________________________________________________________________
bn_4 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
time_distributed_1 (TimeDist (None, None, 29) 11629
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 3,740,829
Trainable params: 3,736,429
Non-trainable params: 4,400
_________________________________________________________________
None
--------------------------------------------------------------------------------
True transcription:
her father is a most remarkable person to say the least
--------------------------------------------------------------------------------
Predicted transcription:
her father is a most rmarkcal person to say the least
--------------------------------------------------------------------------------
###Markdown
Use the next code cell to visualize the model's prediction for the first example in the validation dataset.
###Code
get_predictions(index=0,
partition='validation',
input_to_softmax=final_model(
input_dim=161,
filters=200,
kernel_size=11,
conv_stride=2,
conv_border_mode='valid',
units=200,
recur_layers=5),
model_path='results_spectrogram/model_end.h5')
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
the_input (InputLayer) (None, None, 161) 0
_________________________________________________________________
conv1d (Conv1D) (None, None, 200) 354400
_________________________________________________________________
bn_conv_1d (BatchNormalizati (None, None, 200) 800
_________________________________________________________________
bd_rnn_0 (Bidirectional) (None, None, 400) 481200
_________________________________________________________________
bn_0 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
bd_rnn_1 (Bidirectional) (None, None, 400) 721200
_________________________________________________________________
bn_1 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
bd_rnn_2 (Bidirectional) (None, None, 400) 721200
_________________________________________________________________
bn_2 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
bd_rnn_3 (Bidirectional) (None, None, 400) 721200
_________________________________________________________________
bn_3 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
bd_rnn_4 (Bidirectional) (None, None, 400) 721200
_________________________________________________________________
bn_4 (BatchNormalization) (None, None, 400) 1600
_________________________________________________________________
time_distributed_2 (TimeDist (None, None, 29) 11629
_________________________________________________________________
softmax (Activation) (None, None, 29) 0
=================================================================
Total params: 3,740,829
Trainable params: 3,736,429
Non-trainable params: 4,400
_________________________________________________________________
None
--------------------------------------------------------------------------------
True transcription:
the bogus legislature numbered thirty six members
--------------------------------------------------------------------------------
Predicted transcription:
the bu thas lusyuslageured nevvere therty sicxs mhevers
--------------------------------------------------------------------------------
|
exam_project_bloom_filter/Bloom Filter.ipynb | ###Markdown
Probabilistic Data Structures - Bloom Filter Data Structure Definition: In computer science, a data structure is a data organization, management, and storage format that enables efficient access and modification. More precisely, a data structure is a collection of data values, the relationships among them, and the functions or operations that can be applied to the data1. A data structure is a particular way of organizing data in a computer so that it can be used effectively2. Usage: Data structures serve as the basis for abstract data types (ADT). The ADT defines the logical form of the data type. The data structure implements the physical form of the data type. Different types of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. Usually, efficient data structures are key to designing efficient algorithms. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design. Data structures can be used to organize the storage and retrieval of information stored in both main memory and secondary memory. Implementation: Data structures are generally based on the ability of a computer to fetch and store data at any place in its memory, specified by a pointer, a bit string representing a memory address that can itself be stored in memory and manipulated by the program. Thus, the array and record data structures are based on computing the addresses of data items with arithmetic operations, while the linked data structures are based on storing addresses of data items within the structure itself. The implementation of a data structure usually requires writing a set of procedures that create and manipulate instances of that structure. The efficiency of a data structure cannot be analyzed separately from those operations. This observation motivates the theoretical concept of an abstract data type, a data structure that is defined indirectly by the operations that may be performed on it, and the mathematical properties of those operations (including their space and time cost). Data Structure Types: As data structures are used to store data in an organized form, and since data is the most crucial entity in computer science, the true worth of data structures is clear. No matter what problem you are solving, in one way or another you have to deal with data, whether it's an employee's salary, stock prices, a grocery list, or even a simple telephone directory. Based on different scenarios, data needs to be stored in a specific format. We have a handful of data structures that cover our need to store data in different formats. Here are some of the data structure types3: Data types Primitive types Composite types or non-primitive types Abstract data types Linear data structures Arrays Lists Trees (non-linear) Binary trees B-trees Heaps Trees Multi-way trees Space-partitioning trees Application-specific trees Hash-based structures Graphs Other. Hundreds of books have been written on why data structures and algorithms are important. Particularly impressive are Donald Knuth's four volumes, entitled "The Art of Computer Programming", in which data structures and algorithms are discussed in over 2,500 pages. One author has even titled a book answering the question "why data structures are so important". 
This is Niklaus Wirth's book "Algorithms + Data Structures = Programs", which looks again at data structures and fundamental algorithms in programming4. Algorithms: Speaking of data structures, we cannot leave out algorithms. In mathematics and computer science, an algorithm is a finite sequence of well-defined, computer-implementable instructions, typically used to solve a class of problems or to perform a computation. Algorithms are always unambiguous and are used as specifications for performing calculations, data processing, automated reasoning, and other tasks. Data structures and algorithms are the basis of programming. We cannot talk about the efficiency of algorithms and data structures without using the term "algorithm complexity". The complexity of an algorithm is a rough estimate of the number of steps that the algorithm will take depending on the size of the input data. This is a rough estimate that cares about the order of the number of steps, not the exact number. We'll not dig deeper into that here, but if you are interested you can read more about algorithm complexity. Probabilistic Structures: Probabilistic data structures is a common name for data structures based mostly on different hashing techniques. Unlike regular (or deterministic) data structures, they always provide approximated answers but with reliable ways to estimate possible errors. Fortunately, the potential losses and errors are fully compensated for by extremely low memory requirements, constant query time, and scaling, the factors that become essential in Big Data applications5. Big data is characterized by three fundamental dimensions: Volume, Velocity, and Variety, the Three V's of Big Data. The Volume expresses the amount of data, Velocity describes the speed at which data is arriving and being processed, and Variety refers to the number of types of data. The data could come from anywhere, including social media, various sensors, financial transactions, etc. IBM has stated that people create 2.5 quintillion bytes of data every day; this number is growing constantly, and most of it cannot be stored and is usually wasted without being processed. Today, it is not uncommon to process terabytes or petabytes of a corporation's data and gigabit-rate streams. On the other hand, nowadays every company wants to fully understand the data it has, in order to find value and act on it. This led to the rapid growth in the Big Data software market. However, the traditional technologies, which include data structures and algorithms, become ineffective when dealing with Big Data. Therefore, many software practitioners, again and again, refer to computer science for the most appropriate solutions, and one option is to use probabilistic data structures and algorithms. When processing large data sets, we often want to do some simple checks, such as the number of unique items, the most frequent items, and whether some items exist in the data set. The common approach is to use some kind of deterministic data structure like a HashSet or Hashtable for such purposes. But when the data set we are dealing with becomes very large, such data structures are simply not feasible because the data is too big to fit in memory. It becomes even more difficult for streaming applications, which typically require data to be processed in one pass and perform incremental updates. Probabilistic data structures are a group of data structures that are extremely useful for big data and streaming applications. 
Generally speaking, these data structures use hash functions to randomize and compactly represent a set of items. Collisions are ignored, but errors can be well controlled under a certain threshold. The more hash functions are used, the more accurate the result. Compared with error-free approaches, these algorithms use much less memory and have constant query time. They usually support union and intersection operations and therefore can be easily parallelized6. By using this type of data structure, we can only safely assume that we have an approximate solution, which may or may not be the exact answer, but it's in the right direction. These data structures are proven to use either a fixed or sublinear amount of memory and have constant execution time. The answers may not be exact and have some probability of error. Any probabilistic data structure will rely on some form of probability, such as randomness or hashing, to reach an approximate solution. Some of the data structures are rather proven alternative approaches for a data structure, but oftentimes they are needed for these cases7: Analyzing / mining big data sets (more than what a deterministic data structure can handle). Statistical analysis. A stream of data that needs an answer afterwards. In the majority of cases these data structures use hash functions to randomize the items, as mentioned before. Because they ignore collisions they keep the size constant, but this is also a reason why they can't give you exact values. The advantages they bring8: Use a small amount of memory (you can control how much) Easily parallelizable (hashes are independent) Constant query time (not even amortized constant like in a dictionary) Here are some of the Probabilistic Data Structure Types: Membership querying (Bloom filter, Counting Bloom filter, Quotient filter, Cuckoo filter). Cardinality (Linear counting, probabilistic counting, LogLog, HyperLogLog, HyperLogLog++). Frequency (Majority algorithm, Frequent, Count Sketch, Count-Min Sketch). Rank (Random sampling, q-digest, t-digest). Similarity (LSH, MinHash, SimHash). We will now look at one of the most commonly used probabilistic data structures - it is called Bloom filter. Bloom filter9 Bloom filters test if an element is part of a set, without needing to store the entire set. A Bloom filter is a space-efficient probabilistic data structure, conceived by Burton Howard Bloom in 1970, that is used to test whether an element is a member of a set. False positive matches are possible, but false negatives are not – in other words, a query returns either "possibly in set" or "definitely not in set". Elements can be added to the set, but not removed (though this can be addressed with the counting Bloom filter variant); the more items added, the larger the probability of false positives. One simple way to think about Bloom filters is that they support insert and lookup in the same way hash tables do, but using very little space, i.e., one byte per item or less. This is a significant saving when you have many items and each item takes up, say, 8 bytes. Algorithm description: An empty Bloom filter is a bit array of $m$ bits, all set to 0. There must also be $k$ different hash functions defined, each of which maps or hashes some set element to one of the $m$ array positions, generating a uniform random distribution. Typically, $k$ is a small constant which depends on the desired false error rate $\varepsilon$, while $m$ is proportional to $k$ and the number of elements to be added.
To add an element, feed it to each of the $k$ hash functions to get $k$ array positions. Set the bits at all these positions to 1. To add/insert an item $x$ into the Bloom filter, we first compute the $k$ hash functions on $x$, and for each resulting hash, set the corresponding slot of A to 1 - see picture below10. Example of insert into Bloom filter. In this example, an initially empty Bloom filter has $m$=8, and $k$=2 (two hash functions). To insert an element $x$, we first compute the two hashes on $x$, the first one of which generates 1 and the second one generates 5. We proceed to set A[1] and A[5] to 1. To insert $y$, we also compute the hashes and similarly, set positions A[4] and A[6] to 1. To query for an element (test whether it is in the set), feed it to each of the $k$ hash functions to get $k$ array positions. If any of the bits at these positions is 0, the element is definitely not in the set; if it were, then all the bits would have been set to 1 when it was inserted. If all are 1, then either the element is in the set, or the bits have by chance been set to 1 during the insertion of other elements, resulting in a false positive. Similarly to insert, lookup computes $k$ hash functions on $x$, and the first time one of the corresponding slots of A equal to 0, the lookup reports the item as Not Present, otherwise it reports the item as Present. Example of a lookup on a Bloom filter. We take the resulting Bloom filter from picture above, where we inserted elements $x$ and $y$. To do a lookup on $x$, we compute the hashes (which are the same as in the case of an insert), and we return Found/Present, as both bits in corresponding locations equal 1. Then we do a lookup of an element $z$, which we never inserted, and its hashes are respectively 4 and 5, and the bits at locations A[4] and A[5] equal 1, thus we again return Found/Present. This is an example of a false positive, where two other items together set the bits of the third item to 1. An example of a negative (negative is always true), would be if we did a lookup on an element $w$, whose hashes are 2 and 5, (0 and 1), or 0 and 3 (0 and 0). If the Bloom filter reports an element as Not Found/Not Present, then we can be sure that this element was never inserted into a Bloom filter. Asymptotically, the insert operation on the Bloom filter costs $O(k)$. Considering that the number of hash functions rarely goes above 12, this is a constant-time operation. The lookup might also need $O(k)$, in case the operation has to check all the bits, but most unsuccessful lookups will give up way before; usually on average, an unsuccessful lookup takes about 1-2 probes before giving up. Removing an element from this simple Bloom filter is impossible because there is no way to tell which of the $k$ bits it maps to should be cleared. Although setting any one of those $k$ bits to zero suffices to remove the element, it would also remove any other elements that happen to map onto that bit. Since the simple algorithm provides no way to determine whether any other elements have been added that affect the bits for the element to be removed, clearing any of the bits would introduce the possibility of false negatives.It is often the case that all the keys are available but are expensive to enumerate (for example, requiring many disk reads). 
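To make the insert/lookup mechanics concrete, here is a minimal Python sketch of the structure just described. It is purely illustrative (it uses a bytearray rather than a true bit array for readability, and derives the $k$ positions from two SHA-256-based hashes via double hashing); it is not an optimized or production implementation.
```python
import hashlib

class BloomFilter:
    """A minimal Bloom filter: m slots and k hash functions."""
    def __init__(self, m, k):
        self.m = m
        self.k = k
        self.bits = bytearray(m)  # one byte per "bit", kept simple for clarity

    def _positions(self, item):
        # derive k array positions from two base hashes (double hashing)
        digest = hashlib.sha256(str(item).encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # any 0 bit -> "definitely not in set"; all 1s -> "possibly in set"
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter(m=64, k=3)
bf.add("x")
bf.add("y")
print("x" in bf)   # True  -> "possibly in set"
print("w" in bf)   # usually False -> "definitely not in set"
```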
When the false positive rate gets too high, the filter can be regenerated; this should be a relatively rare event.For example, if we have inserted $\{x, y, z\}$ into the bloom filter, with $k=3$ hash functions like the picture above. Each of these three elements has three bits each set to 1 in the bit array. When we look up for $w$ in the set, because one of the bits is not set to 1, the bloom filter will tell us that it is not in the set. Bloom filter has the following properties: False positive is possible when the queried positions are already set to 1. But false negative is impossible. Query time is $O(k)$. Union and intersection of bloom filters with same size and hash functions can be implemented with bitwise OR and AND operations. Cannot remove an element from the set. Bloom filter requires the following inputs: $m$: size of the bit array $n$: estimated insertion $p$: false positive probability Space and time advantagesWhile risking false positives, Bloom filters have a substantial space advantage over other data structures for representing sets, such as self-balancing binary search trees, tries, hash tables, or simple arrays or linked lists of the entries. Most of these require storing at least the data items themselves, which can require anywhere from a small number of bits, for small integers, to an arbitrary number of bits, such as for strings (tries are an exception since they can share storage between elements with equal prefixes). However, Bloom filters do not store the data items at all, and a separate solution must be provided for the actual storage. Linked structures incur an additional linear space overhead for pointers. A Bloom filter with a 1% error and an optimal value of $k$, in contrast, requires only about 9.6 bits per element, regardless of the size of the elements. This advantage comes partly from its compactness, inherited from arrays, and partly from its probabilistic nature. The 1% false-positive rate can be reduced by a factor of ten by adding only about 4.8 bits per element.To understand its space efficiency, it is instructive to compare the general Bloom filter with its special case when $k = 1$. If $k = 1$, then in order to keep the false positive rate sufficiently low, a small fraction of bits should be set, which means the array must be very large and contain long runs of zeros. The information content of the array relative to its size is low. The generalized Bloom filter ($k$ greater than 1) allows many more bits to be set while still maintaining a low false positive rate; if the parameters ($k$ and $m$) are chosen well, about half of the bits will be set, and these will be apparently random, minimizing redundancy and maximizing information content. Let's detail a little bit on the space-efficiency. If you want to store a long list of items in a set, you could do in various ways. You could store that in a hashmap and then check existence in the hashmap which would allow you to insert and query very efficiently. However, since you will be storing the items as they are, it will not be very space efficient.If we want to also be space efficient, we could hash the items before putting into a set. We could use bit arrays to store hash of the items. Let's also allow hash collision in the bit array. That is pretty much how Bloom Filters work, they are under the hood bit arrays which allow hash collisions; that produces false positives. Hash collisions exist in the Bloom Filters by design. Otherwise, they would not be compact. 
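As a small illustration of the union/intersection property mentioned above, two filters built with the same size and hash scheme can be combined bitwise; the helpers below assume the illustrative `BloomFilter` class sketched earlier.
```python
def bloom_union(a, b):
    """Union: an item inserted into either filter is reported as possibly present."""
    assert a.m == b.m and a.k == b.k
    out = BloomFilter(a.m, a.k)
    out.bits = bytearray(x | y for x, y in zip(a.bits, b.bits))
    return out

def bloom_intersection(a, b):
    """Intersection (approximate): keeps only the bits set in both filters."""
    assert a.m == b.m and a.k == b.k
    out = BloomFilter(a.m, a.k)
    out.bits = bytearray(x & y for x, y in zip(a.bits, b.bits))
    return out
```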
Whenever a list or set is used and space efficiency is important and significant, a Bloom filter should be considered11. Bloom filters are deterministic. If we are using the same size, the same number of hash functions, and the same hash functions themselves, the Bloom filter is deterministic about which items it gives a positive response for and which items it gives a negative response for. For an item $x$, if it answers "it is probably in" for that particular item, it will give the same response 5 minutes later, 1 hour later, 1 day later, and 1 week later. It is called "probabilistic", so the response of the Bloom filter should be somehow random, right? Not really. It is probabilistic in the sense that you cannot know in advance which items it will say are probably in. Otherwise, when it says that something is probably in, it keeps saying the same thing. Controlling accuracy with memory: The more memory you give a Bloom filter, the more accurate it will become. Why's that? Simple: the more memory it has to store data in, the more information it can store, and the more accurate it can be. But, of course, we use a Bloom filter to save memory, so you need to find a balance between memory usage and the number of false positives that are acceptable. Given these facts, you may already get a feeling for when to use a Bloom filter. In general terms, you use them to reduce the load on a system by avoiding expensive lookups in some data table, at a moderate memory expense. This data table can be anything. Some examples: A database A filesystem Some kind of key-value storage. Probability of false positives: While Bloom filters can say "definitely not in" with confidence, they will also say "possibly in" for some number of items. Depending on the application, this could be a huge downside or it could be relatively okay. If it is okay to introduce false positives every now and then, you should definitely consider using Bloom filters for membership checks on sets. Also note that if you decrease the false positive rate arbitrarily, you increase the number of hash functions, which adds latency to both insertion and membership checks. One more thing in this section: if the hash functions are independent of each other and distribute the input space fairly uniformly, then the theoretical false positive rate can be achieved. Otherwise, the false positive rate of the Bloom filter will be worse than the theoretical one, as the hash functions correlate with each other and hash collisions occur more often than desired. When using a Bloom filter, we should consider the potential effects of false positives. Assume that a hash function selects each array position with equal probability. 
If $m$ is the number of bits in the array, the probability that a certain bit is not set to 1 by a certain hash function during the insertion of an element is$${\displaystyle 1-{\frac {1}{m}}}$$ If $k$ is the number of hash functions and each has no significant correlation between each other, then the probability that the bit is not set to 1 by any of the hash functions is$${\displaystyle \left(1-{\frac {1}{m}}\right)^{k}}$$ We can use the well-known identity for $e^{-1}$$${\displaystyle \lim _{m\to \infty }\left(1-{\frac {1}{m}}\right)^{m}={\frac {1}{e}}}$$to conclude that, for large $m$,$${\displaystyle \left(1-{\frac {1}{m}}\right)^{k}=\left(\left(1-{\frac {1}{m}}\right)^{m}\right)^{\frac {k}{m}}\approx e^{-\frac {k}{m}}}$$ If we have inserted $n$ elements, the probability that a certain bit is still 0 is$${\displaystyle \left(1-{\frac {1}{m}}\right)^{kn}\approx e^{-\frac {kn}{m}}}$$the probability that it is 1 is therefore$${\displaystyle 1-\left(1-{\frac {1}{m}}\right)^{kn}\approx 1-e^{-\frac {kn}{m}}}$$ Now test membership of an element that is not in the set. Each of the $k$ array positions computed by the hash functions is 1 with a probability as above. The probability of all of them being 1, which would cause the algorithm to erroneously claim that the element is in the set, is often given as$${\displaystyle \varepsilon =\left(1-\left[1-{\frac {1}{m}}\right]^{kn}\right)^{k}\approx \left(1-e^{-\frac {kn}{m}}\right)^{k}}$$ This is not strictly correct as it assumes independence for the probabilities of each bit being set. However, assuming it is a close approximation we have that the probability of false positives decreases as $m$ (the number of bits in the array) increases, and increases as $n$ (the number of inserted elements) increases. The true probability of a false positive, without assuming independence, is$${\displaystyle {\frac {1}{m^{k(n+1)}}}\sum _{i=1}^{m}i^{k}i!{m \choose i}\left\{{kn \atop i}\right\}}$$where the {braces} denote Stirling numbers of the second kind. An alternative analysis arriving at the same approximation without the assumption of independence is given by Mitzenmacher and Upfal. After all $n$ items have been added to the Bloom filter, let $q$ be the fraction of the $m$ bits that are set to 0. (That is, the number of bits still set to 0 is $qm$.) Then, when testing membership of an element not in the set, for the array position given by any of the $k$ hash functions, the probability that the bit is found set to 1 is ${\displaystyle 1-q}$. So the probability that all $k$ hash functions find their bit set to 1 is ${\displaystyle (1-q)^{k}}$. Further, the expected value of $q$ is the probability that a given array position is left untouched by each of the $k$ hash functions for each of the $n$ items, which is (as above)$${\displaystyle E[q]=\left(1-{\frac {1}{m}}\right)^{kn}}$$ It is possible to prove, without the independence assumption, that $q$ is very strongly concentrated around its expected value. 
In particular, from the Azuma–Hoeffding inequality, they prove that$${\displaystyle \Pr \left(\left|q-E[q]\right|\geq {\frac {\lambda }{m}}\right)\leq 2\exp \left(-{\frac {2\lambda ^{2}}{kn}}\right)}$$ Because of this, we can say that the exact probability of false positives is$${\displaystyle \sum _{t}\Pr(q=t)(1-t)^{k}\approx (1-E[q])^{k}=\left(1-\left[1-{\frac {1}{m}}\right]^{kn}\right)^{k}\approx \left(1-e^{-\frac {kn}{m}}\right)^{k}}$$as before. Optimal number of hash functions: The optimal number of hash functions $k$ can be determined using the formula: $${\displaystyle k={\frac {m}{n}}\ln 2}$$Given a false positive probability $p$ and the estimated number of insertions $n$, the length of the bit array can be calculated as:$${\displaystyle m=-{\frac {n\ln p}{(\ln 2)^{2}}}}$$The hash functions used for a Bloom filter should generally be faster than cryptographic hash algorithms while still having good distribution and collision resistance. Commonly used hash functions for Bloom filters include MurmurHash, the FNV series of hashes, and Jenkins hashes. MurmurHash is the fastest among them; MurmurHash3 is used by the Google Guava library's Bloom filter implementation. The sieve analogy12: We can compare Bloom filters with a sieve, specially formed to only let through certain elements: The known elements will fit the holes in the sieve and fall through. Even though they've never been seen before, some elements will fit the holes in the sieve too and fall through. These are our false positives. Other elements, never seen before, won't fall through: the negatives. Disadvantages: The size of a Bloom filter needs to be known a priori, based on the number of items that you are going to insert. This is not so great if you do not know or cannot approximate the number of items. You could pick an arbitrarily large size, but that would be a waste in terms of space, which we are trying to optimize in the very first place and the reason why we chose a Bloom filter to begin with. This could be addressed by creating a Bloom filter that adapts dynamically to the list of items that you want to fit, but depending on the application, this may not always be possible. There is a variant called the Scalable Bloom Filter which dynamically adjusts its size for different numbers of items. This could mitigate some of these shortcomings. Construction and membership checks: While using Bloom filters, you not only accept a false positive rate, but you are also willing to have a little bit of overhead in terms of speed. Compared to a hashmap, there is definitely an overhead in hashing the items as well as in constructing the Bloom filter. Cannot give back the items that you inserted: A Bloom filter cannot produce a list of the items that were inserted; you can only check if an item is in it, but never get the full item list, because of hash functions and hash collisions. This is due to arguably its most significant advantage over other data structures: its space efficiency, which comes with this disadvantage. Removing an element: Removing an element from the Bloom filter is not possible; you cannot undo an insertion operation, as hash results for different items can be indexed to the same position. If you want to undo inserts, either you need to count the inserts for each index in the Bloom filter or you need to construct the Bloom filter from the start excluding a single item. Both methods involve overhead and are not straightforward. 
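The two formulas above translate directly into code. The sketch below (reusing the illustrative `BloomFilter` class from earlier, so the exact numbers will vary) sizes a filter for a target false positive rate and then compares the measured rate against the $\left(1-e^{-kn/m}\right)^{k}$ approximation.
```python
import math

def optimal_bloom_parameters(n, p):
    """Bit array size m and hash count k for n expected items and target rate p."""
    m = math.ceil(-n * math.log(p) / (math.log(2) ** 2))
    k = max(1, round((m / n) * math.log(2)))
    return m, k

n, p = 1000, 0.01
m, k = optimal_bloom_parameters(n, p)   # roughly 9.6 bits per item and k = 7

bf = BloomFilter(m=m, k=k)              # illustrative class sketched earlier
for i in range(n):
    bf.add("item-{}".format(i))

# probe items that were never inserted and compare with the approximation
trials = 100000
measured = sum("other-{}".format(i) in bf for i in range(trials)) / trials
predicted = (1 - math.exp(-k * n / m)) ** k
print("m={}, k={}, measured ~{:.4f}, predicted ~{:.4f}".format(m, k, measured, predicted))
```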
Depending on the application, one might want to try to reconstruct the Bloom filter from the start instead of removing or deleting items from it.Use-casesLet's look at an example use-case to get a better feeling for how Bloom filters in Python can help.Imagine a large, multi-machine, networked database. Each lookup for a record in that distributed database requires, at its worst, querying multiple machines at once. On each machine, a lookup means accessing large data structures stored on a disk. As you know, disk access, together with networking, is one of the slowest operations in computer science.Now imagine that each machine uses a Bloom filter, trained with the records stored on disk. Before accessing any data structure on disk, the machine first checks the filter. If it gives a negative, we can be certain that this machine does not store that record, and we can return this result without accessing disks at all.If a Bloom filter can prevent 80% of these disk lookups, in exchange for some extra memory usage, that may be well worth it! Even if the filter would save only 30% of disk lookups, that may still be an enormous increase in speed and efficiency.Google's Webtable and Apache CassandraFor instance, this is how Bloom filters are used in Google's Webtable and Apache Cassandra, which are among the most widely used distributed storage systems designed to handle massive amounts of data. Namely, these systems organize their data into a number of tables called Sorted String Tables (SSTs) that reside on the disk and are structured as key-value maps. In Webtable, keys might be website names, and values might be website attributes or contents. In Cassandra, the type of data depends on what system is using it, so for example, for Twitter, a key might be a User ID, and the value could be the user's tweets.When users query for data, a problem arises because we do not know which table contains the desired result. To help locate the right table without checking explicitly on the disk, we maintain a dedicated Bloom filter in RAM for each of the tables, and use them to route the query to the correct table. Bloom filters in distributed storage systems: in this example, we have 50 sorted string tables (SSTs) on disk, and each table has a dedicated Bloom filter that can fit into RAM due to its much smaller size. When a user does a lookup, the lookup first checks the Bloom filters. In this example, the first Bloom filter that reports the item as Present is Bloom filter No.3. Then we go ahead and check in SST3 on disk whether the item is present. In this case, it was a false alarm. We continue checking until another Bloom filter reports Present. When Bloom filter No.50 reports Present, we go to the disk and actually locate and return the requested item.Bloom filters are most useful when they are strategically placed in high-ingestion systems, in parts of the application where they can prevent expensive disk seeks. For example, having an application perform a lookup of an element in a large table on a disk can easily bring down the throughput of an application from hundreds of thousands of ops/sec to only a couple of thousand ops/sec. Instead, if we place a Bloom filter in RAM to serve the lookups, this makes the disk seek unnecessary except when the Bloom filter reports the key as Present (a minimal sketch of this pattern is shown below). 
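The names `tables`, `table.bloom`, and `table.read_from_disk` in the sketch are hypothetical, used only to illustrate the pattern; the only assumption is a Bloom filter object exposing a `check` method like the one implemented later in this notebook.

```python
# A minimal sketch of Bloom-filter-guarded lookups over several on-disk tables.
# `tables`, `table.bloom`, and `table.read_from_disk` are illustrative names, not a real API.
def lookup(key, tables):
    for table in tables:
        if table.bloom.check(key):            # "probably present": pay the disk seek
            record = table.read_from_disk(key)
            if record is not None:            # a false positive falls through to the next table
                return record
    return None                               # every filter said "definitely not": no disk access at all
```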
This way the Bloom filter can remove disk bottlenecks and help the application maintain consistently high throughput across its different components.Bitcoin mobile appPeer-to-peer networks use Bloom filters to communicate data, and a well-known example of that is Bitcoin. An important feature of Bitcoin is ensuring transparency between clients, i.e., each node should be able to see everyone's transactions. However, for nodes that are operating from a smartphone or a similar device of limited memory and bandwidth, keeping a copy of all transactions is highly impractical. This is why Bitcoin offers the option of simplified payment verification (SPV), where a node can choose to be a light node by advertising a list of transactions it is interested in. This is in contrast to full nodes that contain all the data. In Bitcoin, light clients can broadcast what transactions they are interested in, and thereby block the deluge of updates from the network.Light nodes compute and transmit a Bloom filter of the list of transactions they are interested in to the full nodes. This way, before a full node sends information about a transaction to the light node, it first checks its Bloom filter to see whether the node is interested in it. If a false positive occurs, the light node can discard the information upon its arrival.Bloom filter and PokemonOne really interesting implementation of a Bloom filter is for the Pokemon game. You can see it here if you're interested in it - Bloom filter and Pokemon.Before implementing a basic Bloom filter in Python, here is a quick numeric sanity check of the formulas above.
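As a worked example (the specific numbers are chosen only for illustration), suppose we plan to insert $n = 20$ items and want a false positive probability of $p = 0.05$. The sizing formulas above give$$m = -\frac{20 \ln 0.05}{(\ln 2)^2} \approx 125 \text{ bits}, \qquad k = \frac{m}{n}\ln 2 \approx 4 \text{ hash functions},$$which is (up to integer truncation) the configuration used in Example 1 below. Conversely, if we fix $m = 1000$ bits, $n = 100$ insertions, and $k = 5$ hash functions, the approximate false positive rate is$$\varepsilon \approx \left(1 - e^{-\frac{5 \cdot 100}{1000}}\right)^{5} = \left(1 - e^{-0.5}\right)^{5} \approx 0.0094,$$i.e. roughly a 1% chance of a false positive. With these numbers in mind, let's implement a basic Bloom filter in Python.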
###Code
# Imports needed by this cell (they may already be available from earlier cells in this notebook):
# mmh3 provides MurmurHash3, bitarray provides the compact bit vector.
import random
import mmh3
import numpy as np
from bitarray import bitarray
class BloomFilter:
'''
Class for a Bloom filter, using the murmur3 (mmh3) hash function
'''
def __init__(self, items_count: int, fp_prob: float):
'''
items_count : int
Number of items expected to be stored in bloom filter - n
fp_prob : float
False Positive probability in decimal - f
'''
# False positive probability in decimal
self.fp_prob = fp_prob
# Size of bit array to use
self.size = self.__get_size(items_count, fp_prob)
# number of hash functions to use
self.hash_count = self.__get_hash_count(self.size, items_count)
# Bit array of given size
self.bit_array = bitarray(self.size)
# initialize all bits as 0
self.bit_array.setall(0)
def add(self, item):
'''
Add an item in the filter
'''
digests = []
for i in range(self.hash_count):
# create digest for given item.
# i work as seed to mmh3.hash() function
# With different seed, digest created is different
digest = mmh3.hash(item, i) % self.size
digests.append(digest)
# set the bit True in bit_array
self.bit_array[digest] = True
def check(self, item):
'''
Check for existence of an item in filter
'''
for i in range(self.hash_count):
digest = mmh3.hash(item, i) % self.size
if self.bit_array[digest] == False:
# if any of bit is False then,its not present
# in filter
# else there is probability that it exist
return False
return True
@staticmethod
def __get_size(n, p):
'''
Return the size of bit array(m) to used using
following formula
m = -(n * lg(p)) / (lg(2)^2)
n : int
number of items expected to be stored in filter
p : float
False Positive probability in decimal
'''
m = -(n * np.log(p))/(np.log(2)**2)
return int(m)
@staticmethod
def __get_hash_count(m, n):
'''
Return the hash function(k) to be used using
following formula
k = (m/n) * lg(2)
m : int
size of bit array
n : int
number of items expected to be stored in filter
'''
k = (m/n) * np.log(2)
return int(k)
# Example 1
n = 20 # number of items to add
p = 0.05 # false positive probability
bloom_filter = BloomFilter(n,p)
print(f"Size of bit array: {bloom_filter.size}")
print(f"False positive Probability: {bloom_filter.fp_prob}")
print(f"Number of hash functions: {bloom_filter.hash_count}")
# Words to be added
word_present = ['abound','abounds','abundance','abundant','accessable',
'bloom','blossom','bolster','bonny','bonus','bonuses',
'coherent','cohesive','colorful','comely','comfort',
'gems','generosity','generous','generously','genial'
]
random.shuffle(word_present)
# Word not added
word_absent = ['bluff','cheater','hate','war','humanity',
'racism','hurt','nuke','gloomy','facebook',
'geeksforgeeks','twitter'
]
random.shuffle(word_absent)
# Add words to bloom filter
for item in word_present:
bloom_filter.add(item)
test_words = word_present + word_absent
random.shuffle(test_words)
for word in test_words:
if bloom_filter.check(word):
if word in word_absent:
print(f"{word.upper()} IS A FALSE POSITIVE!")
else:
print(f"{word.upper()} is probably present!")
else:
print(f"{word.upper()} definitely not present!")
# Example 2
n = 10 # number of items to add
p = 1e-4 # false positive probability
bloom_filter = BloomFilter(n,p)
animals = ["dog", "cat", "giraffe", "fly", "mosquito", "horse", "eagle",
"bird", "bison", "boar", "butterfly", "ant", "anaconda", "bear",
"chicken", "dolphin", "donkey", "crow", "crocodile"
]
other_animals = ["badger", "cow", "pig", "sheep", "bee", "wolf", "fox", "whale",
"shark", "fish", "turkey", "duck", "dove", "deer", "elephant",
"frog", "falcon", "goat", "gorilla", "hawk"
]
# Add animals into Bloom filter
for animal in animals:
bloom_filter.add(animal)
# Print several statistics of the filter
print(f"Size of bit array: {bloom_filter.size}")
print(f"False positive Probability: {bloom_filter.fp_prob}")
print(f"Number of hash functions: {bloom_filter.hash_count}")
# Check whether an item is in the filter or not
for animal in animals + other_animals:
if bloom_filter.check(animal):
if animal in other_animals:
print(
f'{animal.upper()} is a FALSE POSITIVE case (please adjust fp_prob to a smaller value).'
)
else:
print(f'{animal.upper()} is PROBABLY IN the filter.')
else:
print(f'{animal.upper()} is DEFINITELY NOT IN the filter as expected.')
###Output
Size of bit array: 191
False positive Probability: 0.0001
Number of hash functions: 13
DOG is PROBABLY IN the filter.
CAT is PROBABLY IN the filter.
GIRAFFE is PROBABLY IN the filter.
FLY is PROBABLY IN the filter.
MOSQUITO is PROBABLY IN the filter.
HORSE is PROBABLY IN the filter.
EAGLE is PROBABLY IN the filter.
BIRD is PROBABLY IN the filter.
BISON is PROBABLY IN the filter.
BOAR is PROBABLY IN the filter.
BUTTERFLY is PROBABLY IN the filter.
ANT is PROBABLY IN the filter.
ANACONDA is PROBABLY IN the filter.
BEAR is PROBABLY IN the filter.
CHICKEN is PROBABLY IN the filter.
DOLPHIN is PROBABLY IN the filter.
DONKEY is PROBABLY IN the filter.
CROW is PROBABLY IN the filter.
CROCODILE is PROBABLY IN the filter.
BADGER is DEFINITELY NOT IN the filter as expected.
COW is DEFINITELY NOT IN the filter as expected.
PIG is DEFINITELY NOT IN the filter as expected.
SHEEP is DEFINITELY NOT IN the filter as expected.
BEE is DEFINITELY NOT IN the filter as expected.
WOLF is DEFINITELY NOT IN the filter as expected.
FOX is DEFINITELY NOT IN the filter as expected.
WHALE is DEFINITELY NOT IN the filter as expected.
SHARK is DEFINITELY NOT IN the filter as expected.
FISH is DEFINITELY NOT IN the filter as expected.
TURKEY is DEFINITELY NOT IN the filter as expected.
DUCK is DEFINITELY NOT IN the filter as expected.
DOVE is DEFINITELY NOT IN the filter as expected.
DEER is DEFINITELY NOT IN the filter as expected.
ELEPHANT is DEFINITELY NOT IN the filter as expected.
FROG is a FALSE POSITIVE case (please adjust fp_prob to a smaller value).
FALCON is DEFINITELY NOT IN the filter as expected.
GOAT is DEFINITELY NOT IN the filter as expected.
GORILLA is DEFINITELY NOT IN the filter as expected.
HAWK is DEFINITELY NOT IN the filter as expected.
###Markdown
Now let’s show the formula that determines the false positive rate as a function of $m$ = number of bits in a Bloom filter, $n$ = number of elements to insert and $k$ = number of hash functions visually: $$f \approx \left(1-e^{-\frac {kn}{m}}\right)^{k}$$The graph below shows the plot of $f$ as a function of $k$ for different choices of $m/n$ (bits per element). In many real-life applications, fixing bits-per-element ratio is meaningful because we often have an idea of how many bits we can spend per element. Common values for the bits-per-element ratio are between 6 and 14, and such ratios allow us fairly low false positive rates as shown in the graph below:
###Code
def cm_to_inch(value):
# Figsize works with inches - converter to cm needed
return value/2.54
def plot_fp_rate_vs_num_hash_functions(m, n, labels, linestyles):
k = np.linspace(0, 20, 1000)
plt.figure(figsize=(cm_to_inch(30), cm_to_inch(20)))
plt.xlim(0, 21, 1)
plt.ylim(0, 0.20)
plt.xticks(np.arange(0, 21, 1))
plt.yticks(np.arange(0.00, 0.21, 0.01))
plt.xlabel("$k$ - number of hash functions", loc = 'center', rotation = 0)
plt.ylabel("$f$ - false positive rate", loc = 'center', rotation = 90)
plt.title("Flase Positive Rate vs. Number of Hash Functions", pad=20)
for i in range(len(m)):
f = (1 - np.e ** (-n[i] * k / m[i])) ** k
plt.plot(k, f, label=labels[i], linestyle = linestyles[i])
plt.legend(loc='upper right')
plt.grid()
return plt.show()
# Set "m" and "n" as list of int elements (with same lenght) with random numbers to get proper m/n ratios.
m = [8, 10, 30, 80, 110, 60, 140]
n = [2, 2, 5, 10, 11, 5, 10]
# Set labels and linestyles for each m/n ratio as list of strings - should be same length as "m" and "n" lists.
labels = ['$m/n$ = 4', '$m/n$ = 5', '$m/n$ = 6', '$m/n$ = 8', '$m/n$ = 10', '$m/n$ = 12', '$m/n$ = 14']
linestyles = ['solid', 'solid', 'solid', 'dashdot', 'dotted', 'dashed', 'dashed']
# Plot the data
plot_fp_rate_vs_num_hash_functions(m, n, labels, linestyles)
###Output
_____no_output_____
###Markdown
The plot above relates the number of hash functions $(k)$ and the false positive rate $(f)$ in a Bloom filter. The graph shows the false positive rate for a fixed bits-per-element ratio $(m/n)$, with different curves corresponding to different ratios. From top to bottom, we have $m/n=4, 5, 6, 8, 10, 12, 14$. As the amount of allowed space per element increases (going from top to bottom), given the same number of hash functions, the false positive rate drops. Also, the curves show the trend that increasing $k$ up until some point (going from left to right), for a fixed $m/n$, reduces the error, but after some point, increasing $k$ increases the error rate. Note that the curves are fairly smooth; for example, when $m/n=8$, i.e., we are willing to spend 1 byte per element, if we use anywhere between 4 and 8 hash functions, the false positive rate will not go above 3%, even though the optimal choice of $k$ is between 5 and 6.While increasing $m$ or reducing $n$ drops the false positive rate, i.e., more bits per element results in an overall lower false positive curve, the graph also shows the two-fold effect that $k$ has on false positives: up to a certain point, increasing $k$ helps reduce false positives, but there is a point at which it starts to worsen them; this is because having more hash functions gives a lookup more chances to find a zero, but also sets more bits to 1 on each insert. The minimum of each curve is the sweet spot, that is, the optimal $k$ for a particular bits-per-element ratio. This leads to the formula for the optimum number of hash functions:$${\displaystyle k={\frac {m}{n}}\ln 2}$$For example, when $m/n=8$, $k_{opt} \approx 5.545$. We can use this formula to optimally configure the Bloom filter. Keep in mind that these calculations assume $k$ is a real number, but our $k$ has to be an integer. Since $k_{opt} \approx 5.545$ for $m/n=8$ is not an integer, we need to choose one of the two neighboring integers, which means the false positive rate is no longer exactly at its minimum. Often it is better to choose the smaller of the two possible values of $k$, because it reduces the amount of computation we need to do. So in that case we can conclude that $k=5$ is a good choice for the number of hash functions.Let's try a slightly different implementation of a Bloom filter, for spell checking [13].
###Code
import string  # used by random_char below; mmh3, bitarray, numpy and random are assumed imported earlier
class BloomFilterSpell:
def __init__(self, size, hash_count):
self.size = size
self.hash_count = hash_count
self.bit_array = bitarray(size)
self.bit_array.setall(0)
def add(self, string):
for seed in range(self.hash_count):
result = mmh3.hash(string, seed) % self.size
self.bit_array[result] = 1
def lookup(self, string):
for seed in range(self.hash_count):
result = mmh3.hash(string, seed) % self.size
if self.bit_array[result] == 0:
return "Definitely not"
return "Probably"
def fp_prob(num_hash_funcs, num_items, bit_vec_length):
probability_of_success = np.e**(
(-num_hash_funcs * float(num_items)) / bit_vec_length)
return (1.0 - probability_of_success)**num_hash_funcs
def random_char(y):
return ''.join(random.choice(string.ascii_letters) for x in range(y))
size = 1024000
hash_functions = 5
bloomfil = BloomFilterSpell(size, hash_functions)
lines = open("data/words_alpha.txt").read().splitlines()
for line in lines:
bloomfil.add(line)
#result = raw_input("Which word you want to search: ")
prob_fp = fp_prob(hash_functions, len(lines), size)
print(f"Probability of False Positives: {prob_fp}")
random_word = random_char(10)
print (f"Randomly generated word is {random_word}")
print (f"{random_word} Spelling is {bloomfil.lookup(random_word)} correct")
# print "{} Spelling is {} correct".format(result,bloomfil.lookup(result))
###Output
Probability of False Positives: 0.408050357119402
Randomly generated word is eJPSDLJXJI
eJPSDLJXJI Spelling is Definitely not correct
|
02_SCF/basics/scf_pyscf.ipynb | ###Markdown
SCF Imports
###Code
import numpy as np
import scipy.linalg as spla
import pyscf
from pyscf import gto, scf
import matplotlib.pyplot as plt
import time
%matplotlib notebook
###Output
_____no_output_____
###Markdown
Some useful resources: - Szabo and Ostlund Chapter 3 (for algorithm see page 146) - [Notes by David Sherrill](http://vergil.chemistry.gatech.edu/notes/hf-intro/hf-intro.html) - [Notes by Joshua Goings](http://joshuagoings.com/2013/04/24/hartree-fock-self-consistent-field-procedure/) - [Programming notes by Francesco Evangelista](http://www.evangelistalab.org/wp-content/uploads/2013/12/Hartree-Fock-Theory.pdf) - [Psi4Numpy SCF page](https://github.com/psi4/psi4numpy/tree/master/Tutorials/03_Hartree-Fock) - [Crawdad programming notes](http://sirius.chem.vt.edu/wiki/doku.php?id=crawdad:programming:project3) The SCF algorithm from Szabo and Ostlund: 1. Specify a molecule (coordinates $\{R_A\}$, atomic numbers $\{Z_A\}$, number of electrons $N$) and atomic orbital basis $\{\phi_\mu\}$. 2. Calculate molecular integrals over AOs (overlap $S_{\mu\nu}$, core Hamiltonian $H^{\mathrm{core}}_{\mu\nu}$, and 2 electron integrals $(\mu \nu | \lambda \sigma)$). 3. Diagonalize the overlap matrix $S$ to obtain the transformation matrix $X$. 4. Make an initial guess at the density matrix $P$. 5. Calculate the intermediate matrix $G$ using the density matrix $P$ and the two electron integrals $(\mu \nu | \lambda \sigma)$. 6. Construct the Fock matrix $F$ from the core Hamiltonian $H^{\mathrm{core}}_{\mu\nu}$ and the intermediate matrix $G$. 7. Transform the Fock matrix $F' = X^\dagger F X$. 8. Diagonalize the Fock matrix to get orbital energies $\epsilon$ and molecular orbitals (in the transformed basis) $C'$. 9. Transform the molecular orbitals back to the AO basis $C = X C'$. 10. Form a new guess at the density matrix $P$ using $C$. 11. Check for convergence. (Are the changes in energy and/or density smaller than some threshold?) If not, return to step 5. 12. If converged, use the molecular orbitals $C$, density matrix $P$, and Fock matrix $F$ to calculate observables like the total energy, etc. Quick noteThe reason we need to calculate the transformation matrix $X$ is that the atomic orbital basis is not orthonormal by default. This means that without the transformation we would need to solve a generalized eigenvalue problem $FC = ESC$. If we use scipy to solve this generalized eigenvalue problem directly, we can simplify the SCF algorithm. Simplified SCF 1. Specify a molecule (coordinates $\{R_A\}$, atomic numbers $\{Z_A\}$, number of electrons $N$) and atomic orbital basis $\{\phi_\mu\}$. 2. Calculate molecular integrals over AOs (overlap $S_{\mu\nu}$, core Hamiltonian $H^{\mathrm{core}}_{\mu\nu}$, and 2 electron integrals $(\mu \nu | \lambda \sigma)$). 3. Make an initial guess at the density matrix $P$. 4. Calculate the intermediate matrix $G$ using the density matrix $P$ and the two electron integrals $(\mu \nu | \lambda \sigma)$. 5. Construct the Fock matrix $F$ from the core Hamiltonian $H^{\mathrm{core}}_{\mu\nu}$ and the intermediate matrix $G$. 6. Solve the generalized eigenvalue problem using the Fock matrix $F$ and the overlap matrix $S$ to get orbital energies $\epsilon$ and molecular orbitals. 7. Form a new guess at the density matrix $P$ using $C$. 8. Check for convergence. (Are the changes in energy and/or density smaller than some threshold?) If not, return to step 4. 9. If converged, use the molecular orbitals $C$, density matrix $P$, and Fock matrix $F$ to calculate observables like the total energy, etc. STEP 1 : Specify the molecule
###Code
# start timer
start_time = time.time()
# define molecule
mol = pyscf.gto.M(
atom="O 0.0000000 0.0000000 0.0000000; H 0.7569685 0.0000000 -0.5858752; H -0.7569685 0.0000000 -0.5858752",
basis='sto-3g',
unit="Ang",
verbose=0,
symmetry=False,
spin=0,
charge=0
)
# get number of atomic orbitals
num_ao = mol.nao_nr()
# get number of electrons
num_elec_alpha, num_elec_beta = mol.nelec
num_elec = num_elec_alpha + num_elec_beta
# get nuclear repulsion energy
E_nuc = mol.energy_nuc()
###Output
_____no_output_____
###Markdown
STEP 2 : Calculate molecular integrals Overlap $$ S_{\mu\nu} = (\mu|\nu) = \int dr \phi^*_{\mu}(r) \phi_{\nu}(r) $$Kinetic$$ T_{\mu\nu} = (\mu\left|-\frac{\nabla}{2}\right|\nu) = \int dr \phi^*_{\mu}(r) \left(-\frac{\nabla}{2}\right) \phi_{\nu}(r) $$Nuclear Attraction$$ V_{\mu\nu} = (\mu|r^{-1}|\nu) = \int dr \phi^*_{\mu}(r) r^{-1} \phi_{\nu}(r) $$Form Core Hamiltonian$$ H = T + V $$Two electron integrals$$ (\mu\nu|\lambda\sigma) = \int dr_1 dr_2 \phi^*_{\mu}(r_1) \phi_{\nu}(r_1) r_{12}^{-1} \phi_{\lambda}(r_2) \phi_{\sigma}(r_2) $$
###Code
# calculate overlap integrals
S = mol.intor('cint1e_ovlp_sph')
# calculate kinetic energy integrals
T = mol.intor('cint1e_kin_sph')
# calculate nuclear attraction integrals
V = mol.intor('cint1e_nuc_sph')
# form core Hamiltonian
H = T + V
# calculate two electron integrals
eri = mol.intor('cint2e_sph', aosym='s8')
# since we are using the 8 fold symmetry of the 2 electron integrals
# the functions below will help us when accessing elements
__idx2_cache = {}
def idx2(i, j):
if (i, j) in __idx2_cache:
return __idx2_cache[i, j]
elif i >= j:
__idx2_cache[i, j] = int(i*(i+1)/2+j)
else:
__idx2_cache[i, j] = int(j*(j+1)/2+i)
return __idx2_cache[i, j]
def idx4(i, j, k, l):
return idx2(idx2(i, j), idx2(k, l))
print(np.shape(eri))
###Output
_____no_output_____
###Markdown
STEP 3 : Form guess density matrix
###Code
# set inital density matrix to zero
D = np.zeros((num_ao, num_ao))
###Output
_____no_output_____
###Markdown
STEPS 4 - 8 : SCF loop 4. Calculate the intermediate matrix $G$ using the density matrix $P$ and the two electron integrals $(\mu \nu | \lambda \sigma)$. $$G_{\mu\nu} = \sum_{\lambda\sigma}^{\mathrm{num\_ao}} P_{\lambda \sigma}[2(\mu\nu|\lambda\sigma)-(\mu\lambda|\nu\sigma)]$$ 5. Construct the Fock matrix $F$ from the core hamiltonian $H^{\mathrm{core}}_{\mu\nu}$ and the intermediate matrix $G$. $$ F = H + G $$ 6. Solve the generalized eigenvalue problem using the Fock matrix $F$ and the overlap matrix $S$ to get orbital energies $\epsilon$ and molecular orbitals. $$F C = E S C $$ 7. Form a new guess at the density matrix $P$ using $C$. $$ P_{\mu\nu} = \sum_{i}^{\mathrm{num\_elec}/2} C_{\mu i} C_{\nu i} $$ 8. Check for convergence. (Are the changes in energy and/or density smaller than some threshold?) If not, return to step 4. $$ E_{\mathrm{elec}} = \sum^{\mathrm{num\_ao}}_{\mu\nu} P_{\mu\nu} (H_{\mu\nu} + F_{\mu\nu}) $$ $$ \Delta E = E_{\mathrm{new}} - E_{\mathrm{old}} $$ $$ |\Delta P| = \left[ \sum^{\mathrm{num\_ao}}_{\mu\nu} [P^{\mathrm{new}}_{\mu\nu} - P_{\mu\nu}^{\mathrm{old}}]^2 \right]^{1/2}$$ 9. If converged, use the molecular orbitals $C$, density matrix $P$, and Fock matrix $F$ to calculate observables like the total Energy, etc. $$ E_{\mathrm{total}} = V_{\mathrm{NN}} + E_{\mathrm{elec}} $$
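One possible way to complete the FILL IN blocks in the loop below is sketched here. This is only a sketch that follows the equations above and the variables already defined in this notebook (`eri`, `idx4`, `H`, `num_ao`, `num_elec`, `D`, `C`); it is not the only valid implementation.

```python
# Sketch of the fill-in steps, following the equations above.
# G matrix: G_mu,nu = sum_{lam,sig} D_lam,sig * [2(mu nu|lam sig) - (mu lam|nu sig)]
for mu in range(num_ao):
    for nu in range(num_ao):
        for lam in range(num_ao):
            for sig in range(num_ao):
                G[mu, nu] += D[lam, sig] * (2 * eri[idx4(mu, nu, lam, sig)]
                                            - eri[idx4(mu, lam, nu, sig)])

# Fock matrix: F = H + G
F = H + G

# New density matrix from the occupied molecular orbitals (closed shell)
for i in range(num_elec // 2):
    D += np.outer(C[:, i], C[:, i])

# Electronic energy: E_elec = sum_{mu,nu} D_mu,nu * (H_mu,nu + F_mu,nu)
E_elec = np.sum(D * (H + F))
```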
###Code
# 2 helper functions for printing during SCF
def print_start_iterations():
print("{:^79}".format("{:>4} {:>11} {:>11} {:>11} {:>11}".format(
"Iter", "Time(s)", "RMSC DM", "delta E", "E_elec")))
print("{:^79}".format("{:>4} {:>11} {:>11} {:>11} {:>11}".format(
"****", "*******", "*******", "*******", "******")))
def print_iteration(iteration_num, iteration_start_time, iteration_end_time, iteration_rmsc_dm, iteration_E_diff, E_elec):
print("{:^79}".format("{:>4d} {:>11f} {:>.5E} {:>.5E} {:>11f}".format(iteration_num,
iteration_end_time - iteration_start_time, iteration_rmsc_dm, iteration_E_diff, E_elec)))
# set stopping criteria
iteration_max = 100
convergence_E = 1e-9
convergence_DM = 1e-5
# loop variables
iteration_num = 0
E_total = 0
E_elec = 0.0
iteration_E_diff = 0.0
iteration_rmsc_dm = 0.0
converged = False
exceeded_iterations = False
print_start_iterations()
while (not converged and not exceeded_iterations):
# store last iteration and increment counters
iteration_start_time = time.time()
iteration_num += 1
E_elec_last = E_elec
D_last = np.copy(D)
# form G matrix
G = np.zeros((num_ao, num_ao))
#########################################################
# FILL IN HOW TO MAKE THE G MATRIX HERE
#########################################################
# build fock matrix
#########################################################
# FILL IN HOW TO MAKE THE FOCK MATRIX HERE
#########################################################
# solve the generalized eigenvalue problem
E_orbitals, C = spla.eigh(F, S)
# compute new density matrix
D = np.zeros((num_ao, num_ao))
#########################################################
# FILL IN HOW TO MAKE THE DENSITY MATRIX HERE
#########################################################
# calculate electronic energy
#########################################################
# FILL IN HOW TO CALCULATE THE ELECTRONIC ENERGY HERE
#########################################################
# calculate energy change of iteration
iteration_E_diff = np.abs(E_elec - E_elec_last)
# rms change of density matrix
iteration_rmsc_dm = np.sqrt(np.sum((D - D_last)**2))
iteration_end_time = time.time()
print_iteration(iteration_num, iteration_start_time,
iteration_end_time, iteration_rmsc_dm, iteration_E_diff, E_elec)
if(np.abs(iteration_E_diff) < convergence_E and iteration_rmsc_dm < convergence_DM):
converged = True
if(iteration_num == iteration_max):
exceeded_iterations = True
###Output
_____no_output_____
###Markdown
STEP 9 : Calculate Observables
###Code
# calculate total energy
####################################################
# FILL IN HOW TO CALCULATE THE TOTAL ENERGY HERE
####################################################
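# One possible completion (a sketch): the total energy is the nuclear repulsion
# energy plus the converged electronic energy, e.g. E_total = E_nuc + E_elec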
print("{:^79}".format("Total Energy : {:>11f}".format(E_total)))
###Output
_____no_output_____ |
teaching-evals/visualizing_teaching_evaluations.ipynb | ###Markdown
###Code
# Load useful libraries
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Load data and take a peek
df = pd.read_csv("https://raw.githubusercontent.com/jrg94/doodles/master/teaching-evals/mean-evals-by-term.csv")
df.head()
# Load question labels
labels = pd.read_csv("https://raw.githubusercontent.com/jrg94/doodles/master/teaching-evals/question-labels.csv")
labels.head()
# Load question distributions
dists = pd.read_csv("https://raw.githubusercontent.com/jrg94/doodles/master/teaching-evals/question-distributions.csv")
dists.head()
# Plot time series of all questions over 4 terms
results = df.plot(
subplots=True,
x="term",
y=["q1", "q2", "q3", "q4", "q5", "q6", "q7", "q8", "q9", "q10"],
figsize=(15, 15),
ylim=(4,5),
title=list(labels.values[0]),
legend=False,
sharex=True,
sharey=True,
layout=(5,2)
)
# Plot distributions of all four terms
filt = dists[dists["question"] == "q1"][
["term", "strongly disagree", "disagree", "neutral", "agree", "strongly agree"]
].set_index("term").T
results = filt.plot(
kind="bar",
subplots=True,
figsize=(12, 8),
ylim=(0,100),
legend=False
)
fig, ax = plt.subplots(nrows=5, ncols=2, figsize=(12, 8), sharex=True, sharey=True)
width=.15
i = 1
for row in ax:
for col in row:
filt = dists[dists["question"] == f"q{i}" ][
["term", "strongly disagree", "disagree", "neutral", "agree", "strongly agree"]
].set_index("term").T
col.set_title(labels.values[0][i - 1])
for j in range(5):
if j == 2: # centers the tick
col.bar(np.arange(4) + width * j, filt.iloc[j], width, label=filt.index[j], tick_label=filt.T.index, align="center")
else:
col.bar(np.arange(4) + width * j, filt.iloc[j], width, label=filt.index[j], align="center")
handles, axes_labels = col.get_legend_handles_labels()
i+=1
fig.legend(handles, axes_labels, loc="lower right", bbox_to_anchor=(1.15, .8))
fig.tight_layout()
###Output
_____no_output_____ |
dockerfile-example.ipynb | ###Markdown
Simple gym, roboschool Render TestSource: https://stackoverflow.com/a/44426542
###Code
import gym, roboschool
from IPython import display
import PIL.Image
import time
from io import BytesIO
def showarray(a, fmt='jpeg'):
f = BytesIO()
PIL.Image.fromarray(a).save(f, fmt)
display.display(display.Image(data=f.getvalue()))
env = gym.make('RoboschoolInvertedPendulum-v1')
env.reset()
fps = []
for _ in range(100):
action = env.action_space.sample()
env.step(action)
t1 = time.time()
showarray(env.render(mode='rgb_array'))
t2 = time.time()
fps.append(1/(t2-t1))
display.clear_output(wait=True)
###Output
_____no_output_____ |
modelling_pipeline/feature_engg_script.ipynb | ###Markdown
Importing Libraries
###Code
# Importing Libraries
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import xgboost as xgb
import time
import pickle
from math import sqrt
from numpy import loadtxt
from itertools import product
from tqdm import tqdm
from sklearn import preprocessing
from xgboost import plot_tree
from matplotlib import pyplot
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold
from sklearn.feature_extraction.text import TfidfVectorizer
os.chdir('..')
os.getcwd()
###Output
_____no_output_____
###Markdown
Loading Data
###Code
sales_train = pd.read_csv(r'datasets\sales_train.csv')
items = pd.read_csv(r'datasets\translated_items.csv')
shops = pd.read_csv(r'datasets\translated_shops.csv')
item_categories = pd.read_csv(r'datasets\translated_item_categories.csv')
test = pd.read_csv(r'datasets\test.csv')
sample_submission = pd.read_csv(r'datasets\sample_submission.csv')
###Output
_____no_output_____
###Markdown
Aggregation of data
###Code
# Create a dataframe grid which is based on shop and item id combinations, arranged by month (date_block_num)
grid = []
for block_num in sales_train['date_block_num'].unique():
cur_shops = sales_train[sales_train['date_block_num']==block_num]['shop_id'].unique()
cur_items = sales_train[sales_train['date_block_num']==block_num]['item_id'].unique()
grid.append(np.array(list(product(*[cur_shops, cur_items, [block_num]])),dtype='int32'))
index_cols = ['shop_id', 'item_id', 'date_block_num']
grid = pd.DataFrame(np.vstack(grid), columns = index_cols, dtype=np.int32)
grid
# Aggregations are done to convert daily sales to month level
sales_train['item_cnt_day'] = sales_train['item_cnt_day'].clip(0,20)
groups = sales_train.groupby(['shop_id', 'item_id', 'date_block_num'])
trainset = groups.agg({'item_cnt_day':'sum', 'item_price':'mean'}).reset_index()
trainset = trainset.rename(columns = {'item_cnt_day' : 'item_cnt_month'})
trainset['item_cnt_month'] = trainset['item_cnt_month'].clip(0,20)
trainset
# Extract mean prices for each item based on each month
sales_groups = sales_train.groupby(['item_id'])
sales_item_data = sales_groups.agg({'item_price':'mean'}).reset_index()
sales_item_data
trainset = pd.merge(grid,trainset,how='left',on=index_cols)
trainset.item_cnt_month = trainset.item_cnt_month.fillna(0)
trainset
# Replace NAN price values of non existing sales of item with mean item prices that has been seen
# throughtout the duration of 33 months
price_trainset = pd.merge(trainset[['item_id', 'item_price']], sales_item_data, how='left', on='item_id')
price_trainset['final_item_price']=0.0
for row in tqdm(price_trainset.copy().itertuples(), total=price_trainset.shape[0]):
if pd.isna(row.item_price_x):
price_trainset.iloc[row.Index, price_trainset.columns.get_loc('final_item_price')] = row.item_price_y
else:
price_trainset.iloc[row.Index, price_trainset.columns.get_loc('final_item_price')] = row.item_price_x
price_trainset
trainset['final_item_price'] = price_trainset['final_item_price']
trainset.drop(['item_price'], axis=1, inplace=True)
trainset.rename(columns = {'final_item_price':'item_price'}, inplace = True)
trainset
# Get category id
trainset = pd.merge(trainset, items[['item_id', 'item_category_id']], on = 'item_id')
trainset
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
# Set seeds and options
np.random.seed(10)
pd.set_option('display.max_rows', 231)
pd.set_option('display.max_columns', 100)
# Feature engineering list
new_features = []
enable_feature_idea = [True, True, True, True, True, True, False, False, True, True]
# Some parameters(maybe add more periods, score will be better) [1,2,3,12]
lookback_range = [1,2,3,4,5,6,7,8,9,10,11,12]
tqdm.pandas()
# Use recent data
start_month_index = trainset.date_block_num.min()
end_month_index = trainset.date_block_num.max()
current = time.time()
trainset = trainset[['shop_id', 'item_id', 'item_category_id', 'date_block_num', 'item_price', 'item_cnt_month']]
trainset = trainset[(trainset.date_block_num >= start_month_index) & (trainset.date_block_num <= end_month_index)]
print('Loading test set...')
test_dataset = loadtxt(r'datasets\test.csv', delimiter="," ,skiprows=1, usecols = (1,2), dtype=int)
testset = pd.DataFrame(test_dataset, columns = ['shop_id', 'item_id'])
print('Merging with other datasets...')
# Get item category id into test_df
testset = testset.merge(items[['item_id', 'item_category_id']], on = 'item_id', how = 'left')
testset['date_block_num'] = 34
# Make testset contains same column as trainset so we can concatenate them row-wise
testset['item_cnt_month'] = -1
testset
train_test_set = pd.concat([trainset, testset], axis = 0)
end = time.time()
diff = end - current
print('Took ' + str(int(diff)) + ' seconds to train and predict val set')
# Using Label Encoder to encode the item categories and use them with training set data
lb = preprocessing.LabelEncoder()
l_cat = list(item_categories.translated_item_category_name)
item_categories['item_category_id_fix'] = lb.fit_transform(l_cat)
item_categories['item_category_name_fix'] = l_cat
train_test_set = train_test_set.merge(item_categories[['item_category_id', 'item_category_id_fix']], on = 'item_category_id', how = 'left')
_ = train_test_set.drop(['item_category_id'],axis=1, inplace=True)
train_test_set.rename(columns = {'item_category_id_fix':'item_category_id'}, inplace = True)
_ = item_categories.drop(['item_category_id'],axis=1, inplace=True)
_ = item_categories.drop(['item_category_name'],axis=1, inplace=True)
_ = item_categories.drop(['translated_item_category_name'],axis=1, inplace=True)
item_categories.rename(columns = {'item_category_id_fix':'item_category_id'}, inplace = True)
item_categories.rename(columns = {'item_category_name_fix':'item_category_name'}, inplace = True)
item_categories = item_categories.drop_duplicates()
item_categories.index = np.arange(0, len(item_categories))
item_categories = item_categories.sort_values(by=['item_category_id']).reset_index(drop=True)
item_categories
###Output
_____no_output_____
###Markdown
Idea 0: Add previous shop/item sales as feature (Lag feature)
###Code
if enable_feature_idea[0]:
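# Lag feature: shifting date_block_num forward by `diff` and merging back gives each row
# the shop/item sales observed `diff` months earlier (missing history is filled with 0)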
for diff in tqdm(lookback_range):
feature_name = 'prev_shopitem_sales_' + str(diff)
trainset2 = train_test_set.copy()
trainset2.loc[:, 'date_block_num'] += diff
trainset2.rename(columns={'item_cnt_month': feature_name}, inplace=True)
train_test_set = train_test_set.merge(trainset2[['shop_id', 'item_id', 'date_block_num', feature_name]], on = ['shop_id', 'item_id', 'date_block_num'], how = 'left')
train_test_set[feature_name] = train_test_set[feature_name].fillna(0)
new_features.append(feature_name)
train_test_set.head(3)
###Output
100%|██████████| 12/12 [02:47<00:00, 13.97s/it]
###Markdown
Idea 1: Add previous item sales as feature (Lag feature)
###Code
if enable_feature_idea[1]:
groups = train_test_set.groupby(by = ['item_id', 'date_block_num'])
for diff in tqdm(lookback_range):
feature_name = 'prev_item_sales_' + str(diff)
result = groups.agg({'item_cnt_month':'mean'})
result = result.reset_index()
result.loc[:, 'date_block_num'] += diff
result.rename(columns={'item_cnt_month': feature_name}, inplace=True)
train_test_set = train_test_set.merge(result, on = ['item_id', 'date_block_num'], how = 'left')
train_test_set[feature_name] = train_test_set[feature_name].fillna(0)
new_features.append(feature_name)
train_test_set.head(3)
###Output
100%|██████████| 12/12 [01:18<00:00, 6.54s/it]
###Markdown
Idea 2: Add previous shop/item price as feature (Lag feature)
###Code
if enable_feature_idea[2]:
groups = train_test_set.groupby(by = ['shop_id', 'item_id', 'date_block_num'])
for diff in tqdm(lookback_range):
feature_name = 'prev_shopitem_price_' + str(diff)
result = groups.agg({'item_price':'mean'})
result = result.reset_index()
result.loc[:, 'date_block_num'] += diff
result.rename(columns={'item_price': feature_name}, inplace=True)
train_test_set = train_test_set.merge(result, on = ['shop_id', 'item_id', 'date_block_num'], how = 'left')
train_test_set[feature_name] = train_test_set[feature_name]
new_features.append(feature_name)
train_test_set.head(3)
###Output
100%|██████████| 12/12 [04:34<00:00, 22.88s/it]
###Markdown
Idea 3: Add previous item price as feature (Lag feature)
###Code
if enable_feature_idea[3]:
groups = train_test_set.groupby(by = ['item_id', 'date_block_num'])
for diff in tqdm(lookback_range):
feature_name = 'prev_item_price_' + str(diff)
result = groups.agg({'item_price':'mean'})
result = result.reset_index()
result.loc[:, 'date_block_num'] += diff
result.rename(columns={'item_price': feature_name}, inplace=True)
train_test_set = train_test_set.merge(result, on = ['item_id', 'date_block_num'], how = 'left')
train_test_set[feature_name] = train_test_set[feature_name]
new_features.append(feature_name)
train_test_set.head(3)
###Output
100%|██████████| 12/12 [03:15<00:00, 16.27s/it]
###Markdown
Idea 4: Mean encodings for shop/item pairs(Mean encoding, doesnt work for me)
###Code
def create_mean_encodings(train_test_set, categorical_var_list, target):
feature_name = "_".join(categorical_var_list) + "_" + target + "_mean"
df = train_test_set.copy()
df1 = df[df.date_block_num <= 32]
df2 = df[df.date_block_num <= 33]
df3 = df[df.date_block_num == 34]
# Extract mean encodings using training data(here we don't use month 33 to avoid data leak on validation)
# If I try to extract mean encodings from all months, then val rmse decreases a tiny bit, but test rmse would increase by 4%
# So this is important
mean_32 = df1[categorical_var_list + [target]].groupby(categorical_var_list, as_index=False)[[target]].mean()
mean_32 = mean_32.rename(columns={target:feature_name})
# Extract mean encodings using all data, this will be applied to test data
mean_33 = df2[categorical_var_list + [target]].groupby(categorical_var_list, as_index=False)[[target]].mean()
mean_33 = mean_33.rename(columns={target:feature_name})
# Apply mean encodings
df2 = df2.merge(mean_32, on = categorical_var_list, how = 'left')
df3 = df3.merge(mean_33, on = categorical_var_list, how = 'left')
# Concatenate
train_test_set = pd.concat([df2, df3], axis = 0)
new_features.append(feature_name)
return train_test_set
train_test_set = create_mean_encodings(train_test_set, ['shop_id', 'item_id'], 'item_cnt_month')
train_test_set.head(3)
###Output
_____no_output_____
###Markdown
Idea 5: Mean encodings for item (Mean encoding, doesnt work for me)
###Code
train_test_set = create_mean_encodings(train_test_set, ['item_id'], 'item_cnt_month')
train_test_set.head(3)
###Output
_____no_output_____
###Markdown
Idea 6: Number of month from last sale of shop/item (Use info from past)
###Code
def create_last_sale_shop_item(row):
for diff in range(1,33+1):
feature_name = '_prev_shopitem_sales_' + str(diff)
if row[feature_name] != 0.0:
return diff
return np.nan
lookback_range = list(range(1, 33 + 1))
if enable_feature_idea[6]:
for diff in tqdm(lookback_range):
feature_name = '_prev_shopitem_sales_' + str(diff)
trainset2 = train_test_set.copy()
trainset2.loc[:, 'date_block_num'] += diff
trainset2.rename(columns={'item_cnt_month': feature_name}, inplace=True)
train_test_set = train_test_set.merge(trainset2[['shop_id', 'item_id', 'date_block_num', feature_name]], on = ['shop_id', 'item_id', 'date_block_num'], how = 'left')
train_test_set[feature_name] = train_test_set[feature_name].fillna(0)
#new_features.append(feature_name)
train_test_set.loc[:, 'last_sale_shop_item'] = train_test_set.progress_apply (lambda row: create_last_sale_shop_item(row),axis=1)
new_features.append('last_sale_shop_item')
###Output
_____no_output_____
###Markdown
Idea 7: Number of month from last sale of item(Use info from past)
###Code
def create_last_sale_item(row):
for diff in range(1,33+1):
feature_name = '_prev_item_sales_' + str(diff)
if row[feature_name] != 0.0:
return diff
return np.nan
lookback_range = list(range(1, 33 + 1))
if enable_feature_idea[7]:
groups = train_test_set.groupby(by = ['item_id', 'date_block_num'])
for diff in tqdm(lookback_range):
feature_name = '_prev_item_sales_' + str(diff)
result = groups.agg({'item_cnt_month':'mean'})
result = result.reset_index()
result.loc[:, 'date_block_num'] += diff
result.rename(columns={'item_cnt_month': feature_name}, inplace=True)
train_test_set = train_test_set.merge(result, on = ['item_id', 'date_block_num'], how = 'left')
train_test_set[feature_name] = train_test_set[feature_name].fillna(0)
new_features.append(feature_name)
train_test_set.loc[:, 'last_sale_item'] = train_test_set.progress_apply (lambda row: create_last_sale_item(row),axis=1)
###Output
_____no_output_____
###Markdown
Idea 8: Item name (Tfidf text feature)
###Code
items_subset = items[['item_id', 'item_name']]
feature_count = 25
tfidf = TfidfVectorizer(max_features=feature_count)
items_df_item_name_text_features = pd.DataFrame(tfidf.fit_transform(items_subset['item_name']).toarray())
cols = items_df_item_name_text_features.columns
for i in range(feature_count):
feature_name = 'item_name_tfidf_' + str(i)
items_subset[feature_name] = items_df_item_name_text_features[cols[i]]
new_features.append(feature_name)
items_subset.drop('item_name', axis = 1, inplace = True)
train_test_set = train_test_set.merge(items_subset, on = 'item_id', how = 'left')
train_test_set.head()
train_test_set
###Output
_____no_output_____
###Markdown
Save New feature list and training set
###Code
with open(os.path.join(r'datasets\training_datasets', 'new_features.pkl'), "wb") as fp: #Pickling
pickle.dump(new_features, fp)
import gc
def reduce_mem_usage(df, int_cast=True, obj_to_category=False, subset=None):
"""
Iterate through all the columns of a dataframe and modify the data type to reduce memory usage.
:param df: dataframe to reduce (pd.DataFrame)
:param int_cast: indicate if columns should be tried to be casted to int (bool)
:param obj_to_category: convert non-datetime related objects to category dtype (bool)
:param subset: subset of columns to analyse (list)
:return: dataset with the column dtypes adjusted (pd.DataFrame)
"""
start_mem = df.memory_usage().sum() / 1024 ** 2;
gc.collect()
print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
cols = subset if subset is not None else df.columns.tolist()
for col in tqdm(cols):
col_type = df[col].dtype
if col_type != object and col_type.name != 'category' and 'datetime' not in col_type.name:
c_min = df[col].min()
c_max = df[col].max()
# test if column can be converted to an integer
treat_as_int = str(col_type)[:3] == 'int'
if int_cast and not treat_as_int:
# treat_as_int = check_if_integer(df[col])
treat_as_int = (df[col] % 1 == 0).all()
if treat_as_int:
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.uint8).min and c_max < np.iinfo(np.uint8).max:
df[col] = df[col].astype(np.uint8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.uint16).min and c_max < np.iinfo(np.uint16).max:
df[col] = df[col].astype(np.uint16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.uint32).min and c_max < np.iinfo(np.uint32).max:
df[col] = df[col].astype(np.uint32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
elif c_min > np.iinfo(np.uint64).min and c_max < np.iinfo(np.uint64).max:
df[col] = df[col].astype(np.uint64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
elif 'datetime' not in col_type.name and obj_to_category:
df[col] = df[col].astype('category')
gc.collect()
end_mem = df.memory_usage().sum() / 1024 ** 2
print('Memory usage after optimization is: {:.3f} MB'.format(end_mem))
print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
return df
train_test_set = reduce_mem_usage(train_test_set)
train_test_set
train_test_set.to_csv(os.path.join(r'datasets\training_datasets', 'trainset.csv'), index=False)
###Output
_____no_output_____ |
scikitlearn/Ex_Files_ML_SciKit_Learn/Exercise Files/02_09_Bagged_Trees.ipynb | ###Markdown
Each machine learning algorithm has strengths and weaknesses. A weakness of decision trees is that they are prone to overfitting on the training set. A way to mitigate this problem is to constrain how large a tree can grow. Bagged trees try to overcome this weakness by using bootstrapped data to grow multiple deep decision trees. The idea is that many trees protect each other from individual weaknesses.In this video, I'll share with you how you can build a bagged tree model for regression. Import Libraries
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
# Bagged Trees Regressor
from sklearn.ensemble import BaggingRegressor
###Output
_____no_output_____
###Markdown
Load the DatasetThis dataset contains house sale prices for King County, which includes Seattle. It includes homes sold between May 2014 and May 2015. The code below loads the dataset. The goal of this dataset is to predict price based on features like number of bedrooms and bathrooms
###Code
df = pd.read_csv('data/kc_house_data.csv')
df.head()
# This notebook only selects a couple features for simplicity
# However, I encourage you to play with adding and substracting more features
features = ['bedrooms','bathrooms','sqft_living','sqft_lot','floors']
X = df.loc[:, features]
y = df.loc[:, 'price'].values
###Output
_____no_output_____
###Markdown
Splitting Data into Training and Test Sets
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
###Output
_____no_output_____
###Markdown
Note: another benefit of bagged trees is that, like decision trees, you don’t have to standardize your features, unlike other algorithms such as logistic regression and K-Nearest Neighbors. Bagged TreesStep 1: Import the model you want to useIn sklearn, all machine learning models are implemented as Python classes
###Code
# This was already imported earlier in the notebook so commenting out
#from sklearn.ensemble import BaggingRegressor
###Output
_____no_output_____
###Markdown
Step 2: Make an instance of the ModelThis is a place where we can tune the hyperparameters of a model.
###Code
reg = BaggingRegressor(n_estimators=100,
random_state = 0)
###Output
_____no_output_____
###Markdown
Step 3: Training the model on the data, storing the information learned from the data Model is learning the relationship between X (features like number of bedrooms) and y (price)
###Code
reg.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Step 4: Make PredictionsUses the information the model learned during the model training process
###Code
# Returns a NumPy Array
# Predict for One Observation
reg.predict(X_test.iloc[0].values.reshape(1, -1))
###Output
_____no_output_____
###Markdown
Predict for Multiple Observations at Once
###Code
reg.predict(X_test[0:10])
###Output
_____no_output_____
###Markdown
Measuring Model Performance Unlike classification models where a common metric is accuracy, regression models use other metrics like R^2, the coefficient of determination to quantify your model's performance. The best possible score is 1.0. A constant model that always predicts the expected value of y, disregarding the input features, would get a R^2 score of 0.0.
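For reference, R^2 is computed as 1 - SS_res / SS_tot, where SS_res is the sum of squared differences between the observed prices and the model's predictions, and SS_tot is the sum of squared differences between the observed prices and their mean; this is the quantity returned by the `score` method below.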
###Code
score = reg.score(X_test, y_test)
print(score)
###Output
_____no_output_____
###Markdown
Tuning n_estimators (Number of Decision Trees)A tuning parameter for bagged trees is **n_estimators**, which represents the number of trees that should be grown.
###Code
# List of values to try for n_estimators:
estimator_range = [1] + list(range(10, 150, 20))
scores = []
for estimator in estimator_range:
reg = BaggingRegressor(n_estimators=estimator, random_state=0)
reg.fit(X_train, y_train)
scores.append(reg.score(X_test, y_test))
plt.figure(figsize = (10,7))
plt.plot(estimator_range, scores);
plt.xlabel('n_estimators', fontsize =20);
plt.ylabel('Score', fontsize = 20);
plt.tick_params(labelsize = 18)
plt.grid()
###Output
_____no_output_____ |
Major_NN_Architecture_S.ipynb | ###Markdown
Autograded Notebook (Canvas & CodeGrade)This notebook will be automatically graded. It is designed to test your answers and award points for the correct answers. Follow the instructions for each Task carefully.Instructions- **Download** this notebook as you would any other ipynb file - **Upload** to Google Colab or work locally (if you have that set-up)- **Delete** `raise NotImplementedError()`- **Write** your code in the ` YOUR CODE HERE` space- **Execute** the Test cells that contain assert statements - these help you check your work (others contain hidden tests that will be checked when you submit through Canvas)- **Save** your notebook when you are finished- **Download** as a ipynb file (if working in Colab)- **Upload** your complete notebook to Canvas (there will be additional instructions in Slack and/or Canvas) Major Neural Network Architectures Challenge *Data Science Unit 4 Sprint 3 Challenge*In this sprint challenge, you'll explore some of the cutting edge of Deep Learning. This week we studied several famous neural network architectures: recurrent neural networks (RNNs), long short-term memory (LSTMs), convolutional neural networks (CNNs), and Autoencoders. In this sprint challenge, you will revisit these models. Remember, we are testing your knowledge of these architectures, not your ability to fit a model with high accuracy. __*Caution:*__ these approaches can be pretty heavy computationally. All problems were designed so that you should be able to achieve results within at most 5-10 minutes of runtime locally, on AWS SageMaker, on Colab or on a comparable environment. If something is running longer, double check your approach!__*GridSearch:*__ CodeGrade will likely break if it is asked to run a gridsearch for a deep learning model (CodeGrade instances run on a single processor). So while you may choose to run a gridsearch locally to find the optimum hyper-parameter values for your model, please delete (or comment out) the gridsearch code and simply instantiate a model with the optimum parameter values to get the performance that you want out of your model prior to submission. Challenge Objectives*You should be able to:** Part 1: Train an LSTM classification model* Part 2: Utilize a pre-trained CNN for object detection* Part 3: Describe a use case for an autoencoder* Part 4: Describe yourself as a Data Scientist and elucidate your vision of AI____ (CodeGrade) Before you submit your notebook you must first1) Restart your notebook's Kernel2) Run all cells sequentially, from top to bottom, so that cell numbers are sequential numbers (i.e. 1,2,3,4,5...)- Easiest way to do this is to click on the **Cell** tab at the top of your notebook and select **Run All** from the drop down menu. 3) If you have gridsearch code, now is when you either delete it or comment out that code so CodeGrade doesn't run it and crash. 4) Read the directions in **Part 2** of this notebook for specific instructions on how to prep that section for CodeGrade.____ Part 1 - LSTMsUse an LSTM to fit a multi-class classification model on Reuters news articles to distinguish topics of articles. The data is already encoded properly for use in an LSTM model. Your Tasks: - Use Keras to fit a predictive model, classifying news articles into topics. 
- Name your model as `model`- Use a `single hidden layer`- Use `sparse_categorical_crossentropy` as your loss function- Use `accuracy` as your metric- Report your overall score and accuracy- Due to resource concerns on CodeGrade, `set your model's epochs=1`For reference, the LSTM code we used in class will be useful. __*Note:*__ Focus on getting a running model, not on maxing accuracy with extreme data size or epoch numbers. Only revisit and push accuracy if you get everything else done!
###Code
# Import data (don't alter the code in this cell)
from tensorflow.keras.datasets import reuters
# Suppress some warnings from deprecated reuters.load_data
import warnings
warnings.filterwarnings('ignore')
# Load data
(X_train, y_train), (X_test, y_test) = reuters.load_data(num_words=None,
skip_top=0,
maxlen=None,
test_split=0.2,
seed=723812,
start_char=1,
oov_char=2,
index_from=3)
# Due to limited computational resources on CodeGrade, take the following subsample
train_size = 1000
X_train = X_train[:train_size]
y_train = y_train[:train_size]
# Demo of encoding
word_index = reuters.get_word_index(path="reuters_word_index.json")
print(f"Iran is encoded as {word_index['iran']} in the data")
print(f"London is encoded as {word_index['london']} in the data")
print("Words are encoded as numbers in our dataset.")
len(X_train[1])
len(X_test[1])
# Imports (don't alter this code)
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Embedding, LSTM
# DO NOT CHANGE THESE VALUES
# Keras docs say that the + 1 is needed: https://keras.io/api/layers/core_layers/embedding/
MAX_FEATURES = len(word_index.values()) + 1
# maxlen is the length of each sequence (i.e. document length)
MAXLEN = 200
X_train.shape
y_train.shape
X_test.shape
y_test.shape
import tensorflow as tf
import random
import sys
import os
import pandas as pd
import numpy as np
from __future__ import print_function
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten, LSTM, Dropout, Activation, Embedding, Bidirectional
# Pre-process your data by creating sequences
def print_text_from_seq(x):
# print('=================================================')
word_to_id = word_index
# word_to_id = {k:(v+index_from) for k,v in word_to_id.items()}
word_to_id["<PAD>"] = 0
word_to_id["<START>"] = 1
word_to_id["<UNK>"] = 2
word_to_id["<UNUSED>"] = 3
id_to_word = {value:key for key,value in word_to_id.items()}
print(f'Length = {len(x)}')
print(' '.join(id_to_word[id] for id in x ))
print('=================================================')
# Save your transformed data to the same variable name:
# example: X_train = some_transformation(X_train)
for i in range(0, 6):
print(X_train[i])
print('Pad Sequences (samples x time)')
x_train = sequence.pad_sequences(X_train, maxlen=MAXLEN)
x_test = sequence.pad_sequences(X_test, maxlen=MAXLEN)
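# Note: pad_sequences pads/truncates every article to exactly MAXLEN tokens; by default both
# padding and truncation happen at the start of each sequence (padding='pre', truncating='pre').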
print('x_train shape: ', x_train.shape)
print('x_test shape: ', x_test.shape)
# Visible tests
assert x_train.shape[1] == MAXLEN, "Your train input sequences are the wrong length. Did you use the sequence import?"
assert x_test.shape[1] == MAXLEN, "Your test input sequences are the wrong length. Did you use the sequence import?"
###Output
_____no_output_____
###Markdown
Create your modelMake sure to follow these instructions (also listed above):- Name your model as `model`- Use a `single hidden layer`- Use `sparse_categorical_crossentropy` as your loss function- Use `accuracy` as your metric**Additional considerations**The number of nodes in your output layer should be equal to the number of **unique** values in the sequences you are training and testing on. For this text, that value is equal to 46.- Set the number of nodes in your output layer equal to 46
###Code
model = Sequential()
model.add(Embedding(MAX_FEATURES, 46))
model.add(Dropout(0.5))
model.add(Bidirectional(LSTM(46)))
model.add(Dense(46, activation='softmax'))
model.summary()
opt = tf.keras.optimizers.Adam(lr=0.001, decay=1e-6)
model.compile(
loss='sparse_categorical_crossentropy',
optimizer=opt,
metrics=['accuracy'],
)
# Build and complie your model here
#from tensorflow.keras.layers import Dropout
#model = Sequential()
#model.add(Embedding(MAX_FEATURES, 46))
#model.add(Dropout(0.1))
#model.compile(loss='sparse_categorical_crossentropy',
# optimizer='adam',
# metrics=['accuracy'])
#model.summary()
# Visible Test
assert model.get_config()["layers"][1]["class_name"] == "Embedding", "Layer 1 should be an Embedding layer."
# Hidden Test
###Output
_____no_output_____
###Markdown
Fit your modelNow, fit the model that you built and compiled in the previous cells. Remember to set your `epochs=1`!
###Code
# Fit your model here
# REMEMBER to set epochs=1
output = model.fit(x_train,
y_train,
batch_size=46,
epochs=1,
validation_data=(x_test, y_test))
# Visible Test
n_epochs = len(model.history.history["loss"])
assert n_epochs == 1, "Verify that you set epochs to 1."
###Output
_____no_output_____
###Markdown
Sequence Data Question *Describe the `pad_sequences` method used on the training dataset. What does it do? Why do you need it?* The pad_sequences method deals with the variable-length issue. It ensures that all the sequences are the same length by adding placeholders at the beginning or the end, depending on the 'pre' or 'post' argument (a tiny illustration is included just below this cell). RNNs versus LSTMs *What are the primary motivations behind using Long Short-Term Memory Cell units over traditional Recurrent Neural Networks?* LSTMs give us more control over how information flows and is mixed, via gates with trained weights, which helps them retain long-range context and avoid the vanishing gradients that plague plain RNNs. RNN / LSTM Use Cases *Name and Describe 3 Use Cases of LSTMs or RNNs and why they are suited to that use case* RNN/LSTM Use Cases: speech recognition (Siri/Cortana), text autofill, translation, stock prices, DNA sequencing. RNNs and LSTMs are very useful for processing textual and speech data because each word in a sequence is dependent upon the previous word. They use the context of each word, i.e. to predict one word, the word before it already provides a great deal of information as to what words are likely to follow. LSTMs are generally preferable to RNNs. Part 2- CNNs Find the FrogTime to play "find the frog!" Use Keras and [ResNet50v2](https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet_v2) (pre-trained) to detect which of the images within the `frog_images` subdirectory has a frog in it. The skimage function below will help you read in all the frog images into memory at once. You should use the preprocessing functions that come with ResnetV2, and you should also resize the images using scikit-image. Reading in the imagesThe code in the following cell will download the images to your notebook (either in your local Jupyter notebook or in Google colab). Run ResNet50v2Your goal is to validly run ResNet50v2 on the input images - don't worry about tuning or improving the model. You can print out or view the predictions in any way you see fit. In order to receive credit, you need to have made predictions at some point in the following cells.*Hint* - ResNet 50v2 doesn't just return "frog". The three labels it has for frogs are: `bullfrog, tree frog, tailed frog`**Autograded tasks*** Instantiate your ResNet 50v2 and save to a variable named `resnet_model`**Other tasks*** Re-size your images* Use `resnet_model` to predict if each image contains a frog* Decode your predictions* Hint: the lesson on CNNs will have some helpful code**Stretch goals**** Check for other things such as fish* Print out the image with its predicted label* Wrap everything nicely in well documented functions Important note!To increase the chances that your notebook will run in CodeGrade, when you **submit** your notebook:* comment out the code where you load the images* comment out the code where you make the predictions* comment out any plots or image displays you create**MAKE SURE YOUR NOTEBOOK RUNS COMPLETELY BEFORE YOU SUBMIT!**
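A tiny, purely illustrative sketch of the `pad_sequences` behaviour described above (not part of the graded cells; the toy lists are made-up examples):
###Code
# Illustration only: 'pre' (default) pads/truncates at the front, 'post' at the end
from tensorflow.keras.preprocessing import sequence

toy = [[1, 2, 3], [4, 5]]
print(sequence.pad_sequences(toy, maxlen=4))                  # [[0 1 2 3], [0 0 4 5]]
print(sequence.pad_sequences(toy, maxlen=4, padding='post'))  # [[1 2 3 0], [4 5 0 0]]
###Output
_____no_output_____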
###Code
# Prep to import images (don't alter the code in this cell)
import urllib.request
# Text file of image URLs
text_file = "https://raw.githubusercontent.com/LambdaSchool/data-science-canvas-images/main/unit_4/sprint_challenge_files/frog_image_url.txt"
data = urllib.request.urlopen(text_file)
# Create list of image URLs
url_list = []
for line in data:
url_list.append(line.decode('utf-8'))
# Import images (don't alter the code in this cell)
from skimage.io import imread
from skimage.transform import resize
# instantiate list to hold images
image_list = []
### UNCOMMENT THE FOLLOWING CODE TO LOAD YOUR IMAGES
#loop through URLs and load each image
for url in url_list:
image_list.append(imread(url))
## UNCOMMENT THE FOLLOWING CODE TO VIEW AN EXAMPLE IMAGE SIZE
#What is an "image"?
print(type(image_list[0]), end="\n\n")
print("Each of the Images is a Different Size")
print(image_list[0].shape)
print(image_list[1].shape)
# Imports
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras.applications.resnet_v2 import ResNet50V2 # <-- pre-trained model
from tensorflow.keras.preprocessing import image
from tensorflow.keras.applications.resnet_v2 import preprocess_input, decode_predictions
resnet_model = tf.keras.applications.ResNet50V2(
include_top=True, weights='imagenet', input_tensor=None,
input_shape=None, pooling=None, classes=1000,
classifier_activation='softmax'
)
# Code from the CNN lecture might come in handy here!
#Instantiate your ResNet 50v2 and save to a variable named resnet_model
def process_img_path(img_path):
return image.load_img(img_path, target_size=(224, 224))
def img_recognition_pretrain(img):
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
    # Use the ResNet50V2 instantiated above as resnet_model; plain ResNet50 was never imported here
    features = resnet_model.predict(x)
results = decode_predictions(features, top=4)[0]
print(results)
for entry in results:
        if entry[1] in ('bullfrog', 'tree_frog', 'tailed_frog'):  # all three ImageNet frog labels
return entry[2]
return 0.0
# The flow_from_directory generators that were here could not run: train_datagen/valid_datagen
# were never defined, and image_list is a list of arrays rather than a directory. A working
# sketch of the requested steps (resize -> preprocess -> predict -> decode) is below; it
# assumes the downloaded images are RGB. Comment it out before submitting to CodeGrade.

# Re-size your images to the 224x224 input expected by ResNet50V2
resized = np.array([resize(img, (224, 224), preserve_range=True) for img in image_list])

# Use resnet_model to predict if each image contains a frog, then decode the predictions
preds = resnet_model.predict(preprocess_input(resized))
decoded = decode_predictions(preds, top=3)
frog_labels = {'bullfrog', 'tree_frog', 'tailed_frog'}
for i, row in enumerate(decoded):
    has_frog = any(label in frog_labels for (_, label, _) in row)
    print(i, 'FROG' if has_frog else 'no frog', row)
# Visible test
assert resnet_model.get_config()["name"] == "resnet50v2", "Did you instantiate the resnet model?"
###Output
_____no_output_____ |
notebooks/sports/soccer-partIII.ipynb | ###Markdown
 Soccer Analytics Welcome to a Jupyter notebook on soccer analytics. This notebook is a free resource and is part of the Callysto project, which brings data science skills to grades 5 to 12 classrooms. In this notebook, we answer the question: How do ball possession and scoring relate?Visualizations will be coded using Python, a computer programming language. Python contains words from English and is used by data scientists. Programming languages are how people communicate with computers. “Run” the cells to see the graphs.Click “Cell” and select “Run All.” This will import the data and run all the code to create the data visualizations (scroll back to the top after you’ve run the cells). 
###Code
# import Python libraries
import pandas as pd
import plotly.express as px
###Output
_____no_output_____
###Markdown
Making a csv fileData source: https://www.uefa.com/uefachampionsleague/standings/Data was collected for the group phase (6 games per team) for the 2020-2021 season from the Champions League website. The data was inputted into the cell below by reading tables on the website. Notice that the values are separated by commas; this format is needed for the computer to read the data. The `writefile` command is used to create the file.
###Code
%%writefile possession.csv
Total goals,Goal difference,Average ball possession (%),Team
18,16,61,Bayern
16,11,57,Barcelona
16,7,44,Monchengladbach
15,5,50,Man. United
14,12,54,Chelsea
14,10,51,Juventus
13,12,59,Man. City
13,7,54,Paris
12,7,56,Dortmund
11,2,58,Real Madrid
11,-1,51,Leipzig
11,4,47,Lazio
10,7,53,Liverpool
10,7,41,Porto
10,-7,48,RB Salzburg
10,2,47,Atalanta
9,1,57,Sevilla
8,-2,51,Club Brugge
7,0,55,Ajax
7,-2,51,Inter Milan
7,-1,50,Atletico Madrid
7,-11,45,Istanbul Basaksehir
6,-5,40,Krasnodar
5,-12,47,Ferencvaros
5,-7,47,Shakhtar Donetsk
5,-5,42,Lokomotiv Moskva
4,-9,47,Zenit
4,-9,46,Midtjylland
4,-9,45,Dynamo Kyiv
3,-8,50,Rennes
2,-8,50,Olympiacos
2,-11,50,Marseille
###Output
_____no_output_____
###Markdown
The Python library pandas is used to read the data and then display it in a table, or dataframe. Pandas is widely used to organize data. The dataframe below is sorted from highest to lowest average ball possession (%).
###Code
possession_df = pd.read_csv('possession.csv')
possession_df.sort_values('Average ball possession (%)', ascending=False)
###Output
_____no_output_____
###Markdown
Since we are exploring how possession and scoring relate, let's calculate some measures of spread and central tendency on average ball possession (%) to better understand the shape of the data.
###Code
# Compute min, max, range, mean and median
# Min average ball possession
min_df = possession_df['Average ball possession (%)'].min() # change to 'Total goals' or 'Goal difference' to for different calculations
# Max average ball possession
max_df = possession_df['Average ball possession (%)'].max()
# Range average ball possession
range_df = (possession_df['Average ball possession (%)'].max()) - (possession_df['Average ball possession (%)'].min())
# Mean of average ball possession
mean_df = possession_df['Average ball possession (%)'].mean()
# Median of average ball possession
median_df = possession_df['Average ball possession (%)'].median()
# Print results
print("The minimum value is", min_df)
print("The maximum value is", max_df)
print("The range is", range_df)
print("The mean is", mean_df)
print("The median is", median_df)
###Output
_____no_output_____
###Markdown
Notice that the mean and median are 50, and the range is 21. You can update or change the code by following the directions in the comments of the code cell above. Now, let's visualize the range with a bar graph.
###Code
bar_df = px.bar(possession_df,
x='Team',
y='Average ball possession (%)', # change y to Total goals or Goal difference to visualize different variables
title='Average ball possession (%) by team') # update title, if needed
bar_df.update_layout(xaxis={'categoryorder':'total descending'})
###Output
_____no_output_____
###Markdown
Notice that the x-axis represents teams, and the y-axis represents average ball possession (%). Bayern has the highest average ball possession at 61%, and Krasnodar has the lowest at 40%. Man. United, Atletico Madrid, Rennes, Olympiacos, and Marseille all have ball possession of 50%, which is both the mean and the median. These measures of central tendency can help us divide the dataset into teams with more ball possession and teams with less ball possession. Now that we've explored the centre and spread of average ball possession (%), let's examine how average ball possession (%) relates to total goals. The scatter plot displays average ball possession (%) on the x-axis and total goals on the y-axis. Total goals range from Marseille with 2 to Bayern with 18. Hover over the data points to view more information.
###Code
scatter_total_df = px.scatter(possession_df,
x="Average ball possession (%)",
y="Total goals", # change y to Goal difference
hover_data=["Team"],
trendline="ols",
title="Relationship between average ball possession (%) and total goals")
scatter_total_df.show()
###Output
_____no_output_____
###Markdown
Notice that the line of best fit indicates a positive trend, with total goals increasing as average ball possession increases. Hover over the data points to find out more information. The data points further from the line seem to tell a different story. Bayern has the highest ball possession at 61% and the most total goals at 18. Marseille, on the other hand, has the fewest total goals at 2 with ball possession of 50%. While total goals can help us understand how successful teams are, the idea of possession involves keeping the ball to score and keeping the ball to prevent the other team from scoring. It might be interesting to explore the relationship between average ball possession and goal difference. Goal difference is goals scored minus goals conceded (goals that other teams have scored against the team). The scatter plot below visualizes the relationship between average ball possession (%) and goal difference by team. The goal difference on the y-axis contains negative values; a negative value means that a team conceded more goals than it scored. Hover over the data points to view more information.
###Code
scatter_difference_df = px.scatter(possession_df,
x="Average ball possession (%)",
y="Goal difference",
size="Total goals",
color="Team",
title="Relationship between average ball possession (%) and goal difference by team")
scatter_difference_df.show()
###Output
_____no_output_____
###Markdown
Notice that Bayern leads in ball possession at 61% as well as in total goals at 18, with a goal difference of 16 -- that means only 2 goals were scored against Bayern in the 6 games prior to the knock-outs. Ferencvaros has the lowest goal difference at -12 with ball possession of 47%. Marseille, with the lowest total goals at 2, has the second lowest goal difference at -11 and ball possession of 50% of game play.
###Code
# This cell prevents the next section from running automatically
%%script false
#❗️Run this cell with Shift+Enter
import interactive as i
i.challenge1()
#❗️Run this cell with Shift+Enter
import interactive as i
i.challenge2()
#❗️Run this cell with Shift+Enter
import interactive as i
i.challenge3()
###Output
_____no_output_____ |
Chapter6/question9.ipynb | ###Markdown
Chapter 6 Question 9Regularisation on the `College` data set
###Code
import statsmodels.api as sm
import sklearn.model_selection
import sklearn.linear_model
import sklearn.metrics
import numpy as np
import matplotlib.pyplot as plt
import sklearn.decomposition
import sklearn.pipeline
import sklearn.cross_decomposition
import pandas as pd
college = sm.datasets.get_rdataset("College", "ISLR").data
college["Private"] = college["Private"] == "Yes"
college.head()
###Output
_____no_output_____
###Markdown
(a) Split the data set into a training set and a test set
###Code
X = college.drop(columns=["Apps"])
y = college["Apps"]
X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X, y, test_size=0.33)
###Output
_____no_output_____
###Markdown
(b) Fit a linear model using least squares, and report the test error
###Code
least_squares = sklearn.linear_model.LinearRegression()
least_squares.fit(X_train,y_train)
y_pred = least_squares.predict(X_test)
least_squares_mse = sklearn.metrics.mean_squared_error(y_test, y_pred)
print(least_squares_mse)
# The above is vast!
###Output
_____no_output_____
###Markdown
(c) Fit a ridge regression model, choosing $\lambda$ by CV. Report the test error
###Code
ridge = sklearn.linear_model.RidgeCV(alphas= np.linspace(0.001,10,num=1000))
ridge.fit(X_train,y_train)
y_pred = ridge.predict(X_test)
ridge_mse = sklearn.metrics.mean_squared_error(y_test, y_pred)
print(ridge_mse)
###Output
1540869.3506382525
###Markdown
(d) Fit a lasso model, choosing $\lambda$ by CV. Report the test error, along with the number of non-zero coefficients.
###Code
# Use the LassoCV
lasso_model = sklearn.linear_model.LassoCV(cv=5, max_iter=1e6)
lasso_model.fit(X_train,y_train)
mses = list(map(np.mean,lasso_model.mse_path_))
alphas = lasso_model.alphas_
plt.plot(alphas,np.log(mses))
plt.ylabel("log(mse)")
plt.xlabel("alpha")
plt.show()
print(lasso_model.coef_)
print(lasso_model.intercept_)
y_pred = lasso_model.predict(X_test)
lasso_mse = sklearn.metrics.mean_squared_error(y_test, y_pred)
lasso_mse
plt.scatter(college["Accept"], college["Apps"]) # Lasso suggests a roughly one-to-one mapping between these
###Output
_____no_output_____
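###Markdown
The question also asks for the number of non-zero coefficients. A quick way to report that from the fitted `lasso_model` above (a small illustrative addition, not part of the original solution):
###Code
# Count how many lasso coefficients survive (i.e. are not shrunk exactly to zero)
n_nonzero = np.sum(lasso_model.coef_ != 0)
print(f"{n_nonzero} non-zero coefficients out of {len(lasso_model.coef_)}")
###Output
_____no_output_____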
###Markdown
(e) Fit a PCR model, with M chosen by cross-validation. Report the test error, along with the value of M selected.
###Code
# Standardise each predictor:
import sklearn.preprocessing  # not included in the imports cell above
scaler = sklearn.preprocessing.StandardScaler()
regressor = sklearn.linear_model.LinearRegression()
pca = sklearn.decomposition.PCA()
pipe = sklearn.pipeline.Pipeline(steps=[("scaling", scaler), ("pca", pca), ("linear regression", regressor)])
p = len(X_train.columns)
params = {"pca__n_components": list(range(1,p+1))}
search = sklearn.model_selection.GridSearchCV(pipe, params, cv=5, return_train_score=True)
search.fit(X_train, y_train)
pca.fit(scaler.fit_transform(X_train))  # X_train_scaled was never defined; standardise before fitting PCA
fig, (ax0, ax1) = plt.subplots(nrows=2, sharex=True, figsize=(6, 6))
ax0.plot(pca.explained_variance_ratio_, linewidth=2)
ax0.set_ylabel('PCA explained variance')
ax0.axvline(search.best_estimator_.named_steps['pca'].n_components,
linestyle=':', label='n_components chosen')
ax0.legend(prop=dict(size=12))
# For each number of components, find the best classifier results
results = pd.DataFrame(search.cv_results_)
components_col = 'param_pca__n_components'
best_clfs = results.groupby(components_col).apply(
lambda g: g.nlargest(1, 'mean_test_score'))
best_clfs.plot(x=components_col, y='mean_test_score', yerr='std_test_score',
legend=False, ax=ax1)
ax1.set_ylabel('CV score (val)')  # regression R^2 from GridSearchCV, not classification accuracy
ax1.set_xlabel('n_components')
plt.tight_layout()
plt.show()
# The above graph suggests PCR with 5 components.
pipe.set_params(pca__n_components=5).fit(X_train, y_train)
y_pred = pipe.predict(X_test)
pcr_mse = sklearn.metrics.mean_squared_error(y_test, y_pred)
pcr_mse
###Output
/home/will/.local/anaconda3/lib/python3.6/site-packages/sklearn/preprocessing/data.py:617: DataConversionWarning: Data with input dtype bool, int64, float64 were all converted to float64 by StandardScaler.
return self.partial_fit(X, y)
/home/will/.local/anaconda3/lib/python3.6/site-packages/sklearn/base.py:465: DataConversionWarning: Data with input dtype bool, int64, float64 were all converted to float64 by StandardScaler.
return self.fit(X, y, **fit_params).transform(X)
/home/will/.local/anaconda3/lib/python3.6/site-packages/sklearn/pipeline.py:331: DataConversionWarning: Data with input dtype bool, int64, float64 were all converted to float64 by StandardScaler.
Xt = transform.transform(Xt)
###Markdown
(f) Fit a PLS model, with M chosen by cross-validation. Report the test error, along with the value of M selected.
###Code
# Standardise each predictor:
# scaler = sklearn.preprocessing.StandardScaler()
pls = sklearn.cross_decomposition.PLSRegression()
# pipe = sklearn.pipeline.Pipeline(steps=[("scaling", scaler), ("pca", pca), ("linear regression", regressor)])
p = len(X_train.columns)
params = {"n_components": list(range(1,p+1))}
search = sklearn.model_selection.GridSearchCV(pls, params, cv=5, return_train_score=True)
search.fit(X_train, y_train)
best_pls = search.best_estimator_
print(f"Number of components: {best_pls.n_components}")
y_pred = best_pls.predict(X_test)
pls_mse = sklearn.metrics.mean_squared_error(y_test, y_pred)
pls_mse
fig, ax = plt.subplots()
results = pd.DataFrame(search.cv_results_)
results.head()
plt.errorbar(results.param_n_components, results.mean_test_score, yerr=results.std_test_score)
# Actually, M=6 looks best here.
pls = sklearn.cross_decomposition.PLSRegression(n_components=6)
pls.fit(X_train, y_train)
y_pred = pls.predict(X_test)
pls_mse = sklearn.metrics.mean_squared_error(y_test, y_pred)
pls_mse
###Output
_____no_output_____ |
Labs/11-Transformers/11-Transformer.ipynb | ###Markdown
Lab 11: TransformersIn today's lab, we will learn how to create a transformer from scratch, thenwe'll take a look at ViT (the visual transformer). Some of the material in thislab comes from the following online sources:- https://medium.com/the-dl/transformers-from-scratch-in-pytorch-8777e346ca51- https://towardsdatascience.com/how-to-code-the-transformer-in-pytorch-24db27c8f9ec- https://towardsdatascience.com/implementing-visualttransformer-in-pytorch-184f9f16f632- https://github.com/lucidrains/vit-pytorchvision-transformer---pytorch- https://medium.com/mlearning-ai/vision-transformers-from-scratch-pytorch-a-step-by-step-guide-96c3313c2e0cThe above photo needs a credit! Transformers and machine learning trendsBefore the arrival of transformers, CNNs were most often used in the visual domain, while RNNs like LSTMs were most often used in NLP.There were many attempts at crossover, without much real success. Neither approach seemed capable of dealing with very large complexnatural language datasets effectively.In 2017, the Transformer was introduced. "Attention is all you need" has been cited more than 38,000 times.The main concept in a Transformer is self-attention, which replaces the sequential processing of RNNs and the localprocessing of CNNs with the ability to adaptively extract arbitrary relationships between different elements of its input,output, and memory state. Transformer architectureWe will use [Frank Odom's implementation of the Transformer in PyTorch](https://github.com/fkodom/transformer-from-scratch/tree/main/src).The architecture of the transformer looks like this:Here is a summary of the Transformer's details and mathematics:There are several processes that we need to implement in the model. We go one by one. AttentionBefore Transformers, the standard model for sequence-to-sequence learning was seq2seq, which combines an RNN for encoding withan RNN for decoding. The encoder processes the input and retains important information in a sequence or block of memory,while the decoder extracts the important information from the memory in order to produce an output.One problem with seq2seq is that some information may be lost while processing a long sequence.Attention allows us to focus on specific inputs directly.An attention-based decoder, when we want to produce the output token at a target position, will calculate an attention scorewith the encoder's memory at each input position. A high score for a particular encoder position indicates that it is more importantthan another position. We essentially use the decoder's input to select which encoder output(s) should be used to calculate thecurrent decoder output. Given decoder input $q$ (the *query*) and encoder outputs $p_i$, the attention operation calculates dotproducts between $q$ and each $p_i$. The dot products give the similarity of each pair. The dot products are softmaxed to getpositive weights summing to 1, and the weighted average $r$ is calculated as$$r = \sum_i \frac{e^{p_i\cdot q}}{\sum_j e^{p_j\cdot q}}p_i .$$We can think of $r$ as an adaptively selected combination of the inputs most relevant to producing an output. Multi-head self attentionTransformers use a specific type of attention mechanism, referred to as multi-head self attention.This is the most important part of the model. 
An illustration from the paper is shown below.The multi-head attention layer is described as:$$\text{Attention}(Q,K,V)=\text{softmax}(\frac{QK^T}{\sqrt{d_k}})V$$$Q$, $K$, and $V$ are batches of matrices, each with shape (batch_size, seq_length, num_features).When we are talking about *self* attention, each of the three matrices ineach batch is just a separate linear projection of the same input $\bar{h}_t^{l-1}$.Multiplying the query $Q$ with the key $K$ arrays results in a (batch_size, seq_length, seq_length) array,which tells us roughly how important each element in the sequence is to each other element in the sequence. These dotproducts are converted to normalized weights using a softmax across rows, so that each row of weights sums to one.Finally, the weight matrix attention is applied to the value ($V$) array using matrix multiplication. We thus get,for each token in the input sequence, a weighted average of the rows of $V$, each of which corresponds to one of theelements in the input sequence.Here is code for the scaled dot-product operation that is part of a multi-head attention layer:
###Code
import torch
import torch.nn.functional as f
from torch import Tensor, nn
def scaled_dot_product_attention(query: Tensor, key: Tensor, value: Tensor) -> Tensor:
# MatMul operations are translated to torch.bmm in PyTorch
temp = query.bmm(key.transpose(1, 2))
scale = query.size(-1) ** 0.5
softmax = f.softmax(temp / scale, dim=-1)
return softmax.bmm(value)
###Output
_____no_output_____
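###Markdown
As a quick sanity check (an illustrative addition, not part of the original lab): with `query`, `key`, and `value` of shape (batch_size, seq_length, num_features), the output of scaled dot-product attention has the same shape as `value`.
###Code
# Shape check for scaled_dot_product_attention with random inputs
q = torch.rand(2, 5, 8)  # (batch_size, seq_length, num_features)
k = torch.rand(2, 5, 8)
v = torch.rand(2, 5, 8)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([2, 5, 8])
###Output
_____no_output_____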
###Markdown
A multi-head attention module is composed of several identical*attention head* modules.Each attention head contains three linear transformations for $Q$, $K$, and $V$ and combines them using scaled dot-product attention.Note that this attention head could be used for self attention or another type of attention such as decoder-to-encoder attention, sincewe keep $Q$, $K$, and $V$ separate.
###Code
class AttentionHead(nn.Module):
def __init__(self, dim_in: int, dim_q: int, dim_k: int):
super().__init__()
self.q = nn.Linear(dim_in, dim_q)
self.k = nn.Linear(dim_in, dim_k)
self.v = nn.Linear(dim_in, dim_k)
def forward(self, query: Tensor, key: Tensor, value: Tensor) -> Tensor:
return scaled_dot_product_attention(self.q(query), self.k(key), self.v(value))
###Output
_____no_output_____
###Markdown
Multiple attention heads can be combined with the output concatenation and linear transformation to construct a multi-head attention layer:
###Code
class MultiHeadAttention(nn.Module):
def __init__(self, num_heads: int, dim_in: int, dim_q: int, dim_k: int):
super().__init__()
self.heads = nn.ModuleList(
[AttentionHead(dim_in, dim_q, dim_k) for _ in range(num_heads)]
)
self.linear = nn.Linear(num_heads * dim_k, dim_in)
def forward(self, query: Tensor, key: Tensor, value: Tensor) -> Tensor:
return self.linear(
torch.cat([h(query, key, value) for h in self.heads], dim=-1)
)
###Output
_____no_output_____
###Markdown
Each attention head computes its own transformation of the query, key, and value arrays,and then applies scaled dot-product attention. Conceptually, this means each head can attend to a different part of the input sequence, independent of the others. Increasing the number of attention heads allows the model to pay attention to more parts of the sequence atonce, which makes the model more powerful. Positional EncodingTo complete the transformer encoder, we need another component, the *position encoder*.The MultiHeadAttention class we just write has no trainable components that depend on a token's positionin the sequence (axis 1 of the input tensor). Meaning all of the weight matrices we have seen so far*perform the same calculation for every input position*; that is, we don't have any position-dependent weights.All of the operations so far operate over the *feature dimension* (axis 2). This is good in that the model is compatible with any sequencelength. But without *any* information about position, our model is going to be unable to differentiate between different orderings ofthe input -- we'll get the same result regardless of the order of the tokens in the input.Since order matters ("Ridgemont was in the store" has a differentmeaning from "The store was in Ridgemont"), we need some way to provide the model with information about tokens' positions in the input sequence.Whatever strategy we use should provide information about the relative position of data points in the input sequences.In the Transformer, positional information is encoded using trigonometric functions in a constant 2D matrix $PE$:$$PE_{(pos,2i)}=\sin (\frac{pos}{10000^{2i/d_{model}}})$$$$PE_{(pos,2i+1)}=\cos (\frac{pos}{10000^{2i/d_{model}}}),$$where $pos$ refers to a position in the input sentence sequence and $i$ refers to the position along the embedding vector dimension.This matrix is *added* to the matrix consisting of the embeddings of each of the input tokens:Position encoding can implemented as follows (put this in `utils.py`):
###Code
def position_encoding(seq_len: int, dim_model: int, device: torch.device = torch.device("cpu")) -> Tensor:
pos = torch.arange(seq_len, dtype=torch.float, device=device).reshape(1, -1, 1)
dim = torch.arange(dim_model, dtype=torch.float, device=device).reshape(1, 1, -1)
phase = pos / (1e4 ** (dim // dim_model))
return torch.where(dim.long() % 2 == 0, torch.sin(phase), torch.cos(phase))
###Output
_____no_output_____
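###Markdown
A quick check (an illustrative addition) that the encoding has shape (1, seq_len, dim_model) and therefore broadcasts over the batch axis when added to a batch of token features:
###Code
# The leading dimension of 1 lets the encoding broadcast across the batch
pe = position_encoding(seq_len=16, dim_model=512)
print(pe.shape)                 # torch.Size([1, 16, 512])
tokens = torch.rand(4, 16, 512)
print((tokens + pe).shape)      # torch.Size([4, 16, 512])
###Output
_____no_output_____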
###Markdown
These sinusoidal encodings allow us to work with arbitrary length sequences because the sine and cosine functions are periodic and bounded in the range $[-1, 1]$. One hope is that, if during inference we are provided with an input sequence longer than any found during training, the position encodings of the last elements in the sequence will be different from anything the model has seen before, but with the periodic sine/cosine encoding there will still be some similar structure, with the new encodings being very similar to neighboring encodings the model has seen before. For this reason, despite the fact that learned embeddings appeared to perform equally as well, the authors chose this fixed sinusoidal encoding. The complete encoderThe transformer uses an encoder-decoder architecture. The encoder processes the input sequence and returns a sequence of feature vectors or memory vectors, while the decoder outputs a prediction of the target sequence, incorporating information from the encoder memory. First, let's complete the transformer layer with the two-layer feed forward network. Put this in `utils.py`:
###Code
def feed_forward(dim_input: int = 512, dim_feedforward: int = 2048) -> nn.Module:
return nn.Sequential(
nn.Linear(dim_input, dim_feedforward),
nn.ReLU(),
nn.Linear(dim_feedforward, dim_input),
)
###Output
_____no_output_____
###Markdown
Let's create a residual module to encapsulate the feed forward network or attentionmodel along with the common dropout and LayerNorm operations (also in `utils.py`):
###Code
class Residual(nn.Module):
def __init__(self, sublayer: nn.Module, dimension: int, dropout: float = 0.1):
super().__init__()
self.sublayer = sublayer
self.norm = nn.LayerNorm(dimension)
self.dropout = nn.Dropout(dropout)
def forward(self, *tensors: Tensor) -> Tensor:
# Assume that the "query" tensor is given first, so we can compute the
# residual. This matches the signature of 'MultiHeadAttention'.
return self.norm(tensors[0] + self.dropout(self.sublayer(*tensors)))
###Output
_____no_output_____
###Markdown
Now we can create the complete encoder! Put this in `encoder.py`. First, the encoder layermodule, which comprised a self attention residual block followed by a fully connected residual block:
###Code
class TransformerEncoderLayer(nn.Module):
def __init__(
self,
dim_model: int = 512,
num_heads: int = 6,
dim_feedforward: int = 2048,
dropout: float = 0.1,
):
super().__init__()
dim_q = dim_k = max(dim_model // num_heads, 1)
self.attention = Residual(
MultiHeadAttention(num_heads, dim_model, dim_q, dim_k),
dimension=dim_model,
dropout=dropout,
)
self.feed_forward = Residual(
feed_forward(dim_model, dim_feedforward),
dimension=dim_model,
dropout=dropout,
)
def forward(self, src: Tensor) -> Tensor:
src = self.attention(src, src, src)
return self.feed_forward(src)
###Output
_____no_output_____
###Markdown
Then the Transformer encoder just encapsulates several transformer encoder layers:
###Code
class TransformerEncoder(nn.Module):
def __init__(
self,
num_layers: int = 6,
dim_model: int = 512,
num_heads: int = 8,
dim_feedforward: int = 2048,
dropout: float = 0.1,
):
super().__init__()
self.layers = nn.ModuleList(
[
TransformerEncoderLayer(dim_model, num_heads, dim_feedforward, dropout)
for _ in range(num_layers)
]
)
def forward(self, src: Tensor) -> Tensor:
seq_len, dimension = src.size(1), src.size(2)
src += position_encoding(seq_len, dimension)
for layer in self.layers:
src = layer(src)
return src
###Output
_____no_output_____
###Markdown
The decoderThe decoder module is quite similar to the encoder, with just a few small differences:- The decoder accepts two inputs (the target sequence and the encoder memory), rather than one input.- There are two multi-head attention modules per layer (the target sequence self-attention module and the decoder-encoder attention module) rather than just one.- The second multi-head attention module, rather than strict self attention, expects the encoder memory as $K$ and $V$.- Since accessing future elements of the target sequence would be "cheating," we need to mask out future elements of the input target sequence.First, we have the decoder version of the transformer layer and the decoder module itself:
###Code
class TransformerDecoderLayer(nn.Module):
def __init__(
self,
dim_model: int = 512,
num_heads: int = 6,
dim_feedforward: int = 2048,
dropout: float = 0.1,
):
super().__init__()
dim_q = dim_k = max(dim_model // num_heads, 1)
self.attention_1 = Residual(
MultiHeadAttention(num_heads, dim_model, dim_q, dim_k),
dimension=dim_model,
dropout=dropout,
)
self.attention_2 = Residual(
MultiHeadAttention(num_heads, dim_model, dim_q, dim_k),
dimension=dim_model,
dropout=dropout,
)
self.feed_forward = Residual(
feed_forward(dim_model, dim_feedforward),
dimension=dim_model,
dropout=dropout,
)
def forward(self, tgt: Tensor, memory: Tensor) -> Tensor:
tgt = self.attention_1(tgt, tgt, tgt)
tgt = self.attention_2(tgt, memory, memory)
return self.feed_forward(tgt)
class TransformerDecoder(nn.Module):
def __init__(
self,
num_layers: int = 6,
dim_model: int = 512,
num_heads: int = 8,
dim_feedforward: int = 2048,
dropout: float = 0.1,
):
super().__init__()
self.layers = nn.ModuleList(
[
TransformerDecoderLayer(dim_model, num_heads, dim_feedforward, dropout)
for _ in range(num_layers)
]
)
self.linear = nn.Linear(dim_model, dim_model)
def forward(self, tgt: Tensor, memory: Tensor) -> Tensor:
seq_len, dimension = tgt.size(1), tgt.size(2)
tgt += position_encoding(seq_len, dimension)
for layer in self.layers:
tgt = layer(tgt, memory)
return torch.softmax(self.linear(tgt), dim=-1)
###Output
_____no_output_____
###Markdown
Note that there is not, as of yet, any masked attention implementation here! Making this version of the Transformer work in practice would require at least that (a sketch is given after the sanity check below). Putting it togetherNow we can put the encoder and decoder together:
###Code
class Transformer(nn.Module):
def __init__(
self,
num_encoder_layers: int = 6,
num_decoder_layers: int = 6,
dim_model: int = 512,
num_heads: int = 6,
dim_feedforward: int = 2048,
dropout: float = 0.1,
activation: nn.Module = nn.ReLU(),
):
super().__init__()
self.encoder = TransformerEncoder(
num_layers=num_encoder_layers,
dim_model=dim_model,
num_heads=num_heads,
dim_feedforward=dim_feedforward,
dropout=dropout,
)
self.decoder = TransformerDecoder(
num_layers=num_decoder_layers,
dim_model=dim_model,
num_heads=num_heads,
dim_feedforward=dim_feedforward,
dropout=dropout,
)
def forward(self, src: Tensor, tgt: Tensor) -> Tensor:
return self.decoder(tgt, self.encoder(src))
###Output
_____no_output_____
###Markdown
Let’s create a simple test, as a sanity check for our implementation. We can construct random tensors for the input and target sequences, check that our model executes without errors, and confirm that the output tensor has the correct shape:
###Code
src = torch.rand(64, 32, 512)
tgt = torch.rand(64, 16, 512)
out = Transformer()(src, tgt)
print(out.shape)
# torch.Size([64, 16, 512])
###Output
torch.Size([64, 16, 512])
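###Markdown
As noted earlier, the decoder above has no causal ("look-ahead") mask, so during training it could peek at future target tokens. A minimal sketch of how one might be added (an assumption about how you could extend this code, not part of the original implementation): build a boolean upper-triangular mask and set the corresponding attention logits to negative infinity before the softmax. This variant only applies to self attention, where the query and key lengths match.
###Code
def causal_mask(seq_len: int) -> Tensor:
    # True above the diagonal marks "future" positions that must not be attended to
    return torch.triu(torch.ones(seq_len, seq_len), diagonal=1).bool()

def masked_scaled_dot_product_attention(query: Tensor, key: Tensor, value: Tensor) -> Tensor:
    # Same as scaled_dot_product_attention, but future positions receive -inf logits
    temp = query.bmm(key.transpose(1, 2))
    scale = query.size(-1) ** 0.5
    mask = causal_mask(query.size(1)).to(temp.device)
    temp = temp.masked_fill(mask, float("-inf"))
    return f.softmax(temp / scale, dim=-1).bmm(value)

# Quick shape check on a random "target" sequence
x = torch.rand(1, 4, 8)
print(masked_scaled_dot_product_attention(x, x, x).shape)  # torch.Size([1, 4, 8])
###Output
_____no_output_____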
###Markdown
You could try implementing masked attention and training this Transformer model on a sequence-to-sequence problem. However, to understand masking, you might first find the [PyTorch Transformer tutorial](https://pytorch.org/tutorials/beginner/transformer_tutorial.html) useful. Note that this model is only a Transformer encoder for language modeling, but it uses masking in the encoder's self attention module. Vision Transformer (ViT)The Vision Transformer (ViT) is a transformer targeted at vision processing tasks. It has achieved state-of-the-art performance in image classification and (with some modification) other tasks. The ViT concept for image classification is as follows: How does ViT work?The steps of ViT are as follows: 1. Split the input image into patches 2. Flatten the patches 3. Produce linear embeddings from the flattened patches 4. Add position embeddings 5. Feed the sequence, preceded by a `[class]` token, as input to a standard transformer encoder 6. Pretrain the model to output image labels for the `[class]` token (fully supervised on a huge dataset such as ImageNet-22K) 7. Fine-tune on the downstream dataset for the specific image classification task ViT architectureViT is a Transformer encoder. In detail, it looks like this: In the figure we see four main parts: the high-level architecture of the model, the Transformer module, the multi-head self-attention (MSA) head, and an individual self-attention (SA) head. Let's startLet's do a small-scale implementation with the MNIST dataset. The code here is based on [Brian Pulfer's paper reimplementation repository](https://github.com/BrianPulfer/PapersReimplementations).
###Code
import numpy as np
import torch
import torch.nn as nn
from torch.nn import CrossEntropyLoss
from torch.optim import Adam
from torch.utils.data import DataLoader
from torchvision.datasets.mnist import MNIST
from torchvision.transforms import ToTensor
###Output
_____no_output_____
###Markdown
Import the MNIST dataset:
###Code
# Loading data
transform = ToTensor()
train_set = MNIST(root='./../datasets', train=True, download=True, transform=transform)
test_set = MNIST(root='./../datasets', train=False, download=True, transform=transform)
train_loader = DataLoader(train_set, shuffle=True, batch_size=16)
test_loader = DataLoader(test_set, shuffle=False, batch_size=16)
###Output
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz
Downloading http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz to ./../datasets/MNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Train and test functionsNext, let's create the train and test functions:
###Code
def train_ViT_classify(model, optimizer, N_EPOCHS, train_loader, device="cpu"):
criterion = CrossEntropyLoss()
for epoch in range(N_EPOCHS):
train_loss = 0.0
for batch in train_loader:
x, y = batch
x = x.to(device)
y = y.to(device)
y_hat = model(x)
loss = criterion(y_hat, y) / len(x)
train_loss += loss.item()
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"Epoch {epoch + 1}/{N_EPOCHS} loss: {train_loss:.2f}")
def test_ViT_classify(model, optimizer, test_loader):
criterion = CrossEntropyLoss()
correct, total = 0, 0
test_loss = 0.0
for batch in test_loader:
x, y = batch
x = x.to(device)
y = y.to(device)
y_hat = model(x)
loss = criterion(y_hat, y) / len(x)
test_loss += loss
correct += torch.sum(torch.argmax(y_hat, dim=1) == y).item()
total += len(x)
print(f"Test loss: {test_loss:.2f}")
print(f"Test accuracy: {correct / total * 100:.2f}%")
###Output
_____no_output_____
###Markdown
Multi-head Self Attention (MSA) ModelAs with the basic transformer above, to build the ViT model, we need to create a MSA module and put ittogether with the other elements.For a single image, self attention means that each patch's representationis updated based on its input token's similarity with those of the other patches.As before, we perform a linear mapping of each patch to three distinct vectors $q$, $k$, and $v$ (query, key, value).For each patch, we need to compute the dot product of its $q$ vector with all of the $k$ vectors, divide by the square root of the dimensionof the vectors, then apply softmax to the result. The resulting matrix is called the matrix of attention cues.We multiply the attention cues with the $v$ vectors associated with the different input tokens and sum them all up.The input for each patch is transformed to a new value based on its similarity (after the linear mapping to $q$, $k$, and $v$) with other patches.However, the whole procedure is carried out $H$ times on $H$ sub-vectors of our current 8-dimensional patches, where $H$ is the number of heads.Once all results are obtained, they are concatenated together then passed through a linear layer.The MSA model looks like this:
###Code
class MSA(nn.Module):
def __init__(self, d, n_heads=2):
super(MSA, self).__init__()
self.d = d
self.n_heads = n_heads
assert d % n_heads == 0, f"Can't divide dimension {d} into {n_heads} heads"
d_head = int(d / n_heads)
self.q_mappings = [nn.Linear(d_head, d_head) for _ in range(self.n_heads)]
self.k_mappings = [nn.Linear(d_head, d_head) for _ in range(self.n_heads)]
self.v_mappings = [nn.Linear(d_head, d_head) for _ in range(self.n_heads)]
self.d_head = d_head
self.softmax = nn.Softmax(dim=-1)
def forward(self, sequences):
# Sequences has shape (N, seq_length, token_dim)
# We go into shape (N, seq_length, n_heads, token_dim / n_heads)
# And come back to (N, seq_length, item_dim) (through concatenation)
result = []
for sequence in sequences:
seq_result = []
for head in range(self.n_heads):
q_mapping = self.q_mappings[head]
k_mapping = self.k_mappings[head]
v_mapping = self.v_mappings[head]
seq = sequence[:, head * self.d_head: (head + 1) * self.d_head]
q, k, v = q_mapping(seq), k_mapping(seq), v_mapping(seq)
attention = self.softmax(q @ k.T / (self.d_head ** 0.5))
seq_result.append(attention @ v)
result.append(torch.hstack(seq_result))
return torch.cat([torch.unsqueeze(r, dim=0) for r in result])
###Output
_____no_output_____
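###Markdown
One implementation detail worth flagging (an observation about the class above, with a hedged suggestion): `q_mappings`, `k_mappings`, and `v_mappings` are plain Python lists, so the `nn.Linear` layers inside them are not registered as submodules. They will not show up in `model.parameters()` (and so will not be updated by the optimizer) and will not be moved by `model.to(device)`. Wrapping each list in `nn.ModuleList` is the usual fix. The toy classes below just demonstrate the difference; they are not part of the lab code.
###Code
# Layers held in a plain list are invisible to nn.Module bookkeeping
class PlainList(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = [nn.Linear(4, 4) for _ in range(2)]  # not registered

class WithModuleList(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(4, 4) for _ in range(2)])  # registered

print(len(list(PlainList().parameters())))       # 0 -- an optimizer would never see these weights
print(len(list(WithModuleList().parameters())))  # 4 (weight + bias for each of the 2 layers)
###Output
_____no_output_____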
###Markdown
**Note**: for each head, we create distinct Q, K, and V mapping functions (square matrices of size 4x4 in our example).Since our inputs will be sequences of size (N, 50, 8), and we only use 2 heads, we will at some point have an (N, 50, 2, 4) tensor, use a nn.Linear(4, 4) module on it, and then come back, after concatenation, to an (N, 50, 8) tensor. Position encodingThe position encoding allows the model to understand where each patch is in the original image. While it is theoretically possible to learnsuch positional embeddings, the original Vaswani et al. Transformer uses a fixed position embedding representation that addslow-frequency values to the first dimension and higher-frequency values to the later dimensions, resulting in a code that ismore similar for nearby tokens than far away tokens. For each token, we add to its j-th coordinate the value$$ p_{i,j} =\left\{\begin{matrix}\sin (\frac{i}{10000^{j/d_{embdim}}})\\ \cos (\frac{i}{10000^{j/d_{embdim}}})\end{matrix}\right.$$We can visualize the position encoding matrix thusly:Here is an implementation:
###Code
def get_positional_embeddings(sequence_length, d, device="cpu"):
result = torch.ones(sequence_length, d)
for i in range(sequence_length):
for j in range(d):
result[i][j] = np.sin(i / (10000 ** (j / d))) if j % 2 == 0 else np.cos(i / (10000 ** ((j - 1) / d)))
return result.to(device)
###Output
_____no_output_____
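###Markdown
The text above mentions visualizing the position encoding matrix, but the original figure is not included here. A quick heatmap (an illustrative addition) can be produced like this:
###Code
import matplotlib.pyplot as plt

pe = get_positional_embeddings(sequence_length=50, d=20).numpy()
plt.figure(figsize=(8, 4))
plt.imshow(pe, cmap='viridis', aspect='auto')
plt.xlabel('embedding dimension')
plt.ylabel('token position')
plt.colorbar()
plt.show()
###Output
_____no_output_____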
###Markdown
ViT ModelCreate the ViT model as below. The explanation of each step is given afterwards.
###Code
class ViT(nn.Module):
def __init__(self, input_shape, n_patches=7, hidden_d=8, n_heads=2, out_d=10):
# Super constructor
super(ViT, self).__init__()
# Input and patches sizes
self.input_shape = input_shape
self.n_patches = n_patches
self.n_heads = n_heads
assert input_shape[1] % n_patches == 0, "Input shape not entirely divisible by number of patches"
assert input_shape[2] % n_patches == 0, "Input shape not entirely divisible by number of patches"
self.patch_size = (input_shape[1] / n_patches, input_shape[2] / n_patches)
self.hidden_d = hidden_d
# 1) Linear mapper
self.input_d = int(input_shape[0] * self.patch_size[0] * self.patch_size[1])
self.linear_mapper = nn.Linear(self.input_d, self.hidden_d)
# 2) Classification token
self.class_token = nn.Parameter(torch.rand(1, self.hidden_d))
# 3) Positional embedding
# (In forward method)
# 4a) Layer normalization 1
self.ln1 = nn.LayerNorm((self.n_patches ** 2 + 1, self.hidden_d))
# 4b) Multi-head Self Attention (MSA) and classification token
self.msa = MSA(self.hidden_d, n_heads)
# 5a) Layer normalization 2
self.ln2 = nn.LayerNorm((self.n_patches ** 2 + 1, self.hidden_d))
# 5b) Encoder MLP
self.enc_mlp = nn.Sequential(
nn.Linear(self.hidden_d, self.hidden_d),
nn.ReLU()
)
# 6) Classification MLP
self.mlp = nn.Sequential(
nn.Linear(self.hidden_d, out_d),
nn.Softmax(dim=-1)
)
def forward(self, images):
# Dividing images into patches
n, c, w, h = images.shape
patches = images.reshape(n, self.n_patches ** 2, self.input_d)
# Running linear layer for tokenization
tokens = self.linear_mapper(patches)
# Adding classification token to the tokens
tokens = torch.stack([torch.vstack((self.class_token, tokens[i])) for i in range(len(tokens))])
# Adding positional embedding
tokens += get_positional_embeddings(self.n_patches ** 2 + 1, self.hidden_d, device).repeat(n, 1, 1)
# TRANSFORMER ENCODER BEGINS ###################################
# NOTICE: MULTIPLE ENCODER BLOCKS CAN BE STACKED TOGETHER ######
# Running Layer Normalization, MSA and residual connection
self.msa(self.ln1(tokens.to("cpu")).to(device))
out = tokens + self.msa(self.ln1(tokens))
# Running Layer Normalization, MLP and residual connection
out = out + self.enc_mlp(self.ln2(out))
# TRANSFORMER ENCODER ENDS ###################################
# Getting the classification token only
out = out[:, 0]
return self.mlp(out)
###Output
_____no_output_____
###Markdown
Step 1: Patchifying and the linear mappingThe transformer encoder was developed with sequence data in mind, such as English sentences. However, an image is not a sequence. Thus, we break it into multiple sub-images and map each sub-image to a vector. We do so by simply reshaping our input, which has size $(N, C, H, W)$ (in our example $(N, 1, 28, 28)$), to size (N, Patches, Patch dimensionality), where the dimensionality of a patch is adjusted accordingly. In MNIST, we break each $(1, 28, 28)$ image into 7x7 patches (hence, each of size 4x4). That is, we obtain 7x7=49 sub-images out of a single image.$$(N,1,28,28) \rightarrow (N, P\times P, (H/P) \times (W/P) \times C) \rightarrow (N, 7\times 7, 4\times 4) \rightarrow (N, 49, 16)$$ Step 2: Adding the classification tokenOnce information about all the other tokens has been mixed into a special classification token (via self attention), we will be able to classify the image using only that token. Its initial value (the one fed to the transformer encoder) is a parameter of the model that needs to be learned. We can now add this parameter to our model and convert our (N, 49, 8) tokens tensor to an (N, 50, 8) tensor (we add the special token to each sequence). The way we pass from (N, 49, 8) to (N, 50, 8) here (stacking in a Python loop) is probably sub-optimal, but it is easy to read. Also, notice that the classification token is put as the first token of each sequence. This will be important to keep in mind when we later retrieve the classification token to feed to the final MLP. Step 3: Positional encodingSee the `get_positional_embeddings` function above, as already mentioned. Step 4: LN, MSA, and Residual ConnectionThe next step is to apply layer normalization to the tokens, then apply MSA, and add a residual connection (add the input we had before applying LN).- **Layer normalization** is a popular block that, given an input, subtracts its mean and divides by the standard deviation.- **MSA**: same as in the vanilla transformer.- **A residual connection** consists of just adding the original input to the result of some computation. This, intuitively, allows a network to become more powerful while also preserving the set of possible functions that the model can approximate.The residual connection adds the original (N, 50, 8) tensor to the (N, 50, 8) tensor obtained after LN and MSA. Step 5: LN, MLP, and Residual ConnectionAll that is left for the transformer encoder is a simple residual connection between what we already have and what we get after passing the current tensor through another LN and an MLP. Step 6: Classification MLPFinally, we extract just the classification token (the first token) out of our N sequences, and use each such token to get N classifications. Since we decided that each token is an 8-dimensional vector, and since we have 10 possible digits, we can implement the classification MLP as a simple 8x10 matrix, activated with the softmax function. The output of our model is now an (N, 10) tensor.
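To make Step 1 concrete, here is a small shape check (an illustrative sketch that mirrors the plain `reshape` used in `ViT.forward` above). Note that a plain reshape slices the flattened image into consecutive 16-pixel runs rather than true 4x4 spatial patches, but it still produces the (N, 49, 16) shape the model expects.
###Code
# Dummy batch of N=8 MNIST-sized images split into 7x7 "patches" of dimensionality 16
dummy = torch.rand(8, 1, 28, 28)
n_patches = 7
patch_dim = 1 * (28 // n_patches) * (28 // n_patches)  # C * (H/P) * (W/P) = 16
patches = dummy.reshape(8, n_patches ** 2, patch_dim)
print(patches.shape)  # torch.Size([8, 49, 16])
###Output
_____no_output_____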
###Code
# device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
# print('Using device', device)
# We haven't gotten CUDA too work yet -- the kernels always die!
device = "cpu"
model = ViT((1, 28, 28), n_patches=7, hidden_d=20, n_heads=2, out_d=10)
model = model.to(device)
N_EPOCHS = 5
LR = 0.01
optimizer = Adam(model.parameters(), lr=LR)
train_ViT_classify(model, optimizer, N_EPOCHS, train_loader, device)
test_ViT_classify(model, optimizer, test_loader)
###Output
Test loss: 59.81
Test accuracy: 93.07%
###Markdown
The test accuracy is over 90%. Our implementation is done! Pytorch ViT[Here](https://github.com/lucidrains/vit-pytorchvision-transformer---pytorch) is a link to a full-featured PyTorch implementation of ViT.
###Code
!pip install vit-pytorch
###Output
Collecting vit-pytorch
Downloading vit_pytorch-0.29.0-py3-none-any.whl (56 kB)
[K |████████████████████████████████| 56 kB 1.6 MB/s eta 0:00:011
[?25hRequirement already satisfied: torch>=1.6 in /home/alisa/anaconda3/lib/python3.8/site-packages (from vit-pytorch) (1.8.0+cu111)
Collecting einops>=0.4.1
Downloading einops-0.4.1-py3-none-any.whl (28 kB)
Requirement already satisfied: torchvision in /home/alisa/anaconda3/lib/python3.8/site-packages (from vit-pytorch) (0.9.0+cu111)
Requirement already satisfied: numpy in /home/alisa/.local/lib/python3.8/site-packages (from torch>=1.6->vit-pytorch) (1.21.4)
Requirement already satisfied: typing-extensions in /home/alisa/.local/lib/python3.8/site-packages (from torch>=1.6->vit-pytorch) (3.10.0.2)
Requirement already satisfied: pillow>=4.1.1 in /home/alisa/anaconda3/lib/python3.8/site-packages (from torchvision->vit-pytorch) (7.2.0)
Installing collected packages: einops, vit-pytorch
Successfully installed einops-0.4.1 vit-pytorch-0.29.0
###Markdown
ViT Pytorch implementation
###Code
import torch
from vit_pytorch import ViT
v = ViT(
image_size = 256,
patch_size = 32,
num_classes = 1000,
dim = 1024,
depth = 6,
heads = 16,
mlp_dim = 2048,
dropout = 0.1,
emb_dropout = 0.1
)
img = torch.randn(1, 3, 256, 256)
preds = v(img) # (1, 1000)
###Output
_____no_output_____
###Markdown
The implementation also contains a distillable ViT:
###Code
import torch
from torchvision.models import resnet50
from vit_pytorch.distill import DistillableViT, DistillWrapper
teacher = resnet50(pretrained = True)
v = DistillableViT(
image_size = 256,
patch_size = 32,
num_classes = 1000,
dim = 1024,
depth = 6,
heads = 8,
mlp_dim = 2048,
dropout = 0.1,
emb_dropout = 0.1
)
distiller = DistillWrapper(
student = v,
teacher = teacher,
temperature = 3, # temperature of distillation
alpha = 0.5, # trade between main loss and distillation loss
hard = False # whether to use soft or hard distillation
)
img = torch.randn(2, 3, 256, 256)
labels = torch.randint(0, 1000, (2,))
loss = distiller(img, labels)
loss.backward()
# after lots of training above ...
pred = v(img) # (2, 1000)
###Output
_____no_output_____ |
.ipynb_checkpoints/text_challenge-checkpoint.ipynb | ###Markdown
Novoic ML challenge – text data IntroductionWelcome to the Novoic ML challenge!This is an open-ended ML challenge to help us identify exceptional researchers and engineers. The guidance below describes an open-source dataset that you can use to demonstrate your research skills, creativity, coding ability, scientific communication or anything else you think is important to the role.Before starting the challenge, go ahead and read our CEO's [Medium post](https://medium.com/@emil_45669/the-doctor-is-ready-to-see-you-tube-videos-716b12367feb) on what we're looking for in our Research Scientists, Research Engineers and ML Interns. We recommend you spend around three hours on this (more or less if you wish), which you do not have to do in one go. Please make use of any resources you like.This is the text version of the challenge. Also available are audio and image versions. You can access all three from [this GitHub repo](https://github.com/novoic/ml-challenge).Best of luck – we're looking forward to seeing what you can do! Prepare the dataCopy the dataset to a local directory – this should take a few seconds.
###Code
!mkdir -p data
!gsutil -m cp -r gs://novoic-ml-challenge-text-data/* ./data
###Output
Unknown option: m
No command was given.
Choose one of -b, -d, -e, or -r to do something.
Try `/usr/bin/gsutil --help' for more information.
###Markdown
Data descriptionThe data comprises 5,574 SMS messages. Each message is labelled as either 'ham' (legitimate) or spam.Each line in `data.txt` corresponds to one message. The first word is the data label (either `ham` or `spam`), followed by a tab (`\t`) character and then the message.
###Code
with open('data/data.txt', 'r') as f:
msgs = f.read().splitlines()
print(msgs[10])
print(msgs[11])
###Output
_____no_output_____
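###Markdown
As a small starting point for the exploration suggested below (an illustrative sketch, not a required step), each line can be split into its label and message on the tab character, and the class balance inspected:
###Code
from collections import Counter

# Split each line into (label, message) and count the classes
labels, texts = [], []
for line in msgs:
    label, text = line.split('\t', 1)
    labels.append(label)
    texts.append(text)

print(Counter(labels))   # counts of 'ham' vs 'spam'
print(texts[0][:80])     # first message, truncated
###Output
_____no_output_____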
###Markdown
For more information about the dataset, see its `README.md`.Directory structure:```data/├── data.txt├── LICENSE└── README.md``` The challengeThis is an open-ended challenge and we want to witness your creativity. Some suggestions:- Data exploration/visualization- Binary classification- Unsupervised clustering- Model explainabilityYou're welcome to explore one or more of these topics, or do something entirely different.Create, iterate on, and validate your work in this notebook, using any packages of your choosing.**You can access a GPU via `Runtime -> Change runtime type` in the toolbar.** Submission instructionsOnce you're done, send this `.ipynb` notebook (or a link to it hosted on Google Drive/GitHub with appropriate permissions) to [email protected], ensuring that outputs from cells (text, plots etc) are preserved.If you haven't applied already, make sure you submit an application first through our [job board](https://novoic.com/careers/). Your submissionThe below sets up TensorFlow as an example but feel free to use any framework you like.
###Code
# The default TensorFlow version on Colab is 1.x. Uncomment the below to use TensorFlow 2.x instead.
# %tensorflow_version 2.x
import tensorflow as tf
tf.__version__
###Output
_____no_output_____ |
examples/notebook-examples/data/tensorflow2/1_advanced/tensorflow2_gpu.ipynb | ###Markdown
TensorFlow GPU support document- GPU Guide: https://www.tensorflow.org/guide/gpu- Compatibility Matrix: https://www.tensorflow.org/install/sourcegpu
###Code
# Get GPU status with Nvidia System Management Interface (nvidia-smi)
# Check driver version and CUDA version are compatible with TensorFlow
!nvidia-smi
# Check the cuda toolkit version
! nvcc -V
# Get TensorFlow version and the number of GPUs visible to TensorFlow runtime
import tensorflow as tf
print("TensorFlow Version:", tf.__version__)
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
# Run 'MatMul' with GPU
tf.debugging.set_log_device_placement(True)
# Create some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)
###Output
Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
tf.Tensor(
[[22. 28.]
[49. 64.]], shape=(2, 2), dtype=float32)
|
tensorflow_v2/dragen1860/ch07/ch07-反向传播算法.ipynb | ###Markdown
Backpropagation Algorithm Derivatives of Activation Functions Derivative of the Sigmoid FunctionThe sigmoid function is defined as:$$\sigma(x) = \frac{1}{1 + e^{-x}}$$Its derivative is:$$\frac{d}{dx} \sigma(x) = \sigma(1-\sigma)$$
###Code
# Import the numpy library
import numpy as np
from matplotlib import pyplot as plt
plt.rcParams['font.size'] = 16
plt.rcParams['font.family'] = ['STKaiti']
plt.rcParams['axes.unicode_minus'] = False
def set_plt_ax():
    # get current axis: grab the current Axes object
    ax = plt.gca()
    ax.spines['right'].set_color('none')
    # Setting the right and top spine colors to 'none' effectively erases those two edges
    ax.spines['top'].set_color('none')
    ax.xaxis.set_ticks_position('bottom')
    # Use the bottom spine as the x axis and the left spine as the y axis
    ax.yaxis.set_ticks_position('left')
    # Bind the bottom spine (the x axis) to the point y=0 in data coordinates
    ax.spines['bottom'].set_position(('data', 0))
    ax.spines['left'].set_position(('data', 0))
def sigmoid(x):
    # Implementation of the sigmoid function
    return 1 / (1 + np.exp(-x))
def sigmoid_derivative(x):
    # Compute the derivative of sigmoid
    # The expression follows from the manual derivation above
return sigmoid(x)*(1-sigmoid(x))
x = np.arange(-6.0, 6.0, 0.1)
sigmoid_y = sigmoid(x)
sigmoid_derivative_y = sigmoid_derivative(x)
set_plt_ax()
plt.plot(x, sigmoid_y, color='C9', label='Sigmoid')
plt.plot(x, sigmoid_derivative_y, color='C4', label='Derivative')
plt.xlim(-6, 6)
plt.ylim(0, 1)
plt.legend(loc=2)
plt.show()
###Output
_____no_output_____
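###Markdown
As a quick numerical sanity check (an illustrative addition), the analytic derivative can be compared against a central finite-difference approximation:
###Code
# Central finite differences should closely match sigmoid_derivative(x)
h = 1e-5
x_check = np.array([-2.0, 0.0, 3.0])
numeric = (sigmoid(x_check + h) - sigmoid(x_check - h)) / (2 * h)
analytic = sigmoid_derivative(x_check)
print(numeric)
print(analytic)
print(np.max(np.abs(numeric - analytic)))  # should be tiny (around 1e-11)
###Output
_____no_output_____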
###Markdown
Derivative of the ReLU FunctionThe ReLU function is defined as:$$\text{ReLU}(x)=\max(0,x)$$Its derivative is:$$\frac{d}{dx} \text{ReLU} = \left \{ \begin{array}{cc} 1 \quad x \geqslant 0 \\0 \quad x < 0\end{array} \right.$$
###Code
def relu(x):
return np.maximum(0, x)
def relu_derivative(x): # ReLU 函数的导数
d = np.array(x, copy=True) # 用于保存梯度的张量
d[x < 0] = 0 # 元素为负的导数为 0
d[x >= 0] = 1 # 元素为正的导数为 1
return d
x = np.arange(-6.0, 6.0, 0.1)
relu_y = relu(x)
relu_derivative_y = relu_derivative(x)
set_plt_ax()
plt.plot(x, relu_y, color='C9', label='ReLU')
plt.plot(x, relu_derivative_y, color='C4', label='导数')
plt.xlim(-6, 6)
plt.ylim(0, 6)
plt.legend(loc=2)
plt.show()
###Output
_____no_output_____
###Markdown
Derivative of the LeakyReLU function

The LeakyReLU function is defined as:
$$\text{LeakyReLU}(x) = \left\{ \begin{array}{cc}x \quad x \geqslant 0 \\px \quad x < 0\end{array} \right.$$
Its derivative is:
$$\frac{d}{dx} \text{LeakyReLU}(x) = \left\{ \begin{array}{cc}1 \quad x \geqslant 0 \\p \quad x < 0\end{array} \right.$$
###Code
def leakyrelu(x, p):
y = np.copy(x)
y[y < 0] = p * y[y < 0]
return y
# 其中 p 为 LeakyReLU 的负半段斜率,为超参数
def leakyrelu_derivative(x, p):
dx = np.ones_like(x) # 创建梯度张量,全部初始化为 1
dx[x < 0] = p # 元素为负的导数为 p
return dx
x = np.arange(-6.0, 6.0, 0.1)
p = 0.1
leakyrelu_y = leakyrelu(x, p)
leakyrelu_derivative_y = leakyrelu_derivative(x, p)
set_plt_ax()
plt.plot(x, leakyrelu_y, color='C9', label='LeakyReLU')
plt.plot(x, leakyrelu_derivative_y, color='C4', label='导数')
plt.xlim(-6, 6)
plt.yticks(np.arange(-1, 7))
plt.legend(loc=2)
plt.show()
###Output
_____no_output_____
###Markdown
Gradient of the Tanh function

The tanh function is defined as:
$$\tanh(x)=\frac{e^x-e^{-x}}{e^x + e^{-x}}= 2 \cdot \text{sigmoid}(2x) - 1$$
Its derivative is:
$$\begin{aligned}\frac{\mathrm{d}}{\mathrm{d} x} \tanh (x) &=\frac{\left(e^{x}+e^{-x}\right)\left(e^{x}+e^{-x}\right)-\left(e^{x}-e^{-x}\right)\left(e^{x}-e^{-x}\right)}{\left(e^{x}+e^{-x}\right)^{2}} \\&=1-\frac{\left(e^{x}-e^{-x}\right)^{2}}{\left(e^{x}+e^{-x}\right)^{2}}=1-\tanh ^{2}(x)\end{aligned}$$
###Code
def sigmoid(x): # sigmoid 函数实现
return 1 / (1 + np.exp(-x))
def tanh(x): # tanh 函数实现
return 2*sigmoid(2*x) - 1
def tanh_derivative(x): # tanh 导数实现
return 1-tanh(x)**2
x = np.arange(-6.0, 6.0, 0.1)
tanh_y = tanh(x)
tanh_derivative_y = tanh_derivative(x)
set_plt_ax()
plt.plot(x, tanh_y, color='C9', label='Tanh')
plt.plot(x, tanh_derivative_y, color='C4', label='导数')
plt.xlim(-6, 6)
plt.ylim(-1.5, 1.5)
plt.legend(loc=2)
plt.show()
###Output
_____no_output_____
###Markdown
The chain rule
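The two-layer network built in the code below is $y_1 = x w_1 + b_1$ and $y_2 = y_1 w_2 + b_2$, so by the chain rule
$$\frac{\partial y_2}{\partial w_1} = \frac{\partial y_2}{\partial y_1} \cdot \frac{\partial y_1}{\partial w_1} = w_2 \cdot x = 2 \cdot 1 = 2,$$
which matches the two identical values of 2.0 printed by the verification.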
###Code
import tensorflow as tf
# 构建待优化变量
x = tf.constant(1.)
w1 = tf.constant(2.)
b1 = tf.constant(1.)
w2 = tf.constant(2.)
b2 = tf.constant(1.)
# 构建梯度记录器
with tf.GradientTape(persistent=True) as tape:
# 非 tf.Variable 类型的张量需要人为设置记录梯度信息
tape.watch([w1, b1, w2, b2])
# 构建 2 层线性网络
y1 = x * w1 + b1
y2 = y1 * w2 + b2
# 独立求解出各个偏导数
dy2_dy1 = tape.gradient(y2, [y1])[0]
dy1_dw1 = tape.gradient(y1, [w1])[0]
dy2_dw1 = tape.gradient(y2, [w1])[0]
# 验证链式法则, 2 个输出应相等
print(dy2_dy1 * dy1_dw1)
print(dy2_dw1)
###Output
tf.Tensor(2.0, shape=(), dtype=float32)
tf.Tensor(2.0, shape=(), dtype=float32)
###Markdown
Optimizing the Himmelblau function in practice

The Himmelblau function is one of the standard test functions for optimization algorithms. It has two independent variables $x$ and $y$ and is defined as:
$$f(x, y)=\left(x^{2}+y-11\right)^{2}+\left(x+y^{2}-7\right)^{2}$$
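The gradients that `tf.GradientTape` computes below can also be written out analytically, which is handy for sanity-checking the gradient-descent updates:
$$\frac{\partial f}{\partial x} = 4x\left(x^{2}+y-11\right) + 2\left(x+y^{2}-7\right), \qquad \frac{\partial f}{\partial y} = 2\left(x^{2}+y-11\right) + 4y\left(x+y^{2}-7\right)$$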
###Code
from mpl_toolkits.mplot3d import Axes3D
def himmelblau(x):
# himmelblau 函数实现,传入参数 x 为 2 个元素的 List
return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2
x = np.arange(-6, 6, 0.1) # 可视化的 x 坐标范围为-6~6
y = np.arange(-6, 6, 0.1) # 可视化的 y 坐标范围为-6~6
print('x,y range:', x.shape, y.shape)
# 生成 x-y 平面采样网格点,方便可视化
X, Y = np.meshgrid(x, y)
print('X,Y maps:', X.shape, Y.shape)
Z = himmelblau([X, Y]) # 计算网格点上的函数值
# 绘制 himmelblau 函数曲面
fig = plt.figure('himmelblau')
ax = fig.gca(projection='3d') # 设置 3D 坐标轴
ax.plot_surface(X, Y, Z, cmap = plt.cm.rainbow ) # 3D 曲面图
ax.view_init(60, -30)
ax.set_xlabel('x')
ax.set_ylabel('y')
plt.show()
# 参数的初始化值对优化的影响不容忽视,可以通过尝试不同的初始化值,
# 检验函数优化的极小值情况
# [1., 0.], [-4, 0.], [4, 0.]
# 初始化参数
x = tf.constant([4., 0.])
for step in range(200):# 循环优化 200 次
with tf.GradientTape() as tape: #梯度跟踪
tape.watch([x]) # 加入梯度跟踪列表
y = himmelblau(x) # 前向传播
# 反向传播
grads = tape.gradient(y, [x])[0]
# 更新参数,0.01 为学习率
x -= 0.01*grads
# 打印优化的极小值
if step % 20 == 19:
print ('step {}: x = {}, f(x) = {}'.format(step, x.numpy(), y.numpy()))
###Output
step 19: x = [ 3.5381215 -1.3465767], f(x) = 3.7151756286621094
step 39: x = [ 3.5843277 -1.8470242], f(x) = 3.451140582910739e-05
step 59: x = [ 3.584428 -1.8481253], f(x) = 4.547473508864641e-11
step 79: x = [ 3.584428 -1.8481264], f(x) = 1.1368684856363775e-12
step 99: x = [ 3.584428 -1.8481264], f(x) = 1.1368684856363775e-12
step 119: x = [ 3.584428 -1.8481264], f(x) = 1.1368684856363775e-12
step 139: x = [ 3.584428 -1.8481264], f(x) = 1.1368684856363775e-12
step 159: x = [ 3.584428 -1.8481264], f(x) = 1.1368684856363775e-12
step 179: x = [ 3.584428 -1.8481264], f(x) = 1.1368684856363775e-12
step 199: x = [ 3.584428 -1.8481264], f(x) = 1.1368684856363775e-12
###Markdown
Backpropagation algorithm in practice
###Code
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
plt.rcParams['font.size'] = 16
plt.rcParams['font.family'] = ['STKaiti']
plt.rcParams['axes.unicode_minus'] = False
def load_dataset():
# 采样点数
N_SAMPLES = 2000
# 测试数量比率
TEST_SIZE = 0.3
# 利用工具函数直接生成数据集
X, y = make_moons(n_samples=N_SAMPLES, noise=0.2, random_state=100)
# 将 2000 个点按着 7:3 分割为训练集和测试集
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=TEST_SIZE, random_state=42)
return X, y, X_train, X_test, y_train, y_test
def make_plot(X, y, plot_name, XX=None, YY=None, preds=None, dark=False):
# 绘制数据集的分布, X 为 2D 坐标, y 为数据点的标签
if (dark):
plt.style.use('dark_background')
else:
sns.set_style("whitegrid")
plt.figure(figsize=(16, 12))
axes = plt.gca()
axes.set(xlabel="$x_1$", ylabel="$x_2$")
plt.title(plot_name, fontsize=30)
plt.subplots_adjust(left=0.20)
plt.subplots_adjust(right=0.80)
if XX is not None and YY is not None and preds is not None:
plt.contourf(XX, YY, preds.reshape(XX.shape), 25, alpha=1, cmap=plt.cm.Spectral)
plt.contour(XX, YY, preds.reshape(XX.shape), levels=[.5], cmap="Greys", vmin=0, vmax=.6)
# 绘制散点图,根据标签区分颜色
plt.scatter(X[:, 0], X[:, 1], c=y.ravel(), s=40, cmap=plt.cm.Spectral, edgecolors='none')
plt.show()
X, y, X_train, X_test, y_train, y_test = load_dataset()
# 调用 make_plot 函数绘制数据的分布,其中 X 为 2D 坐标, y 为标签
make_plot(X, y, "Classification Dataset Visualization ")
class Layer:
# 全连接网络层
def __init__(self, n_input, n_neurons, activation=None, weights=None,
bias=None):
"""
:param int n_input: 输入节点数
:param int n_neurons: 输出节点数
:param str activation: 激活函数类型
:param weights: 权值张量,默认类内部生成
:param bias: 偏置,默认类内部生成
"""
# 通过正态分布初始化网络权值,初始化非常重要,不合适的初始化将导致网络不收敛
self.weights = weights if weights is not None else np.random.randn(n_input, n_neurons) * np.sqrt(1 / n_neurons)
self.bias = bias if bias is not None else np.random.rand(n_neurons) * 0.1
self.activation = activation # 激活函数类型,如’sigmoid’
self.last_activation = None # 激活函数的输出值o
self.error = None # 用于计算当前层的delta 变量的中间变量
self.delta = None # 记录当前层的delta 变量,用于计算梯度
# 网络层的前向传播函数实现如下,其中last_activation 变量用于保存当前层的输出值:
def activate(self, x):
# 前向传播函数
r = np.dot(x, self.weights) + self.bias # X@W+b
# 通过激活函数,得到全连接层的输出o
self.last_activation = self._apply_activation(r)
return self.last_activation
# 上述代码中的self._apply_activation 函数实现了不同类型的激活函数的前向计算过程,
# 尽管此处我们只使用Sigmoid 激活函数一种。代码如下:
def _apply_activation(self, r):
# 计算激活函数的输出
if self.activation is None:
return r # 无激活函数,直接返回
# ReLU 激活函数
elif self.activation == 'relu':
return np.maximum(r, 0)
# tanh 激活函数
elif self.activation == 'tanh':
return np.tanh(r)
# sigmoid 激活函数
elif self.activation == 'sigmoid':
return 1 / (1 + np.exp(-r))
return r
# 针对于不同类型的激活函数,它们的导数计算实现如下:
def apply_activation_derivative(self, r):
# 计算激活函数的导数
# 无激活函数,导数为1
if self.activation is None:
return np.ones_like(r)
# ReLU 函数的导数实现
elif self.activation == 'relu':
grad = np.array(r, copy=True)
grad[r > 0] = 1.
grad[r <= 0] = 0.
return grad
# tanh 函数的导数实现
elif self.activation == 'tanh':
return 1 - r ** 2
# Sigmoid 函数的导数实现
elif self.activation == 'sigmoid':
return r * (1 - r)
return r
# 神经网络模型
class NeuralNetwork:
def __init__(self):
self._layers = [] # 网络层对象列表
def add_layer(self, layer):
# 追加网络层
self._layers.append(layer)
# 网络的前向传播只需要循环调各个网络层对象的前向计算函数即可,代码如下:
# 前向传播
def feed_forward(self, X):
for layer in self._layers:
# 依次通过各个网络层
X = layer.activate(X)
return X
def backpropagation(self, X, y, learning_rate):
# 反向传播算法实现
# 前向计算,得到输出值
output = self.feed_forward(X)
for i in reversed(range(len(self._layers))): # 反向循环
layer = self._layers[i] # 得到当前层对象
# 如果是输出层
if layer == self._layers[-1]: # 对于输出层
layer.error = y - output # 计算2 分类任务的均方差的导数
# 关键步骤:计算最后一层的delta,参考输出层的梯度公式
layer.delta = layer.error * layer.apply_activation_derivative(output)
else: # 如果是隐藏层
next_layer = self._layers[i + 1] # 得到下一层对象
layer.error = np.dot(next_layer.weights, next_layer.delta)
# 关键步骤:计算隐藏层的delta,参考隐藏层的梯度公式
layer.delta = layer.error * layer.apply_activation_derivative(layer.last_activation)
# 循环更新权值
for i in range(len(self._layers)):
layer = self._layers[i]
# o_i 为上一网络层的输出
o_i = np.atleast_2d(X if i == 0 else self._layers[i - 1].last_activation)
# 梯度下降算法,delta 是公式中的负数,故这里用加号
layer.weights += layer.delta * o_i.T * learning_rate
def train(self, X_train, X_test, y_train, y_test, learning_rate, max_epochs):
# 网络训练函数
# one-hot 编码
y_onehot = np.zeros((y_train.shape[0], 2))
y_onehot[np.arange(y_train.shape[0]), y_train] = 1
# 将One-hot 编码后的真实标签与网络的输出计算均方误差,并调用反向传播函数更新网络参数,循环迭代训练集1000 遍即可
mses = []
accuracys = []
for i in range(max_epochs + 1): # 训练1000 个epoch
for j in range(len(X_train)): # 一次训练一个样本
self.backpropagation(X_train[j], y_onehot[j], learning_rate)
if i % 10 == 0:
# 打印出MSE Loss
mse = np.mean(np.square(y_onehot - self.feed_forward(X_train)))
mses.append(mse)
accuracy = self.accuracy(self.predict(X_test), y_test.flatten())
accuracys.append(accuracy)
print('Epoch: #%s, MSE: %f' % (i, float(mse)))
# 统计并打印准确率
print('Accuracy: %.2f%%' % (accuracy * 100))
return mses, accuracys
def predict(self, X):
return self.feed_forward(X)
def accuracy(self, X, y):
return np.sum(np.equal(np.argmax(X, axis=1), y)) / y.shape[0]
nn = NeuralNetwork() # 实例化网络类
nn.add_layer(Layer(2, 25, 'sigmoid')) # 隐藏层 1, 2=>25
nn.add_layer(Layer(25, 50, 'sigmoid')) # 隐藏层 2, 25=>50
nn.add_layer(Layer(50, 25, 'sigmoid')) # 隐藏层 3, 50=>25
nn.add_layer(Layer(25, 2, 'sigmoid')) # 输出层, 25=>2
mses, accuracys = nn.train(X_train, X_test, y_train, y_test, 0.01, 1000)
x = [i for i in range(0, 101, 10)]
# Plot the MSE loss curve
plt.title("MSE Loss")
plt.plot(x, mses[:11], color='blue')
plt.xlabel('Epoch')
plt.ylabel('MSE')
plt.show()
# Plot the accuracy curve
plt.title("Accuracy")
plt.plot(x, accuracys[:11], color='blue')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.show()
###Output
_____no_output_____ |
jupyter/07_align_spine.ipynb | ###Markdown
07 Align the spine

We need to make sure that all the fruits are standing upright before further modeling.

- Some spine tissue is very irregular by the sides (flaps between flesh wedges)
- In such cases, it is better to erode the spine to remove these extraneous flaps and keep just a bare column
- The column is aligned via PCA
- Since the sign of the eigenvectors is arbitrary when computed, a final visual inspection is done to ensure that the fruit is standing on its base.

The alignment is stored as a 3x3 rotation matrix `vh` for each fruit, as sketched below.
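A minimal sketch of the underlying idea, assuming only NumPy (the notebook itself delegates this step to `citrus_utils.spine_based_alignment`, whose implementation is not shown here): center the spine voxel coordinates, take the right singular vectors of the centered point cloud as the principal axes, and use that 3x3 matrix as `vh`.

```python
import numpy as np

def pca_align(coords):
    """coords: (N, 3) array of voxel coordinates of the (eroded) spine.
    Returns the aligned coordinates and the 3x3 rotation matrix vh."""
    centered = coords - coords.mean(axis=0)                    # center the point cloud
    _, _, vh = np.linalg.svd(centered, full_matrices=False)    # rows of vh are the principal axes
    aligned = centered @ vh.T                                  # first axis becomes the spine's long axis
    # The sign of each eigenvector is arbitrary; flip if the fruit ends up upside down:
    # vh[0] = -vh[0]
    return aligned, vh
```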
###Code
import numpy as np
import pandas as pd
import glob
import os
import warnings
warnings.filterwarnings( "ignore")
import matplotlib.pyplot as plt
%matplotlib inline
import tifffile as tf
from scipy import ndimage
import citrus_utils as vitaminC
tissue_src = '../data/tissue/'
bnames = [os.path.split(x)[-1] for x in sorted(glob.glob(tissue_src + 'WR*'))]
for i in range(len(bnames)):
print(i, '\t', bnames[i])
footpoints = 'geocentric'
oil_src = '../data/oil/'
oil_dst = '../data/glands/'
bname = bnames[0]
L = 3
lname = 'L{:02d}'.format(L)
src = oil_src + bname + '/' + lname + '/'
savefig = True
dst = '../data/spine/'
if not os.path.isdir(dst):
os.makedirs(dst)
spinename = tissue_src + bname + '/' + lname + '/' + bname + '_' + lname + '_spine.tif'
exoname = tissue_src + bname + '/' + lname + '/' + bname + '_' + lname + '_exocarp.tif'
print(spinename)
exo = tf.imread(exoname)
spine = tf.imread(spinename)
scoords = np.asarray(np.nonzero(spine))
snaps = vitaminC.collapse_dimensions(spine)
vitaminC.plot_collapse_dimensions(snaps, bname, 'spine')
###Output
_____no_output_____
###Markdown
Plot the original exocarp to get a sense of whether the fruit is standing upright as it is.
- This one, `WR05`, is almost upright
###Code
snaps = vitaminC.collapse_dimensions(exo)
vitaminC.plot_collapse_dimensions(snaps, bname, 'exocarp')
sz = 3
espine = ndimage.grey_erosion(spine, size=(sz,sz,sz))
tspine = vitaminC.get_largest_element(espine)
###Output
1 components
[314429]
###Markdown
- Eroded spine
- The `x,y,z` coordinates have been aligned via PCA
- The plot confirms that the spine is standing upright
###Code
vh = vitaminC.spine_based_alignment(tspine, 'eroded spine', savefig=False, dst=dst)
###Output
_____no_output_____
###Markdown
If the spine were to be standing upside down, we can flip the rotation by doing
```
vh[0] = -vh[0]
```
Save the rotation matrix `vh` in the same folder as the spine scan.
###Code
filename = tissue_src + bname + '/' + lname + '/' + bname + '_' + lname + '_vh_alignment.csv'
np.savetxt(filename, vh, delimiter=',')
###Output
_____no_output_____
###Markdown
Verify that `vh` is the right rotation- Rotate the oil gland tissues and check if the fruit looks standing upright
###Code
filename = src + bname + '_glands.tif'
img = tf.imread(filename)
centers = np.asarray(np.nonzero(img))
glands = np.matmul(centers.T, np.transpose(vh))
centerby = np.mean(glands, axis = 0)
scaleby = .5*np.std(glands[:,0])
glands = (glands - centerby)/scaleby
title = bname + '_' + lname + ' aligned glands'
vitaminC.plot_3Dprojections(glands, title=title, writefig=False, dst=dst)
###Output
_____no_output_____ |
Chapter.7/9.ipynb | ###Markdown
Chapter 7===============================
###Code
from sklearn.datasets import load_boston
import matplotlib.pyplot as plt
import numpy as np
boston = load_boston()
x_data = boston.data
y_data = boston.target.reshape(boston.target.size, 1)
x_data[:3]
from sklearn import preprocessing
minmax_scale = preprocessing.MinMaxScaler().fit(x_data)
x_scaled_data = minmax_scale.transform(x_data)
x_scaled_data[:3]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x_scaled_data,
y_data,
test_size = 0.2)
from sklearn import linear_model
regr = linear_model.LinearRegression(fit_intercept = True,
normalize = False,
copy_X = True,
n_jobs = 8)
regr.fit(X_train, y_train)
regr
y_true = y_test
y_pred = regr.predict(X_test)
np.sqrt(((y_true - y_pred) ** 2).sum() / len(y_true))
from sklearn.metrics import mean_squared_error
np.sqrt(mean_squared_error(y_true, y_pred))
print('Coefficients: ', regr.coef_)
print('intercept: ', regr.intercept_)
regr.predict(x_data[:5])
x_data[:5].dot(regr.coef_.T) + regr.intercept_
from sklearn.metrics import r2_score
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error
y_true = y_test
y_hat = regr.predict(X_test)
r2_score(y_true, y_hat), mean_absolute_error(y_true, y_hat), mean_squared_error(y_true, y_hat)
y_true = y_train
y_hat = regr.predict(X_train)
r2_score(y_true, y_hat), mean_absolute_error(y_true, y_hat), mean_squared_error(y_true, y_hat)
###Output
_____no_output_____ |
BERT_Quora_Dataset.ipynb | ###Markdown
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
import os
os.environ["CUDA_DEVICE_ORDER"]="PCI_BUS_ID";
os.environ["CUDA_VISIBLE_DEVICES"]="0";
###Output
_____no_output_____
###Markdown
**Coursework for the subject SCC5871 - Machine Learning Algorithms**

Group 611919610 - Fernando Torres Ferreira da Silva

Download and installation of kaggle

Tutorial: https://www.kaggle.com/general/74235
###Code
! pip install -q kaggle
from google.colab import files
files.upload()
! mkdir ~/.kaggle
! cp kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
#Check if everything is OK!
! kaggle datasets list
###Output
_____no_output_____
###Markdown
Download the Dataset

**Data:** Quora Question Pairs - https://www.kaggle.com/c/quora-question-pairs/data
###Code
!kaggle competitions download -c quora-question-pairs
!unzip train.csv.zip -d train
###Output
Warning: Looks like you're using an outdated API Version, please consider updating (server 1.5.12 / client 1.5.4)
Downloading test.csv.zip to /content
100% 173M/173M [00:01<00:00, 107MB/s]
100% 173M/173M [00:01<00:00, 96.0MB/s]
Downloading train.csv.zip to /content
43% 9.00M/21.2M [00:00<00:00, 25.4MB/s]
100% 21.2M/21.2M [00:00<00:00, 43.0MB/s]
test.csv.zip: Skipping, found more recently modified local copy (use --force to force download)
Downloading sample_submission.csv.zip to /content
0% 0.00/4.95M [00:00<?, ?B/s]
100% 4.95M/4.95M [00:00<00:00, 44.9MB/s]
Archive: train.csv.zip
inflating: train/train.csv
###Markdown
**Requirements** Import Libs
###Code
# Importing libraries
import pandas as pd
!pip install ktrain
import ktrain
from ktrain import text
dataset = pd.read_csv('/content/train/train.csv')
dataset = dataset.dropna()
dataset
###Output
_____no_output_____
###Markdown
Reducing the dataset and splitting it into training and test sets
###Code
dataset_size = 200000
train_size = dataset_size * 0.3 # 30% for training
# test_size = dataset_size - train_size
# Select the first 100k non-duplicate pairs
dif = dataset.loc[dataset.is_duplicate == 0]
dif_part = dif.iloc[:int(dataset_size/2),:]
train_dif_part = dif_part.iloc[:int(train_size/2),:]
test_dif_part = dif_part.iloc[int(train_size/2):,:]
# Select the first 100k duplicate pairs
equal = dataset.loc[dataset.is_duplicate == 1]
equal_part = equal.iloc[:int(dataset_size/2),:]
train_equal_part = equal_part.iloc[:int(train_size/2),:]
test_equal_part = equal_part.iloc[int(train_size/2):,:]
# Concatenate the two sets
train_df = pd.concat([train_dif_part,train_equal_part])
test_df = pd.concat([test_dif_part,test_equal_part])
x_train = train_df[['question1', 'question2']].values
y_train = train_df['is_duplicate'].values
x_test = test_df[['question1', 'question2']].values
y_test = test_df['is_duplicate'].values
# # IMPORTANT: data format for sentence pair classification is list of tuples of form (str, str)
x_train = list(map(tuple, x_train))
x_test = list(map(tuple, x_test))
y_test[36252]
print(x_train[0])
print(y_train[0])
MODEL_NAME = 'bert-base-uncased'
t = text.Transformer(MODEL_NAME, maxlen=128, class_names=['non-duplicate', 'duplicate'])
trn = t.preprocess_train(x_train, y_train)
val = t.preprocess_test(x_test, y_test)
model = t.get_classifier()
learner = ktrain.get_learner(model, train_data=trn, val_data=val, batch_size=32) # lower bs if OOM occurs
# tests should use values smaller than 5, since according to the professor these give the best results
learner.fit_onecycle(5e-5, 3)
#learner.fit_onecycle(4e-4, 3)
#learner.fit_onecycle(3e-4, 3)
#learnet.fit_onecycle(1e-3, 3)
learner.validate(class_names=t.get_classes())
###Output
_____no_output_____
###Markdown
------- 5e-5, 3 ------ (RESULTADOS DO ONECYCLE)begin training using onecycle policy with max lr of 5e-05...Epoch 1/31875/1875 [==============================] - 4853s 3s/step - loss: 0.4045 - accuracy: 0.8132 - val_loss: 0.3263 - val_accuracy: 0.8612Epoch 2/31875/1875 [==============================] - 4863s 3s/step - loss: 0.2737 - accuracy: 0.8885 - val_loss: 0.2992 - val_accuracy: 0.8748Epoch 3/31875/1875 [==============================] - 4871s 3s/step - loss: 0.1285 - accuracy: 0.9544 - val_loss: 0.3519 - val_accuracy: 0.8813(RESULTADOS VAlIDATE)---------------------------------------------------------------------------learner.fit_onecycle(5e-4, 3)begin training using onecycle policy with max lr of 0.0005...Epoch 1/31875/1875 [==============================] - 2755s 1s/step - loss: 0.5757 - accuracy: 0.6462 - val_loss: 0.6940 - val_accuracy: 0.5000Epoch 2/31875/1875 [==============================] - 2752s 1s/step - loss: 0.6953 - accuracy: 0.4990 - val_loss: 0.6932 - val_accuracy: 0.5000Epoch 3/31875/1875 [==============================] - 2740s 1s/step - loss: 0.6935 - accuracy: 0.5007 - val_loss: 0.6932 - val_accuracy: 0.5000 precision recall f1-score supportnon-duplicate 0.00 0.00 0.00 70000 duplicate 0.50 1.00 0.67 70000 accuracy 0.50 140000 macro avg 0.25 0.50 0.33 140000 weighted avg 0.25 0.50 0.33 140000/usr/local/lib/python3.6/dist-packages/sklearn/metrics/_classification.py:1272: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result))array([[ 0, 70000], [ 0, 70000]])--------------------------------------------------------------------------- begin training using onecycle policy with max lr of 5e-05...Epoch 1/31875/1875 [==============================] - 2642s 1s/step - loss: 0.3981 - accuracy: 0.8178 - val_loss: 0.3220 - val_accuracy: 0.8608Epoch 2/31875/1875 [==============================] - 2647s 1s/step - loss: 0.2785 - accuracy: 0.8860 - val_loss: 0.3055 - val_accuracy: 0.8705Epoch 3/31875/1875 [==============================] - 2652s 1s/step - loss: 0.1325 - accuracy: 0.9543 - val_loss: 0.3516 - val_accuracy: 0.8796 precision recall f1-score supportnon-duplicate 0.89 0.87 0.88 70000 duplicate 0.87 0.89 0.88 70000 accuracy 0.88 140000 macro avg 0.88 0.88 0.88 140000 weighted avg 0.88 0.88 0.88 140000array([[60604, 9396], [ 7454, 62546]]) **Plotando a perda do modelo**
###Code
learner.plot()
predictor = ktrain.get_predictor(learner.model, t)
y_test[69980:70010]
positive = x_test[69980]
negative = x_test[70010]
print('Duplicate:\n%s' %(positive,))
print('Non-Duplicate:\n%s' %(negative,))
predictor.predict(positive)
predictor.predict(negative)
predictor.predict([positive, negative])
predictor.save('/tmp/mrpc_model')
p = ktrain.load_predictor('/tmp/mrpc_model')
p.predict(positive)
###Output
_____no_output_____ |
Neural_Network_Structure.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.datasets import load_iris
from sklearn.linear_model import Perceptron
iris = load_iris()
iris.target
iris.data[:5, :]
data = iris.data[:, (2, 3)]
labels = iris.target
plt.figure(figsize=(13,6))
plt.scatter(data[:, 0], data[:, 1], c=labels, cmap=plt.cm.Set1, edgecolor='face')
plt.xlabel('Petal Length (cm)')
plt.ylabel('Petal Width (cm)')
plt.show()
X = iris.data[:, (2, 3)]
y = (iris.target == 2).astype(np.int)
test_perceptron = Perceptron()
test_perceptron.fit(X, y)
y1_pred = test_perceptron.predict([[5.1, 2]])
y2_pred = test_perceptron.predict([[1.4, 0.2]])
print('Prediction 1:', y1_pred, 'Prediction 2:', y2_pred)
###Output
Prediction 1: [1] Prediction 2: [0]
|
Random Forest Benchmark.ipynb | ###Markdown
Initiate Training and Testing
###Code
# Imports needed by the cells below. The helpers `load_dataset` and `get_cmd_output`
# are project-specific and assumed to be defined elsewhere in the repository;
# `GPU` is assumed to come from the GPUtil package.
import time
import random
import statistics
from datetime import datetime, timedelta

import numpy as np
import pandas as pd
import torch
import psutil
import humanize
import GPUtil as GPU
from tqdm import trange
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, average_precision_score

from google.colab import drive
drive.mount('/content/drive')
drive_path = "/content/drive/MyDrive/Colab Notebooks/MSC_21"
method_dir = "RandomForest"
log_path = f"{drive_path}/{method_dir}/logs/"
# PARAMETERS
# dude-gprc, tox21, muv
dataset = 'dude-gpcr'
with_bonds = False
rounds = 20
n_query = 64 # per class
episodes = 10000
lr = 0.001
balanced_queries = True
randomseed = 12
torch.manual_seed(randomseed)
np.random.seed(randomseed)
random.seed(randomseed)
torch.cuda.manual_seed(randomseed)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
torch.backends.cudnn.is_available()
torch.backends.cudnn.benchmark = False # disable the cuDNN auto-tuner (which would pick the fastest conv algo) for determinism
torch.backends.cudnn.deterministic = True
combinations = [
[10, 10],
[5, 10],
[1, 10],
[1, 5],
[1, 1]
]
cols = [
'DATE', 'CPU', 'CPU COUNT', 'GPU', 'GPU RAM', 'RAM', 'CUDA',
'REF', 'DATASET', 'ARCHITECTURE',
'SPLIT', 'TARGET', 'ACCURACY', 'ROC', 'PRC',
'TRAIN ROC', 'TRAIN PRC', 'EPISODES', 'TRAINING TIME', 'ROC_VALUES', 'PRC_VALUES'
]
train_dfs, test_dfs = load_dataset(dataset, bonds=with_bonds, feat='ecfp', create_new=False)
dt_string = datetime.now().strftime("%d/%m/%Y %H:%M:%S")
cpu = get_cmd_output('cat /proc/cpuinfo | grep -E "model name"')
cpu = cpu.split('\n')[0].split('\t: ')[-1]
cpu_count = psutil.cpu_count()
cuda_version = get_cmd_output('nvcc --version | grep -E "Build"')
gpu = get_cmd_output("nvidia-smi -L")
general_ram_gb = humanize.naturalsize(psutil.virtual_memory().available)
gpu_ram_total_mb = GPU.getGPUs()[0].memoryTotal
for c in combinations:
n_pos = c[0]
n_neg = c[1]
results = pd.DataFrame(columns=cols)
for target in test_dfs.keys():
print(target)
running_roc = []
running_prc = []
for round in trange(rounds):
start_time = time.time()
df = test_dfs[target]
support_neg = df[df['y'] == 0].sample(n_neg)
support_pos = df[df['y'] == 1].sample(n_pos)
train_data = pd.concat([support_neg, support_pos])
test_data = df.drop(train_data.index)
train_data = train_data.sample(frac=1)
test_data = test_data.sample(frac=1)
train_X, train_y = list(train_data['mol'].to_numpy()), train_data['y'].to_numpy(dtype=np.int16)
test_X, test_y = list(test_data['mol'].to_numpy()), test_data['y'].to_numpy(dtype=np.int16)
model = RandomForestClassifier(n_estimators=100)
model.fit(train_X, train_y)
probs_y = model.predict_proba(test_X)
roc = roc_auc_score(test_y, probs_y[:, 1])
prc = average_precision_score(test_y, probs_y[:, 1])
running_roc.append(roc)
running_prc.append(prc)
end_time = time.time()
duration = str(timedelta(seconds=(end_time - start_time)))
rounds_roc = f"{statistics.mean(running_roc):.3f} \u00B1 {statistics.stdev(running_roc):.3f}"
rounds_prc = f"{statistics.mean(running_prc):.3f} \u00B1 {statistics.stdev(running_prc):.3f}"
rec = pd.DataFrame([[dt_string, cpu, cpu_count, gpu, gpu_ram_total_mb, general_ram_gb, cuda_version, "MSC",
dataset, method_dir, f"{n_pos}+/{n_neg}-", target, 0, rounds_roc, rounds_prc,
0, 0, 0, duration, running_roc, running_prc
]], columns=cols)
results = pd.concat([results, rec])
results.to_csv(f"{drive_path}/results/{dataset}_{method_dir}_pos{n_pos}_neg{n_neg}.csv", index=False)
# model.score(test_X, test_y)
# pred_y = model.predict(test_X)
# model.classes_
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay
preds = model.predict(test_X)
confusion_matrix(test_y, preds)
ConfusionMatrixDisplay.from_predictions(test_y, preds)
###Output
_____no_output_____ |
The-Lego-Collector's-Dilemma-Project/Solution.ipynb | ###Markdown
Load and split the dataset
- Load the train data and, using all your knowledge of pandas, try to explore the different statistical properties of the dataset.
- Separate the features and target and then split the train data into train and validation sets.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
import seaborn as sns
import warnings
warnings.filterwarnings("ignore")
# Code starts here
train_data=pd.read_csv('train.csv')
test_data=pd.read_csv('test.csv')
test_data.head()
#np.mean(train['list_price'])
train=train_data[['ages','num_reviews','piece_count','play_star_rating','review_difficulty','star_rating','theme_name','val_star_rating','country','Id']]
validation= train_data['list_price']
train.head()
validation.head()
validation
# Code ends here.
###Output
_____no_output_____
###Markdown
Data Visualization
- All the features, including the target variable, are continuous.
- Check out the best plots for plotting continuous features against each other and try making some inferences from these plots.
###Code
# Code starts here
train.hist(column='star_rating',bins=10, figsize=(7,5), color='blue')
plt.show()
train.boxplot( 'num_reviews',showfliers=False, figsize=(10,5))
# validation is a Series, so no column argument is needed here
validation.hist(bins=10, figsize=(7,5), color='blue')
# Code ends here.
###Output
_____no_output_____
###Markdown
Feature Selection- Try selecting suitable threshold and accordingly drop the columns.
###Code
# Code starts here
train.head()
test_data.head()
# Code ends here.
###Output
_____no_output_____
###Markdown
Model building
###Code
# Code starts here
linearg = LinearRegression()
linearg.fit(train, validation)
pred= linearg.predict(test_data)
y_pred=pred.tolist()
y_pred
# Code ends here.
###Output
_____no_output_____
###Markdown
Residual check!- Check the distribution of the residual.
###Code
# Code starts here
#rsqaured = r2_score(test_data, np.exp(pred))
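# A minimal residual check (a sketch): the Kaggle test set has no labels,
# so we look at the distribution of residuals on the training data instead.
residuals = validation - linearg.predict(train)
plt.hist(residuals, bins=30, color='blue')
plt.title('Distribution of residuals (train data)')
plt.xlabel('Residual')
plt.ylabel('Frequency')
plt.show()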
# Code ends here.
###Output
_____no_output_____
###Markdown
Prediction on the test data and creating the sample submission file.
- Load the test data and store the `Id` column in a separate variable.
- Perform the same operations on the test data that you have performed on the train data.
- Create the submission file as a `csv` file consisting of the `Id` column from the test data and your prediction as the second column.
###Code
# Code starts here
id=test_data['Id']
id
my_dict={ 'Id': id, 'list_price': y_pred}
df=pd.DataFrame(my_dict)
df.set_index('Id')
df.to_csv('submission.csv')
df
# Code ends here.
###Output
_____no_output_____ |
Python Files/Occupation by Sex.ipynb | ###Markdown
Visual Analytics Coursework (Student Number - 2056224)
###Code
#importing libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
data = pd.read_csv('Occupation by Sex.csv')
data
###Output
_____no_output_____
###Markdown
Data Pre-processing Following are the first steps towards data pre-processing:1. The data is for year 2011, so there is no use of keeping the respective column with date. REMOVING COLUMN "date".2. Check if columns "geography" and "geography code" are same. If yes, remove "geography code".3. The words "measures: value" are same in every column. Removing just the respective names to make the column names look better and easily understandable. 4. The data has 'total' as Rural Urban, so there is no use of keeping the respective column. REMOVING COLUMN "Rural Urban".
###Code
data['date'].unique()
data['Rural Urban'].unique()
#Removing the date and rural urban column
data.drop('date', axis=1, inplace=True)
data.drop('Rural Urban', axis=1, inplace=True)
#Checing the columns geography and geography code are same or not?
if data['geography'].equals(data['geography code']) == True:
print("The columns have same values.")
else:
print("The columns does not have same values.")
#The values are same, so dropping the column geography code.
data.drop('geography code', axis=1, inplace=True)
#Check if every column name ends with "measures: value".
name_lst = data.columns
for i in range(1, len(name_lst)):
if name_lst[i].endswith('measures: Value') == False:
print("{} does not ends with 'measures: value'. ".format(name_lst[i]))
#Since the column names ends with the respective words, removing the respective from column names.
data.columns = data.columns.str.replace('; measures: Value', '')
data.columns
###Output
_____no_output_____
###Markdown
The second step of data pre-processing is creating sub-columns for each set of column names belonging to the same category.
###Code
def col_split(x : pd.Series):
y = x['variable'].split(';')
x['Sex'] = y[0]
x['Occupation'] = y[1]
return x
new_data = data.melt(id_vars=['geography']).apply(col_split, axis = 1)
#data.melt(id_vars=['geography']).apply(col_split, axis = 1)
#Dropping the column 'variable' as it is of no use now.
new_data.drop('variable', axis=1, inplace=True)
#Removing the common name 'Sex: ' from 'Sex' column as it is of no use.
new_data['Sex'] = new_data['Sex'].str.replace('Sex: ', '')
#Removing the common name 'Occupation: ' from 'Occupation' column as it is of no use.
new_data['Occupation'] = new_data['Occupation'].str.replace('Occupation: ', '')
#bringing the 'value' column at the end of dataframe
new_data['Counts'] = new_data['value']
new_data.drop('value', axis=1, inplace=True)
new_data['Sex'].unique()
new_data['Occupation'].unique()
new_data[['area code', 'area name']] = new_data['geography'].str.split('-', 1, expand=True)
new_data.head(10)
new_data.tail(10)
new_data.to_csv('Occupation by Sex clean.csv', index=False)
###Output
_____no_output_____ |
Prepare_Dependency_Gen.ipynb | ###Markdown
Podium Dataflow Dependency Tree

Introduction

This Jupyter Notebook demonstrates how it is possible to generate a QDC dataflow / source / entity dependency tree from the QDC metadata database (in this case PostgreSQL) using Python.

The starting point for the example output (below) is the name of a dataflow, "prod_stg_compsych_member_t"; this starting point should be the dataflow whose predecessor dataflows you are interested in determining.

The process will look at the LOADERs in the starting point dataflow and look back up the dependency tree to determine the source of each of the LOADERs. These sources can be a QDC Source / Entity or the output from a prior dataflow. Where the source is the output from a prior dataflow, the process recurses on itself until:

* There are only QDC Source / Entities remaining, or
* The source dataflow is in a stop list of dataflow names provided to the process.

The result of the process is a [NetworkX](https://networkx.github.io) Graph where the nodes are dataflows, sources and entities, and the edges reflect the relationships.

The final Graph is converted to a [DOT](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) file and rendered to the png shown here using the dot.exe command installed with [Graphviz](https://www.graphviz.org/).

```bash
dot.exe -Tpng -o prod_stg_compsych_member_t.png prod_stg_compsych_member_t.dot
```

Sample output

Python Notes

Access to the PostgreSQL database is through Python SQLAlchemy; the coding style chosen here for SQLAlchemy was chosen for clarity and simplicity. I am sure that there are many improvements and optimizations that could be made.

The Python version used here was 3.6.5, installed from the Anaconda distribution, which means that all the packages imported were pre-installed.
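As an alternative to shelling out to `dot.exe`, the same rendering can be done from Python via `pydot` (imported further down); a minimal sketch, assuming Graphviz is installed and on the PATH:

```python
import pydot

# Load the generated DOT file and render it to PNG.
(graph,) = pydot.graph_from_dot_file("prod_stg_compsych_member_t.dot")
graph.write_png("prod_stg_compsych_member_t.png")
```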
###Code
from sqlalchemy import create_engine
from sqlalchemy import MetaData , Table, Column, Integer, Numeric, String, DateTime, ForeignKey, Text
from sqlalchemy import select, desc
from sqlalchemy import and_, or_, not_
from sqlalchemy import text
from sqlalchemy.sql import func
import networkx as nx
import json
import datetime as dt
import os
import yaml
import sys
import pydot
from networkx.drawing.nx_pydot import write_dot
###Output
_____no_output_____
###Markdown
QDC Database Connection and StylesThe QDC repository database connection details are specified in a yaml format and parsed by the `get_podium_cfg()` function below.The input to the function is a stream, this can be a string or an open file handle.This version of the config file contains style attributes that are used in the resulting `dot.exe` generated output file.The expected yaml stream should be:```yaml---pd_connect: dr: user: 'user' pwd: 'pwd' db: 'podium_md' host: 'host.host.com' port: 5436 dev: pd_connect: user: 'user' pwd: 'pwd' db: 'podium_md' host: 'host.host' port: 5436styles: meta: fontname: 'arial' fontsize: 10 meta_edge: fontname: 'arial' fontsize: 10 node: fontname: 'arial' fontsize: 12 bgcolor: '"ECF8B9"' source: shape: 'house' style: 'filled' fillcolor: '"D4E4F7"' entity_source: shape: 'ellipse' style: 'filled' fillcolor: '"3A84D9"' entity_target: shape: 'ellipse' style: 'filled' fillcolor: '"DAC344"' bundle: shape: 'box' style: '"filled,rounded"' fillcolor: '"E9446A"' edge: loader: color: '"66B032"' store: color: '"347B98"' source: color: '"092834"'``` Helper FunctionsMost of these functions are helper functions in accessing the QDC metadata database, the function where the actual work is done is the `get_bundle()` function.
###Code
def get_podium_cfg(yaml_stream):
"""Loads the Podium connection parameters from the input stream"""
    try:
        pd_cfg = yaml.load(yaml_stream, Loader=yaml.SafeLoader)
    except Exception as exc:
        # a plain string cannot be raised in Python 3; raise a real exception instead
        raise ValueError("Unexpected error reading yaml stream") from exc
return pd_cfg
def connect(user, password, db, host, port):
'''Returns a connection and a metadata object'''
url = f'postgresql+psycopg2://{user}:{password}@{host}:{port}/{db}'
# The return value of create_engine() is our connection object
con = create_engine(url, client_encoding='utf8')
# We then bind the connection to MetaData()
meta = MetaData(bind=con)
return con, meta
def get_source(con, meta, source_nid):
"""Retrieve Podium Source row by nid
Parameters
==========
con : SQLAlchemy connection
meta : SQL Alchemy Meta Object
source_nid : Integer source nid
Returns
=======
Uses ResultProxy first() to retrieve one row and close the result set.
"""
assert isinstance(source_nid, int)
pd_source = Table('podium_core.pd_source', meta)
s = pd_source.select()
s = s.where(pd_source.c.nid == source_nid)
r = con.execute(s)
assert r.rowcount == 1
return r.first()
def get_source_byname(con, meta, source_name, source_type = 'PODIUM_INTERNAL'):
"""Returns the Podium source row for a named Source"""
lc_source_name = source_name.lower()
pd_source = Table('podium_core.pd_source', meta)
s = select([pd_source.c.nid,
pd_source.c.sname])
s = s.where(and_(func.lower(pd_source.c.sname) == lc_source_name,
pd_source.c.source_type == source_type))
rp = con.execute(s)
return rp.first()
def get_entity(con, meta, entity_nid):
"""Fetches the entity record by the entity nid, returns 1 row at most"""
assert isinstance(entity_nid, int)
pd_entity = Table('podium_core.pd_entity', meta)
s = select([pd_entity.c.sname,
pd_entity.c.source_nid])
s = s.where(pd_entity.c.nid == entity_nid)
rp = con.execute(s)
return rp.first()
def get_entity_byname(con, meta, source_nid, entity_name):
"""Fetches the entity record by entity name, returns 1 row at most"""
assert isinstance(source_nid, int)
lc_entity_name = entity_name.lower()
pd_entity = Table('podium_core.pd_entity', meta)
s = select([pd_entity.c.nid,
pd_entity.c.sname])
s = s.where(and_(func.lower(pd_entity.c.sname) == lc_entity_name,
pd_entity.c.source_nid == source_nid))
rp = con.execute(s)
return rp.first()
def get_entity_store(con, meta, entity_id):
"""Fetches the dataflows (if any) that STORE the passed entity id"""
assert isinstance(entity_id, int)
pd_prep_package = Table('podium_core.pd_prep_package', meta)
pd_bundle = Table('podium_core.pd_bundle', meta)
s = select([pd_bundle.c.nid, pd_bundle.c.sname])
s = s.select_from(pd_prep_package.join(pd_bundle, pd_prep_package.c.bundle_nid == pd_bundle.c.nid))
s = s.where(and_(pd_prep_package.c.entity_id == entity_id,
pd_prep_package.c.package_type == 'STORE'))
rp = con.execute(s, eid=entity_id).fetchall()
return rp
def get_bundle_id(con, meta, sname):
"""Get the bundle id of the passed Prepare Workflow name.
This is a case insensitive match and can only return a single row
or None
"""
pd_bundle = Table('podium_core.pd_bundle', meta)
lc_sname = sname.lower()
s = pd_bundle.select()
s = s.where(func.lower(pd_bundle.c.sname) == lc_sname)
rp = con.execute(s)
r = rp.first()
return r
def get_bundle_gui_state(con, meta, nid):
"""Get the bundle gui state record datflow nid.
"""
pd_bundle_gui_state = Table('podium_core.pd_bundle_gui_state', meta)
gui_cols = [pd_bundle_gui_state.c.nid,
pd_bundle_gui_state.c.created_ttz,
pd_bundle_gui_state.c.modified_ttz,
pd_bundle_gui_state.c.version,
pd_bundle_gui_state.c.modifiedby,
pd_bundle_gui_state.c.createdby]
s = select(gui_cols)
s = s.where(pd_bundle_gui_state.c.nid == nid)
rp = con.execute(s)
return rp.first()
def get_bundle_last_execution(con, meta, bundle_nid, count=10):
"""Get the last count execution details of the specified bundle.
"""
pd_prepare_execution_workorder = Table('podium_core.pd_prepare_execution_workorder', meta)
wo_cols = [pd_prepare_execution_workorder.c.nid,
pd_prepare_execution_workorder.c.record_count,
pd_prepare_execution_workorder.c.start_time,
pd_prepare_execution_workorder.c.end_time]
e = select(wo_cols)
e = e.where(and_(pd_prepare_execution_workorder.c.bundle_nid == bundle_nid,
pd_prepare_execution_workorder.c.end_time.isnot(None),
pd_prepare_execution_workorder.c.workorder_status == "FINISHED"))
e = e.order_by(desc(pd_prepare_execution_workorder.c.end_time))
e = e.limit(count)
rp = con.execute(e)
r = rp.fetchall()
rp.close()
return r
def get_entity_last_load(con, meta, source_nid, entity_name, n=1):
"""Get the last execution details of the specified bundle.
"""
pd_source = Table('podium_core.pd_source', meta)
pd_entity = Table('podium_core.pd_entity', meta)
pd_workorder = Table('podium_core.pd_workorder', meta)
#print entity_name
parent_source = get_source(con, meta, source_nid)
src = pd_source.select()
src = src.where(pd_source.c.sname == parent_source.sname)
srp = con.execute(src)
orig_source_id = None
for r in srp:
print(f'Source: {r.sname}, Source Type: {r.source_type}, nid: {r.nid}')
if r.source_type != 'PODIUM_INTERNAL':
orig_source_id = r.nid
break
print(f'orig_source_id: {orig_source_id}')
if orig_source_id is None:
return None
ety = pd_entity.select()
ety = ety.where(and_(pd_entity.c.source_nid == orig_source_id,
pd_entity.c.sname == entity_name))
rp = con.execute(ety)
orig_entity = rp.first()
if orig_entity is not None:
orig_entity_nid = orig_entity.nid
wo = select([pd_workorder.c.nid,
pd_workorder.c.start_time,
pd_workorder.c.end_time,
pd_workorder.c.record_count,
pd_workorder.c.good_count,
pd_workorder.c.bad_count,
pd_workorder.c.ugly_count])
wo = wo.where(and_(pd_workorder.c.entity_nid == orig_entity_nid,
pd_workorder.c.workorder_status == 'FINISHED'))
wo = wo.order_by(desc(pd_workorder.c.end_time))
wo = wo.limit(n)
rp = con.execute(wo)
r = rp.first()
else:
r = None
return r
def get_package_nodes(con, meta, bundle_nid):
pd_prep_package = Table('podium_core.pd_prep_package', meta)
s = pd_prep_package.select()
s = s.where(pd_prep_package.c.bundle_nid == bundle_nid)
rp = con.execute(s)
r = rp.fetchall()
rp.close()
return r
###Output
_____no_output_____
###Markdown
podium_core Tables Used 
###Code
prep_tables = ('pd_bundle',
'pd_bundle_gui_state',
'pd_prep_package',
'pd_entity',
'pd_source',
'pd_prepare_execution_workorder',
'pd_workorder'
)
###Output
_____no_output_____
###Markdown
Establish connection to podium_core and fetch used tables metadataEnter the correct yaml file name (or stream) in the call to the `get_podium_cfg()` function.
###Code
cfg = get_podium_cfg(open('pd_cfg.yaml', 'r'))
con_cfg = cfg['pd_connect']['dev']
con, meta = connect(con_cfg['user'], con_cfg['pwd'], con_cfg['db'], con_cfg['host'], con_cfg['port'])
meta.reflect(bind=con, schema='podium_core', only=prep_tables)
###Output
_____no_output_____
###Markdown
Main get_bundle() Function

This function is called with a single dataflow name (sname) and, for each LOADER in the dataflow, recurses backwards looking for dataflows that STORE that entity. If such a dataflow is found, the function recurses on itself.

To prevent being caught in circular references, as each dataflow is visited its name is stored in wf_list; this list is checked each time the function is entered. The stop list "STOPPERS[]" is a list of dataflow names that will also stop the recursion process.

The NetworkX DiGraph is built up throughout the process, adding nodes for Sources, Entities and Dataflows as they are first encountered. The node ids are the nid's from the related `pd_bundle` (Dataflow), `pd_source` (Source) and `pd_entity` (Entity) tables, prefixed with the characters `b_`, `s_` and `e_` respectively. Edges are created between nodes to show the node relationships.
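Once the DiGraph has been built, it can be queried directly with NetworkX. For example, a small sketch (the node key `b_52599` is just an illustration of the prefixing scheme, taken from the bundle nid shown in the run output later) that lists everything upstream of the seed dataflow:

```python
import networkx as nx

seed_key = 'b_52599'                      # hypothetical bundle node key of the seed dataflow
upstream = nx.ancestors(W, seed_key)      # every node the seed depends on, directly or indirectly
for node in upstream:
    attrs = W.nodes[node]
    print(f"{attrs['n_type']:8s} {attrs['label']}")
```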
###Code
def get_bundle(con, meta, sname: str, world_graph, wf_list, styles: dict, stop_wf = []):
"""Build bundle dependency digraph"""
# Check if dataflow is in stop list
if (sname.lower() in stop_wf):
print(f'Dataflow {sname} is in stop list\n')
return
source_styles = styles['source']
entity_styles = styles['entity_source']
target_styles = styles['entity_target']
bundle_styles = styles['bundle']
edge_styles = styles['edge']
print(bundle_styles)
bundle = get_bundle_id(con, meta, sname)
#import pdb; pdb.set_trace()
    if bundle:
        print(f'Current dataflow {bundle.sname} ({bundle.nid})')
bundle_nid = bundle.nid
bundle_description = bundle.description
bundle_sname = bundle.sname
bundle_gui_state = get_bundle_gui_state(con, meta, bundle.bundle_gui_state_nid)
if bundle_gui_state:
bundle_mod_dt = bundle_gui_state.modified_ttz
bundle_mod_by = bundle_gui_state.modifiedby
bundle_version = bundle_gui_state.version
else:
bundle_mod_dt = 'Unknown'
bundle_mod_by = 'Unknown'
bundle_version = 'Unknown'
# To-do - check if output file for version already exists
# if so then bypass
bundle_exec = get_bundle_last_execution(con, meta, bundle_nid)
if bundle_exec:
exec_stats = []
for i, r in enumerate(bundle_exec):
if i == 0:
last_record_count = r.record_count
last_start_time = r.start_time
last_end_time = r.end_time
exec_stats.append(({'start_time': str(r.start_time),
'end_time': str(r.end_time),
'records': r.record_count}))
else:
last_record_count = 0
last_start_time = ''
last_end_time = ''
print(f'\t{bundle_nid}, {bundle_description}, {bundle_sname} records {last_record_count}')
print(f'\tModified by: {bundle_mod_by}, Modified Date: {bundle_mod_dt}, Version: {bundle_version}')
print(f'\tLast Start: {last_start_time}, Last End: {last_end_time}')
else:
print(f'Package: {sname}, not found')
return None
# add bundle to "world" graph
bundle_node_key = f'b_{bundle_nid}'
W.add_node(bundle_node_key,
nid=bundle_nid,
# sname=bundle_sname,
label=bundle_sname,
n_type='bundle',
**bundle_styles)
# Add LOADER / STORE nodes
p = get_package_nodes(con, meta, bundle_nid)
# Add LOADER and STORE nodes to graph
for n in p:
id = n.nid
n_type = n.package_type
if n_type in ('LOADER','STORE'):
entity_id = n.entity_id
entity_node_key = f'e_{entity_id}'
if n_type == 'LOADER':
l = get_entity_store(con, meta, entity_id)
if len(l) == 0:
print(f'No STORE found for {entity_id}')
else:
for i, ldr in enumerate(l):
print(f'{entity_id} ({i}) STORE by {ldr.sname}')
if not (ldr.sname.lower() in wf_list):
wf_list.append(ldr.sname.lower())
get_bundle(con, meta, ldr.sname, world_graph, wf_list, styles, stop_wf)
if (not W.has_node(entity_node_key)):
entity = get_entity(con, meta, entity_id)
source_id = entity.source_nid
source = get_source(con, meta, source_id)
if n_type == 'LOADER':
W.add_node(entity_node_key,
n_type='entity',
nid=entity_id,
snid=source_id,
#sname=entity.sname,
label=entity.sname,
**entity_styles)
if n_type == 'STORE':
W.add_node(entity_node_key,
n_type='entity',
nid=entity_id,
snid=source_id,
#sname=entity.sname,
label=entity.sname,
**target_styles)
source_node_key = f's_{source_id}'
if (not W.has_node(source_node_key)):
W.add_node(source_node_key,
n_type='source',
nid=source_id,
#sname=source.sname,
label=source.sname,
**source_styles)
W.add_edge(source_node_key,
entity_node_key,
**edge_styles['source'])
else:
source_nid = W.node[entity_node_key]['snid']
source_node_key = f's_{source_nid}'
print(f"Graph already has entity {entity_id}, {W.node[source_node_key]['label']}.{W.node[entity_node_key]['label']}")
if (n_type == 'STORE'):
W.add_edge(bundle_node_key, entity_node_key, **edge_styles['store'])
elif (n_type == 'LOADER'):
W.add_edge(entity_node_key, bundle_node_key, **edge_styles['loader'])
else:
print(f'ERROR {bundle_node_key}, {source_node_key}, {entity_node_key}')
def subg(g, sg, n_type):
"""Create a record type subgraph of the passsed node type"""
label_list = [g.node[n]['label'] for n in g.nodes if g.node[n]['n_type'] == n_type]
label_list.sort(key=lambda x: x.lower())
# Start subgraph and header record of number of lines
print(f'subgraph {sg} {{')
print(f'r_{sg} [shape=record,label="{{')
print(f'{len(label_list)} {n_type}')
for i, label in enumerate(label_list):
print(f'| {label}')
# Close subgraph
print('}"];}')
def write_record_dot(g, output_file=None):
print("digraph g {")
subg(g, "s1", "bundle")
subg(g, "s2", "source")
subg(g, "s3", "entity")
print('}')
###Output
_____no_output_____
###Markdown
Main
###Code
# final dataflow name and "stop list"
SEED = "prod_stg_compsych_member_t" # "prod_stg_source_member_t" #
#STOPPERS = ('je70_chess_history_cdc_v',
# 'je70_chess_init_member_history_v',
# 'je70_chess_copy_member_history_v')
# The stop list should be zero or more dataflow names that are stoppers in the
# recursion. If a dataflow name in te STOPPER list is hit then the recursion will
# stop at that point.
STOPPERS = ()
if __name__ == "__main__":
W = nx.DiGraph()
node_styles = cfg['styles']['node']
edge_styles = cfg['styles']['meta_edge']
meta_styles = cfg['styles']['meta']
now = dt.datetime.now()
meta_dict = {'shape': 'record', 'label': f'{{{now.strftime("%Y-%m-%d %H:%M")} | {{<f0> SEED | <f1> {SEED}}}}}', **meta_styles}
W.add_node('node', **node_styles)
W.add_node('meta', **meta_dict)
W.add_node('source', **cfg['styles']['source'])
W.add_node('entity', **cfg['styles']['entity_source'])
W.add_node('dataflow', **cfg['styles']['bundle'])
W.add_node('target', **cfg['styles']['entity_target'])
W.add_edges_from((('entity','source', {'label': 'belongs_to', **edge_styles}),
('entity','dataflow', {'label': 'LOADER', **edge_styles}),
('dataflow','target', {'label': 'STORE', **edge_styles})))
wf_list = []
wf_list.append(SEED.lower())
print(f'Processing starting point, dataflow {SEED}')
get_bundle(con, meta, SEED, W, wf_list, cfg['styles'], STOPPERS)
print(f'{len(W.nodes)} nodes added to DiGraph')
# Write output dot file
write_dot(W, f'{SEED}.dot')
# Write output GraphML file
with open(f"{SEED}.graphml", "wb") as ofile:
nx.write_graphml(W, ofile)
print("Finished")
###Output
Processing starting point, dataflow prod_stg_compsych_member_t
{'shape': 'box', 'style': '"filled,rounded"', 'fillcolor': '"#E9446A"'}
Current dataflow prod_stg_Compsych_Member_T (52599)
52599, , prod_stg_Compsych_Member_T records 692816
Modified by: SP17, Modified Date: 2018-11-13 12:58:03.577000-05:00, Version: 4
Last Start: 2018-12-03 12:36:07.258000-05:00, Last End: 2018-12-03 12:36:50.809000-05:00
18091 (0) STORE by init_stg_Compsych_Member_T
{'shape': 'box', 'style': '"filled,rounded"', 'fillcolor': '"#E9446A"'}
Current dataflow init_stg_Compsych_Member_T (54399)
54399, , init_stg_Compsych_Member_T records 731560
Modified by: SP17, Modified Date: 2018-12-04 13:05:33.594000-05:00, Version: 8
Last Start: 2018-12-04 13:55:31.350000-05:00, Last End: 2018-12-04 14:01:00.242000-05:00
No STORE found for 18266
18091 (1) STORE by prod_copy_stg_compsych_sso_eligibility_t
{'shape': 'box', 'style': '"filled,rounded"', 'fillcolor': '"#E9446A"'}
Current dataflow prod_copy_stg_compsych_sso_eligibility_t (52612)
52612, , prod_copy_stg_compsych_sso_eligibility_t records 731560
Modified by: SP17, Modified Date: 2018-11-13 12:59:18.682000-05:00, Version: 1
Last Start: 2018-12-04 13:44:50.488000-05:00, Last End: 2018-12-04 13:45:29.646000-05:00
Graph already has entity 18091, US_Derived.stg_compsych_sso_eligibility_t
18093 (0) STORE by prod_stg_compsych_sso_eligibility_cdc_t
{'shape': 'box', 'style': '"filled,rounded"', 'fillcolor': '"#E9446A"'}
Current dataflow prod_stg_compsych_sso_eligibility_cdc_t (52268)
52268, Constructs the stg_Compsych_Member_T History using the CDC transform., prod_stg_compsych_sso_eligibility_cdc_t records 1463120
Modified by: SP17, Modified Date: 2018-12-04 12:10:08.840000-05:00, Version: 18
Last Start: 2018-12-04 14:56:22.111000-05:00, Last End: 2018-12-04 15:32:10.387000-05:00
No STORE found for 18266
Graph already has entity 18266, GWRRPT_PR.compsych_sso_eligibility_t
18091 (0) STORE by init_stg_Compsych_Member_T
18091 (1) STORE by prod_copy_stg_compsych_sso_eligibility_t
Graph already has entity 18091, US_Derived.stg_compsych_sso_eligibility_t
Graph already has entity 18093, US_Derived.stg_compsych_sso_eligibility_temp_t
Graph already has entity 18091, US_Derived.stg_compsych_sso_eligibility_t
17 nodes added to DiGraph
Finished
|
lect6/.ipynb_checkpoints/O_notation-checkpoint.ipynb | ###Markdown
Code efficiency using sorting algorithms as an example. Complexity estimation

In programming, big-O notation (O-notation) is used as a measure that helps programmers estimate or predict the efficiency of a written block of code, script, or algorithm. "How long will this code take to run? What is its complexity with respect to the data it processes?"

The exact running time of a script or algorithm is hard to determine. It depends on many factors, for example the processor speed and other characteristics of the computer on which the script or algorithm runs. That is why big-O notation is not used to estimate the concrete running time of code. It is used to estimate how quickly the processing time of an algorithm grows with respect to the amount of data.

A funny story

In 2009 a company in South Africa had a problem with internet speed. The company had two offices 50 miles apart. The employees decided to run an amusing experiment and see whether it would be faster to send data by carrier pigeon.

They put 4GB of data on a flash drive, attached it to a pigeon, and released it from one office to fly to the other. And…

The carrier pigeon turned out to be faster than the internet connection. It won by a wide margin (otherwise the story would not be so funny). Moreover, by the time the pigeon reached the second office, which happened two hours later, only 4% of the data had been transferred over the internet.

Kinds of complexity

* The time complexity of an algorithm defines the number of steps the algorithm must take, depending on the amount of input data (n).
* The space complexity of an algorithm defines the amount of memory that the algorithm needs, depending on the amount of input data (n).

Constant time: O(1)

Note that in the pigeon story told above, the pigeon would have delivered 5KB, 10MB, or 2TB of data stored on the flash drive in exactly the same amount of time. The time the pigeon needs to carry the data from one office to the other is simply the time needed to fly 50 miles.

In O-notation, the time it takes the pigeon to deliver the data from office A to office B is called constant time and is written as O(1). It does not depend on the amount of input data.

An example of an algorithm with O(1) complexity
###Code
lst1 = [1, 2, 3, 4, 5, 3, 4, 5, 6, 6]
lst2 = ['A', 'a', 'D', 'DDG', 'ddf', 'dfdf']
print(f"Length of lst1 is {len(lst1)}",
f"Length of lst2 is {len(lst2)}", sep='\n')
###Output
Length of lst1 is 10
Length of lst2 is 6
###Markdown
Why is that? (The "list" data structure is such that the length of any list can be obtained in O(1), as can any element of the list.)

Linear time: O(n)

Unlike sending data by pigeon, transferring data over the internet takes longer and longer as the amount of transferred data grows.

In O-notation, we can say that the time needed to transfer data from office A to office B over the internet grows linearly and in direct proportion to the amount of transferred data. This time is written as O(n), where n is the amount of data to transfer.

Keep in mind that in programming, big O describes the worst-case scenario. Suppose we have an array of numbers in which we must find a particular number with a for loop. It may be found at any iteration, and the earlier it is found, the sooner the function finishes. O-notation always gives an upper bound, i.e. it describes the case where the algorithm has to perform the maximum number of iterations to find the target number, for example when that number turns out to be the last one in the array.
###Code
def find_element(x, lst):
    '''
    Searches for the element x in the list lst.
    Returns True if it is present, False otherwise.
    '''
    # walk through the list
    for i in lst:
        # if the current element matches, return True
        if x == i:
            return True
    # if we reached the end without finding the element, return False
    return False
lst = [1, 2, 3, 4, 5, 87, 543, 34637, 547489]
find_element(543, lst)
###Output
_____no_output_____
###Markdown
Why linear? Imagine that the target element is the last one in the array. Then, in the worst case, we have to go through every element of the array.

Exponential time: O(2^n)

If the complexity of an algorithm is described by O(2^n), its running time doubles with every addition to the data set. The growth curve of O(2^n) is exponential: at first it is very flat, and then it shoots up steeply. An example of an algorithm with exponential complexity is the recursive computation of Fibonacci numbers:
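As an added note (not in the original text), the running time of this recursion satisfies
$$T(n) = T(n-1) + T(n-2) + O(1),$$
so it grows like the Fibonacci numbers themselves, i.e. proportionally to $\varphi^{n}$ with $\varphi \approx 1.618$, which is commonly bounded from above as $O(2^{n})$.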
###Code
def fibonacci(n):
    '''
    Recursively computes the
    n-th Fibonacci number
    '''
    # The first and second Fibonacci numbers are equal to 1
    if n in (1, 2):
        return 1
    # Otherwise the number is the sum of the two previous ones
    return fibonacci(n - 1) + fibonacci(n - 2)
fibonacci(10)
###Output
_____no_output_____
###Markdown
Logarithmic time: O(log n)

Logarithmic time is harder to grasp at first. To explain it, I will use a common example: the concept of binary search.

Binary search is an algorithm for searching sorted arrays. It works as follows. In the sorted data set, the middle element is chosen and compared with the target value. If the values match, the search is done.

If the target value is greater than the value of the middle element, the lower half of the data set (all elements smaller than the middle element) is discarded and the search continues in the same way in the upper half.

If the target value is smaller than the value of the middle element, the search continues in the lower half of the data set.

These steps are repeated, discarding half of the remaining elements at each iteration, until the target value is found or the remaining data set can no longer be split in half:
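A quick way to see where the logarithm comes from (an added note): after $k$ halvings roughly $n / 2^{k}$ elements remain, so the search stops when
$$\frac{n}{2^{k}} \approx 1 \quad\Longrightarrow\quad k \approx \log_{2} n.$$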
###Code
from random import randint
# Build a list
a = []
for i in range(15):
    # Random integer from 1 to 50 inclusive
    a.append(randint(1, 50))
# Sort the list
a.sort()
# Print it
print(a)
# Read the target number from the keyboard
value = int(input())
# Index of the middle of the list
mid = len(a) // 2
# Index of the start of the list
low = 0
# Index of the end of the list
high = len(a) - 1
# While the element at the "middle" position is not our value
# and the left end of the search range is less than or equal to the right end:
while a[mid] != value and low <= high:
    # If our value is greater than the value at the centre of the search range:
    if value > a[mid]:
        # continue searching in the range to the right of the middle
        low = mid + 1
    else:
        # otherwise continue searching in the range to the left of the middle
        high = mid - 1
    # Middle of the new search range
mid = (low + high) // 2
if low > high:
print("No value")
else:
print("ID =", mid)
###Output
[3, 4, 6, 7, 7, 8, 13, 14, 19, 21, 33, 33, 34, 44, 48]
21
ID = 9
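###Markdown
The same idea is available in the standard library: the bisect module performs binary search on a sorted list. Below is a minimal sketch on a small fixed list rather than the randomly generated one above.
###Code
from bisect import bisect_left
sorted_list = [3, 4, 6, 7, 8, 13, 14, 19, 21, 33, 34, 44, 48]
target = 21
pos = bisect_left(sorted_list, target)
if pos < len(sorted_list) and sorted_list[pos] == target:
    print("ID =", pos)
else:
    print("No value")
###Output
_____no_output_____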
###Markdown
Bubble sort (n^2) This simple algorithm iterates over the list, comparing elements in pairs and swapping them, so that the larger elements gradually "bubble up" to the end of the list while the smaller ones settle at the "bottom".**Algorithm** First the first two elements of the list are compared. If the first one is larger, they are swapped; if they are already in the right order, they are left as they are. Then we move on to the next pair of elements, compare their values and swap them if necessary. This process continues up to the last pair of elements in the list. When the end of the list is reached, the whole pass is repeated for every element. This is extremely inefficient if, for example, only a single swap is needed: the naive version makes on the order of n² comparisons even if the list is already sorted (the implementation below uses a swapped flag to stop early in that case).
###Code
# Library used to measure time
from datetime import datetime as dt
def bubble_sort(nums):
    # Set swapped to True so the loop runs at least once
    swapped = True
    while swapped:
        swapped = False
        # Loop over the indices of the elements
        for i in range(len(nums) - 1):
            # If the current element is greater than its right-hand neighbour
            if nums[i] > nums[i + 1]:
                # Swap the elements
                nums[i], nums[i + 1] = nums[i + 1], nums[i]
                # Set swapped to True for the next iteration
                swapped = True
    # After the first pass of the for loop
    # the largest element has "bubbled up" to the end
# Check that it works
random_list_of_nums = [9, 5, 2, 1, 8, 4, 3, 7, 6]
bubble_sort(random_list_of_nums)
print(random_list_of_nums)
###Output
[1, 2, 3, 4, 5, 6, 7, 8, 9]
###Markdown
Selection sort This algorithm splits the list into two parts: sorted and unsorted. The smallest element is removed from the unsorted part and appended to the sorted one.**Algorithm** In practice there is no need to create a separate list for the sorted elements: the leftmost part of the list is used for that. The smallest element is found and swapped with the first one. Now that we know the first element of the list is sorted, we find the smallest of the remaining elements and swap it with the second one. We repeat this until only the last element of the list remains.
###Code
def selection_sort(nums):
    # The value of i equals the number of values already sorted
    for i in range(len(nums)):
        # Initially treat the first unsorted element as the smallest
        lowest_value_index = i
        # This loop goes over the unsorted elements
        for j in range(i + 1, len(nums)):
            if nums[j] < nums[lowest_value_index]:
                lowest_value_index = j
        # Swap the smallest element with the first unsorted one
        nums[i], nums[lowest_value_index] = nums[lowest_value_index], nums[i]
# Check that it works
random_list_of_nums = [9, 5, 2, 1, 8, 4, 3, 7, 6]
selection_sort(random_list_of_nums)
print(random_list_of_nums)
###Output
[1, 2, 3, 4, 5, 6, 7, 8, 9]
###Markdown
Sorting time On average, selection sort costs O(n²), where n is the number of elements in the list. Empirical speed comparison Let's simply generate some random data and see which of the algorithms runs faster.
###Code
import numpy as np
# Sorting times will be collected here
lst_bubble = []
lst_selection = []
# At each step generate a random list of length i
for i in range(10, 501):
    l = list(np.random.rand(i))
    l2 = l.copy()
    # Time the bubble sort on it
    t0 = float(dt.utcnow().timestamp())
    bubble_sort(l)
    t1 = float(dt.utcnow().timestamp()) - t0
    lst_bubble.append(t1)
    # Time the selection sort on a copy
    t0 = float(dt.utcnow().timestamp())
    selection_sort(l2)
    t1 = float(dt.utcnow().timestamp()) - t0
    lst_selection.append(t1)
# Plotting library
from matplotlib.pyplot import plot, legend
# plot(x values, y values, label = name of the line on the plot)
plot(range(10, 501), lst_bubble, label='bubble')
plot(range(10, 501), lst_selection, label='selection')
legend()
###Output
_____no_output_____
###Markdown
Insertion sort This algorithm splits the list into two parts: sorted and unsorted. It goes over the elements of the unsorted part of the array, and each element is inserted into the sorted part at the position where it belongs.**Algorithm** We walk through the array from left to right and process each element in turn. To the left of the current element the sorted part of the array grows; to the right, the unsorted part gradually shrinks as we go. Within the sorted part we look for the insertion point of the current element. The element itself is saved in a buffer, which frees a cell in the array; this lets us shift elements over and open up the insertion point.
###Code
def insertion(data):
    # Walk through the list from element number 1 (the second one) to the end
    for i in range(1, len(data)):
        # Index of the element preceding element i
        j = i - 1
        # Save element number i in a "buffer"
        key = data[i]
        # While we have not run off the left end and the element at j is greater than key,
        # shift j to the left until we reach the position where key has to be inserted,
        # then insert it (checking j >= 0 first avoids relying on negative indexing)
        while j >= 0 and data[j] > key:
data[j + 1] = data[j]
j -= 1
data[j + 1] = key
return data
lst = [1,5,7,9,2]
insertion(lst)
###Output
_____no_output_____
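###Markdown
Insertion sort also costs O(n²) in the worst case, but on nearly sorted data it does far less work. As a quick illustration, the cell below times it on the same kind of random lists used in the comparison above (it reuses dt, np, plot and legend imported in the earlier cells).
###Code
lst_insertion = []
for i in range(10, 501):
    l = list(np.random.rand(i))
    t0 = float(dt.utcnow().timestamp())
    insertion(l)
    lst_insertion.append(float(dt.utcnow().timestamp()) - t0)
plot(range(10, 501), lst_insertion, label='insertion')
legend()
###Output
_____no_output_____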
###Markdown
Merge sort This algorithm belongs to the "divide and conquer" family. It splits the list into two parts, splits each of those into two more, and so on: the list is halved until only single elements remain. Neighbouring elements become sorted pairs; these pairs are then merged and sorted with other pairs. The process continues until all elements are sorted.**Algorithm** The list is recursively split in half until lists of one element remain. An array of a single element is considered sorted. Neighbouring pieces are then compared and merged together, and this continues until a complete sorted list is obtained. The merging is done by comparing the smallest elements of each subarray: the first elements of each subarray are compared first, the smaller one is moved to the resulting array, and the counters of the resulting array and of the subarray the element was taken from are advanced by 1.
###Code
# Function that merges two sorted lists
def merge(left_list, right_list):
    # The resulting sorted list is collected here
    sorted_list = []
    # both indices start at zero
    left_list_index = right_list_index = 0
    # The list lengths are used often, so store them in variables for convenience
    left_list_length, right_list_length = len(left_list), len(right_list)
    # Go through all elements of both lists; _ is just a name for an unused variable
    # The loop has to cover all left_list_length + right_list_length elements
for _ in range(left_list_length + right_list_length):
if left_list_index < left_list_length and right_list_index < right_list_length:
            # Compare the first remaining elements of each list
            # If the element of the left sublist is smaller or equal, append it
            # to the sorted array
if left_list[left_list_index] <= right_list[right_list_index]:
sorted_list.append(left_list[left_list_index])
left_list_index += 1
            # If the element of the right sublist is smaller, append it
            # to the sorted array
else:
sorted_list.append(right_list[right_list_index])
right_list_index += 1
        # If the end of the left list has been reached, append the elements
        # of the right list to the end of the resulting list
elif left_list_index == left_list_length:
sorted_list.append(right_list[right_list_index])
right_list_index += 1
        # If the end of the right list has been reached, append the elements
        # of the left list to the sorted array
elif right_list_index == right_list_length:
sorted_list.append(left_list[left_list_index])
left_list_index += 1
return sorted_list
# Merge sort
def merge_sort(nums):
    # Return the list if it contains at most one element
    if len(nums) <= 1:
        return nums
    # Use floor division to find the middle of the list
    # (indices must be integers)
    mid = len(nums) // 2
    # Sort and merge the sublists (before mid and after mid)
    left_list = merge_sort(nums[:mid])
    right_list = merge_sort(nums[mid:])
    # Merge the sorted sublists into the result
    return merge(left_list, right_list)
# Check that it works
random_list_of_nums = [120, 45, 68, 250, 176]
random_list_of_nums = merge_sort(random_list_of_nums)
print(random_list_of_nums)
###Output
[45, 68, 120, 176, 250]
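###Markdown
Merge sort runs in O(n log n) time, noticeably better than the quadratic algorithms above. A rough comparison on a single random list (again reusing dt and np from the earlier cells) makes the difference visible; exact timings depend on the machine.
###Code
l = list(np.random.rand(2000))
l2 = l.copy()
t0 = float(dt.utcnow().timestamp())
merge_sort(l)
print('merge sort: ', float(dt.utcnow().timestamp()) - t0, 's')
t0 = float(dt.utcnow().timestamp())
bubble_sort(l2)
print('bubble sort:', float(dt.utcnow().timestamp()) - t0, 's')
###Output
_____no_output_____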
###Markdown
Quicksort This algorithm also belongs to the "divide and conquer" family. With a sensible setup it is extremely efficient and, unlike merge sort, needs no additional memory. The array is split into two parts on either side of a pivot element: during sorting, elements smaller than the pivot are placed before it, and elements equal to or greater than it are placed after it. Algorithm Quicksort starts by partitioning the list and choosing one of its elements as the pivot; everything else is moved so that this element ends up in its final position. All elements smaller than it move to its left, and elements equal to or greater than it move to its right. Implementation There are many variations of this method. The partitioning of the array considered here follows the scheme attributed to Hoare (the creator of the algorithm).
###Code
import random
# The function takes a list nums as input
def quicksort(nums):
    # If its length is 0 or 1, return it as is: such a list is always sorted :)
    if len(nums) <= 1:
        return nums
    else:
        # If the length is > 1, a random element of the list is chosen as the pivot
        q = random.choice(nums)
        # Three lists are created:
        # elements < q go here
        s_nums = []
        # elements > q go here
        m_nums = []
        # elements == q go here
        e_nums = []
        # Fill them in
        for n in nums:
            if n < q:
                s_nums.append(n)
            elif n > q:
                m_nums.append(n)
            else:
                e_nums.append(n)
        # Now recursively apply the same procedure to the left and right lists: s_nums and m_nums
        return quicksort(s_nums) + e_nums + quicksort(m_nums)
quicksort([1, 2, 3, 4, 9, 6, 7, 5])
###Output
_____no_output_____
###Markdown
Let's optimize the memory use a little: the version above stored quite a lot of extra information. This time we will take the first element of the list as the pivot.
###Code
# The function takes a list and the indices of its start and end
def partition(array, start, end):
    # The first element (its index) is chosen as the pivot
    pivot = start
    # Walk through the list from the element after start up to end
    for i in range(start+1, end+1):
        # If the element is less than or equal to the pivot value
        if array[i] <= array[start]:
            # advance the pivot boundary and swap the element into it
pivot += 1
array[i], array[pivot] = array[pivot], array[i]
array[pivot], array[start] = array[start], array[pivot]
return pivot
def quick_sort(array, start=0, end=None):
if end is None:
end = len(array) - 1
def _quicksort(array, start, end):
if start >= end:
return
pivot = partition(array, start, end)
_quicksort(array, start, pivot-1)
_quicksort(array, pivot+1, end)
return _quicksort(array, start, end)
array = [29, 19, 47, 11, 6, 19, 24, 12, 17,
23, 11, 71, 41, 36, 71, 13, 18, 32, 26]
quick_sort(array)
print(array)
###Output
[6, 11, 11, 12, 13, 17, 18, 19, 19, 23, 24, 26, 29, 32, 36, 41, 47, 71, 71]
|
notebooks/exampleconfiguration.ipynb | ###Markdown
Using the configuration library is simple. Import the configure_settings function and call it. The only requirement for successful collection is an existing project.yml file in the current directory of the notebooks. This example project has one, so we will call configure_settings from there. Further usage requires us to load the settings. Instead of making the user import other libraries, we expose a second function from notebook_config called get_settings() which returns an instance of ProjectConfiguration. To complete this example, we will obtain an instance and print out settings values.
###Code
from azure_utils.configuration.notebook_config import get_or_configure_settings
###Output
_____no_output_____
###Markdown
Now that the functions are imported, let's bring up the UI to configure the settings ONLY if the subscription_id setting has not been modified from its original value of ''.
###Code
settings_object = get_or_configure_settings()
###Output
_____no_output_____
###Markdown
Finally, get an instance of the settings. You will do this in the main (configurable) notebook and in all follow-on notebooks. From the default provided file we know the following settings are there: subscription_id, resource_group
###Code
sub_id = settings_object.get_value('subscription_id')
rsrc_grp = settings_object.get_value('resource_group')
print(sub_id, rsrc_grp)
###Output
_____no_output_____ |
04.CNN/CNN_fromFile_flower_checkpoints.ipynb | ###Markdown
Prepare the images
###Code
%%bash
apt install -y imagemagick
[ ! -f flower_photos_300x200_small_train_test.zip ]&& wget https://raw.githubusercontent.com/Finfra/AI_CNN_RNN/main/data/flower_photos_300x200_small_train_test.zip
rm -rf __MACOSX
rm -rf flowers
unzip -q flower_photos_300x200_small_train_test.zip
mv flower_photos_300x200_small_train_test flowers
cd flowers
files=$(find |grep "\.jpg$\|\.png$")
for i in $files; do
# convert $i -quiet -resize 300x200^ -gravity center -extent 300x200 -colorspace Gray ${i%.*}.png
convert $i -quiet -resize 300x200^ -gravity center -extent 300x200 -define png:color-type=2 ${i%.*}.png
# identify ${i%.*}.png
rm -f $i
done
find .|grep .DS_Store|xargs rm -f
find .|head -n 10
###Output
_____no_output_____
###Markdown
CNN Example FromFile by Keras
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
from tensorflow.keras.utils import to_categorical
import numpy as np
###Output
_____no_output_____
###Markdown
Return the images under the given path as arrays
###Code
from os import listdir
from os.path import isfile, join
from pylab import *
from numpy import *
def getFolder(thePath,isFile=True):
return [f for f in listdir(thePath) if isFile == isfile(join(thePath, f)) ]
def getImagesAndLabels(tPath,isGray=False):
labels=getFolder(tPath,False)
tImgDic={f:getFolder(join(tPath,f)) for f in labels}
tImages,tLabels=None,None
ks=sorted(list(tImgDic.keys()))
oh=np.identity(len(ks))
for label in tImgDic.keys():
for image in tImgDic[label]:
le=np.array([float(label)],ndmin=1)
img=imread(join(tPath,label,image))
if isGray:
img=img.reshape(img.shape+(1,))
img=img.reshape((1,) +img.shape)
if tImages is None:
tImages, tLabels =img, le
else:
tImages,tLabels = np.append(tImages,img,axis=0), np.append(tLabels,le ,axis=0)
return (tImages,to_categorical(tLabels) )
w=300
h=200
color=3
tPath='flowers/train'
train_images,train_labels=getImagesAndLabels(tPath)
tPath='flowers/test'
test_images,test_labels=getImagesAndLabels(tPath)
train_images, test_images = train_images / 255.0, test_images / 255.0
print(f"Shape of Train_images = {train_images.shape} , Shape of Train_labels = {train_labels.shape}")
model = models.Sequential()
model.add(layers.Conv2D(96, (13, 13), activation='relu', padding='same', input_shape=(h,w, color)))
model.add(layers.Dropout(0.2))
model.add(layers.MaxPooling2D((3, 3)))
model.add(layers.Conv2D(64, (7, 7), activation='relu', padding='same'))
model.add(layers.Dropout(0.2))
model.add(layers.MaxPooling2D((3, 3)))
model.add(layers.Conv2D(64, (5, 5), activation='relu', padding='same'))
model.add(layers.Dropout(0.2))
model.add(layers.BatchNormalization())
model.add(layers.MaxPooling2D((3, 3)))
model.add(layers.Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(layers.MaxPooling2D((3, 3)))
model.add(layers.Flatten())
model.add(layers.Dense(200, activation='tanh',kernel_regularizer='l1'))
model.add(layers.Dense(2, activation='softmax'))
model.summary()
%%bash
[ ! -d /content/ckpt/ ] && mkdir /content/ckpt/
epochs = 40
batch_size = 100
from tensorflow.keras.callbacks import ModelCheckpoint
filename = f'/content/ckpt/checkpoint-epoch-{epochs}-batch-{batch_size}-trial-001.h5'  # matches the /content/ckpt directory created above
checkpoint = ModelCheckpoint(filename,
monitor='val_loss',
verbose=1,
save_best_only=True,
mode='auto'
)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
hist=model.fit(train_images,
train_labels,
batch_size=batch_size,
epochs=epochs,
validation_data=(test_images,test_labels),
callbacks=[checkpoint]
)
###Output
_____no_output_____
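###Markdown
Loading the best checkpoint The ModelCheckpoint callback above stores the model with the lowest validation loss. As a minimal sketch (it assumes the checkpoint file referenced by `filename` was actually written during training), the saved .h5 file can be loaded back and evaluated with Keras:
###Code
import os
if os.path.exists(filename):
    # restore the best model saved by the callback and re-evaluate it on the test set
    best_model = tf.keras.models.load_model(filename)
    score = best_model.evaluate(test_images, test_labels, verbose=0)
    print('Best checkpoint loss:', score[0])
    print('Best checkpoint accuracy:', score[1])
###Output
_____no_output_____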
###Markdown
Train history
###Code
print('## training loss and acc ##')
fig, loss_ax = plt.subplots()
acc_ax = loss_ax.twinx()
loss_ax.plot(hist.history['loss'], 'y', label='train loss')
loss_ax.set_xlabel('epoch')
loss_ax.set_ylabel('loss')
loss_ax.legend(loc='upper left')
acc_ax.plot(hist.history['accuracy'], 'b', label='train acc')
acc_ax.set_ylabel('accuracy')
acc_ax.legend(loc='upper right')
plt.show()
###Output
_____no_output_____
###Markdown
Result
###Code
score = model.evaluate(test_images, test_labels, verbose=0, batch_size=32)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
_____no_output_____ |
linear_regression/ridge_regression_demo.ipynb | ###Markdown
Demonstration of Regularized Multivariate Linear RegressionThis jupyter notebook is part of a [collection of notebooks](../index.ipynb) on various topics of Digital Signal Processing. Please direct questions and suggestions to [[email protected]](mailto:[email protected]). This notebook demonstrates a regularized multivariate linear regression, the [ridge regression](https://en.wikipedia.org/wiki/Ridge_regression), which is used for [multicollinear](https://en.wikipedia.org/wiki/Multicollinearity) features.
###Code
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Generate DatasetIn the following, a synthetic dataset with $N$ examples is generated by implementing a simple two-dimensional linear relationship and additive noise. The features are then lifted into a higher dimensional feature space by a linear mapping. This leads to linear correlations between features.
###Code
N = 1000 # total number of examples
F = 6 # dimensionality of lifted feature space
alpha = 1.2 # true intercept
theta = [0.1, 0.25] # true slopes
np.random.seed(123)
X = np.random.uniform(low=-5, high=10, size=(N, 2))
Y = alpha + np.dot(X, theta) + .5 * np.random.normal(size=(N))
# lifting of feature space by linear mapping
A = np.random.uniform(low=-2, high=2, size=(2, F))
A = A * np.random.choice([0, 1], size=(2, F), p=[2./10, 8./10])
XF = np.dot(X, A)
###Output
_____no_output_____
###Markdown
The condition number of the (unscaled) empirical covariance matrix $\mathbf{X}^T \mathbf{X}$ is used as a measure for the ill-conditioning of the normal equation. The results show that the condition number for the lifted feature space is indeed very high due to the multicollinear features.
###Code
kappa_x = np.linalg.cond(X.T @ X)
kappa_xf = np.linalg.cond(XF.T @ XF)
print('Condition number of covariance matrix of \n \t uncorrelated features: {}'.format(kappa_x))
print('\t correlated features: {}'.format(kappa_xf))
###Output
Condition number of covariance matrix of
uncorrelated features: 1.5900217803414807
correlated features: 3.2120397364170125e+17
###Markdown
Estimate Parameters of Ridge RegressionLet's estimate the parameters of the linear multivariate regression model using ridge regression. First some helper functions are defined for estimating the regression coefficients, the prediction model and the evaluation of the results.
###Code
def ridge_regression(Xt, Y, mu=0):
return np.linalg.inv(Xt.T @ Xt + mu*np.eye(F+1)) @ Xt.T @ Y
def predict(Xt, theta_hat):
return np.dot(Xt, theta_hat)
def evaluate(Y, Y_hat):
e = Y - Y_hat
std_e = np.std(e)
TSS = np.sum((Y - np.mean(Y))**2)
RSS = np.sum((Y-Y_hat)**2)
Rs = 1 - RSS/TSS
return std_e, Rs
###Output
_____no_output_____
###Markdown
First the ordinary least-squares approach is applied by setting the regularization parameter to $\mu = 0$. The results show that the parameter estimates have large errors, which is also reflected by the performance metrics.
###Code
Xt = np.concatenate((np.ones((len(XF), 1)), XF), axis=1)
theta_hat = ridge_regression(Xt, Y)
Y_hat = predict(Xt, theta_hat)
std_e, Rs = evaluate(Y, Y_hat)
print('Estimated/true intercept: \t\t {0:.3f} / {1:.3f}'.format(theta_hat[0], alpha))
print('Standard deviation of residual error: \t {0:.3f}'.format(std_e))
print('Coefficient of determination: \t\t {0:.3f}'.format(Rs))
###Output
Estimated/true intercept: -19.743 / 1.200
Standard deviation of residual error: 7.800
Coefficient of determination: -166.338
###Markdown
Now the ridge regression is used with $\mu = 0.001$. The results are much better now, however it is not clear if the regularization parameter has been chosen appropriately.
###Code
theta_hat = ridge_regression(Xt, Y, mu=1e-3)
Y_hat = predict(Xt, theta_hat)
std_e, Rs = evaluate(Y, Y_hat)
print('Estimated/true intercept: \t\t {0:.3f} / {1:.3f}'.format(theta_hat[0], alpha))
print('Standard deviation of residual error: \t {0:.3f}'.format(std_e))
print('Coefficient of determination: \t\t {0:.3f}'.format(Rs))
###Output
Estimated/true intercept: 1.201 / 1.200
Standard deviation of residual error: 0.492
Coefficient of determination: 0.853
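###Markdown
As a quick cross-check (not part of the original derivation), the closed-form solution above can be compared with scikit-learn's `Ridge` estimator. With `fit_intercept=False` and the column of ones already included in the feature matrix, scikit-learn minimizes the same penalized least-squares cost, so the coefficients should agree closely.
###Code
from sklearn.linear_model import Ridge
# fit on the augmented matrix Xt with the same regularization strength as above
ridge_skl = Ridge(alpha=1e-3, fit_intercept=False).fit(Xt, Y)
print('closed form: ', np.round(theta_hat[:3], 3))
print('scikit-learn:', np.round(ridge_skl.coef_[:3], 3))
###Output
_____no_output_____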
###Markdown
Hyperparameter SearchIn order to optimize the regularization parameter $\mu$ for the given dataset, the ridge regression is evaluated for a series of potential regularization parameters. After a first coarse search, the search is refined in a second step. The results of this refined search are computed and plotted in the following. It can be concluded that $\mu = 3 \cdot 10^{-11}$ seems to be a good choice.
###Code
results = list()
for n in np.linspace(-12, -10, 100):
mu = 10.0**n
theta_hat = ridge_regression(Xt, Y, mu=mu)
Y_hat = predict(Xt, theta_hat)
std_e, Rs = evaluate(Y, Y_hat)
results.append((mu, std_e, Rs))
results = np.array(results)
fig, ax = plt.subplots()
plt.plot(results[:, 0], results[:, 1], label=r'$\sigma_e$')
plt.plot(results[:, 0], results[:, 2], label=r'$R^2$')
ax.set_xscale('log')
plt.xlabel(r'$\mu$')
plt.ylim([-2, 2])
plt.legend()
plt.grid(True, which="both")
###Output
_____no_output_____ |
1/NN.ipynb | ###Markdown
ROC curve
###Code
y_pre = np.concatenate([i for i in res_sigmoid])
fpr, tpr, thersholds = roc_curve(label[: ,0], y_pre[: ,0])
roc_auc = auc(fpr, tpr)
plt.figure(figsize=[8, 6])
plt.plot(fpr, tpr, 'k--', color='navy', label='ROC (area = {0:.3f})'.format(roc_auc), lw=2)
ax = plt.gca()
ax.spines['bottom'].set_linewidth(2)
ax.spines['left'].set_linewidth(2)
ax.spines['right'].set_linewidth(2)
ax.spines['top'].set_linewidth(2)
font_legend = {'family':'DejaVu Sans','weight':'normal','size':15}
font_label = {'family': 'DejaVu Sans', 'weight': 'bold', 'size': 20}
font_title = {'family': 'DejaVu Sans', 'weight': 'bold', 'size': 30}
# Set the size and font of the axis tick labels
plt.tick_params(labelsize=16)
labels = ax.get_xticklabels() + ax.get_yticklabels()
[label.set_fontname('DejaVu Sans') for label in labels]
# plt.xlim(-5, 150)
# plt.xticks(range(0, 149, 29), labels=['1', '30', '60', '90', '120', '150'])
plt.xlabel('False Positive Rate', font_label)
plt.ylabel('True Positive Rate', font_label)
plt.legend(loc="lower right", prop=font_legend)
###Output
_____no_output_____
###Markdown
Export ML
###Code
def cal_ML(tmp, features_path, cls_1, cls_2):
tmp_1, tmp_2 = sorted(tmp[cls_1]), sorted(tmp[cls_2])
N1, N2 = len(tmp_1), len(tmp_2)
ML_y1, ML_y2 = [], []
Error_bar1, Error_bar2 = [] ,[]
for j in range(N1):
valid_x = sorted(tmp_1)[j:]
E0 = valid_x[0]
Sum = np.sum(np.log(valid_x/E0))
N_prime = N1 - j
alpha = 1 + N_prime / Sum
error_bar = (alpha - 1) / pow(N_prime, 0.5)
ML_y1.append(alpha)
Error_bar1.append(error_bar)
for j in range(N2):
valid_x = sorted(tmp_2)[j:]
E0 = valid_x[0]
Sum = np.sum(np.log(valid_x/E0))
N_prime = N2 - j
alpha = 1 + N_prime / Sum
error_bar = (alpha - 1) / pow(N_prime, 0.5)
ML_y2.append(alpha)
Error_bar2.append(error_bar)
with open(features_path[:-4] + '_1 ' + 'Energy' + '_ML.txt', 'w') as f:
f.write('Energy, ε, Error bar\n')
for j in range(len(ML_y1)):
f.write('{}, {}, {}\n'.format(sorted(tmp_1)[j], ML_y1[j], Error_bar1[j]))
with open(features_path[:-4] + '_2 ' + 'Energy' + '_ML.txt', 'w') as f:
f.write('Energy, ε, Error bar\n')
for j in range(len(ML_y2)):
f.write('{}, {}, {}\n'.format(sorted(tmp_2)[j], ML_y2[j], Error_bar2[j]))
predict = np.concatenate([i for i in res_01])
cls_1 = predict[:, 0] == 1
cls_2 = predict[:, 1] == 1
features_path = r'6_2_select.txt'
cal_ML(feature[:, 3], features_path, cls_1, cls_2)
sum(cls_1), sum(cls_2), sum(label[:, 0] == 1), sum(label[:, 1] == 1)
###Output
_____no_output_____
###Markdown
SVM
###Code
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
import seaborn as sns
from sklearn.metrics import classification_report, accuracy_score, precision_score, recall_score, f1_score, cohen_kappa_score, roc_curve, auc, confusion_matrix
def plot_norm(ax, xlabel=None, ylabel=None, zlabel=None, title=None, x_lim=[], y_lim=[], z_lim=[], legend=True, grid=False,
legend_loc='upper left', font_color='black', legendsize=11, labelsize=14, titlesize=15, ticksize=13, linewidth=2):
ax.spines['bottom'].set_linewidth(linewidth)
ax.spines['left'].set_linewidth(linewidth)
ax.spines['right'].set_linewidth(linewidth)
ax.spines['top'].set_linewidth(linewidth)
    # Set the size and font (Arial) of the axis tick labels
ax.tick_params(which='both', width=linewidth, labelsize=ticksize, colors=font_color)
labels = ax.get_xticklabels() + ax.get_yticklabels()
[label.set_fontname('Arial') for label in labels]
font_legend = {'family': 'Arial', 'weight': 'normal', 'size': legendsize}
font_label = {'family': 'Arial', 'weight': 'bold', 'size': labelsize, 'color':font_color}
font_title = {'family': 'Arial', 'weight': 'bold', 'size': titlesize, 'color':font_color}
if x_lim:
ax.set_xlim(x_lim[0], x_lim[1])
if y_lim:
ax.set_ylim(y_lim[0], y_lim[1])
if z_lim:
ax.set_zlim(z_lim[0], z_lim[1])
if legend:
plt.legend(loc=legend_loc, prop=font_legend)
if grid:
ax.grid(ls='-.')
if xlabel:
ax.set_xlabel(xlabel, font_label)
if ylabel:
ax.set_ylabel(ylabel, font_label)
if zlabel:
ax.set_zlabel(zlabel, font_label)
if title:
ax.set_title(title, font_title)
plt.tight_layout()
def plot_confmat(tn, fp, fn, tp):
cm = np.zeros([2, 2])
cm[0][0], cm[0][1], cm[1][0], cm[1][1] = tn, fp, fn, tp
f, ax=plt.subplots(figsize=(2.5, 2))
    sns.heatmap(cm, annot=True, ax=ax, fmt='.20g')  # draw the confusion matrix as a heatmap
ax.xaxis.set_ticks_position('top')
ax.yaxis.set_ticks_position('left')
ax.tick_params(bottom=False,top=False,left=False,right=False)
def print_res(model, x_pred, y_true):
target_pred = model.predict(x_pred)
true = np.sum(target_pred == y_true)
    print('Number of correct predictions:', true)
    print('Number of wrong predictions:', y_true.shape[0]-true)
    print('Prediction accuracy of the model:',
          accuracy_score(y_true, target_pred))
    print('Prediction precision of the model:',
          precision_score(y_true, target_pred))
    print('Prediction recall of the model:',
          recall_score(y_true, target_pred))
    print('Prediction F1 score of the model:',
          f1_score(y_true, target_pred))
    print("Cohen's Kappa of the model predictions:",
          cohen_kappa_score(y_true, target_pred))
    print('Classification report of the model predictions:', '\n',
          classification_report(y_true, target_pred))
tn, fp, fn, tp = confusion_matrix(y_true, target_pred).ravel()
plot_confmat(tn, fp, fn, tp)
return target_pred
###Output
_____no_output_____
###Markdown
Dislocation
###Code
fold = r'C:\Users\jonah\Desktop\Ni_dislocation.csv'
data = pd.read_csv(fold).astype(np.float32)
feature = data.iloc[:, :-1].values
label = np.array(data.iloc[:, -1].tolist()).reshape(-1, 1)
# ext = np.zeros([label.shape[0], 1]).astype(np.float32)
# ext[np.where(label == 0)[0]] = 1
# label = np.concatenate((label, ext), axis=1)
df_temp = train_test_split(feature, label, test_size=0.2, stratify=label, random_state=69)
stdScaler = StandardScaler().fit(df_temp[0])
trainStd = stdScaler.transform(df_temp[0])
testStd = stdScaler.transform(df_temp[1])
svm = SVC(max_iter=200, random_state=100).fit(trainStd, df_temp[2].reshape(-1))
print('Fitted SVM model:\n', svm)
rf = RandomForestClassifier(max_depth=10, random_state=100).fit(trainStd, df_temp[2].reshape(-1))
print('Fitted RF model:\n', rf)
target_pred_svm = print_res(svm, testStd, df_temp[3].reshape(-1))
target_pred_rf = print_res(rf, testStd, df_temp[3].reshape(-1))
fpr_svm, tpr_svm, thersholds_svm = roc_curve(df_temp[3].reshape(-1), target_pred_svm)
roc_auc_svm = auc(fpr_svm, tpr_svm)
fpr_rf, tpr_rf, thersholds_rf = roc_curve(df_temp[3].reshape(-1), target_pred_rf)
roc_auc_rf = auc(fpr_rf, tpr_rf)
fig = plt.figure(figsize=[6, 3.9])
ax = plt.subplot()
ax.plot(fpr_svm, tpr_svm, 'k--', color='navy', label='SVM (area = {0:.3f})'.format(roc_auc_svm), lw=2)
ax.plot(fpr_rf, tpr_rf, 'k--', color='green', label='RF (area = {0:.3f})'.format(roc_auc_rf), lw=2)
plot_norm(ax, 'False Positive Rate', 'True Positive Rate', legend_loc='lower right')
fold = r'C:\Users\jonah\Desktop\Nano_Ni_3_cnts_4.csv'
data = pd.read_csv(fold).astype(np.float32)
nano_ni = data.values
stdScaler = StandardScaler().fit(nano_ni)
trainStd = stdScaler.transform(nano_ni)
target_pred = svm.predict(trainStd)
sum(target_pred), target_pred.shape
fold = r'C:\Users\jonah\Desktop\Nano_Ni_3_cnts_4.csv'
data = pd.read_csv(fold).astype(np.float32)
nano_ni = data.values
stdScaler = StandardScaler().fit(nano_ni)
trainStd = stdScaler.transform(nano_ni)
target_pred = rf.predict(trainStd)
sum(target_pred), target_pred.shape
###Output
_____no_output_____
###Markdown
Twinning
###Code
fold = r'C:\Users\jonah\Desktop\Ni_twinning.csv'
data = pd.read_csv(fold).astype(np.float32)
feature = data.iloc[:, :-1].values
label = np.array(data.iloc[:, -1].tolist()).reshape(-1, 1)
df_temp = train_test_split(feature, label, test_size=0.2, stratify=label, random_state=69)
stdScaler = StandardScaler().fit(df_temp[0])
trainStd = stdScaler.transform(df_temp[0])
testStd = stdScaler.transform(df_temp[1])
svm = SVC(max_iter=200, random_state=100).fit(trainStd, df_temp[2].reshape(-1))
print('Fitted SVM model:\n', svm)
rf = RandomForestClassifier(max_depth=10, random_state=100).fit(trainStd, df_temp[2].reshape(-1))
print('Fitted RF model:\n', rf)
target_pred_svm = print_res(svm, testStd, df_temp[3].reshape(-1))
p, r = 0.9889, 0.9828
2*p*r/(p+r)
target_pred_rf = print_res(rf, testStd, df_temp[3].reshape(-1))
fpr_svm, tpr_svm, thersholds_svm = roc_curve(df_temp[3].reshape(-1), target_pred_svm)
roc_auc_svm = auc(fpr_svm, tpr_svm)
fpr_rf, tpr_rf, thersholds_rf = roc_curve(df_temp[3].reshape(-1), target_pred_rf)
roc_auc_rf = auc(fpr_rf, tpr_rf)
fig = plt.figure(figsize=[6, 3.9])
ax = plt.subplot()
ax.plot(fpr_svm, tpr_svm, 'k--', color='navy', label='SVM (area = {0:.3f})'.format(roc_auc_svm), lw=2)
ax.plot(fpr_rf, tpr_rf, 'k--', color='green', label='RF (area = {0:.3f})'.format(roc_auc_rf), lw=2)
plot_norm(ax, 'False Positive Rate', 'True Positive Rate', legend_loc='lower right')
fold = r'C:\Users\jonah\Desktop\Nano_Ni_3_cnts_4_~dislocation.csv'
data = pd.read_csv(fold).astype(np.float32)
nano_ni = data.values
stdScaler = StandardScaler().fit(nano_ni)
trainStd = stdScaler.transform(nano_ni)
target_pred = rf.predict(trainStd)
sum(target_pred), target_pred.shape
###Output
_____no_output_____
###Markdown
Crack
###Code
fold = r'C:\Users\jonah\Desktop\Ni_twinning&crack.csv'
data = pd.read_csv(fold).astype(np.float32)
feature = data.iloc[:, :-1].values
label = np.array(data.iloc[:, -1].tolist()).reshape(-1, 1)
df_temp = train_test_split(feature, label, test_size=0.2, stratify=label, random_state=69)
stdScaler = StandardScaler().fit(df_temp[0])
trainStd = stdScaler.transform(df_temp[0])
testStd = stdScaler.transform(df_temp[1])
svm = SVC(max_iter=200, random_state=100).fit(trainStd, df_temp[2].reshape(-1))
print('Fitted SVM model:\n', svm)
rf = RandomForestClassifier(max_depth=10, random_state=100).fit(trainStd, df_temp[2].reshape(-1))
print('Fitted RF model:\n', rf)
target_pred_svm = print_res(svm, testStd, df_temp[3].reshape(-1))
target_pred_rf = print_res(rf, testStd, df_temp[3].reshape(-1))
fpr_svm, tpr_svm, thersholds_svm = roc_curve(df_temp[3].reshape(-1), target_pred_svm)
roc_auc_svm = auc(fpr_svm, tpr_svm)
fpr_rf, tpr_rf, thersholds_rf = roc_curve(df_temp[3].reshape(-1), target_pred_rf)
roc_auc_rf = auc(fpr_rf, tpr_rf)
fig = plt.figure(figsize=[6, 3.9])
ax = plt.subplot()
ax.plot(fpr_svm, tpr_svm, 'k--', color='navy', label='SVM (area = {0:.3f})'.format(roc_auc_svm), lw=2)
ax.plot(fpr_rf, tpr_rf, 'k--', color='green', label='RF (area = {0:.3f})'.format(roc_auc_rf), lw=2)
plot_norm(ax, 'False Positive Rate', 'True Positive Rate', legend_loc='lower right')
###Output
_____no_output_____ |
Matplotlib/.ipynb_checkpoints/6. Area Chart-checkpoint.ipynb | ###Markdown
Prepared by Anushka Bajpai Data Visualization with MatplotlibConnect with me on - LinkedIn
###Code
import numpy as np
import pandas as pd
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Area Plot
###Code
x = np.arange(1,31)
y = np.random.normal(10,11,size=30)
y = np.square(y)
plt.figure(figsize=(16,6))
plt.plot(x,y)
plt.fill_between(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
Changing Fill Color
###Code
x = np.arange(1,31)
y = np.random.normal(10,11,size=30)
y = np.square(y)
plt.figure(figsize=(16,6))
plt.fill_between( x, y, color="#baf1a1") # #Changing Fill color
plt.plot(x, y, color='#7fcd91') # Color on edges
plt.title("$ Area $ $ chart $" , fontsize = 16)
plt.xlabel("$X$" , fontsize = 16)
plt.ylabel("$Y$" , fontsize = 16)
plt.show()
###Output
_____no_output_____
###Markdown
Changing Fill Color and its transparency
###Code
x = np.arange(1,31)
y = np.random.normal(10,11,size=30)
y = np.square(y)
plt.figure(figsize=(16,6))
plt.fill_between( x, y, color="#C8D700" , alpha = 0.3) # Changing transparency using the alpha parameter
plt.plot(x, y, color='#36BD00')
plt.title("$ Area $ $ chart $" , fontsize = 16)
plt.xlabel("$X$" , fontsize = 16)
plt.ylabel("$Y$" , fontsize = 16)
plt.show()
x = np.arange(1,51)
y = np.random.normal(1,5,size=50)
y = np.square(y)
plt.figure(figsize=(16,6))
plt.fill_between( x, y, color="#5ac8fa", alpha=0.4)
plt.plot(x, y, color="blue", alpha=0.6) # Bold line on edges
plt.title("$ Area $ $ chart $" , fontsize = 16)
plt.xlabel("$X$" , fontsize = 16)
plt.ylabel("$Y$" , fontsize = 16)
plt.show()
plt.figure(figsize=(16,6))
plt.fill_between( x, y, color="#5ac8fa", alpha=0.4)
plt.plot(x, y, color="blue", alpha=0.2) # Fainter line on edges
plt.title("$ Area $ $ chart $" , fontsize = 14)
plt.xlabel("$X$" , fontsize = 14)
plt.ylabel("$Y$" , fontsize = 14)
plt.show()
###Output
_____no_output_____
###Markdown
Stacked Area plot
###Code
x=np.arange(1,6)
y1 = np.array([1,5,9,13,17])
y2 = np.array([2,6,10,14,16])
y3 = np.array([3,7,11,15,19])
y4 = np.array([4,8,12,16,20])
plt.figure(figsize=(8,6))
plt.stackplot(x,y1,y2,y3,y4, labels=['Y1','Y2','Y3','Y4'])
plt.legend(loc='upper left')
plt.show()
x=np.arange(1,6)
y=[ [1,5,9,13,17], [2,6,10,14,16], [3,7,11,15,19] , [4,8,12,16,20] ]
plt.figure(figsize=(8,6))
plt.stackplot(x,y , labels=['Y1','Y2','Y3','Y4'])
plt.legend(loc='upper left')
plt.show()
x=np.arange(1,7)
y=[ [1,5,9,3,17,1], [2,6,10,4,16,2], [3,7,11,5,19,1] , [4,8,12,6,20,2] ]
plt.figure(figsize=(10,6))
plt.stackplot(x,y , labels=['Y1','Y2','Y3','Y4'])
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Changing Fill Color and its transparency in Stacked Plot
###Code
x=np.arange(1,7)
y=[ [1,5,9,3,17,1], [2,6,10,4,16,2], [3,7,11,5,19,1] , [4,8,12,6,20,2] ]
plt.figure(figsize=(11,6))
plt.stackplot(x,y , labels=['Y1','Y2','Y3','Y4'] , colors= ["#00b159" , "#ffc425", "#f37735", "#ff3b30"])
plt.legend(loc='upper left')
plt.show()
plt.figure(figsize=(11,6))
plt.stackplot(x,y, labels=['Y1','Y2','Y3','Y4'], colors= ["#00b159" , "#ffc425", "#f37735", "#ff3b30"], alpha=0.7 )
plt.legend(loc='upper left')
plt.show()
plt.figure(figsize=(11,6))
plt.stackplot(x,y, labels=['Y1','Y2','Y3','Y4'], colors= ["#00b159" , "#ffc425", "#f37735", "#ff3b30"], alpha=0.5 )
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____ |
28_Stock_Market/1_Moving_Average/2_Weighted & Exponential.ipynb | ###Markdown
- Two other moving averages are commonly used in financial markets: - Weighted Moving Average (WMA) - Exponential Moving Average (EMA) Weighted Moving Average - In some applications, one of the limitations of the simple moving average is that it gives equal weight to each of the daily prices included in the window. E.g., in a 10-day moving average, the most recent day receives the same weight as the first day in the window: each price receives a 10% weighting.- Compared to the Simple Moving Average, the Linearly Weighted Moving Average (or simply Weighted Moving Average, WMA) gives more weight to the most recent price and gradually less as we look back in time. On a 10-day weighted average, the price of the 10th day would be multiplied by 10, that of the 9th day by 9, the 8th day by 8 and so on. The total is then divided by the sum of the weights (in this case: 55). In this specific example, the most recent price receives about 18.2% of the total weight, the second most recent 16.4%, and so on down to the oldest price in the window, which receives about 1.8% of the weight.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
###Output
_____no_output_____
###Markdown
- We apply a style for our charts. If you’re using Jupyter it’s a good idea to add the %matplotlib inline instruction (and skip plt.show() when creating charts):
###Code
plt.style.use('fivethirtyeight')
data = pd.read_csv('MV EMV.csv', index_col = 'Date')
df = pd.read_csv('cs movavg.csv', index_col = 'Date')
data.index = pd.to_datetime(data.index)
df.index = pd.to_datetime(df.index)
# We can drop the old index column:
data = data.drop(columns='Unnamed: 0')
df = df.drop(columns='Unnamed: 0')
display(data.head(15))
display(df.head(15))
(data.shape), (df.shape)
###Output
_____no_output_____
###Markdown
- We are going to consider only the Price and 10-Day WMA columns for now and move to the EMA later on
###Code
weights = np.arange(1,11) #this creates an array with integers 1 to 10 included
weights
###Output
_____no_output_____
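###Markdown
- Dividing each weight by their total (55) gives the share of the final average contributed by each day, which is where the 18.2% for the most recent price and roughly 1.8% for the oldest one quoted above come from.
###Code
# Share (in %) of the total weight assigned to each of the 10 days,
# from the oldest (weight 1) to the most recent (weight 10)
np.round(weights / weights.sum() * 100, 1)
###Output
_____no_output_____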
###Markdown
- When it comes to linearly weighted moving averages, the pandas library does not have a ready off-the-shelf method to calculate them. It offers, however, a very powerful and flexible method: **.apply()**. This method allows us to create and pass any custom function to a rolling window: that is how we are going to calculate our Weighted Moving Average. To calculate a 10-Day WMA, we start by creating an array of weights - whole numbers from 1 to 10:
###Code
wma10 = data['Price'].rolling(10).apply(lambda prices: np.dot(prices, weights)/weights.sum(), raw=True)
wma10.head(20)
###Output
_____no_output_____
###Markdown
Study
###Code
df['Our 10-day WMA'] = np.round(wma10, decimals=3)
df[['Price', '10-day WMA', 'Our 10-day WMA']].head(20)
###Output
_____no_output_____
###Markdown
- The two WMA columns look the same. There are a few differences in the third decimal place, but we can put that down to rounding error and conclude that our implementation of the WMA is correct. In a real-life application, if we want to be more rigorous we should compute the differences between the two columns and check that they are not too large. For now, we keep things simple and we can be satisfied with the visual inspection. -------------
###Code
sma10 = data['Price'].rolling(10).mean()
data['10-day SMA'] = np.round(sma10, decimals=3)
data['10-day WMA'] = np.round(wma10, decimals=3)
data[['Price', '10-day SMA', '10-day WMA']].head(20)
plt.figure(figsize = (12,6))
plt.plot(data['Price'], label="Price")
plt.plot(data['10-day WMA'], label="10-Day WMA")
plt.plot(data['10-day SMA'], label="10-Day SMA")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
- As we can see, both averages smooth out the price movement. The WMA is more reactive and follows the price closer than the SMA: we expect that since the WMA gives more weight to the most recent price observations. Also, both moving average series start on day 10: the first day with enough available data to compute the averages. ------------------------ Exponential Moving Average - Similarly to the Weighted Moving Average, the Exponential Moving Average (EMA) assigns a greater weight to the most recent price observations. While it assigns lesser weight to past data, it is based on a recursive formula that includes in its calculation all the past data in our price series.- Pandas includes a method to compute the EMA moving average of any time series: .ewm().
###Code
ema10 = data['Price'].ewm(span=10).mean()
ema10.head(10)
df['Our 10-day EMA'] = np.round(ema10, decimals=3)
df[['Price', '10-day EMA', 'Our 10-day EMA']].head(20)
###Output
_____no_output_____
###Markdown
- As you have already noticed, we have a problem here: the 10-day EMA that we just calculated does not correspond to the one calculated in the downloaded spreadsheet. One starts on day 10, while the other starts on day 1. Also, the values do not match exactly.- Is our calculation wrong? Or is the calculation in the provided spreadsheet wrong? Neither: those two series correspond to two different definitions of EMA. To be more specific, the formula used to compute the EMA is the same. What changes is just the use of the initial values.- If we look carefully at the definition of Exponential Moving Average on the StockCharts.com web page we can notice one important detail: they start calculating a 10-day moving average on day 10, disregarding the previous days and replacing the price on day 10 with its 10-day SMA. It’s a different definition than the one applied when we calculated the EMA using the .ewm() method directly.- The following lines of code create a new modified price series where the first 9 prices (when the SMA is not available) are replaced by NaN and the price on the 10th date becomes its 10-Day SMA:
###Code
modPrice = df['Price'].copy()
modPrice.iloc[0:10] = sma10[0:10]
modPrice.head(20)
###Output
_____no_output_____
###Markdown
- We can use this modified price series to calculate a second version of the EWM. By looking at the documentation, we can note that the .ewm() method has an adjust parameter that defaults to True. This parameter adjusts the weights to account for the imbalance in the beginning periods (if you need more detail, see the Exponentially weighted windows section in the pandas documentation).- If we want to emulate the EMA as in our spreadsheet using our modified price series, we don’t need this adjustment. We then set adjust=False:
###Code
ema10alt = modPrice.ewm(span=10, adjust=False).mean()
df['Our 2nd 10-Day EMA'] = np.round(ema10alt, decimals=3)
df[['Price', '10-day EMA', 'Our 10-day EMA', 'Our 2nd 10-Day EMA']].head(20)
###Output
_____no_output_____
###Markdown
- Now, we are doing much better. We have obtained an EMA series that matches the one calculated in the spreadsheet.- We ended up with two different versions of EMA in our hands: **1. ema10:** This version uses the plain .ewm() method, starts at the beginning of our price history but does not match the definition used in the spreadsheet. **2. ema10alt:** This version starts on day 10 (with an initial value equal to the 10-day SMA) and matches the definition on our spreadsheet. - Which one is the best to use? The answer is: it depends on what we need for our application and to build our system. If we need an EMA series that starts from day 1, then we should choose the first one. On the other hand, if we need to use our average in combination with other averages that have no values for the initial days (such as the SMA), then the second is probably the best one.- The second EMA is widely used among financial market analysts: if we need to implement an already existing system, we need to be careful to use the correct definition. Otherwise, the results may not be what is expected from us and may put the accuracy of all of our work into question. In any case, the numeric difference between those two averages is minimal, with an impact on our trading or investment decision system limited to the initial days.
###Code
plt.figure(figsize = (12,6))
plt.plot(df['Price'], label="Price")
plt.plot(wma10, label="10-Day WMA")
plt.plot(sma10, label="10-Day SMA")
plt.plot(ema10, label="10-Day EMA-1")
plt.plot(ema10alt, label="10-Day EMA-2")
plt.xlabel("Date")
plt.ylabel("Price")
plt.legend()
plt.show()
###Output
_____no_output_____ |
SGTPy-examples/SGTPy's paper notebooks/Fit Equilibrium Hexane + Etanol.ipynb | ###Markdown
Fit $k_{ij}$ interaction parameter of Hexane and EthanolThis notebook has the purpose of showing how to optimize the $k_{ij}$ for a mixture in SGTPy. First, the necessary modules are imported.
###Code
import numpy as np
from SGTPy import component, mixture, saftvrmie
from SGTPy.fit import fit_kij
###Output
_____no_output_____
###Markdown
Now that the functions are available it is necessary to create the mixture.
###Code
ethanol = component('ethanol2C', ms = 1.7728, sigma = 3.5592 , eps = 224.50,
lambda_r = 11.319, lambda_a = 6., eAB = 3018.05, rcAB = 0.3547,
rdAB = 0.4, sites = [1,0,1], cii= 5.3141080872882285e-20)
hexane = component('hexane', ms = 1.96720036, sigma = 4.54762477, eps = 377.60127994,
lambda_r = 18.41193194, cii = 3.581510586936205e-19)
mix = mixture(hexane, ethanol)
###Output
_____no_output_____
###Markdown
Now the experimental equilibria data is read and a tuple is created. It includes the experimental liquid composition, vapor composition, equilibrium temperature and pressure. This is done with ```datavle = (Xexp, Yexp, Texp, Pexp)```
###Code
# Experimental data obtained from Sinor, Weber, J. Chem. Eng. Data, vol. 5, no. 3, pp. 243–247, 1960.
# Experimental temperature saturation in K
Texp = np.array([351.45, 349.15, 346.35, 340.55, 339.05, 334.95, 332.55, 331.85,
331.5 , 331.25, 331.15, 331.4 , 331.6 , 332.3 , 333.35, 336.65,
339.85, 341.85])
# Experimental pressure in Pa
Pexp = np.array([101330., 101330., 101330., 101330., 101330., 101330., 101330.,
101330., 101330., 101330., 101330., 101330., 101330., 101330.,
101330., 101330., 101330., 101330.])
# Experimental liquid composition
Xexp = np.array([[0. , 0.01 , 0.02 , 0.06 , 0.08 , 0.152, 0.245, 0.333, 0.452,
0.588, 0.67 , 0.725, 0.765, 0.898, 0.955, 0.99 , 0.994, 1. ],
[1. , 0.99 , 0.98 , 0.94 , 0.92 , 0.848, 0.755, 0.667, 0.548,
0.412, 0.33 , 0.275, 0.235, 0.102, 0.045, 0.01 , 0.006, 0. ]])
# Experimental vapor composition
Yexp = np.array([[0. , 0.095, 0.193, 0.365, 0.42 , 0.532, 0.605, 0.63 , 0.64 ,
0.65 , 0.66 , 0.67 , 0.675, 0.71 , 0.745, 0.84 , 0.935, 1. ],
[1. , 0.905, 0.807, 0.635, 0.58 , 0.468, 0.395, 0.37 , 0.36 ,
0.35 , 0.34 , 0.33 , 0.325, 0.29 , 0.255, 0.16 , 0.065, 0. ]])
datavle = (Xexp, Yexp, Texp, Pexp)
###Output
_____no_output_____
###Markdown
The function ```fit_kij``` optimizes the $k_{ij}$. The function requires bounds for the parameter, as well as the mixture object and the equilibrium data.
###Code
# bounds for kij
kij_bounds = (-0.01, 0.01)
fit_kij(kij_bounds, mix, datavle = datavle)
###Output
_____no_output_____ |
precipitation-bug-report-2.ipynb | ###Markdown
First surprise
###Code
cmlds = cml.load_dataset("s2s-ai-challenge-training-output-reference",
date = 20200220,
parameter='tp'
)
xrds = fix_dataset_dims(cmlds.to_xarray())
xrds.sel(forecast_year=2012).isnull().sum(dim=['latitude', 'longitude']).tp.plot()
###Output
_____no_output_____
###Markdown
The number of null values increases after a certain lead time. I think there are many cases of this in the dataset. Here is an example of a correct lead time.
###Code
xrds.sel(forecast_year=2012).isel(lead_time=5).tp.plot()
###Output
_____no_output_____
###Markdown
Lead time 20 has only nulls.
###Code
xrds.sel(forecast_year=2012).isel(lead_time=20).tp.plot()
###Output
_____no_output_____
###Markdown
Second surprise
###Code
cmlds = cml.load_dataset("s2s-ai-challenge-training-output-reference",
date = 20200326,
parameter='tp'
)
xrds = fix_dataset_dims(cmlds.to_xarray())
xrds.forecast_dayofyear
xrds.isel(lead_time=1).sel(forecast_year=2015, forecast_dayofyear=86).tp.plot()
###Output
_____no_output_____ |
create_dictionary_based_sentiment_analyzer.ipynb | ###Markdown
Dictionary Based Sentiment Analyzer* Word tokenization* Sentence tokenization* Scoring of the reviews* Comparison of the scores with the reviews in plots* Measuring the distribution* Handling negation* Adjusting your dictionary-based sentiment analyzer* Checking your results
###Code
# all imports and related
%matplotlib inline
import pandas as pd
import numpy as np
import altair as alt
from nltk import download as nltk_download
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.sentiment.util import mark_negation
nltk_download('punkt') # required by word_tokenize
from collections import Counter
###Output
/usr/local/lib/python3.6/dist-packages/nltk/twitter/__init__.py:20: UserWarning: The twython library has not been installed. Some functionality from the twitter package will not be available.
warnings.warn("The twython library has not been installed. "
###Markdown
load the small_corpus CSV Run the process from [create_dataset.ipynb](https://github.com/oonid/growth-hacking-with-nlp-sentiment-analysis/blob/master/create_dataset.ipynb) and copy the file **small_corpus.csv** into this Google Colab session (via file upload or by mounting Drive).
###Code
df = pd.read_csv('small_corpus.csv')
df
# check if any columns has null, and yes the reviews column has
df.isnull().any()
# repair null in column reviews with empty string ''
df.reviews = df.reviews.fillna('')
# test again
df.isnull().any()
rating_list = list(df['ratings'])
review_list = list(df['reviews'])
print(rating_list[:5])
for r in review_list[:5]:
print('--\n{}'.format(r))
###Output
[1, 1, 1, 1, 1]
--
Recently UBISOFT had to settle a huge class-action suit brought against the company for bundling (the notoriously harmful) StarFORCE DRM with its released games. So what the geniuses at the helm do next? They decide to make the same mistake yet again - by choosing the same DRM scheme that made BIOSHOCK, MASS EFFECT and SPORE infamous: SecuROM 7.xx with LIMITED ACTIVATIONS!
MASS EFFECT can be found in clearance bins only months after its release; SPORE not only undersold miserably but also made history as the boiling point of gamers lashing back, fed up with idiotic DRM schemes. And the clueless MBAs that run an art-form as any other commodity business decided that, "hey, why not jump into THAT mud-pond ourselves?"
The original FAR CRY was such a GREAT game that any sequel of it would have to fight an uphill battle to begin with (especially without its original developing team). Now imagine shooting this sequel on the foot with a well known, much hated and totally useless DRM scheme that turns it into another Rent-A-Game no one wants. Were I a UBISOFT stock-holder I would be ordering my broker to "Sell-Sell!-SELL!!" instead of posting this...
Ever since its 7.xx version, SecuROM has NOTHING to do with "fighting piracy". All it does in this direction (blocking certain optical and virtual drives) is a very old, lame and already bypassed attempt that serves as a thin smoke-screen. SecuROM is, in fact, an intruding and silent Data-Miner and Root-Hijacker that is delivered by means of popular games.
That is why even the STEAM versions as well as the (free) Demos of such games are infected with it. SecuROM will borrow deep into our PC systems and will refuse to be removed completely even after uninstalling the game it came with. It will retain backdoor access and will keep reporting to its mothership.
Lately, these security concerns have been accentuated as known Trojans seem to be exploiting SecuROM's backdoor access for their own purposes. In effect, installing a SecuROM-infected game in our computer will be placing your hardware and data at risk long after having uninstalled the game.
And the latest vehicle to deliver this hazardous snoopware is FAR CRY 2 - a game crippled by LIMITED INSTALLATIONS! No, thanks. I think I 'll pass this one too.
The only people who do not care about SecuROM are, in fact,...pirates! Because cracking games "protected" by this contraption apparently is very easy. Every single game that was supposedly "protected" by SecuROM was cracked hours withing its release!
To everyone else though, SecuROM (or StarFORCE or any other hazardous DRM scheme) is a core issue that needs to be resolved before PC gaming can evolve any further. And the best way to resolve such issues is market correction.
That is why it is important for gamers to keep voting with their wallets. And as with any vote, well informed decisions are paramount in making the right choice.
--
code didn't work, got me a refund.
--
these do not work at all, all i get is static and they came with nothing in the box to give any help.
--
well let me start by saying that when i first started this game up i was so excited, but when i started playing OMG was i pissed off. The graphics where that of a xbox original, i am not joking they are some of the WORST graphics i have seen in a long time. NOT only are the graphics bad but the game play is not much better. Besides the fact that when you walk you look like you have something rather large in your butt making your player look VERY stiff, and has for the story, WAIT there was a story here??? well if there was anything to do with a story then i must have missed it, the so called story here is not at ALL good in fact it was just pain boring and luckily for me was VERY short beating it in under 2 hours. Ok now on to the level design, well lets just say that it was some of the worst i have seen. It goes like this, run into to dark boring room shoot a few bad guys or zombies then move to the next room that looks almost the same then repeat and do again for the next few hours. Ok so now i will move to the sound of the game, so the zombies when you kill them it sounds OK at best, but as for the sound of the guns WOW OMFG the worst EVER sounds weaker then a nerf guns!!!! the voice overs are very bad as well. Ok now onto the only kind of fun in this game, the only time i had ANY fun on this game was when i played with friends, but even still it was very boring. There is a ranking system that is mediocre at best, the more enemy's you kill the more XP you will get, and when you earn XP you can buy guns that won't really care to use because they all for the most part suck, or you can buy powers for the different players you can choose from. I would also like to add in the fact that when i you enemy it is almost completely random as to when they die, sometimes it will only take a few shoots to kill a simple solder and other time the same kind of enemy will take a clip and a half to kill, as well as that but the hit indication is very OFF. So i will wrap this up by saying this, DO NOT BUY THIS GAME, rent it first to see for yourself. i hope someone can find fun in this game, but for me i could not find much fun here.
I forgot to add in here that the AI IS THE WORST IN ANY GAME EVER. your team mates make no attempt whatsoever to avoid danger or even bother to shoot at the enemy, your friendly AI will spend the bulk of their time wasting ammo on a wall or ceiling if they are even shooting at ALL, the rest of the time they are just in your way block you in doorways or blocking your shots, never before have i ever seen AI worse than this. As for the enemy AI they are just as bad, half the time the enemy is just standing there as i am shooting them.
SO this game is pretty much a FAIL on every level, its a broken game, But lets hope that resident evil 6 will make up for this REALLY REALLY BAD GAME
--
Dont waste your money, you will just end up using the nunchuck, or a classic controler in the end, and more over don't waste your money on the new Mario Cart, just keep, or get one of the older ones, its just recycled maps with new controls that aren't that good anyways.
###Markdown
tokenize the sentences and words of the reviews
###Code
word_tokenized = df['reviews'].apply(word_tokenize)
word_tokenized
sent_tokenized = df['reviews'].apply(sent_tokenize)
sent_tokenized
###Output
_____no_output_____
###Markdown
download the opinion lexicon of NLTK and use it with reference to its source: https://www.nltk.org/_modules/nltk/corpus/reader/opinion_lexicon.html
###Code
# imports and related
nltk_download('opinion_lexicon')
from nltk.corpus import opinion_lexicon
print('total lexicon words: {}'.format(len(opinion_lexicon.words())))
print('total lexicon negatives: {}'.format(len(opinion_lexicon.negative())))
print('total lexicon positives: {}'.format(len(opinion_lexicon.positive())))
print('sample of lexicon words (first 10, by id):')
print(opinion_lexicon.words()[:10]) # print first 10 sorted by file id
print('sample of lexicon words (first 10, by alphabet):')
print(sorted(opinion_lexicon.words())[:10]) # print first 10 sorted alphabet
positive_set = set(opinion_lexicon.positive())
negative_set = set(opinion_lexicon.negative())
print(len(positive_set))
print(len(negative_set))
def simple_opinion_test(word):
    # note: the lexicon only contains lower-case entries, so e.g. 'Great' is not found
    if word not in opinion_lexicon.words():
        print('{} not covered on opinion_lexicon'.format(word))
    else:
        if word in opinion_lexicon.negative():
            print('{} is negative'.format(word))
        if word in opinion_lexicon.positive():
            print('{} is positive'.format(word))
simple_opinion_test('awful')
simple_opinion_test('beautiful')
simple_opinion_test('useless')
simple_opinion_test('Great') # must be lower case
simple_opinion_test('warming')
###Output
awful is negative
beautiful is positive
useless is negative
Great not covered on opinion_lexicon
warming not covered on opinion_lexicon
###Markdown
classify each review on a scale of -1 to +1
###Code
# the process to score review:
# * tokenize review (from multiple sentence) become sentences
# * so sentence score will be build from it words
def score_sentence(sentence):
    """sentence (input) is the list of word tokens from a single sentence.
    return a score between -1 and 1:
    if the total positive is greater than the total negative, the score is in (0, 1]
    if the total negative is greater than the total positive, the score is in [-1, 0)
    """
    # the opinion lexicon contains no symbol characters, and its entries are lower case
selective_words = [w.lower() for w in sentence if w.isalnum()]
total_selective_words = len(selective_words)
    # count the words categorized as positive in the opinion lexicon
total_positive = len([w for w in selective_words if w in positive_set])
    # count the words categorized as negative in the opinion lexicon
total_negative = len([w for w in selective_words if w in negative_set])
if total_selective_words > 0: # has at least 1 word to categorize
return (total_positive - total_negative) / total_selective_words
else: # no selective words
return 0
def score_review(review):
    """review (input) is a single review, possibly made up of multiple sentences.
    tokenize the review into sentences.
    tokenize each sentence into words.
    collect the sentence scores in a list, called sentiment scores.
    score of review = sum of all sentence scores / number of sentence scores
    return the score of the review
    """
sentiment_scores = []
sentences = sent_tokenize(review)
# process per sentence
for sentence in sentences:
# tokenize sentence become words
words = word_tokenize(sentence)
# calculate score per sentence, passing tokenized words as input
sentence_score = score_sentence(words)
# add to list of sentiment scores
sentiment_scores.append(sentence_score)
    # mean value = sum of all sentence scores / number of sentence scores
if sentiment_scores: # has at least 1 sentence score
return sum(sentiment_scores) / len(sentiment_scores)
else: # return 0 if no sentiment_scores, avoid division by zero
return 0
review_sentiments = [score_review(r) for r in review_list]
print(review_sentiments[:5])
print(rating_list[:5])
print(review_sentiments[:5])
for r in review_list[:5]:
print('--\n{}'.format(r))
df = pd.DataFrame({
"rating": rating_list,
"review": review_list,
"review dictionary based sentiment": review_sentiments,
})
df
df.to_csv('dictionary_based_sentiment.csv', index=False)
###Output
_____no_output_____
###Markdown
Compare the scores of the product reviews with the product ratings using a plot
###Code
rating_counts = Counter(rating_list)
print('distribution of rating as dictionary: {}'.format(rating_counts))
###Output
distribution of rating as dictionary: Counter({1: 1500, 5: 1500, 2: 500, 3: 500, 4: 500})
###Markdown
a plot of the distribution of the ratings
###Code
# cast ratings to str so the chart treats them as discrete categories rather than continuous ints
dfrc = pd.DataFrame({
"ratings": [str(k) for k in rating_counts.keys()],
"counts": list(rating_counts.values())
})
dfrc
rating_counts_chart = alt.Chart(dfrc).mark_bar().encode(x="ratings", y="counts")
rating_counts_chart
###Output
_____no_output_____
###Markdown
a plot of the distribution of the sentiment scores
###Code
# get histogram value
# with the value of the probability density function at the bin,
# normalized such that the integral over the range is 1
hist, bin_edges = np.histogram(review_sentiments, density=True)
print('histogram value: {}'.format(hist))
print('bin_edges value: {}'.format(bin_edges)) # from -1 to 1
print()
labels = [(str(l[0]), str(l[1])) for l in zip(bin_edges, bin_edges[1:])]
print('labels: {}'.format(labels))
labels = [" ".join(label) for label in labels]
print('labels: {}'.format(labels))
dfsc = pd.DataFrame({
"sentiment scores": labels,
"counts": hist,
})
dfsc
# sentiment_counts_chart = alt.Chart(dfsc).mark_bar() \
# .encode(x="sentiment scores", y="counts")
sentiment_counts_chart = alt.Chart(dfsc).mark_bar() \
.encode(x=alt.X("sentiment scores", sort=labels), y="counts")
sentiment_counts_chart
###Output
_____no_output_____
###Markdown
a plot about the relation of the sentiment scores and product ratings
###Code
# explore if there's relationship between ratings and sentiments
dfrs = pd.DataFrame({
"ratings": [str(r) for r in rating_list],
"sentiments": review_sentiments,
})
dfrs
rating_sentiments_chart = alt.Chart(dfrs).mark_bar()\
.encode(x="ratings", y="sentiments", color="ratings", \
tooltip=["ratings", "sentiments"])\
.interactive()
rating_sentiments_chart
###Output
_____no_output_____
###Markdown
Measure the correlation of the sentiment scores and product ratings

An article from [machinelearningmastery](https://machinelearningmastery.com/how-to-use-correlation-to-understand-the-relationship-between-variables/) explains how to use correlation to understand the relationship between variables:
* Covariance. Variables can be related by a linear relationship.
* Pearson's Correlation. The Pearson correlation coefficient can be used to summarize the strength of the linear relationship between two data samples.
* Spearman's Correlation. Two variables may be related by a non-linear relationship, such that the relationship is stronger or weaker across the distribution of the variables.

Import pearsonr and spearmanr from the package scipy.stats.
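(For reference, Pearson's coefficient is $r = \frac{\mathrm{cov}(X, Y)}{\sigma_X \sigma_Y}$, while Spearman's $\rho$ is Pearson's $r$ computed on the ranks of the two variables, which is why it also captures monotonic but non-linear relationships.)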
###Code
from scipy.stats import pearsonr, spearmanr
pearson_correlation, _ = pearsonr(rating_list, review_sentiments)
print('pearson correlation: {}'.format(pearson_correlation))
spearman_correlation, _ = spearmanr(rating_list, review_sentiments)
print('spearman correlation: {}'.format(spearman_correlation))
# The Spearman rank correlation (~0.59) indicates a moderate positive relationship
# between the product rating and the review sentiment score
###Output
pearson correlation: 0.4339913860749739
spearman correlation: 0.5860296165999471
###Markdown
Improve your sentiment analyzer in order to reduce contradictory cases. We need to handle negation, since most of those contradictory cases occur when there is negation in the sentence (e.g., "no problem").
###Code
for idx, review in enumerate(review_list):
r = rating_list[idx]
s = review_sentiments[idx]
if r == 5 and s < -0.2:
# rating 5 but sentiment negative below -0.2
print('({}, {}): {}'.format(r, s, review))
if r == 1 and s > 0.3:
# rating 1 but got sentiment positive more than 0.3
print('({}, {}): {}'.format(r, s, review))
###Output
(1, 0.6666666666666666): Never worked right
(1, 0.5): Not Good
(1, 0.3333333333333333): not kid appropriate
(1, 0.5): doesn't work
(1, 0.5): Never worked.
(1, 0.3333333333333333): Not well made
(1, 0.5): Doesn't work
(1, 0.3333333333333333): Did not work.
(1, 0.3333333333333333): returning don't work
(1, 0.3333333333333333): Does not work correctly with xbox
(1, 0.5): Doesn't work
(1, 0.3333333333333333): Does not work
(1, 0.5): doesn't work
(5, -0.2857142857142857): Killing zombies, how can you go wrong!
(5, -0.25): Would buy again. No problems.
###Markdown
use the mark_negation function to handle negation
###Code
# mark_negation (from nltk.sentiment.util) appends "_NEG" to tokens inside a negation scope
from nltk.sentiment.util import mark_negation

test_sentence = 'Does not work correctly with xbox'
print(mark_negation(test_sentence.split()))
# not detected on "No problems."
test_sentence = 'Would buy again. No problems.'
print(mark_negation(test_sentence.split()))
# sentence from sample solution works to detect "no problems."
test_sentence = "I received these on time and no problems. No damages battlfield never fails"
print(mark_negation(test_sentence.split()))
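
# One possible way to fold negation into the scoring — a sketch only, built on the
# assumption that a lexicon word tagged with "_NEG" by mark_negation should have its
# polarity flipped (so "doesn't work" counts negative, "no problems" counts positive).
def score_sentence_negation_aware(words):
    # lower-case first: the lexicon is lower case, and the test above suggests
    # mark_negation misses the capitalised "No"
    marked = mark_negation([w.lower() for w in words])
    total, score = 0, 0
    for w in marked:
        negated = w.endswith('_NEG')
        base = w[:-4] if negated else w
        if not base.isalnum():
            continue  # skip punctuation, as score_sentence does
        total += 1
        if base in positive_set:
            score += -1 if negated else 1
        elif base in negative_set:
            score += 1 if negated else -1
    return score / total if total else 0

# quick check on two of the contradictory examples listed earlier
print(score_sentence_negation_aware(word_tokenize("Does not work correctly with xbox")))
print(score_sentence_negation_aware(word_tokenize("Would buy again. No problems.")))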
###Output
_____no_output_____ |
movie-box-office-revenue-prediction.ipynb | ###Markdown
Data Exploring
###Code
import pandas
from pandas import DataFrame
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
data = pandas.read_csv('cost_revenue_clean.csv')
data.describe()
X = DataFrame(data, columns = ['production_budget_usd'])
y = DataFrame(data, columns = ['worldwide_gross_usd'])
X.describe()
y.describe()
###Output
_____no_output_____
###Markdown
Data Visualisation
###Code
plt.figure(figsize=(10,6))
plt.scatter(X, y, alpha=0.3)
plt.title('Film Cost vs Global Revenue')
plt.xlabel('Production Budget $')
plt.ylabel('Worldwide Gross $')
plt.ylim(0, 3000000000)
plt.xlim(0, 450000000)
plt.show()
###Output
_____no_output_____
###Markdown
Model Training
###Code
regression = LinearRegression()
regression.fit(X,y)
print('Slope coefficient:')
regression.coef_
print('Intercept :')
regression.intercept_
plt.figure(figsize=(10,6))
plt.scatter(X, y, alpha=0.3)
# Adding the regression line here:
plt.plot(X, regression.predict(X), color = 'red', linewidth = 3)
plt.title('Film Cost vs Global Revenue')
plt.xlabel('Production Budget $')
plt.ylabel('Worldwide Gross $')
plt.ylim(0, 3000000000)
plt.xlim(0, 450000000)
plt.show()
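
# A quick usage example: plug a hypothetical production budget into the fitted line.
# The $50M figure is an arbitrary illustration value, not taken from the dataset.
sample_budget = DataFrame({'production_budget_usd': [50_000_000]})
predicted_gross = regression.predict(sample_budget)
print('Predicted worldwide gross for a $50M budget: ${:,.0f}'.format(predicted_gross[0][0]))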
###Output
_____no_output_____
###Markdown
Model Score
###Code
# R-squared (coefficient of determination) of the fitted regression
regression.score(X, y)
###Output
_____no_output_____ |
Project/main.ipynb | ###Markdown
Task Data
###Code
import numpy as np
import csv
from ast import literal_eval  # safer than eval for parsing the stringified list/dict columns
from sklearn.preprocessing import MultiLabelBinarizer
from scipy.sparse import hstack
import pickle as pkl
from utils.tokenizer import tokenize_corpus
def getNames(data):
    """Parse a stringified list of dicts (e.g. the genres column) and collect the 'name' fields."""
    names = []
    if not data:
        return names
    parsedData = literal_eval(data)
    if not parsedData:
        return names
    for pieceOfInfo in parsedData:
        name = pieceOfInfo['name']
        names.append(name)
    return np.array(names)
with open('./data/links.csv', 'r', encoding='utf-8', newline='') as f:
reader = csv.reader(f)
next(reader, None)
id_to_movieId = dict()
for line in reader:
try:
id_to_movieId[int(line[2])] = int(line[0])
except:
pass
# allocate the feature matrix before filling it: one row per CSV record, 10 object columns
with open('./data/movies_metadata.csv', encoding='utf-8', newline='') as f:
    n_rows = sum(1 for _ in csv.DictReader(f))
dataEmbeded = np.empty((n_rows, 10), dtype=object)

with open('./data/movies_metadata.csv', encoding='utf-8') as csvFile:
reader = csv.DictReader(csvFile)
i = 0
for row in reader:
dataEmbeded[i, 0] = row['overview']
try:
dataEmbeded[i, 1] = id_to_movieId[int(row['id'])]
except:
pass
dataEmbeded[i, 2] = row['adult'] == 1
dataEmbeded[i, 3] = row['budget']
dataEmbeded[i, 4] = getNames(row['genres'])
dataEmbeded[i, 5] = row['popularity']
dataEmbeded[i, 6] = getNames(row['production_companies'])
dataEmbeded[i, 7] = row['production_countries'] == "[{'iso_3166_1': 'US', 'name': 'United States of America'}]"
dataEmbeded[i, 8] = row['revenue']
dataEmbeded[i, 9] = getNames(row['spoken_languages'])
i += 1
one_hot = MultiLabelBinarizer(sparse_output=True)
genres = one_hot.fit_transform(dataEmbeded[:,4])
production_companies = one_hot.fit_transform(dataEmbeded[:,6])
spoken_languages = one_hot.fit_transform(dataEmbeded[:,9])
BoW = tokenize_corpus(dataEmbeded[:,0], stop_words = False, BoW = True)
data = hstack([BoW, genres, spoken_languages])
with open('./data/data.npy', 'wb') as pikeler:
data = {'ids':dataEmbeded[:, 1], 'data':data}
pkl.dump(data, pikeler)
###Output
_____no_output_____ |
DataEngineering/AffinityAnalysis.ipynb | ###Markdown
Machine Learning and Mathematical methods

| Type of ML Problem | Description | Example |
| ------ | ------ | ------ |
| Classification | Pick one of the N labels, samples, or nodes | |
| (Linear) Regression | Predict numerical or time based values | Click-through rate |
| Clustering | Group similar examples (relevance) | |
| Association rule learning | Infer likely association patterns in data | |
| Structured output | | Natural language processing, image recognition |

In this lesson we will focus on Association rule learning (Affinity Analysis) and Structured output (bag of words).

Affinity Analysis - to determine hobby recommendations

Affinity Analysis is data wrangling used to determine the similarities between objects (samples). It's used in:
* Sales, advertising, or similar recommendations (you liked "Movie", you might also like "other movie")
* Mapping familial human genes (i.e. the "do you share ancestors?" people)
* Social web-maps that associate friend "likes" and "sharing" (guess which we are doing)

It works by finding associations among samples, that is "finding combinations of items, objects, or anything else which happen frequently together". Then it builds rules which use these to determine the likelihood (probability) of items being related... and then we build another graph (no kidding, well kind-of; it depends on the application). We are going to use this with our data, but typically this would be hundreds to millions of transactions to ensure statistical significance. If you're really interested in how these work (on a macro level): Smirnov has a great blog about it.

We start by building our data into a set of arrays (real arrays)

So we will be using numpy (a great library that allows for vector, array, set, and other numeric operations on matrices/arrays with Python) - the arrays it uses are grids of values (all the same type), indexed by a tuple of nonnegative integers. The dataset can be thought of as each hobby (or hobby type) as a column with a -1 (dislike), 0 (neutral), or 1 (liked) per friend - we will assume all were strong relationships at this point. Weighting will be added later. So think of it like this (except we are dropping the person, since we don't care who the person is here):

| Person | Football | Reading | Chess | Sketching | Video games |
| ------ | ------ | ------ | ------ | ------ | ------ |
| Josiah | 1 | 1 | 1 | -1 | 1 |
| Jill | 1 | 0 | 0 | 1 | -1 |
| Mark | -1 | 1 | 1 | 1 | -1 |

Now let's look for our premise (the thing to find out): A person that likes Football will also like Video Games
###Code
from numpy import array #faster to load and all we need here
# We are going to see if a person that likes Football also likes Video Games (could do reverse too)
# Start by building our data (fyi, capital X as in quantity, and these will be available in other cells)
X = array([
[1,1,1,-1,1],
[1,0,0,1,-1],
[-1,1,1,1,-1]
])
features = ["football", "reading", "chess", "sketching", "video games"]
n_features = len(features) # for interating over features
football_fans = 0
# Even though it is a numpy array we can still just use it like an iterator
for sample in X:
if sample[0] == 1: #Person really likes football
football_fans += 1
print("{}: people love Football".format(football_fans))
#So we could already work out the premise by hand: of the 2 football fans, only 1 also likes video games, i.e. 50% confidence
###Output
_____no_output_____
###Markdown
Let's build some rule sets

The simplest measurements of rules are support and confidence.

* Support = number of times the rule occurs (frequency count)
* Confidence = percentage of times the rule applies when our premise applies

We will use dictionaries (defaultdicts supply a default value) to compute these. We will count the number of valid results and a simple frequency of our premises. To test multiple premises we will make a large loop over them. By the end they will have:

* a (premise, conclusion) tuple as the key, e.g. (0, 4) for Football vs. Video Games
* the count of valid/invalid/total occurrences (based on the dict)

Why must we test multiple premises? Because this is ML, it's analytics - it is not based on a human querying but on statistical calculation. Those who have done Python may see areas where comprehensions, enumerators, generators, and caches could speed this up - if so, great! But let's start simple. We call these simple rule sets, but they are the same kind used for much more complex data: see lines 59, 109, and 110.
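For example, with the toy data above, the rule "likes reading → likes chess" occurs in two samples, so its support is 2; and since both of the 2 people who like reading also like chess, its confidence is 2/2 = 1.0 — exactly what the printed results further down show.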
###Code
from collections import defaultdict
valid_rules = defaultdict(int) #count of completed rules
num_occurances = defaultdict(int) #count of any premise
for sample in X:
for premise in range(n_features):
if sample[premise] == 1: #We are only looking at likes right now
num_occurances[premise] += 1 # That's one like people
for conclusion in range(n_features):
if premise == conclusion: continue
#i.e. if we are looking at the same idx move to next
if sample[conclusion] == 1:
valid_rules[(premise, conclusion)] +=1
                    #conclusion shows "Like" (1), so this is a valid rule
###Output
_____no_output_____
###Markdown
Now we determine the confidence of our rules

Make a copy of our collection of valid rules and counts (the valid_rules dict). Then loop over the set and divide the frequency of valid occurrences by the total frequency... if this reminds you of one item in your ATM project - well... it should.
###Code
support = valid_rules
## two indexes (0,4) compared as the key: count of matching 1s (likes) as value
# The key is actually a (premise, conclusion) tuple
confidence = defaultdict(float)
for (premise, conclusion) in valid_rules.keys():
rule = (premise, conclusion)
confidence[rule] = valid_rules[rule] / num_occurances[premise]
    ## (premise, conclusion) tuple as key: number of valid occurrences / total occurrences as value
###Output
_____no_output_____
###Markdown
Then it's just time to print out the results (let's say the top 7, by support)
###Code
from operator import itemgetter  # needed for the sort keys below

# Let's find the top 7 rules (by occurrence, not confidence)
sorted_support = sorted(support.items(),
                        key=itemgetter(1),  # sort by the values of the dictionary
                        reverse=True)       # descending
sorted_confidence = sorted(confidence.items(),
                           key=itemgetter(1), reverse=True)  # now these dicts are in the same order
# Now just print out the top 7
for i in range(7):
print("Associated Rule {}".format(i + 1))
premise, conclusion = sorted_support[i][0]
print_rule(premise, conclusion, support, confidence, features)
### Function would usually go at top but for notebook I can just run this before earlier cell and want to show progression
def print_rule(premise, conclusion, support, confidence, features):
premise_name = features[premise] #so if 0 = football, 1 = ...
conclusion_name = features[conclusion]
print("rule: if someone likes {} they will also like {}".format(premise_name, conclusion_name))
print("confidence: {0:.3f} : idx {1} vs. idx {2}".format(
confidence[(premise, conclusion)], premise, conclusion))
print("support:{}".format(support[(premise, conclusion)]))
###Output
_____no_output_____
###Markdown
Prints
###Code
Associated Rule 1
rule: if someone likes reading they will also like chess
confidence: 1.000 : idx 1 vs. idx 2
support:2
Associated Rule 2
rule: if someone likes chess they will also like reading
confidence: 1.000 : idx 2 vs. idx 1
support:2
Associated Rule 3
rule: if someone likes football they will also like reading
confidence: 0.500 : idx 0 vs. idx 1
support:1
Associated Rule 4
rule: if someone likes football they will also like chess
confidence: 0.500 : idx 0 vs. idx 2
support:1
Associated Rule 5
rule: if someone likes football they will also like video games
confidence: 0.500 : idx 0 vs. idx 4
support:1
Associated Rule 6
rule: if someone likes reading they will also like football
confidence: 0.500 : idx 1 vs. idx 0
support:1
Associated Rule 7
rule: if someone likes reading they will also like video games
confidence: 0.500 : idx 1 vs. idx 4
support:1
###Output
_____no_output_____ |
source_files/index.ipynb | ###Markdown
Introduction to Sets - Lab

Introduction

Probability theory is all around. A common example is in the game of poker or related card games, where players try to calculate the probability of winning a round given the cards they have in their hands. Also, in a business context, probabilities play an important role. Operating in a volatile economy, companies need to take uncertainty into account and this is exactly where probability theory plays a role. As mentioned in the lesson before, a good understanding of probability starts with an understanding of sets and set operations. That's exactly what you'll learn in this lab!

Objectives

You will be able to:
* Use Python to perform set operations
* Use Python to demonstrate the inclusion/exclusion principle

Exploring Set Operations Using a Venn Diagram

Let's start with a pretty conceptual example. Let's consider the following sets:
- $\Omega$ = positive integers between [1, 12]
- $A$ = even numbers between [1, 10]
- $B = \{3,8,11,12\}$
- $C = \{2,3,6,8,9,11\}$

a. Illustrate all the sets in a Venn Diagram like the one below. The rectangular shape represents the universal set.

b. Using your Venn Diagram, list the elements in each of the following sets:
- $ A \cap B$
- $ A \cup C$
- $A^c$ - the absolute complement of A
- $(A \cup B)^c$
- $B \cap C'$
- $A\backslash B$
- $C \backslash (B \backslash A)$
- $(C \cap A) \cup (C \backslash B)$

c. For the remainder of this exercise, let's create sets A, B and C and universal set U in Python and test out the results you came up with. Sets are easy to create in Python. For a guide to the syntax, follow some of the documentation [here](https://www.w3schools.com/python/python_sets.asp)
###Code
# Create set A
A = None
'Type A: {}, A: {}'.format(type(A), A) # "Type A: <class 'set'>, A: {2, 4, 6, 8, 10}"
# Create set B
B = None
'Type B: {}, B: {}'.format(type(B), B) # "Type B: <class 'set'>, B: {8, 11, 3, 12}"
# Create set C
C = None
'Type C: {}, C: {}'.format(type(C), C) # "Type C: <class 'set'>, C: {2, 3, 6, 8, 9, 11}"
# Create universal set U
U = None
'Type U: {}, U: {}'.format(type(U), U) # "Type U: <class 'set'>, U: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}"
###Output
_____no_output_____
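###Markdown
Before verifying your answers, here is a quick illustration of Python's set-operation syntax on two throwaway sets — arbitrary examples, not the lab's A, B and C:
###Code
# Demo only: arbitrary example sets, not part of the exercise answers
P = {1, 2, 3, 4}
Q = {3, 4, 5}
print(P & Q)   # {3, 4}            intersection
print(P | Q)   # {1, 2, 3, 4, 5}   union
print(P - Q)   # {1, 2}            difference
print(P ^ Q)   # {1, 2, 5}         symmetric difference
###Output
_____no_output_____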
###Markdown
Now, verify your answers in section 1 by using the correct methods in Python. To provide a little bit of help, you can find a table with common operations on sets below.

| Method | Equivalent | Result |
| ------ | ------ | ------ |
| s.issubset(t) | s <= t | test whether every element in s is in t |
| s.issuperset(t) | s >= t | test whether every element in t is in s |
| s.union(t) | s $\mid$ t | new set with elements from both s and t |
| s.intersection(t) | s & t | new set with elements common to s and t |
| s.difference(t) | s - t | new set with elements in s but not in t |
| s.symmetric_difference(t) | s ^ t | new set with elements in either s or t but not both |

1. $ A \cap B$
###Code
A_inters_B = None
A_inters_B # {8}
###Output
_____no_output_____
###Markdown
2. $ A \cup C $
###Code
A_union_C = None
A_union_C # {2, 3, 4, 6, 8, 9, 10, 11}
###Output
_____no_output_____
###Markdown
3. $A^c$ (you'll have to be a little creative here!)
###Code
A_comp = None
A_comp # {1, 3, 5, 7, 9, 11, 12}
###Output
_____no_output_____
###Markdown
4. $(A \cup B)^c $
###Code
A_union_B_comp = None
A_union_B_comp # {1, 5, 7, 9}
###Output
_____no_output_____
###Markdown
5. $B \cap C' $
###Code
B_inters_C_comp = None
B_inters_C_comp # {12}
###Output
_____no_output_____
###Markdown
6. $A\backslash B$
###Code
compl_of_B = None
compl_of_B # {2, 4, 6, 10}
###Output
_____no_output_____
###Markdown
7. $C \backslash (B \backslash A) $
###Code
C_compl_B_compl_A = None
C_compl_B_compl_A # {2, 6, 8, 9}
###Output
_____no_output_____
###Markdown
8. $(C \cap A) \cup (C \backslash B)$
###Code
C_inters_A_union_C_min_B= None
C_inters_A_union_C_min_B # {2, 6, 8, 9}
###Output
_____no_output_____
###Markdown
The Inclusion Exclusion Principle

Use A, B and C from exercise one to verify the inclusion exclusion principle in Python. You can use the sets A, B and C as used in the previous exercise. Recall from the previous lesson that:

$$\mid A \cup B\cup C\mid = \mid A \mid + \mid B \mid + \mid C \mid - \mid A \cap B \mid -\mid A \cap C \mid - \mid B \cap C \mid + \mid A \cap B \cap C \mid $$

Combining these main commands:

| Method | Equivalent | Result |
| ------ | ------ | ------ |
| a.union(b) | A $\mid$ B | new set with elements from both a and b |
| a.intersection(b) | A & B | new set with elements common to a and b |

along with the `len(x)` function to get to the cardinality of a given x ("|x|").

What you'll do is translate the left hand side of the equation for the inclusion principle in the object `left_hand_eq`, and the right hand side in the object `right_hand_eq` and see if the results are the same.
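As a quick sanity check of the formula with throwaway sets (not the lab's A, B and C): for $P = \{1,2\}$, $Q = \{2,3\}$, $R = \{3,4\}$ we get $\mid P \cup Q \cup R \mid = 4$, and indeed $2 + 2 + 2 - 1 - 0 - 1 + 0 = 4$.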
###Code
left_hand_eq = None
print(left_hand_eq) # 9 elements in the set
right_hand_eq = None
print(right_hand_eq) # 9 elements in the set
None # Use a comparison operator to compare `left_hand_eq` and `right_hand_eq`. Needs to say "True".
###Output
_____no_output_____
###Markdown
Set Operations in Python

Mary is preparing for a road trip from her hometown, Boston, to Chicago. She has quite a few pets, yet luckily, so do her friends. They try to make sure that they take care of each other's pets while someone is away on a trip. A month ago, each respective person's pet collection was given by the following three sets:
###Code
Nina = set(["Cat","Dog","Rabbit","Donkey","Parrot", "Goldfish"])
Mary = set(["Dog","Chinchilla","Horse", "Chicken"])
Eve = set(["Rabbit", "Turtle", "Goldfish"])
###Output
_____no_output_____
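###Markdown
Before updating the pet lists, here is a quick demonstration of Python's in-place set methods on a throwaway set — an arbitrary example, not part of the exercise:
###Code
# Demo only: arbitrary example set
demo = {"a", "b"}
demo.add("c")          # demo is now {'a', 'b', 'c'}
demo.discard("z")      # no error, even though 'z' is not present
demo.update({"d", "e"})
demo.remove("b")       # would raise KeyError if 'b' were missing
demo                   # {'a', 'c', 'd', 'e'}
###Output
_____no_output_____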
###Markdown
In this exercise, you'll be able to use the following operations:

| Operation | Equivalent | Result |
| ------ | ------ | ------ |
| s.update(t) | s $\mid$= t | return set s with elements added from t |
| s.intersection_update(t) | s &= t | return set s keeping only elements also found in t |
| s.difference_update(t) | s -= t | return set s after removing elements found in t |
| s.symmetric_difference_update(t) | s ^= t | return set s with elements from s or t but not both |
| s.add(x) | | add element x to set s |
| s.remove(x) | | remove x from set s |
| s.discard(x) | | removes x from set s if present |
| s.pop() | | remove and return an arbitrary element from s |
| s.clear() | | remove all elements from set s |

Sadly, Eve's turtle passed away last week. Let's update her pet list accordingly.
###Code
None
Eve # should be {'Rabbit', 'Goldfish'}
###Output
_____no_output_____
###Markdown
This time around, Nina promised to take care of Mary's pets while she's away. But she also wants to make sure her pets are well taken care of. As Nina is already spending a considerable amount of time taking care of her own pets, adding a few more won't make that much of a difference. Nina does want to update her list while Mary is away.
###Code
None
Nina # {'Chicken', 'Horse', 'Chinchilla', 'Parrot', 'Rabbit', 'Donkey', 'Dog', 'Cat', 'Goldfish'}
###Output
_____no_output_____
###Markdown
Mary, on the other hand, wants to clear her list altogether while away:
###Code
None
Mary # set()
###Output
_____no_output_____
###Markdown
Look at how many species Nina is taking care of right now.
###Code
None
n_species_Nina # 9
###Output
_____no_output_____
###Markdown
Taking care of this many pets is weighing heavily on Nina. She remembered Eve had a smaller collection of pets lately, and that's why she asks Eve to take care of the common species. This way, the extra pets are not a huge effort on Eve's behalf. Let's update Nina's pet collection.
###Code
None
Nina # 7
###Output
_____no_output_____
###Markdown
Taking care of 7 species is something Nina feels comfortable doing!

Writing Down the Elements in a Set

Mary dropped off her pets at Nina's house and finally made her way to the highway. Awesome, her vacation has begun! She's approaching an exit. At the end of this particular highway exit, cars can either turn left (L), go straight (S) or turn right (R). It's pretty busy and there are two cars driving close to her. What you'll do now is create several sets. You won't be using Python here, it's sufficient to write the sets down on paper. A good notion of sets and subsets will help you calculate probabilities in the next lab!

Note: each set of actions is what _all three cars_ are doing at any given time

a. Create a set $A$ of all possible outcomes assuming that all three cars drive in the same direction.

b. Create a set $B$ of all possible outcomes assuming that all three cars drive in a different direction.

c. Create a set $C$ of all possible outcomes assuming that exactly 2 cars turn right.

d. Create a set $D$ of all possible outcomes assuming that exactly 2 cars drive in the same direction.

e. Write down the interpretation and give all possible outcomes for the sets denoted by:
- I. $D'$
- II. $C \cap D$
- III. $C \cup D$

Optional Exercise: European Countries

Use set operations to determine which European countries are not in the European Union. You just might have to clean the data first with pandas.
###Code
import pandas as pd
#Load Europe and EU
europe = pd.read_excel('Europe_and_EU.xlsx', sheet_name = 'Europe')
eu = pd.read_excel('Europe_and_EU.xlsx', sheet_name = 'EU')
#Use pandas to remove any whitespace from names
europe.head(3) #preview dataframe
eu.head(3)
# Your code comes here
###Output
_____no_output_____ |