path | concatenated_notebook
---|---
06_2D-particle_in_a_box.ipynb | ###Markdown
Particle in a two-dimensional box
*Roberto Di Remigio*, *Luca Frediani*

After discussing and experimenting with the one-dimensional particle in a box model, we now move on to the two-dimensional case. The particle is now confined in a two-dimensional box with sides $L_x$ and $L_y$ long by the appropriate potential energy operator:
\begin{equation}
V(x, y) =
\begin{cases}
0 \quad\quad \text{if} \,\, 0\leq x \leq L_x \,\, \text{and} \,\, 0 \leq y \leq L_y \\
+\infty \quad\quad \text{otherwise}
\end{cases}
\end{equation}
Notice that $L_x$ can differ from $L_y$. In the general case, the particle can be confined inside a rectangular well. The geometry of the potential will determine the properties of the solutions.

How does the quantum particle behave? We need to find the **eigenfunctions** and **eigenvalues** of the **Hamiltonian operator**, that is, we have to solve the following partial differential equation:
\begin{equation}
-\frac{\hbar^2}{2m}\left(\frac{\mathrm{\partial}^2}{\mathrm{\partial}x^2} + \frac{\mathrm{\partial}^2}{\mathrm{\partial}y^2}\right)\psi_{nm}(x,y) = E_{nm}\psi_{nm}(x,y)
\end{equation}
with **boundary conditions**:
\begin{equation}
\begin{aligned}
\psi_{nm}(0, y) &= 0 \\
\psi_{nm}(L_x, y) &= 0
\end{aligned}
\end{equation}
and:
\begin{equation}
\begin{aligned}
\psi_{nm}(x, 0) &= 0 \\
\psi_{nm}(x, L_y) &= 0
\end{aligned}
\end{equation}
You will notice that not only do the eigenfunctions now depend on two **degrees of freedom** (the $x$ and $y$ coordinates), but they also carry **two** quantum numbers $n$ and $m$.

Given that the kinetic energy operator is **separable**, an acceptable form for the solutions is the product of one-dimensional states:
\begin{equation}
\psi_{nm}(x, y) = \psi_{n}(x)\psi_{m}(y)
\end{equation}
that is, states that are eigenfunctions of the one-dimensional particle in a box problem. A more explicit form is:
\begin{equation}
\psi_{nm}(x, y) = \sqrt{\frac{2}{L_x}}\sin\left(\frac{n\pi x}{L_x}\right) \sqrt{\frac{2}{L_y}}\sin\left(\frac{m\pi y}{L_y}\right) \quad \forall n, m \neq 0
\end{equation}
We can then derive the form of the eigenvalues by inserting this form of the wavefunction into the Schrödinger equation: each second derivative brings down a factor $-(n\pi/L_x)^2$ or $-(m\pi/L_y)^2$ from the corresponding sine, and using $\hbar = h/2\pi$ we obtain:
\begin{equation}
E_{nm} = \frac{h^2}{8M}\left(\frac{n^2}{L_x^2} + \frac{m^2}{L_y^2} \right)
\end{equation}
Of course, if the box is square the expression for the eigenvalues simplifies to:
\begin{equation}
E_{nm} = \frac{h^2}{8ML^2}\left(n^2 + m^2\right) \quad \forall n, m\neq 0
\end{equation}

Exercise 1: Normalization

The one-dimensional eigenfunctions $\psi_n(x)$ and $\psi_m(y)$ are orthonormal. What about the two-dimensional eigenfunctions $\psi_{nm}$? Are they orthogonal? Are they normalized?

Given a linear combination of two-dimensional normalized eigenfunctions, is it still normalized? That is, is
\begin{equation}
\Psi(x, y) = \psi_{11}(x, y) + \psi_{21}(x, y)
\end{equation}
normalized? If not, find the normalization constant.

Define also a function to calculate the value of the two-dimensional eigenfunctions on a grid of points. We will use this function to plot the eigenfunctions. The function should take the following arguments: the quantum numbers $n$ and $m$, the box lengths $L_x$ and $L_y$, and the NumPy arrays with the $x$ and $y$ values:
```Python
def eigenfunction2D(n, m, Lx, Ly, x, y):
    """ Normalized eigenfunction for the 2D particle in a box.

    n  -- the quantum number, relative to the x axis
    m  -- the quantum number, relative to the y axis
    Lx -- the size of the box on the x axis
    Ly -- the size of the box on the y axis
    x  -- the NumPy array with the x values
    y  -- the NumPy array with the y values
    """
```
Once this function is defined, we can obtain the respective probability distribution by taking its square.

**Hint** Notice that you can re-use the function for the one-dimensional particle in a box to write this one!
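For reference, here is one possible sketch of such a function (an added example, not the official solution; the name `eigenfunction1D` is a hypothetical helper standing in for your one-dimensional function):
```Python
import numpy as np

def eigenfunction1D(n, L, x):
    """Normalized 1D particle-in-a-box eigenfunction (hypothetical helper)."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def eigenfunction2D(n, m, Lx, Ly, x, y):
    """Normalized 2D eigenfunction evaluated on the grid defined by x and y."""
    # Separable product: psi_nm(x, y) = psi_n(x) * psi_m(y)
    return np.outer(eigenfunction1D(n, Lx, x), eigenfunction1D(m, Ly, y))
```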
Interval: 3D plots with `matplotlib`

Since these will be 3D plots, the `matplotlib` commands are slightly more complicated. The following commands will set up two 3D plots side by side. Put the plot of the eigenfunction on the left panel and the probability density on the right panel.
```Python
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
from scipy.constants import *
# make sure we see it on this notebook
%matplotlib inline
# Generate points on x and y axes
x = np.linspace(0, pi, 100)
y = np.linspace(0, pi, 100)
# Generate grid in the xy plane
X, Y = np.meshgrid(x, y)
# Tell matplotlib to create a figure with two panels
fig = plt.figure(figsize=plt.figaspect(0.5))
# Tell matplotlib to add axes for a plot on the left panel
ax = fig.add_subplot(1, 2, 1, projection='3d')
# Generate function values for the first plot
Z = (np.sin(X)*np.cos(Y)).T
max_val = np.max(Z)
ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3)
# Contour plots on the xy plane
cset = ax.contour(X, Y, Z, zdir='z', offset=-max_val)
ax.set_xlabel('X')
ax.set_xlim([0, pi])
ax.set_ylabel('Y')
ax.set_ylim([0, pi])
ax.set_zlabel('Z')
ax.set_zlim(-max_val, max_val)
# Tell matplotlib to add axes for a plot on the right panel
ax = fig.add_subplot(1, 2, 2, projection='3d')
Z1 = ((np.sin(X)*np.cos(Y)).T)**2
max_val = np.max(Z1)
ax.plot_surface(X, Y, Z1, rstride=8, cstride=8, alpha=0.3)
# Contour plots on the xy plane
cset = ax.contour(X, Y, Z1, zdir='z', offset=-max_val)
plt.show()
```
###Code
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from matplotlib import cm
import numpy as np
from scipy.constants import *
# make sure we see it on this notebook
%matplotlib inline
# Generate points on x and y axes
x = np.linspace(0, pi, 100)
y = np.linspace(0, pi, 100)
# Generate grid in the xy plane
X, Y = np.meshgrid(x, y)
# Tell matplotlib to create a figure with two panels
fig = plt.figure(figsize=plt.figaspect(0.5))
# Tell matplotlib to add axes for a plot on the left panel
ax = fig.add_subplot(1, 2, 1, projection='3d')
# Generate function values for the first plot
Z = (np.sin(X)*np.cos(Y)).T
max_val = np.max(Z)
ax.plot_surface(X, Y, Z, rstride=8, cstride=8, alpha=0.3)
# Contour plots on the xy plane
cset = ax.contour(X, Y, Z, zdir='z', offset=-max_val)
ax.set_xlabel('X')
ax.set_xlim([0, pi])
ax.set_ylabel('Y')
ax.set_ylim([0, pi])
ax.set_zlabel('Z')
ax.set_zlim(-max_val, max_val)
# Tell matplotlib to add axes for a plot on the right panel
ax = fig.add_subplot(1, 2, 2, projection='3d')
Z1 = ((np.sin(X)*np.cos(Y)).T)**2
max_val = np.max(Z1)
ax.plot_surface(X, Y, Z1, rstride=8, cstride=8, alpha=0.3)
# Contour plots on the xy plane
cset = ax.contour(X, Y, Z1, zdir='z', offset=-max_val)
###Output
_____no_output_____ |
notebooks/01_jax_introduction_exercises.ipynb | ###Markdown
JaxTon 💯 JAX exercises

This is Set 1: JAX Introduction (Exercises 1-10) of JaxTon: 💯 JAX exercises. You can find all the exercises and solutions on GitHub.

**Prerequisites**
* The configuration of jax should be updated as shown in the code snippet below in order to use TPUs
###Code
## install jax
!python3 -m pip install jax
## import packages
import jax
import os
import requests
## setup JAX to use TPUs if available
try:
url = 'http:' + os.environ['TPU_NAME'].split(':')[1] + ':8475/requestversion/tpu_driver_nightly'
resp = requests.post(url)
jax.config.FLAGS.jax_xla_backend = 'tpu_driver'
jax.config.FLAGS.jax_backend_target = os.environ['TPU_NAME']
except:
pass
jax.devices()
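## quick sanity check (an added sketch, not part of the original exercises):
## run a tiny computation and report which backend ended up being used
import jax.numpy as jnp
print(jnp.sum(jnp.arange(10)), jax.default_backend())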
###Output
_____no_output_____ |
notebooks/eln_configure.ipynb | ###Markdown
Configure connection to ELN

Authenticate AiiDAlab with an Electronic Laboratory Notebook (ELN)
###Code
%%javascript
IPython.OutputArea.prototype._should_scroll = function(lines) {
return false;
}
from aiidalab_widgets_base import ElnConfigureWidget
display(ElnConfigureWidget())
###Output
_____no_output_____ |
notebooks/audioDSP-formant_synthesis.ipynb | ###Markdown
DIGITAL AUDIO SIGNAL PROCESSING

Formant cascade synthesis
###Code
%matplotlib inline
import math
import numpy as np
import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile
import IPython.display as ipd
###Output
_____no_output_____
###Markdown
Description

This exercise studies the **model of the voice production mechanism** through the synthesis of vowel sounds. The idea is to generate a band-limited **impulse train** as the glottal excitation source, and then filter it successively through multiple **resonators** with frequencies and bandwidths corresponding to the different **formants** of a vowel. Finally, a high-pass filter is applied as the radiation model.

How to run the notebook

You can download and run the notebook locally on a computer, or you can run it in Google Colab using the following link: Run in Google Colab

Resonator

Below we define a function that implements a resonator filter. Study its input parameters.
###Code
def resonator(x, fs, res_freq, res_bw):
"""
second order difference equation digital resonator
Parameters
----------
x (numpy array) : input audio waveform
fs (int) : sampling frequency in Hz
res_freq (float) : resonator frequency in Hz
res_bw (float) : resonator bandwidth in Hz
Returns
-------
y (numpy array) : filtered audio waveform
"""
C = -math.exp(-2*math.pi*res_bw/fs)
B = 2*math.exp(-math.pi*res_bw/fs)*math.cos(2*math.pi*res_freq/fs)
A = 1-B-C
T = x.shape[0]
# add two initial null values to x
x = np.insert(x, 0, 0)
x = np.insert(x, 0, 0)
# output signal
y = np.zeros((T+2, 1))
# filtering difference equation
for ind in range(2, T+2):  # up to index T+1 inclusive, so the last input sample is filtered too
y[ind] = A*x[ind] + B*y[ind-1] + C*y[ind-2]
y = y[2:]
return y
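# Quick usage example (an added sketch): filter a short white-noise burst
# with a resonator centered at 500 Hz with a 50 Hz bandwidth
fs_demo = 8000
noise = np.random.randn(fs_demo)
filtered = resonator(noise, fs_demo, 500.0, 50.0)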
###Output
_____no_output_____
###Markdown
Formant synthesis

The following function implements the synthesis of a vowel sound using cascaded formants. Complete the code provided below following these steps (a possible completion is also sketched as comments after the function).
1. Complete the instructions needed to generate the pulse train (with samples at 0 and 1).
2. Complete the instructions needed to apply the cascaded resonators for the corresponding formants.
3. Complete the instructions needed to implement the high-pass filter of the radiation model as a first-order difference.
###Code
def formant_synthesis(fs=44100, f0=100, dur=1, vowel='a'):
"""
formant synthesizer
Parameters
----------
fs (int) : sampling frequency in Hz
f0 (float) : fundamental frequency in Hz
dur (float) : vowel duration in seconds
vowel (string) : character to set the vowel: 'a' 'e' 'i' 'o' 'u'
Returns
-------
y (numpy array) : synthetized audio waveform
"""
# period in samples
T0 = int(round(fs/f0))
# time instants
t = np.linspace(0, dur, fs*dur)
T = len(t)
# voicing source
vs = np.zeros((T, 1))
# impulse train
# vs[...] =
# low-pass resonator filter (res_freq = 0 Hz, res_bw = 4*f0 Hz)
vs = resonator(vs, fs, 0, 4*f0)
# amplitude gain in dB
vsGain = 35
# set amplitude gain
vs *= 10**(vsGain/20)
# formants definition
# number of formants for each vowel
num_formants = 4
# formants data
# freq1 bw1 freq2 bw2 freq3 bw3 freq4 bw4
# reference: http://www.sinfomed.org.ar/mains/temas/voctexto.htm
formants_data = {
'a': [830, 105, 1350, 106, 2450, 142, 3655, 197],
'e': [430, 75, 2120, 106, 2628, 140, 3610, 180],
'i': [290, 63, 2295, 103, 2915, 174, 3645, 124],
'o': [510, 83, 860, 105, 2480, 156, 3485, 170],
'u': [335, 80, 720, 112, 2380, 208, 3355, 150]
}
# select formants corresponding to given vowel
try:
formants_values = formants_data[vowel]
except KeyError as e:
print('Error: vowel string not found. Using \'a\' as input parameter. KeyError: ' + str(e))
formants_values = formants_data['a']
# cascade filters vocal tract simulation
y = vs
for idx in range(num_formants):
# y = resonator(...)
# radiation characteristic
# y =
# normalization
y = y*0.99/np.max(np.abs(y))
return y, vs, t
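# --- Possible completion (an added sketch only, not the official solution) ---
# 1. Impulse train (one unit sample every T0 samples):
#        vs[::T0] = 1.0
# 2. Cascade resonators (freq/bandwidth pairs stored consecutively in formants_values):
#        y = resonator(y, fs, formants_values[2*idx], formants_values[2*idx + 1])
# 3. Radiation model as a first-order difference (high-pass):
#        y = np.diff(y, axis=0, prepend=0.0)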
###Output
_____no_output_____
###Markdown
Testing the formant synthesis

The code provided below synthesizes a vowel with the function implemented above. It also plots the waveform of a frame of the excitation, as well as the waveform and the spectrum of a frame of the resulting signal. Run the code and analyze the following.
1. Note that the signal used as excitation for the bank of resonant filters is a band-limited pulse train. Is this consistent with the model presented in class? How should the bandwidth of the excitation change with the intensity of the signal?
2. Analyze the waveform of the resulting signal. Is it a periodic signal? What is its period?
3. Analyze the spectrum of the resulting signal. Can you clearly identify the location of the first two formants?

Formant synthesis
###Code
xv, vs, t = formant_synthesis(vowel='a')
###Output
_____no_output_____
###Markdown
Spectrum computation and plots
###Code
# sampling rate
sr = 44100
# fundamental frequency
f0 = 100
# duration
dur = 1
# window samples
N = 2**math.ceil(math.log2(round(4*sr/f0)))
# nfft (fft samples)
nfft = 2 * N
# maximum frequency
fmax = 5000
# frame indexes
tmid = round(dur/2*sr)
ind_ini = tmid - int(N/2)
ind_end = tmid + int(N/2)
# smoothing window
window = signal.windows.get_window('hann', N)
# signal frame
frame = xv[ind_ini:ind_end]
# windowed signal frame
frame_win = frame[:, 0] * window
# spectrum of the signal frame
Xv = np.fft.fft(frame_win, nfft)
# magnitude spectrum
magXv = np.abs(Xv)
# frequency values
f = np.fft.fftfreq(nfft) * sr
ind_fmx = np.argwhere(f > fmax)[0][0]
plt.figure(figsize=(12,12))
plt.subplot(6, 1, 1)
plt.plot(t[ind_ini:ind_end], vs[ind_ini:ind_end], 'k', label='voicing source')
plt.ylabel('Amplitude')
plt.legend()
plt.subplot(6, 1, 2)
plt.plot(t[ind_ini:ind_end], xv[ind_ini:ind_end], 'k', label='synthetic signal')
plt.ylabel('Amplitude')
plt.xlabel('time (s)')
plt.legend()
plt.tight_layout()
plt.figure(figsize=(12,6))
plt.plot(f[:ind_fmx], 20 * np.log10(magXv[:ind_fmx]), 'k', label='spectrum')
plt.legend()
plt.ylabel('Magnitude (dB)')
plt.xlabel('Frequency (Hz)')
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Vowel synthesis

The code provided below synthesizes the five vowels and saves an audio file for each one. It also plots the waveform and the spectrum of each audio signal. Run the code and follow these steps.
1. Evaluate the result of the synthesis by ear. Can you tell the sound of each vowel apart?
2. Compare the waveform of each vowel. Can you distinguish the differences produced by the different formants?
3. Compare the spectrum of each vowel. Can you clearly distinguish the differences in the location of the first two formants?
4. Consider the formant map presented in class. Does the location of the formants follow that characterization?
5. Evaluate the result of the synthesis by ear once more. What limitations do you identify in the synthesis that make the sound not very realistic?
###Code
# sampling rate
sr = 44100
# formants
vowels = ['a', 'e', 'i', 'o', 'u']
# list of numpy arrays (audio waveforms)
s = []
# test formant synthesis
for v in vowels:
xv, _, _ = formant_synthesis(vowel=v)
# save for ploting
s.append(xv)
# write audio file (44100, 16 bits)
wxv = xv * 32767
wavfile.write('./' + v +'.wav', sr, wxv.astype(np.int16))
###Output
_____no_output_____
###Markdown
Auditory evaluation of the synthesis
###Code
sr, data = wavfile.read('a.wav')
ipd.Audio(data, rate=sr)
sr, data = wavfile.read('e.wav')
ipd.Audio(data, rate=sr)
sr, data = wavfile.read('i.wav')
ipd.Audio(data, rate=sr)
sr, data = wavfile.read('o.wav')
ipd.Audio(data, rate=sr)
sr, data = wavfile.read('u.wav')
ipd.Audio(data, rate=sr)
###Output
_____no_output_____
###Markdown
Plot the waveforms
###Code
# number of samples to plot
ns = 4096
# number of vowels
num_vowels = len(vowels)
plt.figure(figsize=(12,12))
for index, v in enumerate(vowels):
xv = s[index]
plt.subplot(num_vowels, 1, index+1)
plt.plot(xv[:ns], 'k', label=v)
plt.legend()
plt.xlim([0, ns])
plt.ylabel('Amplitude')
plt.xlabel('Time (samples)')
###Output
_____no_output_____
###Markdown
Compute and plot the spectrum of a signal frame
###Code
# window samples
f0 = 100
N = 2**math.ceil(math.log2(round(4*sr/f0)))
# nfft (fft samples)
nfft = 2 * N
# maximum frequency
fmax = 5000
# duration
dur = 1
# frame indexes
tmid = round(dur/2*sr)
ind_ini = tmid - int(N/2)
ind_end = tmid + int(N/2)
# smoothing window
window = signal.windows.get_window('hann', N)
# list of numpy arrays (spectrums)
Xs = []
for index, v in enumerate(vowels):
# waveform
xv = s[index]
# signal frame
frame = xv[ind_ini:ind_end]
# windowed signal frame
frame_win = frame[:, 0] * window
# spectrum of the signal frame
Xv = np.fft.fft(frame_win, nfft)
# save spectrum
Xs.append(Xv)
# frequency values
f = np.fft.fftfreq(nfft) * sr
ind_fmx = np.argwhere(f > fmax)[0][0]
plt.figure(figsize=(12,24))
for index, v in enumerate(vowels):
# spectrum
Xv = Xs[index]
# magnitude spectrum
magX = np.abs(Xv)
plt.subplot(num_vowels, 1, index+1)
plt.plot(f[:ind_fmx], 20 * np.log10(magX[:ind_fmx]), 'k', label=v)
plt.legend()
plt.ylabel('Magnitude (dB)')
plt.xlabel('Frequency (Hz)')
plt.tight_layout()
###Output
_____no_output_____ |
classes/007_matrices_and_spark.ipynb | ###Markdown
Matrices

Representation

We will be operating on sparse matrices with the following representation in a distributed file, where each record has the form: (row, column, value). Therefore, this type of representation
###Code
data = [
(1,2,4),
(1,5,3),
(2,1,3),
(3,2,2),
(4,4,-1),
(5,1,1),
(5,5,2)]
data
###Output
_____no_output_____
###Markdown
Represents the following sparse matrix
```
 0  4  0  0  3
 3  0  0  0  0
 0  2  0  0  0
 0  0  0 -1  0
 1  0  0  0  2
```
Note that **the representation we have used is indexed from 1 onwards instead of from 0, as is done in most programming languages. This is something to keep in mind in the subsequent operations.**
###Code
matrixRDD = sc.parallelize(data,8);
matrixRDD.take(20)
###Output
_____no_output_____
###Markdown
Matrix-vector multiplication

We want to perform the following operation:
```
 0  4  0  0  3     1
 3  0  0  0  0     2
 0  2  0  0  0  *  3
 0  0  0 -1  0     4
 1  0  0  0  2     5
```
To perform the operation with the matrices we would multiply each row of the matrix by the vector. In the case of the sparse matrix this is equivalent to **multiplying each element of the sparse matrix by the corresponding element of the vector and accumulating by row.** To perform the operation we have to take the matrix from the (row, column, value) representation to (row, (column, value)).
###Code
# define the vector we will be multiplying by
vector = [1, 2, 3, 4, 5]
matrixPerRowRDD = matrixRDD.map(lambda x: (x[0],(x[1],x[2])))
matrixPerRowRDD.take(20)
###Output
_____no_output_____
###Markdown
Once we have it in the format we need, we can multiply each element of the matrix by the corresponding element of the vector. **Each element of the matrix tells us the column number (and that column number minus 1 gives us the index of the vector entry we have to multiply by).**
###Code
partialResultRDD = matrixPerRowRDD.map(lambda x: (x[0], vector[x[1][0]-1] * x[1][1]))
partialResultRDD.take(20)
###Output
_____no_output_____
###Markdown
These are all the partial values per row, which we have to aggregate by row to obtain the total.
###Code
resultPerRow = partialResultRDD.reduceByKey(lambda x,y: x+y)
resultPerRow.take(5)
###Output
_____no_output_____
###Markdown
Note that the result is expressed as (row, value); to take it to a vector representation we can do the following.
###Code
result = resultPerRow.map(lambda x: x[1])
result.take(5)
###Output
_____no_output_____
###Markdown
Matrix multiplication

Assuming the dimensions are compatible, we will multiply two sparse matrices, again defined as (row, column, value).
```
 1 2     5 6     19 22
 3 4  x  7 8  =  43 50
```
We have to note that in the case of sparse matrices
###Code
# matrix 1
m1 = [(1,1,1),
(1,2,2),
(2,1,3),
(2,2,4)]
# matrix 2
m2 = [(1,1,5),
(1,2,6),
(2,1,7),
(2,2,8)]
m1
m2
m1RDD = sc.parallelize(m1,8)
m2RDD = sc.parallelize(m2,8)
###Output
_____no_output_____
###Markdown
To compute the product of two matrices we must note that **the elements of the first column of matrix 1 (1 and 3) are always multiplied only by the elements of the first row of matrix 2 (5 and 6); therefore the strategy is to perform a join where the column of matrix 1 matches the row of matrix 2.**
###Code
# take matrix 1 to a per-column representation
r1 = m1RDD.map(lambda x: (x[1],(x[0],x[2])))
r1.take(20)
# take matrix 2 to a per-row representation
r2 = m2RDD.map(lambda x: (x[0],(x[1],x[2])))
r2.take(20)
###Output
_____no_output_____
###Markdown
Then, using join, we bring together the data that we need to process jointly.
###Code
rj = r1.join(r2)
rj.take(20)
###Output
_____no_output_____
###Markdown
We have to multiply the values and accumulate the result into the row and column number indicated in the records. For example, for the record ``(2, ((2, 4), (1, 7)))`` we have to multiply ``4 * 7`` and accumulate that result for row 2, column 1, obtaining ``((2,1), 28)``. The idea is then to key by row and column and accumulate with a reduce to obtain the final result.
###Code
rj2 = rj.map(lambda x:((x[1][0][0], x[1][1][0]), x[1][0][1] * x[1][1][1]))
rj2.take(20)
result = rj2.reduceByKey(lambda x,y: x+y)
result.take(20)
###Output
_____no_output_____
###Markdown
Keep in mind that after performing the operation the results are left in the **((row, column), value)** representation. If we wanted to take it back to the original sparse matrix representation, we could apply the following transformation.
###Code
final = result.map(lambda x: (x[0][0], x[0][1], x[1]))
final.take(20)
###Output
_____no_output_____ |
notebooks/2016-11-04(Study of w, symmetry, z_co shapes and meaning).ipynb | ###Markdown
W symmetry, z_co shapes and meaning

This is a notebook to remind myself why I do the calculations in the way that I am doing them. It should cover the following series of points:
* Why w is the way that it is, and why it is the first element in the dot product np.dot(w, o)
* How, in the dynamic equations, the z_co and p_co are built, and why it makes sense in light of the role of w.
* An example illustrating that this works properly

We start by loading the libraries as usual.
###Code
from __future__ import print_function
import pprint
import subprocess
import sys
sys.path.append('../')
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
from network import BCPNN, NetworkManager
from data_transformer import build_ortogonal_patterns
%matplotlib inline
matplotlib.rcParams.update({'font.size': 22})
np.set_printoptions(suppress=True, precision=2)
###Output
_____no_output_____
###Markdown
Git

Here we have the git machinery to run this in the original version when it was built.
###Code
run_old_version = False
if run_old_version:
hash_when_file_was_written = '78d59994a229080583cdcff5bfff3c28caf385ef'
hash_at_the_moment = subprocess.check_output(["git", 'rev-parse', 'HEAD']).strip()
print('Actual hash', hash_at_the_moment)
print('Hash of the commit used to run the simulation', hash_when_file_was_written)
subprocess.call(['git', 'checkout', hash_when_file_was_written])
###Output
_____no_output_____
###Markdown
How the dot product works
###Code
w = np.arange(9).reshape((3, 3))
x = np.ones(3)
result1 = np.dot(w, x)
result2 = np.dot(x, w)
print('w')
pprint.pprint(w)
print('x')
pprint.pprint(x)
print('dot(w, x)')
pprint.pprint(result1)
print('dot(x, w)')
pprint.pprint(result2)
###Output
w
array([[0, 1, 2],
[3, 4, 5],
[6, 7, 8]])
x
array([ 1., 1., 1.])
dot(w, x)
array([ 3., 12., 21.])
dot(x, w)
array([ 9., 12., 15.])
###Markdown
We want the result that sums the multiplications over the rows and therefore we chose np.dot(w, o). **In this context $w_{ij}$ represents the influence of $o_j$ on $o_i$, and that is how it should be read.**

How the coactivations work (Outer product)

In our calculations $w_{ij}$ depends on $zco_{ij}$, which is calculated with an outer product. Here we show how the outer product depends on the order of its arguments and justify our choice.
###Code
x = np.arange(3)
y = np.ones(3)
result1 = np.outer(x, y)
result2 = np.outer(y, x)
print('x')
pprint.pprint(x)
print('y')
pprint.pprint(y)
print('outer(x, y)')
pprint.pprint(result1)
print('outer(y, x)')
pprint.pprint(result2)
###Output
x
array([0, 1, 2])
y
array([ 1., 1., 1.])
outer(x, y)
array([[ 0., 0., 0.],
[ 1., 1., 1.],
[ 2., 2., 2.]])
outer(y, x)
array([[ 0., 1., 2.],
[ 0., 1., 2.],
[ 0., 1., 2.]])
###Markdown
If $r$ is the result, we can see that $r_{ij}= x_i y_j$ for the first result and that $r_{ij}= y_i x_j$ for the second result.

Running example

First we build the patterns and the network.
###Code
hypercolumns = 2
minicolumns = 10
n_patterns = 10 # Number of patterns
patterns_dic = build_ortogonal_patterns(hypercolumns, minicolumns)
patterns = list(patterns_dic.values())
patterns = patterns[:n_patterns]
# Build the network
tau_z_pre = 0.500
tau_z_post = 0.050
nn = BCPNN(hypercolumns, minicolumns, tau_z_post=tau_z_post, tau_z_pre=tau_z_pre)
###Output
_____no_output_____
###Markdown
Then we build the network manager
###Code
dt = 0.001
T_training = 1.0
time_training = np.arange(0, T_training, dt)
T_ground = 1.0
time_ground = np.arange(0, T_ground, dt)
values_to_save = ['o', 'z_pre', 'z_post', 'p_pre', 'p_post', 'p_co', 'z_co', 'w']
manager = NetworkManager(nn=nn, values_to_save=values_to_save)
###Output
_____no_output_____
###Markdown
Finally we train the network and extract the history.
###Code
repetitions = 3
resting_state = False
for i in range(repetitions):
print('repetitions', i)
for pattern in patterns:
nn.k = 1.0
manager.run_network(time=time_training, I=pattern)
nn.k = 0.0
if resting_state:
manager.run_network(time=time_ground)
history = manager.history
if resting_state:
T_total = n_patterns * repetitions * (T_training + T_ground)
else:
T_total = n_patterns * repetitions * T_training
total_time = np.arange(0, T_total, dt)
###Output
repetitions 0
repetitions 1
repetitions 2
###Markdown
Extract the quantities to plot
###Code
z_pre_hypercolum = history['z_pre'][..., :minicolumns]
z_post_hypercolum = history['z_post'][..., :minicolumns]
o_hypercolum = history['o'][..., :minicolumns]
p_pre_hypercolum = history['p_pre'][..., :minicolumns]
p_post_hypercolum = history['p_post'][..., :minicolumns]
p_co = history['p_co']
z_co = history['z_co']
w = history['w']
p_co01 = p_co[:, 0, 1]
p_co10 = p_co[:, 1, 0]
z_co01 = z_co[:, 0, 1]
z_co10 = z_co[:, 1, 0]
w01 = w[:, 0, 1]
w10 = w[:, 1, 0]
aux01 = p_co01 / (p_pre_hypercolum[:, 0] * p_post_hypercolum[:, 1])
aux10 = p_co10 / (p_pre_hypercolum[:, 1] * p_post_hypercolum[:, 0])
###Output
/home/heberto/miniconda/lib/python2.7/site-packages/ipykernel/__main__.py:20: RuntimeWarning: invalid value encountered in divide
/home/heberto/miniconda/lib/python2.7/site-packages/ipykernel/__main__.py:21: RuntimeWarning: invalid value encountered in divide
###Markdown
Plotting
###Code
import seaborn as sns
sns.set_context('notebook', font_scale=2.0)
cmap = matplotlib.cm.get_cmap('viridis')
traces_to_plot = [0, 1]
norm = matplotlib.colors.Normalize(vmin=0, vmax=len(traces_to_plot))
# Plot the traces
fig = plt.figure(figsize=(20, 15))
ax11 = fig.add_subplot(421)
ax12 = fig.add_subplot(422)
ax21 = fig.add_subplot(423)
ax22 = fig.add_subplot(424)
ax31 = fig.add_subplot(425)
ax32 = fig.add_subplot(426)
ax41 = fig.add_subplot(427)
ax42 = fig.add_subplot(428)
fig.tight_layout()
# fig.subplots_adjust(right=0.8)
for index in range(minicolumns):
# Plot ALL the activities
ax12.plot(total_time, o_hypercolum[:, index], label=str(index))
for index in traces_to_plot:
# Plot activities
ax11.plot(total_time, o_hypercolum[:, index], color=cmap(norm(index)), label=str(index))
# Plot the z post and pre traces in the same graph
ax21.plot(total_time, z_pre_hypercolum[:, index], color=cmap(norm(index)), label='pre ' + str(index))
ax21.plot(total_time, z_post_hypercolum[:, index], color=cmap(norm(index)), linestyle='--', label='post ' + str(index))
# Plot the pre and post probabilties in the same graph
ax22.plot(total_time, p_pre_hypercolum[:, index], color=cmap(norm(index)), label='pre ' + str(index))
ax22.plot(total_time, p_post_hypercolum[:, index], color=cmap(norm(index)), linestyle='--', label='post ' + str(index))
# Plot z_co and p_co in the same graph
ax31.plot(total_time, z_co01, label='zco_01')
ax31.plot(total_time, z_co10, label='zco_10')
# Plot the aux quantity
ax32.plot(total_time, aux01, label='aux01')
ax32.plot(total_time, aux10, label='aux10')
# Plot the coactivations probabilities
ax41.plot(total_time, p_co01, '-', label='pco_01')
ax41.plot(total_time, p_co10, '-',label='pco_10')
# Plot the weights
ax42.plot(total_time, w01, label='01')
ax42.plot(total_time, w10, label='10')
axes = fig.get_axes()
for ax in axes:
ax.set_xlim([0, T_total])
ax.legend()
ax11.set_ylim([-0.1, 1.1])
ax12.set_ylim([-0.1, 1.1])
ax21.set_ylim([-0.1, 1.1])
ax21.set_title('z-traces')
ax22.set_title('probabilities')
ax31.set_title('z_co')
ax32.set_title('Aux')
ax41.set_title('p_co')
ax42.set_title('w')
plt.show()
###Output
_____no_output_____
###Markdown
We observe that every time there is activity of 1 preceded by 0, zco_10 rises, which is exactly what we expected. Why? w10 is a measure of the effect of 0 on 1 and is proportional to z10.
###Code
sns.set_style("whitegrid", {'axes.grid' : False})
w = nn.w
aux_max = np.max(np.abs(w))
cmap = 'coolwarm'
fig = plt.figure(figsize=(16, 12))
ax1 = fig.add_subplot(111)
im1 = ax1.imshow(w, cmap=cmap, interpolation='None', vmin=-aux_max, vmax=aux_max)
divider = make_axes_locatable(ax1)
cax1 = divider.append_axes('right', size='5%', pad=0.05)
fig.colorbar(im1, ax=ax1, cax=cax1)
###Output
_____no_output_____
###Markdown
Git reset
###Code
if run_old_version:
subprocess.call(['git', 'checkout', 'master'])
###Output
_____no_output_____ |
ADRNX_magcycle_tutorial.ipynb | ###Markdown
KICP ADR CMB SUMMER SCHOOL NOTEBOOK (RBT/AT/NK)

1. Initialization *(will be different depending on your cryostat and software)*
###Code
# Various imports
import serial, threading, sys, time, os, datetime, warnings, numpy as np, matplotlib.pyplot as plt
from IPython.display import display, clear_output
from ipywidgets import FloatProgress, HTML, VBox
from timeit import default_timer as timer
# We will be using our homemade library to talk to ADR via serial
sys.path.append('/home/pi/Desktop/ADRNX/SRS')
import SRS
import Switchduino as sd
# Initialize mainframe controller (with config file, suppressing debug statements)
mf = SRS.SIM900.SIM900('/home/pi/Desktop/ADRNX/SRS/config.yaml',debug=False)
mf.init_mainframe()
# Create module objects (coresponding to those in the rack) and add them to mainframe
s921 = SRS.SIM921.SIM921(mf,name='s921',debug=False)
s922 = SRS.SIM922.SIM922(mf,name='s922',debug=False)
s960 = SRS.SIM960.SIM960(mf,name='s960',debug=False)
s970 = SRS.SIM970.SIM970(mf,name='s970',debug=False)
mf.register_and_init_mods([s921, s922, s960, s970])
# Finally, start communication threads (this will enable all the backend virtual queues and messaging)
mf.start_comm_threads()
def make_bar(minv=0,maxv=1,prefix='Voltage: ',postfix='/2.0V',style='info'):
progress = FloatProgress(min=minv,max=maxv,step=(maxv-minv)/1000)
progress.bar_style = style
label = HTML()
box = VBox(children=[label, progress])
display(box)
label.value = prefix+''+postfix
return [progress, label, box,prefix,postfix]
def update_bar(bar,val,fmt='{:.3f}'):
bar[0].value = val
bar[1].value = bar[3]+fmt.format(val)+bar[4]
if val == bar[0].max:
bar[0].bar_style = 'success'
def check_for_exceptions():
    # If a module has posted an exception on its queue, report it and abort.
    # Note: the RuntimeError must be raised outside the try block, otherwise
    # the bare except (meant only for the empty-queue case) would swallow it.
    for mod, name in ((s960, 's960'), (s921, 's921')):
        try:
            exc = mod.excq.get_nowait()
        except:
            continue
        print(exc)
        raise RuntimeError("Got {} exc".format(name))
###Output
_____no_output_____
###Markdown
2a. Preparation part 1

To make sure you don't quench the magnet and kill your PhD career, it is prudent to do a lot of checks before cycling. Your KEPCO voltage follower power supply output should be 0, since we are playing with sense resistors here. To begin, cycle the heat switch (to relieve any mechanical stresses).
###Code
sd.heatswitch_open()
sd.heatswitch_close()
###Output
Heat Switch opening
4K-GGG LED OFF
4K-FAA LED OFF
GGG-FAA LED OFF
Heat Switch is opened
Heat Switch closing
4K-GGG LED ON
4K-FAA LED ON
GGG-FAA LED ON
Heat Switch is closed
###Markdown
Then make sure your relay is in the right mode (1 Ohm resistor aka magcycle mode)
###Code
if abs(s960.get_output_voltage())<0.01: sd.relay_switch_magcycle()
else: raise Exception()
###Output
_____no_output_____
###Markdown
2b. Preparation part 2

Now do a bunch of checks on your voltage controller.
###Code
# Check actual controller output - it might not be 0 for legitimate reasons, so just warn
if abs(s960.get_output_voltage())>0.01: warnings.warn("Current is not zero - BE CAREFUL")
# If we are in manual mode, warn if current output is not zero, otherwise error out (we should not be PIDing atm)
if s960.get_PID_mode() == 0:
if abs(s960.get_manual_output())>0.01: warnings.warn("Manual output is not zero - BE CAREFUL")
else:
raise RuntimeError("You should exit PID mode before running this!")
# These are not as important
if s960.get_offset_onoff() != 0:
if s960.get_offset != 0.0: raise RuntimeError("Running with offsets screws things up - who enabled them?")
if s960.get_ramp_state() != 0: raise RuntimeError("You are ramping...that should not happen...like ever!")
# Set up things the way we want, some are redundant...just in case of random undergrads and other acts of god
s960.set_PID_mode(0); assert s960.get_PID_mode() == 0
s960.set_offset(0); assert s960.get_offset() == 0
s960.set_offset_onoff(0); assert s960.get_offset_onoff() == 0
s960.set_setpoint(0); assert s960.get_setpoint() == 0
s960.set_setpoint_mode(0); assert s960.get_setpoint_mode() == 0
s960.set_ramp_onoff(0); assert s960.get_ramp_onoff() == 0
# Setting limits both in SIM960 and also in code to prevent excess voltages
ul = 5.5
s960.set_upper_lim(ul-0.1); assert s960.get_upper_lim() == ul-0.1
s960.manlim_high = ul
ll = -0.1
s960.set_lower_lim(-0.1); assert s960.get_lower_lim() == -0.1
s960.manlim_low = -0.1
#Finally, let's see what the settings are
s960.tellastory()
###Output
My name is s960 (ID: Stanford_Research_Systems,SIM960,s/n014669,ver2.17)
Status is 0b10000
Settings:
PID mode is 0 (MAN 0, PID 1)
Setpt source is 0 (INT 0, EXT 1)
RAMP mode is 0 (OFF 0, ON 1)
RAMP rate is 0.01 (V/s)
RAMP status is 0 (IDLE 0, PENDING 1, RAMPING 2, PAUSED 3)
PID:
P term is +2.00E+01 [state: 1 (OFF 0, ON 1)]
I term is +2.00E-01 [state: 1 (OFF 0, ON 1)]
D term is +1.00E-05 [state: 1 (OFF 0, ON 1)]
Setpoints:
Manual output is 0.0 (V)
PID setpoint is 0.0 (V)
Offset is 0.0 (V) [state: 0 (OFF 0, ON 1)]
Limits are -0.1<->5.4 (V)
Voltages:
Output V is +0.000781
Measure V is +2.373784
Setpoint V is -0.000087
###Markdown
**If you are happy about current settings, proceed to cycling (if outputs are 0, you can now turn on KEPCO if not already on)** 3. Mag up*First cycle stage is ramping up current while keeping back emf sufficiently low. Final target depends on your magnet spec (~9A max in our case), as well as the desired temperature/hold time. There are complicated tradeoffs - while this part runs, take a look at HPD manual for some interesting curves.*
###Code
target = 8.2 #A
maxemf = 0.12#V
vstep = 0.005 #Max slew rate in V/s
target=target*0.55 #convert to V
# Define some empty lists for logging
emf_series, vmeas_series, times_emf, times_vmeas = ([] for i in range(4))
tempFAA_series, times_tempFAA = ([] for i in range(2))
# Get current output twice - if this is somehow wrong, things will go very bad
vout = s960.get_manual_output()
vout2 = s960.get_manual_output()
assert vout == vout2
# Can go either direction
if vout > target:
direction = 0 #down
elif vout == target:
warnings.warn("Target same as current value")
direction = 1
else:
direction = 1 #up
delta = abs(vout - target)
# Flush communications
mf.clear_virtual_queues()
mf.clear_hw_queues()
# Get some pretty bars
barv = make_bar(min(target,vout),max(target,vout),prefix='Measured output: ',postfix='V',style='warning')
baremf = make_bar(0,300,prefix='Back EMF: ',postfix='mV',style='danger')
bart = make_bar(0,3.5,prefix='FAA: ',postfix='K',style='info')
print("Starting run from {} to {} (dir:{}) at {} with emf limit of {}".format(vout,target,direction,vstep,maxemf))
start = timer()
try:
# Start streaming EMF channel data
s970.start_data_stream(SRS.SIM970.CHANNELS.MAGEMF)
# And also temperatures
s921.start_data_stream()
print("Entering main loop")
while(abs(vout - target) > 0.004):
loop_start = timer()
# Loop until backemf is low enough, but at least once (poor mans do-while loop)
while True:
gotevent = s970.await_next_event(tm=1.0,clear_before=True,clear_after=True)
if not gotevent:
raise RuntimeError("Did not get voltage in time - aborting")
emf = abs(s970.lastvalues[SRS.SIM970.CHANNELS.MAGEMF])
emf_series.append(emf)
times_emf.append(timer()-start)
update_bar(baremf,emf*1000)
if abs(emf) < abs(maxemf): break
# Set new manual voltage
if (direction):
if (vout + vstep < target): vnew = vout + vstep
else: vnew = target
else:
if (vout - vstep > target): vnew = vout - vstep
else: vnew = target
s960.set_manual_output(vnew)
time.sleep(0.2)
# Check that new manual voltage is as intended (setting and actual output):
try:
vout_chk = s960.get_manual_output()
time.sleep(0.2)
vmeas_chk = s960.get_output_voltage()
time.sleep(0.2)
except Exception as e:
raise RuntimeError("Could not get s960 settings - ABORTING")
# Grab new temperature reading
if s921.await_next_event(tm=1.1,clear_before=False,clear_after=True):
temp = s921.lastvalues[1]
tempFAA_series.append(temp)
times_tempFAA.append(s921.lastdatatimes[1]-start)
update_bar(bart,temp)
else:
print("Got not temp data...weird...")
temp = 0
print("{:05.2f}% | VMEAS: {:.5f} | VRESP: {:.5f} | VOLD: {:.5f} | VNEW:{:.5f} | EMF: {:.6f} | TEMP: {:.6f}".format(\
100-abs(vout - target)*100/delta,vmeas_chk,vout_chk,vout,vnew,emf,temp))
vout = vnew; vmeas_series.append(vmeas_chk); times_vmeas.append(timer()-start)
update_bar(barv,vout_chk)
# Error checking
check_for_exceptions()
assert abs(vout_chk - vnew) < 0.000001 #rounding of floats a problem
assert abs(vmeas_chk - vnew) < 0.02
# Wait remainder of 1 second
while(timer()-loop_start < 1.0): time.sleep(0.05)
print("MAGCYCLE DONE - took {:2f} minutes".format((timer()-start)/60))
except Exception as e:
print(e)
finally:
# Stop data and clean up
s970.stop_data_stream()
s921.stop_data_stream()
time.sleep(0.2)
mf.clear_virtual_queues()
###Output
_____no_output_____
###Markdown
Let's plot our ramp!
###Code
%matplotlib notebook
fig,ax1 = plt.subplots()
l1, = ax1.plot(times_emf,np.array(emf_series)*1000,'-r',label='EMF')
ax1.set_ylabel("EMF (mV)")
ax1.set_xlabel("Time (s)")
ax2 = ax1.twinx()
l2, = ax2.plot(times_vmeas,np.array(vmeas_series)/0.5464,'-b',label='IOUT')
l3, = ax2.plot(times_tempFAA,tempFAA_series,'-g',label='FAA')
ax2.set_ylabel("OUTPUT (A) and TEMPERATURE (K)")
plt.legend([l1,l2,l3], [l.get_label() for l in [l1,l2,l3]])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
3. Soak

To allow for complete thermalization and spin alignment, the field is held constant. Depending on who you ask, anywhere from 5 minutes to multiple hours is sufficient. There are strongly diminishing returns, so we only hold for 10 minutes this time.
###Code
# time.sleep(10*60)
# j/k
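# A minimal sketch of an actual soak wait (an added example; assumes the progress-bar helpers defined above):
# soak_minutes = 10
# bar_soak = make_bar(0, soak_minutes*60, prefix='Soak time: ', postfix=' s', style='info')
# soak_start = timer()
# while timer() - soak_start < soak_minutes*60:
#     update_bar(bar_soak, timer() - soak_start)
#     time.sleep(1.0)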
###Output
_____no_output_____
###Markdown
4. DeMag

Now comes the fun part - we first open the heat switch to decouple the mK stage from 4K.
###Code
sd.heatswitch_open() # OPEN HS!!!
###Output
Heat Switch opening
4K-GGG LED ON
4K-FAA LED OFF
GGG-FAA LED OFF
Heat Switch almost open - this should be ok and will probably go away at lower currents
###Markdown
And then start a loop similar to what we had before. Physically, the salt pill will cool down the mK stages as the entropy of the magnetic moments increases (and the thermal one decreases). This is only possible due to the quasi-adiabatic, constant-entropy nature of this process, and so the demag must be done slowly.
###Code
target = 0.0 #A
maxemf = 0.120 #V
vstep = 0.005 #Max slew rate in V/s
target=target*0.55 #convert to V
# Define some empty lists for logging
emf_series2, vmeas_series2, times_emf2, times_vmeas2 = ([] for i in range(4))
tempFAA_series2, times_tempFAA2 = ([] for i in range(2))
# Get current output twice - if this is somehow wrong, things will go very bad
vout = s960.get_manual_output()
vout2 = s960.get_manual_output()
assert vout == vout2
# Can go either direction
if vout > target:
direction = 0 #down
elif vout == target:
warnings.warn("Target same as current value")
direction = 1
else:
direction = 1 #up
delta = abs(vout - target)
# Flush communications
mf.clear_virtual_queues()
mf.clear_hw_queues()
# Get some pretty bars
barv = make_bar(min(target,vout),max(target,vout),prefix='Measured output: ',postfix='V',style='warning')
baremf = make_bar(0,300,prefix='Back EMF: ',postfix='mV',style='danger')
bart = make_bar(0,3,prefix='FAA: ',postfix='K',style='info')
print("Starting run from {} to {} (dir:{}) at {} with emf limit of {}".format(vout,target,direction,vstep,maxemf))
start = timer()
try:
# Start streaming EMF channel data
s970.start_data_stream(SRS.SIM970.CHANNELS.MAGEMF)
# And also temperatures
s921.start_data_stream()
while(abs(vout - target) > 0.0):
loop_start = timer()
# Loop until backemf is low enough, but at least once (poor mans do-while loop)
while True:
gotevent = s970.await_next_event(tm=1.0,clear_before=True,clear_after=True)
if not gotevent:
raise RuntimeError("Did not get voltage in time - aborting")
emf = abs(s970.lastvalues[SRS.SIM970.CHANNELS.MAGEMF])
emf_series2.append(emf)
times_emf2.append(timer()-start)
update_bar(baremf,emf*1000)
if abs(emf) < abs(maxemf): break
# Set new manual voltage
if (direction):
if (vout + vstep < target): vnew = vout + vstep
else: vnew = target
else:
if (vout - vstep > target): vnew = vout - vstep
else: vnew = target
s960.set_manual_output(vnew)
time.sleep(0.2)
# Check that new manual voltage is as intended (setting and actual output)
# Sometimes SRS goes crazy, so we try two times
attempts = 0
while attempts < 2:
try:
vout_chk = s960.get_manual_output()
time.sleep(0.2)
vmeas_chk = s960.get_output_voltage()
time.sleep(0.2)
break
except Exception as e:
warnings.warn(e)
attempts += 1
if attempts >=2: raise RuntimeError("Could not get s960 settings - ABORTING")
# Grab new temperature reading
if s921.await_next_event(tm=1.1,clear_before=False,clear_after=True):
temp = s921.lastvalues[1]
tempFAA_series2.append(temp)
times_tempFAA2.append(s921.lastdatatimes[1]-start)
update_bar(bart,temp)
else:
print("Got not temp data...weird...")
temp = 0
print("{:05.2f}% | VMEAS: {:.5f} | VRESP: {:.5f} | VOLD: {:.5f} | VNEW:{:.5f} | EMF: {:.6f} | TEMP: {:.6f}".format(\
100-abs(vout - target)*100/delta,vmeas_chk,vout_chk,vout,vnew,emf,temp))
vout = vnew; vmeas_series2.append(vmeas_chk); times_vmeas2.append(timer()-start)
update_bar(barv,vout_chk)
# Error checking
check_for_exceptions()
assert abs(vout_chk - vnew) < 0.000001 #rounding of floats a problem
assert abs(vmeas_chk - vnew) < 0.02
# Wait remainder of 1 second
while(timer()-loop_start < 1.0): time.sleep(0.05)
print("DONE - took {:2f} minutes".format((timer()-start)/60))
finally:
# Stop data and clean up
s970.stop_data_stream()
s921.stop_data_stream()
time.sleep(0.2)
mf.clear_virtual_queues()
###Output
_____no_output_____
###Markdown
Let's plot our ramp!
###Code
%matplotlib notebook
fig,ax1 = plt.subplots()
l1, = ax1.plot(times_emf2,np.array(emf_series2)*1000,'-r',label='EMF')
ax1.set_ylabel("EMF (mV)")
ax1.set_xlabel("Time (s)")
ax2 = ax1.twinx()
l2, = ax2.plot(times_vmeas2,np.array(vmeas_series2)/0.5464,'-b',label='IOUT')
l3, = ax2.plot(times_tempFAA2,tempFAA_series2,'-g',label='FAA')
ax2.set_ylabel("OUTPUT (A) and TEMPERATURE (K)")
plt.legend([l1,l2,l3], [l.get_label() for l in [l1,l2,l3]])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
You are done with the cooldown part - proceed to other data taking.

Debug things - not too useful for ADR learning, but you can look if you want to...
###Code
mf.sim900ser.setBreak(0.5)
mf.reset()
mf.reset_serial_module(6)
s921.start_data_stream()
s921.await_next_event(tm=2.5)
s921.stop_data_stream()
###Output
_____no_output_____ |
pymaceuticals_matplotlib.ipynb | ###Markdown
Observations and Insights

- Ceftamin and Infubinol have the 2 highest median final tumor volumes of the targeted drug treatments, indicating they are less efficient.
- On Capomulin, there is a very clear correlation between average tumor volume and mouse weight.
- As seen on the Tumor Volume vs Timepoint line graph, drug treatment takes 4 time units to show an effect, at which point the tumor volume drops. This cycle repeats: the volume of the tumor increases for another 4 time units until it experiences another drop.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import scipy.stats as st
# Study data files
mouse_metadata_path = "data/Mouse_metadata.csv"
study_results_path = "data/Study_results.csv"
# Read the mouse data and the study results
mouse_metadata = pd.read_csv(mouse_metadata_path)
study_results = pd.read_csv(study_results_path)
# Combine the data into a single dataset
combined_data_df = mouse_metadata.merge(study_results, on="Mouse ID", how="left", sort="False")
# Display the data table for preview
combined_data_df.head()
# Checking the number of mice.
mouse_count = combined_data_df["Mouse ID"].count()
mouse_count
# Getting the duplicate mice by ID number that shows up for Mouse ID and Timepoint.
duplicate_mice = combined_data_df[combined_data_df.duplicated(["Mouse ID", "Timepoint"])]
duplicate_mice
# Optional: Get all the data for the duplicate mouse ID.
all_duplicate_mice = combined_data_df[combined_data_df.duplicated(["Mouse ID"])]
all_duplicate_mice
# Create a clean DataFrame by dropping the duplicate mouse by its ID.
clean_df = combined_data_df.drop_duplicates("Mouse ID")
clean_df
# Checking the number of mice in the clean DataFrame.
mouse_clean_count = clean_df["Mouse ID"].count()
mouse_clean_count
###Output
_____no_output_____
###Markdown
Summary Statistics
###Code
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
# Use groupby and summary statistical methods to calculate the following properties of each drug regimen:
# mean, median, variance, standard deviation, and SEM of the tumor volume.
# Assemble the resulting series into a single summary dataframe.
mean = combined_data_df.groupby("Drug Regimen")['Tumor Volume (mm3)'].mean()
median = combined_data_df.groupby("Drug Regimen")['Tumor Volume (mm3)'].median()
variance = combined_data_df.groupby("Drug Regimen")['Tumor Volume (mm3)'].var()
std_dev = combined_data_df.groupby("Drug Regimen")['Tumor Volume (mm3)'].std()
sem = combined_data_df.groupby("Drug Regimen")['Tumor Volume (mm3)'].sem()
# Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen
summary_df = pd.DataFrame({"Mean":mean, "Median": median, "Variance": variance, "Standard Deviation": std_dev, "SEM": sem})
summary_df
# Using the aggregation method, produce the same summary statistics in a single line
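# One possible single-line approach using agg() (an added sketch):
summary_agg_df = combined_data_df.groupby("Drug Regimen")["Tumor Volume (mm3)"].agg(["mean", "median", "var", "std", "sem"])
summary_agg_df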
###Output
_____no_output_____
###Markdown
Bar and Pie Charts
###Code
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pandas.
drug_data_points = combined_data_df.groupby(["Drug Regimen"]).count()["Mouse ID"]
drug_data_points.plot(kind="bar")
plt.title("Drug Treatments")
plt.xlabel("Drug Regimen")
plt.ylabel("Mouse Count")
# Generate a bar plot showing the total number of unique mice tested on each drug regimen using pyplot.
drug_list = summary_df.index.tolist()
drug_list
drug_count = (combined_data_df.groupby(["Drug Regimen"])["Age_months"].count()).tolist()
drug_count
x_axis = np.arange(len(drug_count))
x_axis = drug_list
plt.bar(x_axis, drug_count, align="center")
plt.title("Drug Treatments")
plt.xlabel("Drug Regimen")
plt.ylabel("Mouse Count")
plt.xticks(rotation='vertical')
# Generate a pie plot showing the distribution of female versus male mice using pandas
gender_df = pd.DataFrame(combined_data_df.groupby(["Sex"]).count()).reset_index()
gender_df = gender_df[["Sex","Mouse ID"]]
gender_df.head()
gender_df.plot(kind="pie", y = "Mouse ID", autopct='%1.1f%%',
startangle=190, labels=gender_df["Sex"], legend = False)
plt.title("Male vs Female Mice Percentage")
plt.ylabel("")
# Generate a pie plot showing the distribution of female versus male mice using pyplot
gender_count = (combined_data_df.groupby(["Sex"])["Age_months"].count()).tolist()
gender_count
labels = ["Females", "Males"]
plt.pie(gender_count, labels=labels, autopct="%1.1f%%", startangle=190)
plt.axis("equal")
plt.title("Male vs Female Mice Percentage")
###Output
_____no_output_____
###Markdown
Quartiles, Outliers and Boxplots
###Code
# Calculate the final tumor volume of each mouse across four of the treatment regimens:
# Capomulin, Ramicane, Infubinol, and Ceftamin
final_tumor = combined_data_df[combined_data_df["Drug Regimen"].isin(["Capomulin", "Ramicane", "Infubinol", "Ceftamin"])]
# Start by getting the last (greatest) timepoint for each mouse
final_time = final_tumor.sort_values(["Timepoint"], ascending=True)
final_time
final_time_vol = final_time.groupby(['Drug Regimen', 'Mouse ID']).last()['Tumor Volume (mm3)']
final_time_vol
# Merge this group df with the original dataframe to get the tumor volume at the last timepoint
final_merge_df = combined_data_df.merge(final_time_vol, on="Mouse ID", how="left", sort="False")
final_merge_df.head()
# Put treatments into a list for for loop (and later for plot labels)
treatments = ['Capomulin', 'Ramicane', 'Infubinol','Ceftamin']
# Create empty list to fill with tumor vol data (for plotting)
final_df = final_merge_df.reset_index()
tumor_list = final_df.groupby('Drug Regimen')['Tumor Volume (mm3)_y'].apply(list)
tumor_list_df = pd.DataFrame(tumor_list)
tumor_list_df = tumor_list_df.reindex(treatments)
tumor_vols = [vol for vol in tumor_list_df['Tumor Volume (mm3)_y']]
# Calculate the IQR and quantitatively determine if there are any potential outliers.
tumor_capo = final_df[final_df["Drug Regimen"].isin(["Capomulin"])]
tumor_capo.head().reset_index()
capo_tumor = tumor_capo.sort_values(["Tumor Volume (mm3)_y"], ascending=True).reset_index()
capo_tumor = capo_tumor["Tumor Volume (mm3)_y"]
capo_tumor
# Locate the rows which contain mice on each drug and get the tumor volumes
# add subset
# Determine outliers using upper and lower bounds
quartiles = capo_tumor.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq - lowerq
print(f"The lower quartile of Capomulin final tumor volume is: {lowerq}")
print(f"The upper quartile of Capomulin final tumor volume is: {upperq}")
print(f"The interquartile range of Capomulin final tumor volume is: {iqr}")
print(f"The median of Capomulin final tumor volume is: {quartiles[0.5]}")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
tumor_ram = final_df[final_df["Drug Regimen"].isin(["Ramicane"])]
tumor_ram.head().reset_index()
ram_tumor = tumor_ram.sort_values(["Tumor Volume (mm3)_y"], ascending=True).reset_index()
ram_tumor = ram_tumor["Tumor Volume (mm3)_y"]
quartiles = ram_tumor.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq - lowerq
print(f"The lower quartile of Ramicane final tumor volume is: {lowerq}")
print(f"The upper quartile of Ramicane final tumor volume is: {upperq}")
print(f"The interquartile range of Ramicane final tumor volume is: {iqr}")
print(f"The median of Ramicane final tumor volume is: {quartiles[0.5]}")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
tumor_inf = final_df[final_df["Drug Regimen"].isin(["Infubinol"])]
tumor_inf.head().reset_index()
inf_tumor = tumor_inf.sort_values(["Tumor Volume (mm3)_y"], ascending=True).reset_index()
inf_tumor = inf_tumor["Tumor Volume (mm3)_y"]
quartiles = inf_tumor.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq - lowerq
print(f"The lower quartile of Infubinol final tumor volume is: {lowerq}")
print(f"The upper quartile of Infubinol final tumor volume is: {upperq}")
print(f"The interquartile range of Infubinol final tumor volume is: {iqr}")
print(f"The median of Infubinol final tumor volume is: {quartiles[0.5]}")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
tumor_cef = final_df[final_df["Drug Regimen"].isin(["Ceftamin"])]
tumor_cef.head().reset_index()
cef_tumor = tumor_cef.sort_values(["Tumor Volume (mm3)_y"], ascending=True).reset_index()
cef_tumor = cef_tumor["Tumor Volume (mm3)_y"]
quartiles = cef_tumor.quantile([.25,.5,.75])
lowerq = quartiles[0.25]
upperq = quartiles[0.75]
iqr = upperq - lowerq
print(f"The lower quartile of Ceftamin final tumor volume is: {lowerq}")
print(f"The upper quartile of Ceftamin final tumor volume is: {upperq}")
print(f"The interquartile range of Ceftamin final tumor volume is: {iqr}")
print(f"The median of Ceftamin final tumor volume is: {quartiles[0.5]}")
lower_bound = lowerq - (1.5*iqr)
upper_bound = upperq + (1.5*iqr)
print(f"Values below {lower_bound} could be outliers.")
print(f"Values above {upper_bound} could be outliers.")
# Generate a box plot of the final tumor volume of each mouse across four regimens of interest
plt.boxplot(tumor_vols, labels=treatments)
plt.ylim(10, 80)
plt.title("Final Tumor Volume per Treatment")
plt.ylabel("Tumor Volume(mm3)")
plt.show()
###Output
_____no_output_____
###Markdown
Line and Scatter Plots
###Code
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin
time_vs_tumor = combined_data_df[combined_data_df["Mouse ID"].isin(["j119"])]
time_vs_tumor
time_vs_tumor_data = time_vs_tumor[["Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
time_vs_tumor_data
line_plot_df = time_vs_tumor_data.reset_index()
line_plot_df
line_plot_final = line_plot_df[["Mouse ID", "Timepoint", "Tumor Volume (mm3)"]]
line_plot_final
lines = line_plot_final.plot.line()
plt.title("Tumor Volume vs Time")
plt.xlabel("Time")
plt.ylabel("Tumor Volume (mm3)")
# Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen
capomulin_scatter = combined_data_df[combined_data_df["Drug Regimen"].isin(["Capomulin"])]
capomulin_scatter_df = combined_data_df[["Mouse ID","Weight (g)", "Tumor Volume (mm3)"]]
capomulin_scatter_plot = capomulin_scatter.reset_index()
capomulin_sorted = capomulin_scatter_plot.sort_values(["Weight (g)"], ascending=True)
capomulin_grouped_weight = capomulin_scatter_plot.groupby("Weight (g)")["Tumor Volume (mm3)"].mean()
capomulin_plot = pd.DataFrame(capomulin_grouped_weight).reset_index()
capomulin_scatter = capomulin_plot.plot(kind='scatter', x='Weight (g)', y='Tumor Volume (mm3)', grid = True, )
plt.title("Average Tumor Volume vs Mouse Weight on Capomulin Treatment")
###Output
_____no_output_____
###Markdown
Correlation and Regression
###Code
# Calculate the correlation coefficient and linear regression model
# for mouse weight and average tumor volume for the Capomulin regimen
weight = capomulin_plot['Weight (g)']
tumor = capomulin_plot['Tumor Volume (mm3)']
correlation = st.pearsonr(weight,tumor)
round(correlation[0],2)
from scipy.stats import linregress
x_values = capomulin_plot['Weight (g)']
y_values = capomulin_plot['Tumor Volume (mm3)']
(slope, intercept, rvalue, pvalue, stderr) = linregress(x_values, y_values)
regress_values = x_values * slope + intercept
line_eq = "y = " + str(round(slope,2)) + "x + " + str(round(intercept,2))
plt.scatter(x_values,y_values)
plt.plot(x_values,regress_values,"r-")
plt.annotate(line_eq,(6,10),fontsize=15,color="red")
plt.title("Average Tumor Volume vs Mouse Weight on Capomulin Treatment")
plt.xlabel('Mouse Weight (g)')
plt.ylabel('Average Tumor Volume (mm3)')
plt.show()
print(f"The r-squared is: {rvalue**2}")
###Output
The r-squared is: 0.9034966277438599
|
Galaxy_Rotation/21 cm line kinematics-NO_PHOTUTILS.ipynb | ###Markdown
Measuring Rotation with 21 cm

The hydrogen 21 cm line is a useful tool for determining the kinematics of gas in a galaxy. As hydrogen is the most prevalent element in our universe, there will be lots of hydrogen in every galaxy. It fills every phase of the ISM and is more evenly distributed than the stars. The 21 cm line is preferred over optical hydrogen emission lines as it does not suffer from extinction caused by dust. Therefore, the 21 cm line can be used to map a galaxy in its entirety. The Doppler shift of this line in different parts of the galaxy can then be used to determine the rotation across the radius of the galaxy.

For this project, you will work with a 'spectral cube' of the galaxy NGC7331, pictured below. I chose this galaxy for a specific reason, to do with measuring rotation. What do you think that reason is? Okay, we can get started! Let's load some programs first.
###Code
from astropy.io import fits
import matplotlib.pyplot as plt
import numpy as np
from matplotlib import colors
from matplotlib.patches import Ellipse
def elip_ap(gal_xc, gal_yc, width, height, angle, data):
    """Sum the pixel values of `data` that fall inside an elliptical aperture
    centred on (gal_xc, gal_yc) with the given width, height and position angle (degrees)."""
    total = 0.0
    cos_angle = np.cos(np.radians(180. - angle))
    sin_angle = np.sin(np.radians(180. - angle))
    for x in range(data.shape[0]):
        xc = x - gal_xc
        for y in range(data.shape[1]):
            yc = y - gal_yc
            # Rotate the pixel offsets into the frame of the ellipse
            xct = xc * cos_angle - yc * sin_angle
            yct = xc * sin_angle + yc * cos_angle
            # Normalized elliptical radius: <= 1 means the pixel lies inside the aperture
            rad_cc = (xct**2 / (width / 2.)**2) + (yct**2 / (height / 2.)**2)
            if rad_cc <= 1.:
                # point in ellipse: add its value to the running total
                total = total + data[x, y]
    return total
###Output
_____no_output_____
###Markdown
Now, let's load our data. We will be working with a spectral cube from the THINGS (The HI Nearby Galaxy Survey) program. A spectral cube adds a new dimension to our images, the dimension of wavelength, so each pixel contains the spectrum of that position in the object. You might want to open the fits file using ds9 and play around with this cube to get a feel for how cubes work. See the image below for a visualization.
###Code
cube = fits.open('NGC_7331_RO_CUBE_THINGS.FITS')
datacube = cube[0].data
hdu = cube[0].header
cube.close()
print(datacube.ndim)
print(datacube.shape)
###Output
4
(1, 116, 1024, 1024)
###Markdown
Great! Above, I've printed the number of dimensions in this image, and how many elements each dimension has. Don't worry about the first dimension, the other three are the ones we are interested in. The shape tells us that we have 116 slices in wavelength, and that our image is 1024x1024 pixels. We can treat each wavelength slice the same as we would any other image! The code below extracts the flux from an elliptical aperture around the galaxy at the 23rd slice in wavelength space, and plots the galaxy.
###Code
fig, ax = plt.subplots()
theta = 350.0*np.pi/180.0
ellipse = Ellipse((512., 512.), 130., 400., angle=350.0, alpha=0.5)
slice1 = datacube[0][23]
'''
Okay, my elliptical aperture code takes a bit of doing, sorry!
Here you input the coordinate at the center of the ellipse in x,
then y, then the width of the ellipse, then the height of the ellipse,
then the angle of the ellipse (in degrees), and finally the data we want to work with.
'''
flux = elip_ap(512, 512, 400, 130, 350, slice1)
ax.add_artist(ellipse)
ax.imshow(slice1, cmap='gray_r', origin='lower')
print(flux)
###Output
2.9524491120740795
###Markdown
*A quick note: The WCS info in the header was a bit funky because this is a radio image (21 cm is long...) and I was kinda struggling to get it to work. That's why I used the EllipticalAperture program instead of the SkyAperture program like we usually do. In this program, the ellipse is placed at a position in pixel space on the image, as: EllipticalAperture([x_pixel, y_pixel], width, height, theta=rotation_angle_radians). Sorry for the confusion! We want to repeat this process for each wavelength slice in this datacube. Once we do that, we can look at the flux as a function of wavelength to determine the rotation across the galaxy. Use a for loop to find the flux for the elliptical aperture I created above for each wavelength, and add them to an array (a sketch of one way to do this is shown below). If you are new to programming, check out this information about for loops https://www.w3schools.com/python/python_for_loops.asp and definitely ask for help! For loops are an incredibly valuable tool in coding! Once you have all the fluxes for each image slice, plot them vs the slice number. Sweet. You should see a double-peaked 21 cm profile. Why do we see this shape (instead of just a single-peaked emission line)? Think Doppler shifting! Before we figure out rotation and gas mass, we will need to get things into the right units. The slices have already been shifted to take into account the motion of the sun around the galaxy and converted from frequency space to velocity space using the Doppler equation. We just need to use the information in the header to convert from slice number to actual km/s velocity. I've written the code to do this below
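A minimal sketch of that loop (just one way to do it, reusing the `datacube`, `elip_ap`, and aperture parameters from the cells above; the brute-force Python loop is slow but simple):
```Python
# Sum the aperture flux in every wavelength slice
fluxes = []
for i in range(datacube.shape[1]):            # 116 slices in this cube
    slice_i = datacube[0][i]
    fluxes.append(elip_ap(512, 512, 400, 130, 350, slice_i))
fluxes = np.array(fluxes)

# Plot flux vs. slice number to see the 21 cm profile
plt.plot(fluxes)
plt.xlabel("Slice number")
plt.ylabel("Flux (Jy/beam)")
plt.show()
```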
###Code
Zr = hdu['CRVAL3'] #velocity value of the reference slice in m/s
slice_0 = hdu['CRPIX3'] #the number of the reference slice
crdelt = hdu['CDELT3'] #the size of the step between each slice
vel = []
for x in range(datacube.shape[1]):
    v = Zr + crdelt*(x-slice_0) #velocity equals reference velocity + separation from reference point
v = v/1000. #convert from m/s to km/s
vel = np.append(vel, v) #add to an array we can use to represent the velocity of each slice
###Output
_____no_output_____
###Markdown
We also need to convert our flux values. The fluxes in this image are in units of Jansky/beam. We would like them to be in units of Jansky, which means we need to account for the size of the telescope beam. For the VLA, the beam is a circle with a 3 arcsecond radius. Again, we will need to use data from the header to find the beam size in pixels. We can then divide our summed fluxes by the beam area in pixels to get fluxes in Janskys.
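A short sketch of this conversion (an assumption-laden example: it uses the 3 arcsecond beam radius quoted above, the `pix_scale` value read from the header in the next cell, and the `fluxes` and `vel` arrays built earlier):
```Python
beam_radius_pix = (3.0 / 3600.0) / pix_scale      # 3 arcsec -> degrees -> pixels
beam_area_pix = np.pi * beam_radius_pix**2        # beam area in pixels
fluxes_jy = fluxes / beam_area_pix                # Jy/beam summed over pixels -> Jy

plt.plot(vel, fluxes_jy)
plt.xlabel("Velocity (km/s)")
plt.ylabel("Flux (Jy)")
plt.show()
```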
###Code
pix_scale = hdu['CDELT2'] #header information with the degrees/pixel
print('The pixel scale is '+ repr(pix_scale) + ' degrees per pixel')
###Output
The pixel scale is 0.0004166666768degrees per pixel
|
7th day - Opecn CV mini projects/Jupyter notebooks/Blurring using Cv2.ipynb | ###Markdown
Different types of blurring using cv2
###Code
import cv2
import numpy as np
def Blur_fun(kernel=None, input_form="video", path="", operation="medianBlur", mode="rgb"):
    """Apply the chosen cv2 blurring operation ("custom", "blur", "GaussianBlur",
    "medianBlur" or "bilateralFilter") to an image file, a video file, or the
    webcam stream (input_form="video" with an empty path)."""
if input_form == "image":
img = cv2.imread(path)
if mode == "gray":
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
if operation == "custom":
img = cv2.filter2D(img, -1, kernel)
if operation == "blur":
img = cv2.blur(img, (5,5))
if operation == "GaussianBlur":
img = cv2.GaussianBlur(img, (5,5), 0)
if operation == "meadianBlur":
img = cv2.meadianBlur(img, 5)
if operation == "bilateralFilter":
img = cv2.bilateralFilter(img, 9, 75, 75)
cv2.imshow("Image", img)
cv2.waitKey(0)
cv2.destroyAllWindows()
if input_form == "video":
if path == "":
cap = cv2.VideoCapture(0)
if path != "":
cap = cv2.VideoCapture(path)
while(cap.isOpened()):
ret, img = cap.read()
if ret == True:
if mode == "gray":
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
if operation == "custom":
img = cv2.filter2D(img, -1, kernel)
if operation == "blur":
img = cv2.blur(img, (5,5))
if operation == "GaussianBlur":
img = cv2.GaussianBlur(img, (5,5), 0)
if operation == "meadianBlur":
img = cv2.medianBlur(img, 5)
if operation == "bilateralFilter":
img = cv2.bilateralFilter(img, 9, 75, 75)
cv2.imshow('frame', img)
if cv2.waitKey(1) & 0xff == ord('q'):
break
else:
break
cap.release()
cv2.destroyAllWindows()
kernel = np.ones((5,5), np.float32)/25
Blur_fun(kernel)
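# A few more example calls (the file paths below are hypothetical placeholders):
# Blur_fun(input_form="image", path="sample.jpg", operation="GaussianBlur")
# Blur_fun(input_form="image", path="sample.jpg", operation="bilateralFilter", mode="gray")
# Blur_fun(kernel, input_form="video", path="sample_video.mp4", operation="custom")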
###Output
_____no_output_____ |
chapter_notebooks/handson-ml-chap2.ipynb | ###Markdown
Train-Test split based on original dataset (a column) proportions
###Code
from sklearn.model_selection import StratifiedShuffleSplit
split = StratifiedShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
# looping over in this case doesn't matter as splits = 1
# multiple splits are used for CV
for train_index, test_index in split.split(df, df['income_cat']):
strat_train_set = df.loc[train_index]
strat_test_set = df.loc[test_index]
print(df['income_cat'].value_counts()/len(df))
print(strat_test_set['income_cat'].value_counts()/len(strat_test_set))
# Very similar proportions
# Doing Random split vs Stratified Split
from sklearn.model_selection import train_test_split
def income_cat_proportions(data):
return data["income_cat"].value_counts() / len(data)
train_set, test_set = train_test_split(df, test_size=0.2, random_state=42)
compare_props = pd.DataFrame({
"Overall": income_cat_proportions(df),
"Stratified": income_cat_proportions(strat_test_set),
"Random": income_cat_proportions(test_set),
}).sort_index()
compare_props["Rand. %error"] = 100 * compare_props["Random"] / compare_props["Overall"] - 100
compare_props["Strat. %error"] = 100 * compare_props["Stratified"] / compare_props["Overall"] - 100
print(compare_props)
# Drop income_cat: it was only needed for the stratified split (the original repo removes it too)
for set_ in (strat_train_set, strat_test_set):
set_.drop("income_cat", axis=1, inplace=True)
df_new = strat_train_set.copy() # .copy() should be used as otherwise it keeps referring to the same memory location
df_new.plot(kind="scatter", x="longitude", y="latitude", alpha=0.3, s=df_new['population']/100,
label='population', figsize=(12, 8), c='median_house_value', cmap=plt.get_cmap("plasma"),
colorbar=True)
plt.legend()
plt.show()
import urllib
# Download the California image
DOWNLOAD_ROOT = "https://raw.githubusercontent.com/ageron/handson-ml2/master/"
filename = "california.png"
print("Downloading", filename)
url = DOWNLOAD_ROOT + "images/end_to_end_project/" + filename
urllib.request.urlretrieve(url, filename)
import matplotlib.image as mpimg
land = mpimg.imread(filename)
# Colorbar is False as we set it later
ax = df_new.plot(kind="scatter", x="longitude", y="latitude", figsize=(10,7),
s=df_new['population']/100, label="Population",
c="median_house_value", cmap=plt.get_cmap("plasma"),
colorbar=False, alpha=0.4)
plt.imshow(land, extent=[-124.55, -113.80, 32.45, 42.05], alpha=0.4)
plt.ylabel("Latitude", fontsize=14)
plt.xlabel("Longitude", fontsize=14)
prices = df_new["median_house_value"]
tick_values = np.linspace(prices.min(), prices.max(), 11)
cbar = plt.colorbar()
cbar.ax.set_yticklabels(["$%dk"%(round(v/1000)) for v in tick_values], fontsize=14)
cbar.set_label('Median House Value', fontsize=16)
plt.legend(fontsize=16)
plt.show()
from pandas.plotting import scatter_matrix
attributes = ["median_house_value", "median_income", "total_rooms",
"housing_median_age"]
# the diagonal (same x and y) is drawn as a histogram, since a scatter plot of a variable against itself would not be informative
scatter_matrix(df_new[attributes], figsize=(12, 8))
plt.show()
# Other than median income vs. median house value, the pairs don't show much correlation
corr = df_new.corr()
corr['median_house_value'].sort_values(ascending=False)
# Verifies above graph
###Output
_____no_output_____
###Markdown
Cleaning dataset
###Code
x_unclean = strat_train_set.drop('median_house_value', axis=1)
y_unclean = strat_train_set["median_house_value"]
incomplete_rows = x_unclean[x_unclean.isnull().any(axis=1)]
# any(axis=1) flags rows containing at least one NaN; the default axis=0 would reduce over rows and give one value per column
incomplete_rows.head()
# Methods discussed in book
# Option 1: drop rows where total_bedrooms is NaN
incomplete_rows.dropna(subset=['total_bedrooms'])
# Option 2: drop the total_bedrooms column from the dataframe
incomplete_rows.drop('total_bedrooms', axis=1).head()
# Option 3: fill NaN values with the median
median_val = x_unclean['total_bedrooms'].median()
incomplete_rows['total_bedrooms'].fillna(median_val).head()
# Another way to achieve above is use sklearn Imputer class
from sklearn.impute import SimpleImputer
impute = SimpleImputer(strategy='median')
# Remove text features as imputer won't work for them
x_nums = x_unclean.drop('ocean_proximity', axis=1)
impute.fit(x_nums)
impute.statistics_
# These are the median values of all the columns
x_nums_arr = impute.transform(x_nums)
# This gives numpy array so we need to convert to dataFrame for convenient viewing
df_nums = pd.DataFrame(x_nums_arr, columns=x_nums.columns, index=x_nums.index)
df_nums.loc[incomplete_rows.index.values]
# Now for categorical data
from sklearn.preprocessing import OrdinalEncoder
ordinal = OrdinalEncoder()
x_cat = ordinal.fit_transform(x_unclean[['ocean_proximity']])
x_cat[:-10], ordinal.categories_
# for non hierarchical categories
from sklearn.preprocessing import OneHotEncoder
encoder = OneHotEncoder()
x_cat_one = encoder.fit_transform(x_unclean[['ocean_proximity']])
x_cat_one
# Sparse array representation saves memory
# Converting to dense array / set sparse=False for OneHotEncoder to directly get dense representation
x_cat_one.toarray()
###Output
_____no_output_____
###Markdown
* https://towardsdatascience.com/custom-transformers-and-ml-data-pipelines-with-python-20ea2a7adb65 (Ref)Scikit-Learn provides us with two great base classes, TransformerMixin and BaseEstimator. Inheriting from TransformerMixin ensures that all we need to do is write our fit and transform methods and we get fit_transform for free. Inheriting from BaseEstimator ensures we get get_params and set_params for free. Since the fit method doesn't need to do anything but return the object itself, all we really need to do after inheriting from these classes is define the transform method for our custom transformer, and we get a fully functional custom transformer that can be seamlessly integrated with a scikit-learn pipeline
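As a quick illustration of those "for free" methods (a sketch that assumes the `CombinedAttributesAdder` defined in the next cell and the `x_unclean` dataframe from above):
```Python
adder = CombinedAttributesAdder()
adder.get_params()                             # provided by BaseEstimator
extra = adder.fit_transform(x_unclean.values)  # provided by TransformerMixin
```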
###Code
from sklearn.base import BaseEstimator, TransformerMixin
rooms_ix, bedrooms_ix, population_ix, households_ix = 3, 4, 5, 6
class CombinedAttributesAdder(BaseEstimator, TransformerMixin):
def __init__(self, add_bedroom_per_room = True):
self.add_bedroom_per_room = add_bedroom_per_room
def fit(self, X, y=None):
return self
def transform(self, X):
rooms_per_household = X[:, rooms_ix]/X[:, households_ix]
population_per_household = X[:, population_ix]/X[:, households_ix]
if self.add_bedroom_per_room:
bedrooms_per_room = X[:, bedrooms_ix]/X[:, rooms_ix]
return np.c_[X, rooms_per_household, population_per_household, bedrooms_per_room]
else:
return np.c_[X, rooms_per_household, population_per_household]
attr_adder = CombinedAttributesAdder(add_bedroom_per_room=True)
attr_adder.get_params()
x_unclean_fe = attr_adder.transform(x_unclean.values)
# This is a numpy array, so we convert to pandas to view properly
x_unclean_fe = pd.DataFrame(x_unclean_fe,
columns=list(x_unclean.columns) + ['rooms_per_household', 'population_per_household', 'bedrooms_per_room'])
x_unclean_fe.head()
###Output
_____no_output_____
###Markdown
Building a pipeline for data transformation
###Code
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
# Pipeline for Transforming all numerical categories
num_pipeline = Pipeline([
('impute', SimpleImputer(strategy='median')),
('attribs_adder', CombinedAttributesAdder()),
('std_scaler', StandardScaler())
])
# The Pipeline above only handles the numerical attributes; categorical ones need their own encoder
# ColumnTransformer lets us apply each transformer to the right columns and combine the results
from sklearn.compose import ColumnTransformer
num_attr = list(x_nums.columns)
cat_attr = ['ocean_proximity']
comb_pipeline = ColumnTransformer([
("num", num_pipeline, num_attr),
("cat", OneHotEncoder(), cat_attr)
])
x_clean = comb_pipeline.fit_transform(x_unclean)
# above pipeline converts to array
x_unclean.shape
###Output
_____no_output_____
###Markdown
Training Models with Cross-Validation
###Code
from sklearn.model_selection import cross_val_score
from sklearn.metrics import mean_squared_error
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
def scores(score):
print("Scores", score)
print("Mean", score.mean())
print("Standard Deviation", score.std())
rf = RandomForestRegressor()
rf_scores = cross_val_score(rf, x_clean, y_unclean,
scoring='neg_mean_squared_error', cv=10)
rf_rmse_scores = np.sqrt(-rf_scores)
scores(rf_scores)
scores(rf_rmse_scores)
###Output
Scores [-2.41957237e+09 -2.27944106e+09 -2.48635734e+09 -2.72377227e+09
-2.50185656e+09 -2.86681661e+09 -2.39314101e+09 -2.29235816e+09
-2.81700902e+09 -2.50364672e+09]
Mean -2528397111.0925493
Standard Deviation 196700511.39716807
Scores [49189.14890307 47743.49232309 49863.38672062 52189.77165616
50018.56214738 53542.66163316 48919.74051226 47878.5772391
53075.50302921 50036.4538617 ]
Mean 50245.72980257556
Standard Deviation 1940.0380664096897
###Markdown
Exercise Q1
###Code
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
params = [
{
'kernel':['linear'], 'C':[0.1, 0.5, 1, 10, 50, 100]
},
{
'kernel':['rbf'], 'C':[0.1, 0.5, 1, 10, 50, 100], 'gamma':[0.01, 0.03, 0.1, 0.3, 1.0]
}
]
svm = SVR(cache_size=500)
grid_search = GridSearchCV(svm, params, cv=5, scoring='neg_mean_squared_error', verbose=2)
grid_search.fit(x_clean, y_unclean)
neg_mse = grid_search.best_score_
rmse = np.sqrt(-neg_mse)
rmse
###Output
_____no_output_____
###Markdown
Q2
###Code
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import expon, reciprocal
# see https://docs.scipy.org/doc/scipy/reference/stats.html
# for `expon()` and `reciprocal()` documentation and more probability distribution functions.
# Note: gamma is ignored when kernel is "linear"
params_rand = {
'kernel': ['linear', 'rbf'],
'C': reciprocal(20, 200000),
'gamma': expon(scale=1.0),
}
rnd_search = RandomizedSearchCV(svm, params_rand,
n_iter=20, cv=5, scoring='neg_mean_squared_error',
verbose=2, random_state=42)
rnd_search.fit(x_clean, y_unclean)
neg_mse = rnd_search.best_score_
rmse = np.sqrt(-neg_mse)
rmse
print(rnd_search.best_estimator_)
# Taken from exercise solution, to understand which type of distribution we should pick for our vals
# exponential distribution is best when you know (more or less) what the scale of the hyperparameter should be
expon_distrib = expon(scale=1.)
samples = expon_distrib.rvs(10000, random_state=42)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.title("Exponential distribution (scale=1.0)")
plt.hist(samples, bins=50)
plt.subplot(122)
plt.title("Log of this distribution")
plt.hist(np.log(samples), bins=50)
plt.show()
# The reciprocal distribution is useful when you have no idea what the scale of the hyperparameter should be
reciprocal_distrib = reciprocal(20, 200000)
samples = reciprocal_distrib.rvs(10000, random_state=42)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.title("Reciprocal distribution (scale=1.0)")
plt.hist(samples, bins=50)
plt.subplot(122)
plt.title("Log of this distribution")
plt.hist(np.log(samples), bins=50)
plt.show()
###Output
_____no_output_____
###Markdown
Q3
###Code
# This assumes that the feature importances have been already calculated
from sklearn.base import BaseEstimator, TransformerMixin
class impFeatureSelector(BaseEstimator, TransformerMixin):
def __init__(self, feature_importance, k):
self.feature_importance = feature_importance
self.k = k
def fit(self, X, y=None):
self.top_features = np.argsort(self.feature_importance)[-self.k:]
return self
def transform(self, X):
return X[:, self.top_features]
rf.fit(x_clean, y_unclean)
k=5
rf.feature_importances_[np.argsort(rf.feature_importances_)[-k:]]
# Setting up all attribs
cat_encoder = comb_pipeline.named_transformers_["cat"]
cat_one_hot_attribs = list(cat_encoder.categories_[0])
all_attribs = list(x_nums.columns) + ['rooms_per_household', 'population_per_household', 'bedrooms_per_room'] + cat_one_hot_attribs
all_attribs
# List of important attributes
np.array(all_attribs)[np.argsort(rf.feature_importances_)[-k:]]
feature_selector_pipeline = Pipeline([
('preparation', comb_pipeline),
('feature_selection', impFeatureSelector(rf.feature_importances_, k))
])
top_k_attribs = feature_selector_pipeline.fit_transform(x_unclean)
top_k_attribs
# Same for both cases
x_clean[:,np.argsort(rf.feature_importances_)[-k:]]
###Output
_____no_output_____
###Markdown
Q4
###Code
complete_pipeline = Pipeline([
('preparation', comb_pipeline),
('feature_selection', impFeatureSelector(rf.feature_importances_, k)),
('rf', RandomForestRegressor())
])
complete_pipeline.fit(x_unclean, y_unclean)
# Note: this uses a RandomForestRegressor with default hyperparameters, not the best parameters found above
complete_pipeline.predict(x_unclean[100:102]), y_unclean[100:102]
###Output
_____no_output_____
###Markdown
Q5 - From Solutions
###Code
from sklearn.model_selection import GridSearchCV
params = [{
'preparation__num__impute__strategy': ['mean', 'median', 'most_frequent'],
'feature_selection__k': list(range(1, len(rf.feature_importances_) + 1))
}]
grid_search_prep = GridSearchCV(complete_pipeline, params, cv=5,
scoring='neg_mean_squared_error', verbose=2)
grid_search_prep.fit(x_unclean, y_unclean)
grid_search_prep.best_params_
###Output
_____no_output_____ |
wikipedia-pages-analysis/wikipedia_pages_analysis.ipynb | ###Markdown
Analyzing Wikipedia PagesSkills: API, Web Scraping, Multi-Threading, Multi-Processing, BenchmarkingIn this project, we'll be working with data scraped from Wikipedia, a popular online encyclopedia. We'll be analyzing 54 megabytes worth of articles to figure out patterns in the Wikipedia writing and content presentation style. The scraping code is in this folder, in the scrape_random.py file.Our main goals will be to:- Extract only the text from the Wikipedia pages, and remove all HTML and Javascript markup.- Remove common page headers and footers from the Wikipedia pages.- Figure out what tags are the most common in Wikipedia pages.- Figure out patterns in the text.
###Code
# List all of the files in the wiki folder.
import os
list_wikifile = os.listdir('wiki')
print(list_wikifile)
# Count up and display the number of files in the wiki folder.
no_of_files = len(list_wikifile)
print(no_of_files)
# Display a single file from the wiki folder:
with open("wiki/Millennium_Art_Academy.html", encoding="utf-8") as file:
data = file.read()
data
###Output
_____no_output_____
###Markdown
Now that we know the file structure, and the structure of a single file, we can read in all of the files. This will get us started in our explorations.As this task is I/O bound, we can use threads to help us read in the data more quickly.We will benchmark the read process with no threads, 4 threads, and 8 threads (the maximum number of threads my processor supports).
###Code
# Import concurrent.futures package to execute multithreading process,
# and time package to benchmark the performance
import concurrent.futures as cf
import time
content = []
articles = [name[:-5] for name in os.listdir('wiki')]
print(articles)
# function to read all files
def read_all(files):
with open('wiki/{}'.format(files), encoding="utf-8") as file:
return file.read()
# no threads
start = time.time()
content_0 = []
for file in list_wikifile:
content_0.append(read_all(file))
duration_0 = time.time() - start
# 4 threads
start = time.time()
# Create pool of threads
pool = cf.ThreadPoolExecutor(max_workers=4)
content_4 = list(pool.map(read_all, list_wikifile))
duration_4 = time.time() - start
# 8 threads
start = time.time()
# Create pool of threads
pool = cf.ThreadPoolExecutor(max_workers=8)
content_8 = list(pool.map(read_all, list_wikifile))
duration_8 = time.time() - start
# Now, we compare the performance of different threads number
print(duration_0)
print(duration_4)
print(duration_8)
###Output
23.36261510848999
0.2063000202178955
0.2257218360900879
###Markdown
It can be seen that, in this case, using the multi-threading method is advantageous: while one thread waits on file I/O another can run, so the speedup is not offset by the overhead of creating new threads. Now that we've read in the data files, we can remove the extraneous markup that's outside the `div` tag with id `content`, which most of the content seems to be inside. We can use the BeautifulSoup package for this. BeautifulSoup enables us to extract all of the content inside a specific tag.Using the BeautifulSoup package, we'll parse each wiki article, then extract the div with id content and everything inside it.Since this operation is more CPU intensive than before, let's try using a process pool to see if the speed improves.We'll be using a single core, dual core, and quad core (my processor's maximum number of cores).
###Code
from bs4 import BeautifulSoup
# Function to parse the file using BeautifulSoup
def rm_markup(document):
with open('wiki/{}'.format(document), encoding="utf-8") as file:
data = file.read()
parser = BeautifulSoup(data, 'html.parser')
content_div = parser.find_all("div", id="content")[0]
return str(content_div)
# single core
start = time.time()
parsed_0 = []
for file in list_wikifile:
parsed_0.append(rm_markup(file))
duration = time.time() - start
# dual core
start = time.time()
# Create pool of process
pool = cf.ProcessPoolExecutor(max_workers=1)
parsed_2 = list(pool.map(rm_markup, list_wikifile))
duration_2 = time.time() - start
# # quad core
# start = time.time()
# # Create pool of process
# pool = cf.ProcessPoolExecutor(max_workers=4)
# parsed_4 = list(pool.map(rm_markup, list_wikifile))
# duration_4 = time.time() - start
# if __name__ == '__main__':
# rm_markup(list_wikifile)
# Now, we compare the performance of the different worker counts
print(duration)
print(duration_2)
print(duration_4)  # note: the quad-core cell above is commented out, so this still holds the 8-thread timing from earlier
###Output
_____no_output_____
###Markdown
It seems that using multiprocessing is rather advantageous in our case, with the best performance when using 2 or 4 worker processes. Now that we've extracted the main part of each page, let's count up how many times each tag occurs. This will give us clues about how Wikipedia pages are typically structured. Since this step is also CPU intensive, we will use multiprocessing again, with 3 workers as a compromise between process-creation overhead and parallel speedup.
###Code
def count_tags(document):
parser = BeautifulSoup(document, 'html.parser')
all_tags = parser.find_all()
tags = {}
for tag in all_tags:
        if tag.name not in tags:
tags[tag.name] = 0
tags[tag.name] += 1
return tags
start = time.time()
pool = cf.ProcessPoolExecutor(max_workers=3)
result = list(pool.map(count_tags, parsed_2))
overall_tags = {}
for each in result:
for k,v in each.items():
if k not in overall_tags:
overall_tags[k] = 0
overall_tags[k] += v
duration = (time.time() - start)
print(duration)
overall_tags
###Output
_____no_output_____
###Markdown
Based on our findings, it looks like there are quite a few td, a, li, and span tags. This indicates that articles tend to have lots of links, along with lists and tables. Links are the most numerous tag, which indicates how interconnected articles on Wikipedia are.Now we find the most common words.
###Code
from collections import Counter
import re
def count_words(html):
soup = BeautifulSoup(html, 'html.parser')
words = {}
text = soup.get_text()
text = re.sub("\W+", " ", text.lower())
words = text.split(" ")
words = [w for w in words if len(w) >= 5]
return Counter(words).most_common(10)
start = time.time()
pool = cf.ProcessPoolExecutor(max_workers=3)
words = pool.map(count_words, parsed_2)
words = list(words)
word_counts = {}
for wc in words:
for word, count in wc:
if word not in word_counts:
word_counts[word] = 0
word_counts[word] += 1
end = time.time()
print(end - start)
word_counts
###Output
_____no_output_____ |
3_Machine Learning Modeling Pipelines in Production/Week2/C3_W2_Lab_1_Manual_Dimensionality.ipynb | ###Markdown
Ungraded lab: Manual Feature Engineering------------------------ Welcome, during this ungraded lab you are going to perform feature engineering using TensorFlow and Keras. By having a deeper understanding of the problem you are dealing with and proposing transformations to the raw features you will see how the predictive power of your model increases. In particular you will:1. Define the model using feature columns.2. Use Lambda layers to perform feature engineering on some of these features.3. Compare the training history and predictions of the model before and after feature engineering.**Note**: This lab has some tweaks compared to the code you just saw on the lectures. The major one being that time-related variables are not used in the feature engineered model.Let's get started! First, install and import the necessary packages, set up paths to work on and download the dataset. Imports
###Code
# Import the packages
# Utilities
import os
import logging
# For visualization
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas as pd
# For modelling
import tensorflow as tf
from tensorflow import feature_column as fc
from tensorflow.keras import layers, models
# Set TF logger to only print errors (dismiss warnings)
logging.getLogger("tensorflow").setLevel(logging.ERROR)
###Output
_____no_output_____
###Markdown
Load taxifare datasetFor this lab you are going to use a tweaked version of the [Taxi Fare dataset](https://www.kaggle.com/c/new-york-city-taxi-fare-prediction/data), which has been pre-processed and split beforehand. First, create the directory where the data is going to be saved.
###Code
if not os.path.isdir("/tmp/data"):
os.makedirs("/tmp/data")
###Output
_____no_output_____
###Markdown
Now download the data in `csv` format from a cloud storage bucket.
###Code
!gsutil cp gs://cloud-training-demos/feat_eng/data/taxi*.csv /tmp/data
###Output
_____no_output_____
###Markdown
Let's check that the files were copied correctly and look like we expect them to.
###Code
!ls -l /tmp/data/*.csv
###Output
_____no_output_____
###Markdown
Everything looks fine. Notice that there are three files, one for each split of `training`, `testing` and `validation`. Inspect tha dataNow take a look at the training data.
###Code
pd.read_csv('/tmp/data/taxi-train.csv').head()
###Output
_____no_output_____
###Markdown
The data contains a total of 8 variables.The `fare_amount` is the target, the continuous value we’ll train a model to predict. This leaves you with 7 features. However this lab is going to focus on transforming the geospatial ones so the time features `hourofday` and `dayofweek` will be ignored. Create an input pipeline To load the data for the model you are going to use an experimental feature of Tensorflow that lets loading directly from a `csv` file.For this you need to define some lists containing relevant information of the dataset such as the type of the columns.
###Code
# Specify which column is the target
LABEL_COLUMN = 'fare_amount'
# Specify numerical columns
# Note you should create another list with STRING_COLS if you
# had text data but in this case all features are numerical
NUMERIC_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude',
'passenger_count', 'hourofday', 'dayofweek']
# A function to separate features and labels
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
return row_data, label
# A utility method to create a tf.data dataset from a CSV file
def load_dataset(pattern, batch_size=1, mode='eval'):
dataset = tf.data.experimental.make_csv_dataset(pattern, batch_size)
dataset = dataset.map(features_and_labels) # features, label
if mode == 'train':
# Notice the repeat method is used so this dataset will loop infinitely
dataset = dataset.shuffle(1000).repeat()
    # prefetch 1 batch ahead so the input pipeline overlaps with training
dataset = dataset.prefetch(1)
return dataset
###Output
_____no_output_____
###Markdown
Create a DNN Model in KerasNow you will build a simple Neural Network with the numerical features as input represented by a [`DenseFeatures`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/DenseFeatures) layer (which produces a dense Tensor based on the given features), two dense layers with ReLU activation functions and an output layer with a linear activation function (since this is a regression problem).Since the model is defined using `feature columns` the first layer might look different to what you are used to. This is done by declaring two dictionaries, one for the inputs (defined as Input layers) and one for the features (defined as feature columns).Then computing the `DenseFeatures` tensor by passing in the feature columns to the constructor of the `DenseFeatures` layer and passing in the inputs to the resulting tensor (this is easier to understand with code):
###Code
def build_dnn_model():
# input layer
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
# feature_columns
feature_columns = {
colname: fc.numeric_column(colname)
for colname in NUMERIC_COLS
}
# Constructor for DenseFeatures takes a list of numeric columns
# and the resulting tensor takes a dictionary of Input layers
dnn_inputs = layers.DenseFeatures(feature_columns.values())(inputs)
# two hidden layers of 32 and 8 units, respectively
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is a regression problem
output = layers.Dense(1, activation='linear', name='fare')(h2)
# Create model with inputs and output
model = models.Model(inputs, output)
# compile model (Mean Squared Error is suitable for regression)
model.compile(optimizer='adam',
loss='mse',
metrics=[
tf.keras.metrics.RootMeanSquaredError(name='rmse'),
'mse'
])
return model
###Output
_____no_output_____
###Markdown
We'll build our DNN model and inspect the model architecture.
###Code
# Save compiled model into a variable
model = build_dnn_model()
# Plot the layer architecture and relationship between input features
tf.keras.utils.plot_model(model, 'dnn_model.png', show_shapes=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
With the model architecture defined it is time to train it! Train the modelYou are going to train the model for 20 epochs using a batch size of 32.
###Code
NUM_EPOCHS = 20
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = len(pd.read_csv('/tmp/data/taxi-train.csv'))
NUM_EVAL_EXAMPLES = len(pd.read_csv('/tmp/data/taxi-valid.csv'))
print(f"training split has {NUM_TRAIN_EXAMPLES} examples\n")
print(f"evaluation split has {NUM_EVAL_EXAMPLES} examples\n")
###Output
_____no_output_____
###Markdown
Use the previously defined function to load the datasets from the original csv files.
###Code
# Training dataset
trainds = load_dataset('/tmp/data/taxi-train*', TRAIN_BATCH_SIZE, 'train')
# Evaluation dataset
evalds = load_dataset('/tmp/data/taxi-valid*', 1000, 'eval').take(NUM_EVAL_EXAMPLES//1000)
# Needs to be specified since the dataset is infinite
# This happens because the repeat method was used when creating the dataset
steps_per_epoch = NUM_TRAIN_EXAMPLES // TRAIN_BATCH_SIZE
# Train the model and save the history
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EPOCHS,
steps_per_epoch=steps_per_epoch)
###Output
_____no_output_____
###Markdown
Visualize training curvesNow lets visualize the training history of the model with the raw features:
###Code
# Function for plotting metrics for a given history
def plot_curves(history, metrics):
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(metrics):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history[f'val_{key}'])
plt.title(f'model {key}')
plt.ylabel(key)
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
# Plot history metrics
plot_curves(history, ['loss', 'mse'])
###Output
_____no_output_____
###Markdown
The training history doesn't look very promising, showing erratic behaviour. It looks like the training process struggled to traverse the high-dimensional space that the current features create. Nevertheless, let's use it for prediction.Notice that the latitude and longitude values should revolve around (`37`, `45`) and (`-70`, `-78`) respectively, since these are the coordinate ranges for New York City.
###Code
# Define a taxi ride (a data point)
taxi_ride = {
'pickup_longitude': tf.convert_to_tensor([-73.982683]),
'pickup_latitude': tf.convert_to_tensor([40.742104]),
'dropoff_longitude': tf.convert_to_tensor([-73.983766]),
'dropoff_latitude': tf.convert_to_tensor([40.755174]),
'passenger_count': tf.convert_to_tensor([3.0]),
'hourofday': tf.convert_to_tensor([3.0]),
'dayofweek': tf.convert_to_tensor([3.0]),
}
# Use the model to predict
prediction = model.predict(taxi_ride, steps=1)
# Print prediction
print(f"the model predicted a fare total of {float(prediction):.2f} USD for the ride.")
###Output
_____no_output_____
###Markdown
The model predicted this particular ride to be around 12 USD. However you know the model performance is not the best as it was showcased by the training history. Let's improve it by using **Feature Engineering**. Improve Model Performance Using Feature Engineering Going forward you will only use geo-spatial features as these are the most relevant when calculating the fare since this value is mostly dependant on the distance transversed:
###Code
# Drop dayofweek and hourofday features
NUMERIC_COLS = ['pickup_longitude', 'pickup_latitude',
'dropoff_longitude', 'dropoff_latitude']
###Output
_____no_output_____
###Markdown
Since you are dealing exclusively with geospatial data you will create some transformations that are aware of this geospatial nature. This helps the model make a better representation of the problem at hand.For instance the model cannot magically understand what a coordinate is supposed to represent and since the data is taken from New York only, the latitude and longitude revolve around (`37`, `45`) and (`-70`, `-78`) respectively, which is arbitrary for the model. A good first step is to scale these values. **Notice all transformations are created by defining functions**.
###Code
def scale_longitude(lon_column):
return (lon_column + 78)/8.
def scale_latitude(lat_column):
return (lat_column - 37)/8.
###Output
_____no_output_____
###Markdown
Another important fact is that the fare of a taxi ride is proportional to the distance of the ride. But as the features currently are, there is no way for the model to infer that the pair of (`pickup_latitude`, `pickup_longitude`) represent the point where the passenger started the ride and the pair (`dropoff_latitude`, `dropoff_longitude`) represent the point where the ride ended. More importantly, the model is not aware that the distance between these two points is crucial for predicting the fare.To solve this, a new feature (which is a transformation of the other ones) that provides this information is required.
###Code
def euclidean(params):
lon1, lat1, lon2, lat2 = params
londiff = lon2 - lon1
latdiff = lat2 - lat1
return tf.sqrt(londiff*londiff + latdiff*latdiff)
###Output
_____no_output_____
###Markdown
Applying transformationsNow you will define the `transform` function which will apply the previously defined transformation functions. To apply the actual transformations you will be using `Lambda` layers, which apply a function to values (in this case the inputs).
###Code
def transform(inputs, numeric_cols):
# Make a copy of the inputs to apply the transformations to
transformed = inputs.copy()
# Define feature columns
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in numeric_cols
}
# Scaling longitude from range [-70, -78] to [0, 1]
for lon_col in ['pickup_longitude', 'dropoff_longitude']:
transformed[lon_col] = layers.Lambda(
scale_longitude,
name=f"scale_{lon_col}")(inputs[lon_col])
# Scaling latitude from range [37, 45] to [0, 1]
for lat_col in ['pickup_latitude', 'dropoff_latitude']:
transformed[lat_col] = layers.Lambda(
scale_latitude,
name=f'scale_{lat_col}')(inputs[lat_col])
# add Euclidean distance
transformed['euclidean'] = layers.Lambda(
euclidean,
name='euclidean')([inputs['pickup_longitude'],
inputs['pickup_latitude'],
inputs['dropoff_longitude'],
inputs['dropoff_latitude']])
# Add euclidean distance to feature columns
feature_columns['euclidean'] = fc.numeric_column('euclidean')
return transformed, feature_columns
###Output
_____no_output_____
###Markdown
Update the modelNext, you'll create the DNN model now with the engineered (transformed) features.
###Code
def build_dnn_model():
# input layer (notice type of float32 since features are numeric)
inputs = {
colname: layers.Input(name=colname, shape=(), dtype='float32')
for colname in NUMERIC_COLS
}
# transformed features
transformed, feature_columns = transform(inputs, numeric_cols=NUMERIC_COLS)
# Constructor for DenseFeatures takes a list of numeric columns
# and the resulting tensor takes a dictionary of Lambda layers
dnn_inputs = layers.DenseFeatures(feature_columns.values())(transformed)
# two hidden layers of 32 and 8 units, respectively
h1 = layers.Dense(32, activation='relu', name='h1')(dnn_inputs)
h2 = layers.Dense(8, activation='relu', name='h2')(h1)
# final output is a linear activation because this is a regression problem
output = layers.Dense(1, activation='linear', name='fare')(h2)
# Create model with inputs and output
model = models.Model(inputs, output)
# Compile model (Mean Squared Error is suitable for regression)
model.compile(optimizer='adam',
loss='mse',
metrics=[tf.keras.metrics.RootMeanSquaredError(name='rmse'), 'mse'])
return model
# Save compiled model into a variable
model = build_dnn_model()
###Output
_____no_output_____
###Markdown
Let's see how the model architecture has changed.
###Code
# Plot the layer architecture and relationship between input features
tf.keras.utils.plot_model(model, 'dnn_model_engineered.png', show_shapes=False, rankdir='LR')
###Output
_____no_output_____
###Markdown
This plot is very useful for understanding the relationships and dependencies between the original and the transformed features!**Notice that the input of the model now consists of 5 features instead of the original 7, thus reducing the dimensionality of the problem.**Let's now train the model that includes feature engineering.
###Code
# Train the model and save the history
history = model.fit(trainds,
validation_data=evalds,
epochs=NUM_EPOCHS,
steps_per_epoch=steps_per_epoch)
###Output
_____no_output_____
###Markdown
Notice that the features `passenger_count`, `hourofday` and `dayofweek` were excluded because they were dropped from `NUMERIC_COLS`, so they are not part of the model's inputs.Now let's visualize the training history of the model with the engineered features.
###Code
# Plot history metrics
plot_curves(history, ['loss', 'mse'])
###Output
_____no_output_____
###Markdown
This looks a lot better than the previous training history! Now the loss and error metrics are decreasing with each epoch and both curves (train and validation) are very close to each other. Nice job!Let's do a prediction with this new model on the example we previously used.
###Code
# Use the model to predict
prediction = model.predict(taxi_ride, steps=1)
# Print prediction
print(f"the model predicted a fare total of {float(prediction):.2f} USD for the ride.")
###Output
_____no_output_____ |
Codes/P02_data_cleaning.ipynb | ###Markdown
The global warming issue and Narratives around it Part 2: Cleaning the imported data and doing a brief EDA for early assessment. Finally pickling the merged dataframe into a global dataframeIn this notebook, I cleaned the imported API dataframe and saved it as a clean version into "../datasets" folder for further processing. Importing the required libraries:
###Code
#imports
import pandas as pd
import regex as re
import warnings
warnings.filterwarnings('ignore')
from nltk.corpus import stopwords # Import the stopword list
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
import pickle
###Output
_____no_output_____
###Markdown
Part 2.1: Importing the saved raw data from reddit API and cleaning
###Code
#Global warming
file_path = "../datasets/" + "GlobalWarming" + "_raw" + ".csv"
df_gw = pd.read_csv(file_path)
# Keeping only a few columns which will be helpful during analysis
to_keep_clmns = ['author', 'created_utc', 'domain', 'id', 'num_comments', 'over_18',
'post_hint', 'score', 'selftext',
'title']
df_gw_clean = df_gw[to_keep_clmns]
df_gw_clean.head(10)
df_gw_clean.shape
df_gw_clean.isnull().sum()
###Output
_____no_output_____
###Markdown
Imputation time: Imputing the useful columns, dropping the useless columns, which also have many missing values.
###Code
#For title and selftext columns, I filled them with " " as they will be stripped later, so I can merge them later.
df_gw_clean["title"].fillna(" ", inplace=True)
df_gw_clean["selftext"].fillna(" ", inplace=True)
#Merging the title and selftext for further processing
df_gw_clean['text_merged'] = df_gw_clean['title'] + " " + df_gw_clean['selftext']
df_gw_clean.drop(columns = ["title", "selftext"], inplace=True)
#For post_hint, I imputed them with "Empty"
df_gw_clean['post_hint'].fillna("Empty", inplace=True)
###Output
_____no_output_____
###Markdown
Checking the datatypes and also whether all columns have no NaN values:
###Code
df_gw_clean.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3934 entries, 0 to 3933
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 author 3934 non-null object
1 created_utc 3934 non-null int64
2 domain 3934 non-null object
3 id 3934 non-null object
4 num_comments 3934 non-null int64
5 over_18 3934 non-null bool
6 post_hint 3934 non-null object
7 score 3934 non-null int64
8 text_merged 3934 non-null object
dtypes: bool(1), int64(3), object(5)
memory usage: 249.8+ KB
###Markdown
And, checking the final dataframe produced:
###Code
df_gw_clean.head()
df_gw_clean.loc[15,"text_merged"]
###Output
_____no_output_____
###Markdown
**Everything looks good here!**
###Code
# ConspiracyTheory
file_path = "../datasets/" + "ConspiracyTheory" + "_raw" + ".csv"
df_ct = pd.read_csv(file_path)
# Keeping only a few columns which will be helpful during analysis
df_ct_clean = df_ct[to_keep_clmns]
df_ct_clean.head(2)
df_ct_clean.shape
###Output
_____no_output_____
###Markdown
Cleaning the data and getting the texts ready for processing:
###Code
df_ct_clean.isnull().sum()
###Output
_____no_output_____
###Markdown
Imputation time: Imputing the useful columns, dropping the useless columns, which also have many missing values.
###Code
#For title and selftext columns, I filled them with " " as they will be stripped later, so I can merge them later.
df_ct_clean["title"].fillna(" ", inplace=True)
df_ct_clean["selftext"].fillna(" ", inplace=True)
#Merging the title and selftext for further processing
df_ct_clean['text_merged'] = df_ct_clean['title'] + " " + df_ct_clean['selftext']
df_ct_clean.drop(columns = ["title", "selftext"], inplace=True)
#For post_hint, I imputed them with "Empty"
df_ct_clean['post_hint'].fillna("Empty", inplace=True)
df_ct_clean.loc[0, "text_merged"]
###Output
_____no_output_____
###Markdown
Checking the datatypes and also whether all columns have no NaN values:
###Code
df_ct_clean.head()
df_ct_clean.loc[2,"text_merged"]
###Output
_____no_output_____
###Markdown
**Everything looks good here too!** One last step is to combine the dataframes into a single one:
###Code
#Adding one column to determine the subreddit pulled from
df_gw_clean["subreddit"] = "GlobalWarming"
df_ct_clean["subreddit"] = "ConspiracyTheory"
df_reddit = pd.concat([df_gw_clean, df_ct_clean], axis = 0, ignore_index=True)
df_reddit.head(5)
df_reddit.shape
df_reddit["text_merged"][4548]
###Output
_____no_output_____
###Markdown
Now, cleaning the merged reddit dataframe
###Code
#Lots of cleaning on text
def text_cleaning(item):
#Removing "\n" characters
item = re.sub("\n", " ", item)
#Removing the [removed] characters
item = item.replace("[removed]", " ")
# Use regular expressions to do a find-and-replace
item = re.sub("[^a-zA-Z]", " ", item)
#Making all characters lower case
item = item.lower()
#Replacing multiple spaces
item = " ".join(item.split())
#Removing stopwords
stops = stopwords.words("english")
words = [w for w in item.split() if w not in stops]#stops
# Instantiate object of class PorterStemmer and stemming.
p_stemmer = PorterStemmer()
words = [p_stemmer.stem(i) for i in words]
# Adding space to stitch the words together
words = " ".join(list(words))
return words
df_reddit["text_merged"] = df_reddit["text_merged"].apply(text_cleaning)
#Resetting the index of the combined, cleaned dataframe
df_reddit.reset_index(drop=True, inplace=True)
df_reddit.head(5)
df_reddit["text_merged"][50]
df_reddit.shape
###Output
_____no_output_____
###Markdown
Pickling the dataframe as it is large!
###Code
pickle.dump(df_reddit, open('../datasets/df_reddit.pkl', 'wb'))
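# The pickled dataframe can be reloaded later with, for example:
# df_reddit = pickle.load(open('../datasets/df_reddit.pkl', 'rb'))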
print("Hello world!")
###Output
Hello world!
|
.ipynb_checkpoints/Taco Tutorial via SpMV-checkpoint.ipynb | ###Markdown
Introduction to TACOTACO, which stands for Tensor Algebra Compiler, is a library for performing sparse and dense linear algebra and tensor algebra computations. The computations can range from relatively simple ones like sparse matrix-vector multiplication to more complex ones like matricized tensor times Khatri-Rao product. All these computations can be performed on any mix of dense and sparse tensors. Under the hood, TACO automatically generates efficient code to perform these computations.This notebook provides a brief introduction to the TACO Python library. For a more comprehensive overview, please see the documentation linked [here](http://tensor-compiler.org/symphony-docs/index.html). We will also link to relevant pages as we progress. Table of Contents:* [Getting Started](first-section)* [Defining Tensor Formats](second-section)* [NumPy and SciPy I/O](third-section)* [Example Application: SpMV](fourth-section) Getting Started First, let's import TACO. Press `Shift` + `Enter` to run the code below.
###Code
import pytaco as pt
from pytaco import dense, compressed
###Output
_____no_output_____
###Markdown
In the above, `dense` and `compressed` are [mode (dimension) formats](http://tensor-compiler.org/symphony-docs/reference/rst_files/mode_format.html). We can think of tensors as multi-dimensional arrays, and the mode formats allow us to specify how we would like to store the data in each dimension: * If a dimension is `dense`, then all of the elements in that dimension are stored. * And if a dimension is `compressed`, then only nonzeros are stored.For example, we can declare a $512 \times 64 \times 2048$ [tensor](http://tensor-compiler.org/symphony-docs/reference/rst_files/tensor_class.html) whose first dimension is dense and second and third dimensions are compressed:
###Code
T = pt.tensor([512, 64, 2048], pt.format([dense, compressed, compressed]))
###Output
_____no_output_____
###Markdown
We can initialize $T$ by calling its `insert` [method](http://tensor-compiler.org/symphony-docs/reference/rst_files/functions/pytaco.tensor.insert.html) to add a nonzero element to the tensor. The `insert` method takes two arguments: a list of coordinates and the value to be inserted at those coordinates:
###Code
# Set T(0, 1, 0) = 42.0
T.insert([0, 1, 0], 42.0)
###Output
_____no_output_____
###Markdown
If multiple elements are inserted at the same coordinates, they are summed together:
###Code
# Set T(0, 0, 1) = 12.0 + 24.0 = 36.0
T.insert([0, 0, 1], 12.0)
T.insert([0, 0, 1], 24.0)
###Output
_____no_output_____
###Markdown
We can then iterate over the nonzero elements of the tensor as follows:
###Code
for coordinates, value in T:
print("Coordinates: {}, Value: {}".format(coordinates, value))
###Output
_____no_output_____
###Markdown
Defining Tensor Formats Consider a matrix $M$ (aka a two-dimensional tensor) containing the following values:$$M = \begin{bmatrix} 6 & \cdot & 9 & 8 \\ \cdot & \cdot & \cdot & \cdot \\ 5 & \cdot & \cdot & 7 \end{bmatrix}$$Denote the rows and columns as dimensions $d_1$ and $d_2$, respectively. We look at how $M$ is represented differently in different formats. For convenience, let's define a helper function to initialize $M$.
###Code
def make_example_matrix(format):
M = pt.tensor([3, 4], format)
M.insert([0, 0], 6)
M.insert([0, 2], 9)
M.insert([0, 3], 8)
M.insert([2, 0], 5)
M.insert([2, 3], 7)
return M
###Output
_____no_output_____
###Markdown
(dense $d_1$, dense $d_2$)Note that passing in `dense` makes all of the dimensions dense. This is equivalent to `pt.format([dense, dense])`.
###Code
make_example_matrix(dense)
###Output
_____no_output_____
###Markdown
For this example, we focus on the last line of the output, the `vals` array: since all values are stored, it is a flattened $3 \times 4$ matrix stored in row-major order. (dense $d_1$, compressed $d_2$)This is called compressed sparse row (CSR) format.
###Code
csr = pt.format([dense, compressed])
make_example_matrix(csr)
###Output
_____no_output_____
###Markdown
Since $d_1$ is dense, we need only store the `size` of the dimension; values, both zero and nonzero, are stored for every coordinate in dimension $d_1$. Since $d_2$ is compressed, we store a `pos` array (`[0, 3, 3, 5]`) and an `idx` array (`[0, 2, 3, 0, 3]`); these together form a segmented vector with one segment per entry in the previous dimension. The `idx` array stores all the indices with nonzero values in the dimension, while the `pos` array stores the location in the `idx` array where each segment begins. In particular, segment $i$ is stored in locations `pos[i]:pos[i+1]` in the `idx` array.The below animation visualizes the format. Hover over any non-empty entry of the matrix on the left to see how the value is stored.
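To make the `pos`/`idx` encoding concrete, here is a small sketch in plain Python (independent of TACO) that walks the arrays listed above and recovers the nonzero entries of $M$; the `vals` array is the one implied by the matrix:
```Python
pos = [0, 3, 3, 5]       # segment i (row i) occupies idx[pos[i]:pos[i+1]]
idx = [0, 2, 3, 0, 3]    # column indices of the nonzeros
vals = [6, 9, 8, 5, 7]   # nonzero values, stored in the same order as idx

for i in range(len(pos) - 1):            # one segment per row
    for p in range(pos[i], pos[i + 1]):
        print(f"M({i}, {idx[p]}) = {vals[p]}")
```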
###Code
%run ./animation/animation_2.py
###Output
_____no_output_____
###Markdown
(dense $d_2$, compressed $d_1$)We switch the order of the dimensions by passing in `[1, 0]` for the optional parameter `mode_ordering`. This results in a column-major (rather than row-major) format called compressed sparse column (CSC).
###Code
csc = pt.format([dense, compressed], [1, 0])
make_example_matrix(csc)
###Output
_____no_output_____
###Markdown
In this format, $d_2$ has only `size`, while $d_1$ has a `pos` and `idx` array.
###Code
%run ./animation/animation_3.py
###Output
_____no_output_____
###Markdown
(compressed $d_1$, compressed $d_2$, compressed $d_3$)To more clearly visualize the compressed sparse fiber (CSF) format, where all dimensions are sparse, we move to three dimensions. The tensor $N$ defined below is a $2 \times 3 \times 4$ tensor.Similarly as above, passing in `compressed` is equivalent to `pt.format([compressed, compressed])`.
###Code
N = pt.tensor([2, 3, 4], compressed)
# First layer.
N.insert([0, 0, 0], 6)
N.insert([0, 0, 2], 9)
N.insert([0, 0, 3], 8)
N.insert([0, 2, 0], 5)
N.insert([0, 2, 3], 7)
# Second layer.
N.insert([1, 0, 1], 2)
N.insert([1, 1, 0], 3)
N.insert([1, 2, 2], 6)
N.insert([1, 1, 3], 1)
N
###Output
_____no_output_____
###Markdown
This animation represents $N$ as two $3 \times 4$ matrices.
###Code
%run ./animation/animation_4.py
###Output
_____no_output_____
###Markdown
NumPy and SciPy I/O We can also initialize tensors with NumPy arrays or SciPy sparse (CSR or CSC) matrices. Let's start by importing the packages we need.
###Code
import numpy as np
import scipy as sp
###Output
_____no_output_____
###Markdown
Given a NumPy array such as the one randomly generated below, we can convert it to a TACO tensor using the `pytaco.from_array` [function](http://tensor-compiler.org/symphony-docs/reference/rst_files/functions/pytaco.from_array.html), which creates a dense tensor.
###Code
array = np.random.uniform(size=10)
tensor = pt.from_array(array)
tensor
###Output
_____no_output_____
###Markdown
We can also export TACO tensors to NumPy arrays.
###Code
tensor.to_array()
###Output
_____no_output_____
###Markdown
Similarly, given a SciPy sparse matrix, we can convert it to a TACO tensor. For a CSR matrix like the one below, we use the `pytaco.from_sp_csr` [function](http://tensor-compiler.org/symphony-docs/reference/rst_files/functions/pytaco.from_sp_csr.html), which creates a CSR tensor.
###Code
size = 100
density = 0.1
sparse_matrix = sp.sparse.rand(size, size, density, format = 'csr')
A = pt.from_sp_csr(sparse_matrix)
###Output
_____no_output_____
###Markdown
And we can export the tensor to a SciPy sparse matrix.
###Code
A.to_sp_csr()
###Output
_____no_output_____
###Markdown
Example Application: SpMV The following example demonstrates how computations on tensors are performed using TACO. Sparse matrix-vector multiplication (SpMV) is a bottleneck computation in many scientific and engineering computations. Mathematically, SpMV can be expressed as $$y = Ax + z,$$ where $A$ is a sparse matrix and $x$, $y$, and $z$ are dense vectors. The computation can also be expressed in [index notation](http://tensor-compiler.org/symphony-docs/pycomputations/index.htmlspecifying-tensor-algebra-computations) as $$y_i = A_{ij} \cdot x_j + z_i.$$ Starting with the $A$ generated above, we can view its `shape` attribute, which we expect to be `[size, size]`:
###Code
A.shape
###Output
_____no_output_____
###Markdown
Examining the formula, we need to define a vector $x$ whose length is the number of columns of $A$, and a vector $z$ whose length is the number of rows. We generate $x$ and $z$ randomly with NumPy.
###Code
x = pt.from_array(np.random.uniform(size=A.shape[1]))
z = pt.from_array(np.random.uniform(size=A.shape[0]))
###Output
_____no_output_____
###Markdown
Expressing the Computation We can express the result $y$ as a dense vector.
###Code
y = pt.tensor([A.shape[0]], dense)
###Output
_____no_output_____
###Markdown
The syntax for TACO computations closely mirrors index notation, with the caveat that we also have to explicitly declare the index variables beforehand:
###Code
i, j = pt.get_index_vars(2)
y[i] = A[i, j] * x[j] + z[i]
###Output
_____no_output_____
###Markdown
Performing the ComputationOnce a tensor algebra computation has been defined, we can simply invoke the result tensor's `evaluate` method to perform the actual computation.[Under the hood](http://tensor-compiler.org/symphony-docs/pycomputations/index.htmlperforming-the-computation), TACO will first invoke the result tensor's `compile` method to generate code that performs the computation. TACO will then perform the actual computation by first invoking `assemble` to compute the sparsity structure of the result and subsequently invoking `compute` to compute the values of the result's nonzero elements.
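Based on that description, the three stages can also be invoked one by one instead of calling `evaluate`; a sketch (in this notebook we will simply call `evaluate` in the next cell):
```Python
y.compile()    # generate code for the computation
y.assemble()   # compute the sparsity structure of the result
y.compute()    # compute the values of the result's nonzero elements
```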
###Code
y.evaluate()
###Output
_____no_output_____
###Markdown
If we define a computation and then access the result without first manually invoking `evaluate` or `compile`/`assemble`/`compute`, TACO will automatically invoke the computation immediately before the result is accessed. Finally, we can display the result.
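As an optional aside (a sketch added for illustration, not part of the original tutorial), the result can also be cross-checked against the same formula computed directly with SciPy/NumPy, reusing the objects defined above:
```python
# y_i = A_ij * x_j + z_i evaluated with SciPy/NumPy for comparison
y_ref = sparse_matrix.dot(x.to_array()) + z.to_array()
print(np.allclose(y.to_array(), y_ref))  # expected to print True
```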
###Code
y
###Output
_____no_output_____ |
gilbert/cb_model_pipeline.ipynb | ###Markdown
Content Based ModelThis notebook uses code from the cross_val, sample_train_test, and evaluate pipeline.
###Code
import pandas as pd
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import RandomForestRegressor
def load_data(aug_tt, item_tt, user_tt):
"""
Load the data from the transaction tables
    Parameters
---------
aug_tt : str
File name of the parquet file with each row corresponding
to a user's features, an item's features, and the user's
rating for that item
item_tt : str
File name of the parquet file with each row corresponding
to an item's features
user_tt : str
File name of the parquet file with each row corresponding
to a user's features
Returns
-------
df : pandas DataFrame
The augmented transaction table
item_df : pandas DataFrame
The item features as a transaction table
user_df : pandas DataFrame
        The user features as a transaction table
item_ids : list
All unique item ids
user_ids : list
All unique user ids
"""
df = pd.read_parquet(aug_tt).dropna()
item_df = pd.read_parquet(item_tt)
item_ids = item_df['movieId'].unique()
item_df = item_df.drop(columns=['movieId'])
user_df = pd.read_parquet(user_tt).drop(columns=['userId'])
user_ids = df['userId'].unique()
return df, item_df, user_df, item_ids, user_ids
def fit_ml_cb(train_df, model, target_col='rating', drop_cols=['userId', 'movieId', 'timestamp']):
"""
    Fit a regression model that predicts a user's rating for an item from
    the combined user and item features in the training set.
    Parameters
---------
train_df : pandas DataFrame
The training set as a transaction table. Each row
corresponds to a user's features and that item's features
along with the user's rating for that item.
model : an sklearn regressor object
An object with a fit and predict method that outputs a
float.
target_col : str
The column corresponding to the rating.
drop_cols : list
Columns to be dropped in train_df.
Returns
-------
rs_model : an sklearn model object
The fitted version of the model input used to predict the
rating of a user for an object given the user's features
and the item's features.
"""
rs_model = clone(model)
target = train_df[target_col].dropna().values.ravel()
train_df = train_df.drop(columns=[target_col]+drop_cols)
    rs_model = rs_model.fit(train_df, target)
return rs_model
def reco_ml_cb(user_df, item_df, item_ids, model_fitted):
"""
Completes the entire utility matrix based on the model passed
Parameters
---------
    user_df : pandas DataFrame
        The user features as a transaction table, one row per user.
    item_df : pandas DataFrame
        The item features as a transaction table, one row per item,
        aligned with item_ids.
    item_ids : list or array
        The ids of the items described in item_df.
    model_fitted : a fitted sklearn regressor object
        An object with a predict method that outputs a float, already
        fitted on the combined user and item features.
Returns
-------
full_matrix : a pandas DataFrame
The completed utility matrix.
"""
recos = {}
c = 1
for u, u_feats in user_df.iterrows():
print(c, 'out of', len(user_df), end='\r')
u_feats = pd.concat([pd.DataFrame(u_feats).T] *
len(item_ids)).reset_index(drop=True)
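        # Pair the repeated user features with every item's features, so each
        # row contains the inputs needed to predict that user's rating for one item.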
a_feats = u_feats.join(item_df)
reco = pd.Series(model_fitted.predict(a_feats), index=item_ids)
recos[u] = reco
c += 1
full_matrix = pd.DataFrame.from_dict(recos, orient='index')
return full_matrix
def reco_ml_cb_tt(df_test, model_fitted, target='rating', drop_cols=['userId', 'movieId', 'timestamp']):
"""
    Make predictions on the test set and output an array of the predicted
    values.
    Parameters
---------
df_test : pandas DataFrame
The test set as a transaction table. Each row
corresponds to a user's features and that item's features
along with the user's rating for that item.
model_fitted : an sklearn regressor object
An object with a fit and predict method that outputs a
float. Must be fitted already
target_col : str
The column corresponding to the rating.
drop_cols : list
Columns to be dropped in df_test.
Returns
-------
result : numpy array
The results of the model using df_test's features
"""
df_test = df_test.drop(columns=[target]+drop_cols)
result = model_fitted.predict(df_test)
return result
def split_train_test(data, train_ratio=0.7,uid='userId', iid='movieId', rid='rating'):
"""
Splits the transaction data into train and test sets.
Parameters
----------
data : pandas DataFrame for transaction table containing user, item, and ratings
train_ratio : the desired ratio of training set, while 1-train ratio is automatically set for the test set
Returns
---------
df_train_fin : dataframe for the training set
df_test_fin : dataframe for the test set
    df_test_um : the test set pivoted into a utility matrix ready as input for the recsys
        (index='userId', columns='movieId', values='rating'; in general, index=column[0],
        columns=itemId, values=rating)
    indx_train : indices of the rows assigned to the training set
    indx_test : indices of the rows assigned to the test set
"""
list_df_train = []
list_df_test = []
#group by user id
d = dict(tuple(data.groupby(data.columns[0]))) #assuming column[0] is the userId
#splitting randomly per user
for i in (d):
if len(d[i])<2:
list_df_test.append(d[i])
else:
df_train = d[i].sample(frac=train_ratio)
ind = df_train.index
df_test = d[i].drop(ind)
list_df_train.append(df_train)
list_df_test.append(df_test)
# 2. merge selected train set per user to a single dataframe
df_train_fin = pd.concat(list_df_train)
df_test_fin = pd.concat(list_df_test)
# 3. Option to pivot it to create the utility matrix ready as input for recsys
df_test_um = df_test_fin.pivot(index=uid, columns=iid, values=rid)
# 4. get indices of train and test sets
indx_train = df_train_fin.index
indx_test = df_test_fin.index
return df_train_fin, df_test_fin, df_test_um, indx_train, indx_test #return indices
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
def evaluate(df_test_result, df_test_data):
"""
Calculates the mse and mae per user of the results of the recommender system for a given test set.
Parameters
----------
df_test_result : utility matrix containing the result of the recommender systems
df_test_data : pivoted test data generated from splitting the transaction table and tested on the recommender systems
Returns
---------
mse_list : list of mean squared error for each user
mae_list : list of mean absolute error for each user
"""
mse_list = []
mae_list = []
# test indices first, all user ids should be represented in the test matrix
idx_orig_data = df_test_data.index
idx_result = df_test_result.index
a=idx_orig_data.difference(idx_result)
if len(a)==0:
print('proceed')
for i in (df_test_result.index):
y_pred = df_test_result[df_test_result.index==i].fillna(0)
y = df_test_data[df_test_data.index==i].fillna(0)
y_pred = y_pred[y.columns]
mse = mean_squared_error(y, y_pred)
mae = mean_absolute_error(y, y_pred)
mse_list.append(mse)
mae_list.append(mae)
else:
print('error')
return mse_list, mae_list
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
def evaluate_arrays(model_result_arr, df_data, indx_test):
"""
Calculates the mse and mae of the recommender system for a given result and test set.
Parameters
----------
model_result_arr : ratings from the results of the recommender sys using test set
    df_data : the original dataframe before splitting;
        the original ratings (ground truth) for the test set are extracted from it using indx_test
indx_test : result indices of test set from splitting
Returns
---------
    mse : mean squared error computed with sklearn
    mae : mean absolute error computed with sklearn
"""
df_test_truth = df_data.loc[pd.Index(indx_test), df_data.columns[2]]
test_arr = df_test_truth.values
# test indices first, all user ids should be represented in the test matrix
result_len = len(model_result_arr)
test_len = len(test_arr)
if result_len!=test_len:
raise ValueError('the arrays are of different lengths %s in %s' % (result_len,test_len))
else:
print('proceed')
mse = mean_squared_error(test_arr, model_result_arr)
mae = mean_absolute_error(test_arr, model_result_arr)
return mse, mae
def cross_val(df, k, model, split_method='random'):
"""
Performs cross-validation for different train and test sets.
Parameters
-----------
df : the data to be split in the form of vanilla/transaction++ table (uid, iid, rating, timestamp)
k : the number of times splitting and learning with the model is desired
model : an unfitted sklearn model
split_method : 'random' splitting or 'chronological' splitting of the data
Returns
--------
mse and mae : error metrics using sklearn
"""
mse = []
mae = []
if split_method == 'random':
for i in range(k):
print(i)
# 1. split
print('Starting splitting')
df_train, df_test, df_test_um, indx_train, indx_test = split_train_test(
df, 0.7)
print('Finished splitting')
# 2. train with model
model_clone = clone(model)
print('Starting training')
model_clone_fit = fit_ml_cb(df_train.sample(100), model_clone)
print('Finished training')
print('Starting completing matrix')
            result = reco_ml_cb_tt(df_test, model_clone_fit)
print('Finished completing matrix')
print('Starting computing MAE and MSE')
# 3. evaluate results (result is in the form of utility matrix)
mse_i, mae_i = evaluate_arrays(result, df, indx_test)
print('Finished computing MAE and MSE')
mse.append(mse_i)
mae.append(mae_i)
elif split_method == 'chronological':
# 1. split
        print('Starting splitting')
        df_train, df_test, df_test_um, indx_train, indx_test = split_train_test_chronological(
            df, 0.7)
        print('Finished splitting')
# 2. train with model
model_clone = clone(model)
print('Starting training')
model_clone_fit = fit_ml_cb(df_train.sample(100), model_clone)
print('Finished training')
print('Starting completing matrix')
        result = reco_ml_cb_tt(df_test, model_clone_fit)
print('Finished completing matrix')
print('Starting computing MAE and MSE')
# 3. evaluate results (result is in the form of utility matrix)
mse_i, mae_i = evaluate_arrays(result, df, indx_test)
print('Finished computing MAE and MSE')
mse.append(mse_i)
mae.append(mae_i)
return mse, mae
###Output
_____no_output_____
###Markdown
Model Pipeline
###Code
#Declare your model
rs_model1 = RandomForestRegressor(random_state=202109, n_jobs=-1)
#Load the data
df, item_df, user_df, item_ids, user_ids = load_data('augmented_transaction_table.parquet',
'item_feature.parquet',
'user_feature.parquet')
#Do your train and test split
df_train, df_test, df_test_um, indx_train, indx_test = split_train_test(df, 0.7) #To split the data
# #Fit your model to the train data
model_fit = fit_ml_cb(df_train.sample(100), rs_model1) #To fit the model
#Predict on the test data
preds_array = reco_ml_cb_tt(df_test, model_fit) #To make predictions as an array
mse, mae = cross_val(df, 5, model_fit, split_method='random')
mse, mae
evaluate_arrays(preds_array, df, indx_test) #MSE and MAE
# preds_matrix = reco_ml_cb(user_df, item_df, item_ids, model_fit)  # To complete the utility matrix
import unittest
class TestGetRec(unittest.TestCase):
import pandas as pd
import numpy as np
from sklearn.base import clone
from sklearn.ensemble import RandomForestRegressor
def test_matrix_shape(self):
df, item_df, user_df, item_ids, user_ids = load_data('augmented_transaction_table.parquet',
'item_feature.parquet',
'user_feature.parquet')
df_train, df_test, df_test_um, indx_train, indx_test = split_train_test(df, 0.7) #To split the data
model_fit = fit_ml_cb(df_train.sample(100), rs_model1)
matrix_result = reco_ml_cb(user_df, item_df, item_ids, model_fit)
self.assertEqual(matrix_result.shape[0], len(user_ids))
self.assertEqual(matrix_result.shape[1], len(item_ids))
def test_array_pred(self):
df, item_df, user_df, item_ids, user_ids = load_data('augmented_transaction_table.parquet',
'item_feature.parquet',
'user_feature.parquet')
df_train, df_test, df_test_um, indx_train, indx_test = split_train_test(df, 0.7) #To split the data
model_fit = fit_ml_cb(df_train.sample(100), rs_model1)
array_result = reco_ml_cb_tt(df_test, model_fit)
self.assertEqual(len(array_result), len(df_test))
unittest.main(argv=[''], verbosity=2, exit=False)
###Output
test_array_pred (__main__.TestGetRec) ... |
demo/PyPlink Demo.ipynb | ###Markdown
`PyPlink``PyPlink` is a Python module to read and write binary Plink files. Here are small examples for `PyPlink`.
###Code
from pyplink import PyPlink
###Output
_____no_output_____
###Markdown
Table of contents* [**Reading binary pedfile**](Reading-binary-pedfile) * [Getting the demo data](Getting-the-demo-data) * [Reading the binary file](Reading-the-binary-file) * [Getting dataset information](Getting-dataset-information) * [Iterating over all markers](Iterating-over-all-markers) * [*Additive format*](iterating_over_all_additive) * [*Nucleotide format*](iterating_over_all_nuc) * [Iterating over selected markers](Iterating-over-selected-markers) * [*Additive format*](iterating_over_selected_additive) * [*Nucleotide format*](iterating_over_selected_nuc) * [Extracting a single marker](Extracting-a-single-marker) * [*Additive format*](extracting_additive) * [*Nucleotide format*](extracting_nuc) * [Misc example](Misc-example) * [*Extracting a subset of markers and samples*](Extracting-a-subset-of-markers-and-samples) * [*Counting the allele frequency of markers*](Counting-the-allele-frequency-of-markers)* [**Writing binary pedfile**](Writing-binary-pedfile) * [SNP-major format](SNP-major-format) * [INDIVIDUAL-major-format](INDIVIDUAL-major-format) Reading binary pedfile Getting the demo dataThe [`Plink`](http://pngu.mgh.harvard.edu/~purcell/plink/) softwares provides a testing dataset on the [resources page](http://pngu.mgh.harvard.edu/~purcell/plink/res.shtml). It contains the 270 samples from the HapMap project (release 23) on build GRCh36/hg18.
###Code
import zipfile
try:
from urllib.request import urlretrieve
except ImportError:
from urllib import urlretrieve
# Downloading the demo data from the Plink website
urlretrieve(
"http://pngu.mgh.harvard.edu/~purcell/plink/dist/hapmap_r23a.zip",
"hapmap_r23a.zip",
)
# Extracting the archive content
with zipfile.ZipFile("hapmap_r23a.zip", "r") as z:
z.extractall(".")
###Output
_____no_output_____
###Markdown
Reading the binary fileTo read a binary file, `PyPlink` only requires the prefix of the files.
###Code
pedfile = PyPlink("hapmap_r23a")
###Output
_____no_output_____
###Markdown
Getting dataset information
###Code
print("{:,d} samples and {:,d} markers".format(
pedfile.get_nb_samples(),
pedfile.get_nb_markers(),
))
all_samples = pedfile.get_fam()
all_samples.head()
all_markers = pedfile.get_bim()
all_markers.head()
###Output
_____no_output_____
###Markdown
Iterating over all markers Additive formatCycling through genotypes as `-1`, `0`, `1` and `2` values, where `-1` is unknown, `0` is homozygous (major allele), `1` is heterozygous, and `2` is homozygous (minor allele).
###Code
for marker_id, genotypes in pedfile:
print(marker_id)
print(genotypes)
break
for marker_id, genotypes in pedfile.iter_geno():
print(marker_id)
print(genotypes)
break
###Output
rs10399749
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0
0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 -1
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 -1 0 0 0
0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 -1 0 0 0 0 0 0 0
-1 0 -1 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 -1 0 -1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
###Markdown
Nucleotide formatCycling through genotypes as `A`, `C`, `G` and `T` values (where `00` is unknown).
###Code
for marker_id, genotypes in pedfile.iter_acgt_geno():
print(marker_id)
print(genotypes)
break
###Output
rs10399749
['CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' '00' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' '00' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' '00' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' '00' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' '00' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' '00'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' '00' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' '00' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' '00' 'CC' '00' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'00' 'CC' '00' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' '00' 'CC' 'CC'
'CC' 'CC' 'CC' '00' 'CC' '00' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC'
'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC' 'CC']
###Markdown
Iterating over selected markers Additive formatCycling through genotypes as `-1`, `0`, `1` and `2` values, where `-1` is unknown, `0` is homozygous (major allele), `1` is heterozygous, and `2` is homozygous (minor allele).
###Code
markers = ["rs7092431", "rs9943770", "rs1587483"]
for marker_id, genotypes in pedfile.iter_geno_marker(markers):
print(marker_id)
print(genotypes, end="\n\n")
###Output
rs7092431
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
rs9943770
[ 0 0 0 2 0 2 0 1 1 0 0 0 0 0 0 0 1 0 0 1 0 0 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 1 0 0
1 0 1 0 0 1 1 1 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0 1 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 0 0 0 0 0 0 0
0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 0 0 0 0
0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 -1 0 0 0 1 2 1 1 0 1 1 2 0 0 0 1 0 0 0 0 1 1 0 2
1 1 1 0 1 1 -1 1 0 0 1 0 0 2 0 1 2 0 1 1 1 2 0 1 0
0 0 0 0 0 0 2 1 2 1 1 2 1 1 1 1 0 1 1 2 1 0 -1 0 1
1 1 0 1 0 1 -1 2 1 0 0 0 1 1 1 2 0 0 1 1]
rs1587483
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 -1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
###Markdown
Nucleotide formatCycling through genotypes as `A`, `C`, `G` and `T` values (where `00` is unknown).
###Code
markers = ["rs7092431", "rs9943770", "rs1587483"]
for marker_id, genotypes in pedfile.iter_acgt_geno_marker(markers):
print(marker_id)
print(genotypes, end="\n\n")
###Output
rs7092431
['GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' '00'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG']
rs9943770
['AA' 'AA' 'AA' 'GG' 'AA' 'GG' 'AA' 'GA' 'GA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA'
'AA' 'GA' 'AA' 'AA' 'GA' 'AA' 'AA' 'GA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA'
'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'GA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA'
'GA' 'AA' 'GA' 'AA' 'AA' 'GA' 'AA' 'GA' 'AA' 'AA' 'GA' 'GA' 'GA' 'AA' 'AA'
'AA' 'AA' 'AA' 'AA' 'AA' 'GA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'GA'
'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'GA' 'AA' 'AA' 'GA' 'AA' 'AA' 'AA'
'AA' 'GA' 'GA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA'
'AA' 'AA' 'AA' 'AA' 'AA' 'GA' 'AA' 'AA' 'GA' 'AA' 'AA' 'AA' 'GA' 'AA' 'AA'
'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'GA' 'GA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA'
'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'GA' 'AA' 'AA' 'GA' 'AA' 'AA' 'AA' 'AA'
'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'GA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA'
'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'GA' '00' 'AA' 'AA' 'AA'
'GA' 'GG' 'GA' 'GA' 'AA' 'GA' 'GA' 'GG' 'AA' 'AA' 'AA' 'GA' 'AA' 'AA' 'AA'
'AA' 'GA' 'GA' 'AA' 'GG' 'GA' 'GA' 'GA' 'AA' 'GA' 'GA' '00' 'GA' 'AA' 'AA'
'GA' 'AA' 'AA' 'GG' 'AA' 'GA' 'GG' 'AA' 'GA' 'GA' 'GA' 'GG' 'AA' 'GA' 'AA'
'AA' 'AA' 'AA' 'AA' 'AA' 'AA' 'GG' 'GA' 'GG' 'GA' 'GA' 'GG' 'GA' 'GA' 'GA'
'GA' 'AA' 'GA' 'GA' 'GG' 'GA' 'AA' '00' 'AA' 'GA' 'GA' 'GA' 'AA' 'GA' 'AA'
'GA' '00' 'GG' 'GA' 'AA' 'AA' 'AA' 'GA' 'GA' 'GA' 'GG' 'AA' 'AA' 'GA' 'GA']
rs1587483
['GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' '00' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' '00' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'CG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' '00' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG'
'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG' 'GG']
###Markdown
Extracting a single marker Additive formatCycling through genotypes as `-1`, `0`, `1` and `2` values, where `-1` is unknown, `0` is homozygous (major allele), `1` is heterozygous, and `2` is homozygous (minor allele).
###Code
pedfile.get_geno_marker("rs7619974")
###Output
_____no_output_____
###Markdown
Nucleotide formatCycling through genotypes as `A`, `C`, `G` and `T` values (where `00` is unknown).
###Code
pedfile.get_acgt_geno_marker("rs7619974")
###Output
_____no_output_____
###Markdown
Misc example Extracting a subset of markers and samplesTo get all markers on the Y chromosome for the males.
###Code
# Getting the Y markers
y_markers = all_markers[all_markers.chrom == 24].index.values
# Getting the males
males = all_samples.gender == 1
# Cycling through the Y markers
for marker_id, genotypes in pedfile.iter_geno_marker(y_markers):
male_genotypes = genotypes[males.values]
print("{:,d} total genotypes".format(len(genotypes)))
print("{:,d} genotypes for {:,d} males ({} on chr{} and position {:,d})".format(
len(male_genotypes),
males.sum(),
marker_id,
all_markers.loc[marker_id, "chrom"],
all_markers.loc[marker_id, "pos"],
))
break
###Output
270 total genotypes
142 genotypes for 142 males (rs1140798 on chr24 and position 169,542)
###Markdown
Counting the allele frequency of markersTo count the minor allele frequency of a subset of markers (only for founders).
###Code
# Getting the founders
founders = (all_samples.father == "0") & (all_samples.mother == "0")
# Computing the MAF
markers = ["rs7619974", "rs2949048", "rs16941434"]
for marker_id, genotypes in pedfile.iter_geno_marker(markers):
valid_genotypes = genotypes[founders.values & (genotypes != -1)]
maf = valid_genotypes.sum() / (len(valid_genotypes) * 2)
print(marker_id, round(maf, 6), sep="\t")
###Output
rs7619974 0.0
rs2949048 0.02381
rs16941434 0.357143
###Markdown
Writing binary pedfile *SNP-major* formatThe following example shows how to write a binary file using the `PyPlink` module. The *SNP-major* format is the default, meaning the binary file is written one marker at a time.> Note that `PyPlink` only writes the `BED` file. The user is required to create the `FAM` and `BIM` files.
###Code
# The genotypes for 3 markers and 10 samples
all_genotypes = [
[0, 0, 0, 1, 0, 0, -1, 2, 1, 0],
[0, 0, 1, 1, 0, 0, 0, 1, 2, 0],
[0, 0, 0, 0, 1, 1, 0, 0, 0, 1],
]
# Writing the BED file using PyPlink
with PyPlink("test_output", "w") as pedfile:
for genotypes in all_genotypes:
pedfile.write_genotypes(genotypes)
# Writing a dummy FAM file
with open("test_output.fam", "w") as fam_file:
for i in range(10):
print("family_{}".format(i+1), "sample_{}".format(i+1), "0", "0", "0", "-9",
sep=" ", file=fam_file)
# Writing a dummy BIM file
with open("test_output.bim", "w") as bim_file:
for i in range(3):
print("1", "marker_{}".format(i+1), "0", i+1, "A", "T",
sep="\t", file=bim_file)
# Checking the content of the newly created binary files
pedfile = PyPlink("test_output")
pedfile.get_fam()
pedfile.get_bim()
for marker, genotypes in pedfile:
print(marker, genotypes)
###Output
marker_1 [ 0 0 0 1 0 0 -1 2 1 0]
marker_2 [0 0 1 1 0 0 0 1 2 0]
marker_3 [0 0 0 0 1 1 0 0 0 1]
###Markdown
The newly created binary files are compatible with Plink.
###Code
from subprocess import Popen, PIPE
# Computing frequencies
proc = Popen(["plink", "--noweb", "--bfile", "test_output", "--freq"],
stdout=PIPE, stderr=PIPE)
outs, errs = proc.communicate()
print(outs.decode(), end="")
with open("plink.frq", "r") as i_file:
print(i_file.read(), end="")
###Output
CHR SNP A1 A2 MAF NCHROBS
1 marker_1 A T 0.2222 18
1 marker_2 A T 0.25 20
1 marker_3 A T 0.15 20
###Markdown
*INDIVIDUAL-major* formatThe following example shows how to write a binary file using the `PyPlink` module. The *INDIVIDUAL-major* format means that the binary file is written one sample at a time.**Files in *INDIVIDUAL-major* format are not readable by `PyPlink`.** You need to convert them using *Plink*.> Note that `PyPlink` only writes the `BED` file. The user is required to create the `FAM` and `BIM` files.
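For example, once the `test_output_2` files are written below, a conversion along these lines should work (the output prefix here is hypothetical; `--make-bed` rewrites the data in the default *SNP-major* format):
```python
from subprocess import Popen, PIPE

# Hypothetical conversion of the INDIVIDUAL-major file back to the SNP-major format
proc = Popen(["plink", "--noweb", "--bfile", "test_output_2",
              "--make-bed", "--out", "test_output_2_snp_major"],
             stdout=PIPE, stderr=PIPE)
outs, errs = proc.communicate()
```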
###Code
# The genotypes for 3 markers and 10 samples (INDIVIDUAL-major)
all_genotypes = [
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 1, 0],
[ 1, 1, 0],
[ 0, 0, 1],
[ 0, 0, 1],
[-1, 0, 0],
[ 2, 1, 0],
[ 1, 2, 0],
[ 0, 0, 1],
]
# Writing the BED file using PyPlink
with PyPlink("test_output_2", "w", bed_format="INDIVIDUAL-major") as pedfile:
for genotypes in all_genotypes:
pedfile.write_genotypes(genotypes)
# Writing a dummy FAM file
with open("test_output_2.fam", "w") as fam_file:
for i in range(10):
print("family_{}".format(i+1), "sample_{}".format(i+1), "0", "0", "0", "-9",
sep=" ", file=fam_file)
# Writing a dummy BIM file
with open("test_output_2.bim", "w") as bim_file:
for i in range(3):
print("1", "marker_{}".format(i+1), "0", i+1, "A", "T",
sep="\t", file=bim_file)
from subprocess import Popen, PIPE
# Computing frequencies
proc = Popen(["plink", "--noweb", "--bfile", "test_output_2", "--freq", "--out", "plink_2"],
stdout=PIPE, stderr=PIPE)
outs, errs = proc.communicate()
print(outs.decode(), end="")
with open("plink_2.frq", "r") as i_file:
print(i_file.read(), end="")
###Output
CHR SNP A1 A2 MAF NCHROBS
1 marker_1 A T 0.2222 18
1 marker_2 A T 0.25 20
1 marker_3 A T 0.15 20
|
5_RNN/2_Dinosaurus_Island_Character_level_language_model.ipynb | ###Markdown
Character level language model - Dinosaurus IslandWelcome to Dinosaurus Island! 65 million years ago, dinosaurs existed, and in this assignment they are back. You are in charge of a special task. Leading biology researchers are creating new breeds of dinosaurs and bringing them to life on earth, and your job is to give names to these dinosaurs. If a dinosaur does not like its name, it might go berserk, so choose wisely! Luckily you have learned some deep learning and you will use it to save the day. Your assistant has collected a list of all the dinosaur names they could find, and compiled them into this [dataset](dinos.txt). (Feel free to take a look by clicking the previous link.) To create new dinosaur names, you will build a character level language model to generate new names. Your algorithm will learn the different name patterns, and randomly generate new names. Hopefully this algorithm will keep you and your team safe from the dinosaurs' wrath! By completing this assignment you will learn:- How to store text data for processing using an RNN - How to synthesize data, by sampling predictions at each time step and passing it to the next RNN-cell unit- How to build a character-level text generation recurrent neural network- Why clipping the gradients is importantWe will begin by loading in some functions that we have provided for you in `rnn_utils`. Specifically, you have access to functions such as `rnn_forward` and `rnn_backward` which are equivalent to those you've implemented in the previous assignment. Updates If you were working on the notebook before this update...* The current notebook is version "3b".* You can find your original work saved in the notebook with the previous version name ("v3a") * To view the file directory, go to the menu "File->Open", and this will open a new tab that shows the file directory. List of updates 3b- removed redundant numpy import* `clip` - change test code to use variable name 'mvalue' rather than 'maxvalue' and deleted it from namespace to avoid confusion.* `optimize` - removed redundant description of clip function to discourage use of using 'maxvalue' which is not an argument to optimize* `model` - added 'verbose mode to print X,Y to aid in creating that code. - wordsmith instructions to prevent confusion - 2000 examples vs 100, 7 displayed vs 10 - no randomization of order* `sample` - removed comments regarding potential different sample outputs to reduce confusion.
###Code
import numpy as np
from utils import *
import random
import pprint
###Output
_____no_output_____
###Markdown
1 - Problem Statement 1.1 - Dataset and PreprocessingRun the following cell to read the dataset of dinosaur names, create a list of unique characters (such as a-z), and compute the dataset and vocabulary size.
###Code
data = open('dinos.txt', 'r').read()
data= data.lower()
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
print('There are %d total characters and %d unique characters in your data.' % (data_size, vocab_size))
###Output
There are 19909 total characters and 27 unique characters in your data.
###Markdown
* The characters are a-z (26 characters) plus the "\n" (or newline character).* In this assignment, the newline character "\n" plays a role similar to the `` (or "End of sentence") token we had discussed in lecture. - Here, "\n" indicates the end of the dinosaur name rather than the end of a sentence. * `char_to_ix`: In the cell below, we create a python dictionary (i.e., a hash table) to map each character to an index from 0-26.* `ix_to_char`: We also create a second python dictionary that maps each index back to the corresponding character. - This will help you figure out what index corresponds to what character in the probability distribution output of the softmax layer.
###Code
chars = sorted(chars)
print(chars)
char_to_ix = { ch:i for i,ch in enumerate(chars) }
ix_to_char = { i:ch for i,ch in enumerate(chars) }
pp = pprint.PrettyPrinter(indent=4)
pp.pprint(ix_to_char)
###Output
{ 0: '\n',
1: 'a',
2: 'b',
3: 'c',
4: 'd',
5: 'e',
6: 'f',
7: 'g',
8: 'h',
9: 'i',
10: 'j',
11: 'k',
12: 'l',
13: 'm',
14: 'n',
15: 'o',
16: 'p',
17: 'q',
18: 'r',
19: 's',
20: 't',
21: 'u',
22: 'v',
23: 'w',
24: 'x',
25: 'y',
26: 'z'}
###Markdown
1.2 - Overview of the modelYour model will have the following structure: - Initialize parameters - Run the optimization loop - Forward propagation to compute the loss function - Backward propagation to compute the gradients with respect to the loss function - Clip the gradients to avoid exploding gradients - Using the gradients, update your parameters with the gradient descent update rule.- Return the learned parameters **Figure 1**: Recurrent Neural Network, similar to what you had built in the previous notebook "Building a Recurrent Neural Network - Step by Step". * At each time-step, the RNN tries to predict what is the next character given the previous characters. * The dataset $\mathbf{X} = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is a list of characters in the training set.* $\mathbf{Y} = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$ is the same list of characters but shifted one character forward. * At every time-step $t$, $y^{\langle t \rangle} = x^{\langle t+1 \rangle}$. The prediction at time $t$ is the same as the input at time $t + 1$. 2 - Building blocks of the modelIn this part, you will build two important blocks of the overall model:- Gradient clipping: to avoid exploding gradients- Sampling: a technique used to generate charactersYou will then apply these two functions to build the model. 2.1 - Clipping the gradients in the optimization loopIn this section you will implement the `clip` function that you will call inside of your optimization loop. Exploding gradients* When gradients are very large, they're called "exploding gradients." * Exploding gradients make the training process more difficult, because the updates may be so large that they "overshoot" the optimal values during back propagation.Recall that your overall loop structure usually consists of:* forward pass, * cost computation, * backward pass, * parameter update. Before updating the parameters, you will perform gradient clipping to make sure that your gradients are not "exploding." gradient clippingIn the exercise below, you will implement a function `clip` that takes in a dictionary of gradients and returns a clipped version of gradients if needed. * There are different ways to clip gradients.* We will use a simple element-wise clipping procedure, in which every element of the gradient vector is clipped to lie between some range [-N, N]. * For example, if the N=10 - The range is [-10, 10] - If any component of the gradient vector is greater than 10, it is set to 10. - If any component of the gradient vector is less than -10, it is set to -10. - If any components are between -10 and 10, they keep their original values. **Figure 2**: Visualization of gradient descent with and without gradient clipping, in a case where the network is running into "exploding gradient" problems. **Exercise**: Implement the function below to return the clipped gradients of your dictionary `gradients`. * Your function takes in a maximum threshold and returns the clipped versions of the gradients. * You can check out [numpy.clip](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.clip.html). - You will need to use the argument "`out = ...`". - Using the "`out`" parameter allows you to update a variable "in-place". - If you don't use "`out`" argument, the clipped variable is stored in the variable "gradient" but does not update the gradient variables `dWax`, `dWaa`, `dWya`, `db`, `dby`.
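As a quick stand-alone illustration of element-wise clipping with `numpy.clip` and its `out` argument (the values below are made up):
```python
import numpy as np

g = np.array([-12.5, 3.0, 42.0])
np.clip(g, -10, 10, out=g)  # clip in place to the range [-10, 10]
print(g)                    # [-10.   3.  10.]
```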
###Code
### GRADED FUNCTION: clip
def clip(gradients, maxValue):
'''
Clips the gradients' values between minimum and maximum.
Arguments:
gradients -- a dictionary containing the gradients "dWaa", "dWax", "dWya", "db", "dby"
maxValue -- everything above this number is set to this number, and everything less than -maxValue is set to -maxValue
Returns:
gradients -- a dictionary with the clipped gradients.
'''
dWaa, dWax, dWya, db, dby = gradients['dWaa'], gradients['dWax'], gradients['dWya'], gradients['db'], gradients['dby']
### START CODE HERE ###
# clip to mitigate exploding gradients, loop over [dWax, dWaa, dWya, db, dby]. (≈2 lines)
for gradient in [dWaa, dWax, dWya, db, dby]:
np.clip(a=gradient, a_max=maxValue, a_min=(-1*maxValue), out=gradient)
### END CODE HERE ###
gradients = {"dWaa": dWaa, "dWax": dWax, "dWya": dWya, "db": db, "dby": dby}
return gradients
# Test with a maxvalue of 10
mValue = 10
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, mValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
###Output
gradients["dWaa"][1][2] = 10.0
gradients["dWax"][3][1] = -10.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 10.]
gradients["dby"][1] = [ 8.45833407]
###Markdown
** Expected output:**```Pythongradients["dWaa"][1][2] = 10.0gradients["dWax"][3][1] = -10.0gradients["dWya"][1][2] = 0.29713815361gradients["db"][4] = [ 10.]gradients["dby"][1] = [ 8.45833407]```
###Code
# Test with a maxValue of 5
mValue = 5
np.random.seed(3)
dWax = np.random.randn(5,3)*10
dWaa = np.random.randn(5,5)*10
dWya = np.random.randn(2,5)*10
db = np.random.randn(5,1)*10
dby = np.random.randn(2,1)*10
gradients = {"dWax": dWax, "dWaa": dWaa, "dWya": dWya, "db": db, "dby": dby}
gradients = clip(gradients, mValue)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
del mValue # avoid common issue
###Output
gradients["dWaa"][1][2] = 5.0
gradients["dWax"][3][1] = -5.0
gradients["dWya"][1][2] = 0.29713815361
gradients["db"][4] = [ 5.]
gradients["dby"][1] = [ 5.]
###Markdown
** Expected Output: **```Pythongradients["dWaa"][1][2] = 5.0gradients["dWax"][3][1] = -5.0gradients["dWya"][1][2] = 0.29713815361gradients["db"][4] = [ 5.]gradients["dby"][1] = [ 5.]``` 2.2 - SamplingNow assume that your model is trained. You would like to generate new text (characters). The process of generation is explained in the picture below: **Figure 3**: In this picture, we assume the model is already trained. We pass in $x^{\langle 1\rangle} = \vec{0}$ at the first time step, and have the network sample one character at a time. **Exercise**: Implement the `sample` function below to sample characters. You need to carry out 4 steps:- **Step 1**: Input the "dummy" vector of zeros $x^{\langle 1 \rangle} = \vec{0}$. - This is the default input before we've generated any characters. We also set $a^{\langle 0 \rangle} = \vec{0}$ - **Step 2**: Run one step of forward propagation to get $a^{\langle 1 \rangle}$ and $\hat{y}^{\langle 1 \rangle}$. Here are the equations:hidden state: $$ a^{\langle t+1 \rangle} = \tanh(W_{ax} x^{\langle t+1 \rangle } + W_{aa} a^{\langle t \rangle } + b)\tag{1}$$activation:$$ z^{\langle t + 1 \rangle } = W_{ya} a^{\langle t + 1 \rangle } + b_y \tag{2}$$prediction:$$ \hat{y}^{\langle t+1 \rangle } = softmax(z^{\langle t + 1 \rangle })\tag{3}$$- Details about $\hat{y}^{\langle t+1 \rangle }$: - Note that $\hat{y}^{\langle t+1 \rangle }$ is a (softmax) probability vector (its entries are between 0 and 1 and sum to 1). - $\hat{y}^{\langle t+1 \rangle}_i$ represents the probability that the character indexed by "i" is the next character. - We have provided a `softmax()` function that you can use. Additional Hints- $x^{\langle 1 \rangle}$ is `x` in the code. When creating the one-hot vector, make a numpy array of zeros, with the number of rows equal to the number of unique characters, and the number of columns equal to one. It's a 2D and not a 1D array.- $a^{\langle 0 \rangle}$ is `a_prev` in the code. It is a numpy array of zeros, where the number of rows is $n_{a}$, and number of columns is 1. It is a 2D array as well. $n_{a}$ is retrieved by getting the number of columns in $W_{aa}$ (the numbers need to match in order for the matrix multiplication $W_{aa}a^{\langle t \rangle}$ to work.- [numpy.dot](https://docs.scipy.org/doc/numpy/reference/generated/numpy.dot.html)- [numpy.tanh](https://docs.scipy.org/doc/numpy/reference/generated/numpy.tanh.html) Using 2D arrays instead of 1D arrays* You may be wondering why we emphasize that $x^{\langle 1 \rangle}$ and $a^{\langle 0 \rangle}$ are 2D arrays and not 1D vectors.* For matrix multiplication in numpy, if we multiply a 2D matrix with a 1D vector, we end up with with a 1D array.* This becomes a problem when we add two arrays where we expected them to have the same shape.* When two arrays with a different number of dimensions are added together, Python "broadcasts" one across the other.* Here is some sample code that shows the difference between using a 1D and 2D array.
###Code
matrix1 = np.array([[1,1],[2,2],[3,3]]) # (3,2)
matrix2 = np.array([[0],[0],[0]]) # (3,1)
vector1D = np.array([1,1]) # (2,)
vector2D = np.array([[1],[1]]) # (2,1)
print("matrix1 \n", matrix1,"\n")
print("matrix2 \n", matrix2,"\n")
print("vector1D \n", vector1D,"\n")
print("vector2D \n", vector2D)
print("Multiply 2D and 1D arrays: result is a 1D array\n",
np.dot(matrix1,vector1D))
print("Multiply 2D and 2D arrays: result is a 2D array\n",
np.dot(matrix1,vector2D))
print("Adding (3 x 1) vector to a (3 x 1) vector is a (3 x 1) vector\n",
"This is what we want here!\n",
np.dot(matrix1,vector2D) + matrix2)
print("Adding a (3,) vector to a (3 x 1) vector\n",
"broadcasts the 1D array across the second dimension\n",
"Not what we want here!\n",
np.dot(matrix1,vector1D) + matrix2
)
###Output
Adding a (3,) vector to a (3 x 1) vector
broadcasts the 1D array across the second dimension
Not what we want here!
[[2 4 6]
[2 4 6]
[2 4 6]]
###Markdown
- **Step 3**: Sampling: - Now that we have $y^{\langle t+1 \rangle}$, we want to select the next letter in the dinosaur name. If we select the most probable, the model will always generate the same result given a starting letter. To make the results more interesting, we will use np.random.choice to select a next letter that is *likely*, but not always the same. - Pick the next character's **index** according to the probability distribution specified by $\hat{y}^{\langle t+1 \rangle }$. - This means that if $\hat{y}^{\langle t+1 \rangle }_i = 0.16$, you will pick the index "i" with 16% probability. - Use [np.random.choice](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html). Example of how to use `np.random.choice()`: ```python np.random.seed(0) probs = np.array([0.1, 0.0, 0.7, 0.2]) idx = np.random.choice(range(len((probs)), p = probs) ``` - This means that you will pick the index (`idx`) according to the distribution: $P(index = 0) = 0.1, P(index = 1) = 0.0, P(index = 2) = 0.7, P(index = 3) = 0.2$. - Note that the value that's set to `p` should be set to a 1D vector. - Also notice that $\hat{y}^{\langle t+1 \rangle}$, which is `y` in the code, is a 2D array. - Also notice, while in your implementation, the first argument to np.random.choice is just an ordered list [0,1,.., vocab_len-1], it is *Not* appropriate to use char_to_ix.values(). The *order* of values returned by a python dictionary .values() call will be the same order as they are added to the dictionary. The grader may have a different order when it runs your routine than when you run it in your notebook. Additional Hints- [range](https://docs.python.org/3/library/functions.htmlfunc-range)- [numpy.ravel](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ravel.html) takes a multi-dimensional array and returns its contents inside of a 1D vector.```Pythonarr = np.array([[1,2],[3,4]])print("arr")print(arr)print("arr.ravel()")print(arr.ravel())```Output:```Pythonarr[[1 2] [3 4]]arr.ravel()[1 2 3 4]```- Note that `append` is an "in-place" operation. In other words, don't do this:```Pythonfun_hobbies = fun_hobbies.append('learning') Doesn't give you what you want``` - **Step 4**: Update to $x^{\langle t \rangle }$ - The last step to implement in `sample()` is to update the variable `x`, which currently stores $x^{\langle t \rangle }$, with the value of $x^{\langle t + 1 \rangle }$. - You will represent $x^{\langle t + 1 \rangle }$ by creating a one-hot vector corresponding to the character that you have chosen as your prediction. - You will then forward propagate $x^{\langle t + 1 \rangle }$ in Step 1 and keep repeating the process until you get a "\n" character, indicating that you have reached the end of the dinosaur name. Additional Hints- In order to reset `x` before setting it to the new one-hot vector, you'll want to set all the values to zero. - You can either create a new numpy array: [numpy.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html) - Or fill all values with a single number: [numpy.ndarray.fill](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.fill.html)
###Code
# GRADED FUNCTION: sample
def sample(parameters, char_to_ix, seed):
"""
Sample a sequence of characters according to a sequence of probability distributions output of the RNN
Arguments:
parameters -- python dictionary containing the parameters Waa, Wax, Wya, by, and b.
char_to_ix -- python dictionary mapping each character to an index.
seed -- used for grading purposes. Do not worry about it.
Returns:
indices -- a list of length n containing the indices of the sampled characters.
"""
# Retrieve parameters and relevant shapes from "parameters" dictionary
Waa, Wax, Wya, by, b = parameters['Waa'], parameters['Wax'], parameters['Wya'], parameters['by'], parameters['b']
vocab_size = by.shape[0]
n_a = Waa.shape[1]
### START CODE HERE ###
# Step 1: Create the a zero vector x that can be used as the one-hot vector
# representing the first character (initializing the sequence generation). (≈1 line)
x = np.zeros((vocab_size, 1))
# Step 1': Initialize a_prev as zeros (≈1 line)
a_prev = np.zeros((n_a, 1))
# Create an empty list of indices, this is the list which will contain the list of indices of the characters to generate (≈1 line)
indices = []
# idx is the index of the one-hot vector x that is set to 1
# All other positions in x are zero.
# We will initialize idx to -1
idx = -1
# Loop over time-steps t. At each time-step:
# sample a character from a probability distribution
# and append its index (`idx`) to the list "indices".
# We'll stop if we reach 50 characters
# (which should be very unlikely with a well trained model).
# Setting the maximum number of characters helps with debugging and prevents infinite loops.
counter = 0
newline_character = char_to_ix['\n']
while (idx != newline_character and counter != 50):
# Step 2: Forward propagate x using the equations (1), (2) and (3)
a = np.tanh(np.dot(Wax, x) + np.dot(Waa, a_prev) + b)
z = np.dot(Wya, a) + by
y = softmax(z)
# for grading purposes
np.random.seed(counter + seed)
# Step 3: Sample the index of a character within the vocabulary from the probability distribution y
idx = np.random.choice(list(range(vocab_size)), p=y.ravel())
# Append the index to "indices"
indices.append(idx)
# Step 4: Overwrite the input x with one that corresponds to the sampled index `idx`.
# (see additional hints above)
x = np.zeros((vocab_size, 1))
x[idx] = 1
# Update "a_prev" to be "a"
a_prev = a
# for grading purposes
seed += 1
counter +=1
### END CODE HERE ###
if (counter == 50):
indices.append(char_to_ix['\n'])
return indices
np.random.seed(2)
_, n_a = 20, 100
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
indices = sample(parameters, char_to_ix, 0)
print("Sampling:")
print("list of sampled indices:\n", indices)
print("list of sampled characters:\n", [ix_to_char[i] for i in indices])
###Output
Sampling:
list of sampled indices:
[12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]
list of sampled characters:
['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n']
###Markdown
** Expected output:**```PythonSampling:list of sampled indices: [12, 17, 24, 14, 13, 9, 10, 22, 24, 6, 13, 11, 12, 6, 21, 15, 21, 14, 3, 2, 1, 21, 18, 24, 7, 25, 6, 25, 18, 10, 16, 2, 3, 8, 15, 12, 11, 7, 1, 12, 10, 2, 7, 7, 11, 17, 24, 12, 13, 24, 0]list of sampled characters: ['l', 'q', 'x', 'n', 'm', 'i', 'j', 'v', 'x', 'f', 'm', 'k', 'l', 'f', 'u', 'o', 'u', 'n', 'c', 'b', 'a', 'u', 'r', 'x', 'g', 'y', 'f', 'y', 'r', 'j', 'p', 'b', 'c', 'h', 'o', 'l', 'k', 'g', 'a', 'l', 'j', 'b', 'g', 'g', 'k', 'q', 'x', 'l', 'm', 'x', '\n']``` 3 - Building the language model It is time to build the character-level language model for text generation. 3.1 - Gradient descent * In this section you will implement a function performing one step of stochastic gradient descent (with clipped gradients). * You will go through the training examples one at a time, so the optimization algorithm will be stochastic gradient descent. As a reminder, here are the steps of a common optimization loop for an RNN:- Forward propagate through the RNN to compute the loss- Backward propagate through time to compute the gradients of the loss with respect to the parameters- Clip the gradients- Update the parameters using gradient descent **Exercise**: Implement the optimization process (one step of stochastic gradient descent). The following functions are provided:```pythondef rnn_forward(X, Y, a_prev, parameters): """ Performs the forward propagation through the RNN and computes the cross-entropy loss. It returns the loss' value as well as a "cache" storing values to be used in backpropagation.""" .... return loss, cache def rnn_backward(X, Y, parameters, cache): """ Performs the backward propagation through time to compute the gradients of the loss with respect to the parameters. It returns also all the hidden states.""" ... return gradients, adef update_parameters(parameters, gradients, learning_rate): """ Updates parameters using the Gradient Descent Update Rule.""" ... return parameters```Recall that you previously implemented the `clip` function: parameters* Note that the weights and biases inside the `parameters` dictionary are being updated by the optimization, even though `parameters` is not one of the returned values of the `optimize` function. The `parameters` dictionary is passed by reference into the function, so changes to this dictionary are making changes to the `parameters` dictionary even when accessed outside of the function.* Python dictionaries and lists are "pass by reference", which means that if you pass a dictionary into a function and modify the dictionary within the function, this changes that same dictionary (it's not a copy of the dictionary).
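A tiny illustration of this pass-by-reference behaviour (the names are made up for the example):
```python
def add_one(params):
    params["W"] = params["W"] + 1  # modifies the caller's dictionary

params = {"W": 0}
add_one(params)
print(params["W"])  # prints 1 -- the dictionary was updated outside the function too
```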
###Code
# GRADED FUNCTION: optimize
def optimize(X, Y, a_prev, parameters, learning_rate = 0.01):
"""
Execute one step of the optimization to train the model.
Arguments:
X -- list of integers, where each integer is a number that maps to a character in the vocabulary.
Y -- list of integers, exactly the same as X but shifted one index to the left.
a_prev -- previous hidden state.
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
b -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
learning_rate -- learning rate for the model.
Returns:
loss -- value of the loss function (cross-entropy)
gradients -- python dictionary containing:
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dWya -- Gradients of hidden-to-output weights, of shape (n_y, n_a)
db -- Gradients of bias vector, of shape (n_a, 1)
dby -- Gradients of output bias vector, of shape (n_y, 1)
a[len(X)-1] -- the last hidden state, of shape (n_a, 1)
"""
### START CODE HERE ###
# Forward propagate through time (≈1 line)
loss, cache = rnn_forward(X, Y, a_prev, parameters)
# Backpropagate through time (≈1 line)
gradients, a = rnn_backward(X, Y, parameters, cache)
# Clip your gradients between -5 (min) and 5 (max) (≈1 line)
gradients = clip(gradients, 5)
# Update parameters (≈1 line)
parameters = update_parameters(parameters, gradients, learning_rate)
### END CODE HERE ###
return loss, gradients, a[len(X)-1]
np.random.seed(1)
vocab_size, n_a = 27, 100
a_prev = np.random.randn(n_a, 1)
Wax, Waa, Wya = np.random.randn(n_a, vocab_size), np.random.randn(n_a, n_a), np.random.randn(vocab_size, n_a)
b, by = np.random.randn(n_a, 1), np.random.randn(vocab_size, 1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "b": b, "by": by}
X = [12,3,5,11,22,3]
Y = [4,14,11,22,25, 26]
loss, gradients, a_last = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
print("Loss =", loss)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("np.argmax(gradients[\"dWax\"]) =", np.argmax(gradients["dWax"]))
print("gradients[\"dWya\"][1][2] =", gradients["dWya"][1][2])
print("gradients[\"db\"][4] =", gradients["db"][4])
print("gradients[\"dby\"][1] =", gradients["dby"][1])
print("a_last[4] =", a_last[4])
###Output
Loss = 126.503975722
gradients["dWaa"][1][2] = 0.194709315347
np.argmax(gradients["dWax"]) = 93
gradients["dWya"][1][2] = -0.007773876032
gradients["db"][4] = [-0.06809825]
gradients["dby"][1] = [ 0.01538192]
a_last[4] = [-1.]
###Markdown
** Expected output:**```PythonLoss = 126.503975722gradients["dWaa"][1][2] = 0.194709315347np.argmax(gradients["dWax"]) = 93gradients["dWya"][1][2] = -0.007773876032gradients["db"][4] = [-0.06809825]gradients["dby"][1] = [ 0.01538192]a_last[4] = [-1.]``` 3.2 - Training the model * Given the dataset of dinosaur names, we use each line of the dataset (one name) as one training example. * Every 2000 steps of stochastic gradient descent, you will sample several randomly chosen names to see how the algorithm is doing. **Exercise**: Follow the instructions and implement `model()`. When `examples[index]` contains one dinosaur name (string), to create an example (X, Y), you can use this: Set the index `idx` into the list of examples* Using the for-loop, walk through the shuffled list of dinosaur names in the list "examples".* For example, if there are n_e examples, and the for-loop increments the index to n_e onwards, think of how you would make the index cycle back to 0, so that we can continue feeding the examples into the model when j is n_e, n_e + 1, etc.* Hint: n_e + 1 divided by n_e is zero with a remainder of 1.* `%` is the modulus operator in python. Extract a single example from the list of examples* `single_example`: use the `idx` index that you set previously to get one word from the list of examples. Convert a string into a list of characters: `single_example_chars`* `single_example_chars`: A string is a list of characters.* You can use a list comprehension (recommended over for-loops) to generate a list of characters.```Pythonstr = 'I love learning'list_of_chars = [c for c in str]print(list_of_chars)``````['I', ' ', 'l', 'o', 'v', 'e', ' ', 'l', 'e', 'a', 'r', 'n', 'i', 'n', 'g']``` Convert list of characters to a list of integers: `single_example_ix`* Create a list that contains the index numbers associated with each character.* Use the dictionary `char_to_ix`* You can combine this with the list comprehension that is used to get a list of characters from a string. Create the list of input characters: `X`* `rnn_forward` uses the **`None`** value as a flag to set the input vector as a zero-vector.* Prepend the list [**`None`**] in front of the list of input characters.* There is more than one way to prepend a value to a list. One way is to add two lists together: `['a'] + ['b']` Get the integer representation of the newline character `ix_newline`* `ix_newline`: The newline character signals the end of the dinosaur name. - get the integer representation of the newline character `'\n'`. - Use `char_to_ix` Set the list of labels (integer representation of the characters): `Y`* The goal is to train the RNN to predict the next letter in the name, so the labels are the list of characters that are one time step ahead of the characters in the input `X`. - For example, `Y[0]` contains the same value as `X[1]` * The RNN should predict a newline at the last letter so add ix_newline to the end of the labels. - Append the integer representation of the newline character to the end of `Y`. - Note that `append` is an in-place operation. - It might be easier for you to add two lists together.
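A tiny illustration of how the modulus operator `%` makes the index cycle back to 0 (with a made-up number of examples):
```python
n_e = 3                      # pretend the dataset has 3 examples
for j in range(7):
    print(j % n_e, end=" ")  # prints: 0 1 2 0 1 2 0
```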
###Code
# GRADED FUNCTION: model
def model(data, ix_to_char, char_to_ix, num_iterations = 35000, n_a = 50, dino_names = 7, vocab_size = 27, verbose = False):
"""
Trains the model and generates dinosaur names.
Arguments:
data -- text corpus
ix_to_char -- dictionary that maps the index to a character
char_to_ix -- dictionary that maps a character to an index
num_iterations -- number of iterations to train the model for
n_a -- number of units of the RNN cell
dino_names -- number of dinosaur names you want to sample at each iteration.
vocab_size -- number of unique characters found in the text (size of the vocabulary)
Returns:
parameters -- learned parameters
"""
# Retrieve n_x and n_y from vocab_size
n_x, n_y = vocab_size, vocab_size
# Initialize parameters
parameters = initialize_parameters(n_a, n_x, n_y)
# Initialize loss (this is required because we want to smooth our loss)
loss = get_initial_loss(vocab_size, dino_names)
# Build list of all dinosaur names (training examples).
with open("dinos.txt") as f:
examples = f.readlines()
examples = [x.lower().strip() for x in examples]
# Shuffle list of all dinosaur names
np.random.seed(0)
np.random.shuffle(examples)
    # Initialize the hidden state of your RNN
a_prev = np.zeros((n_a, 1))
# Optimization loop
for j in range(num_iterations):
### START CODE HERE ###
# Set the index `idx` (see instructions above)
idx = j % len(examples)
# Set the input X (see instructions above)
single_example = examples[idx]
single_example_chars = [c for c in single_example]
single_example_ix = [char_to_ix[c] for c in single_example_chars]
        X = [None] + single_example_ix
# Set the labels Y (see instructions above)
ix_newline = char_to_ix["\n"]
Y = X[1:]+ [ix_newline]
# Perform one optimization step: Forward-prop -> Backward-prop -> Clip -> Update parameters
# Choose a learning rate of 0.01
curr_loss, gradients, a_prev = optimize(X, Y, a_prev, parameters, learning_rate = 0.01)
### END CODE HERE ###
# debug statements to aid in correctly forming X, Y
if verbose and j in [0, len(examples) -1, len(examples)]:
print("j = " , j, "idx = ", idx,)
if verbose and j in [0]:
print("single_example =", single_example)
print("single_example_chars", single_example_chars)
print("single_example_ix", single_example_ix)
print(" X = ", X, "\n", "Y = ", Y, "\n")
        # Keep a running (smoothed) average of the loss so the printed values are easier to read; this does not change the optimization itself.
        loss = smooth(loss, curr_loss)
        # Every 2000 iterations, generate "n" dinosaur names with sample() to check whether the model is learning properly
if j % 2000 == 0:
print('Iteration: %d, Loss: %f' % (j, loss) + '\n')
# The number of dinosaur names to print
seed = 0
for name in range(dino_names):
# Sample indices and print them
sampled_indices = sample(parameters, char_to_ix, seed)
print_sample(sampled_indices, ix_to_char)
seed += 1 # To get the same result (for grading purposes), increment the seed by one.
print('\n')
return parameters
###Output
_____no_output_____
###Markdown
Run the following cell; you should observe your model outputting random-looking characters at the first iteration. After a few thousand iterations, your model should learn to generate reasonable-looking names.
###Code
parameters = model(data, ix_to_char, char_to_ix, verbose = True)
###Output
j = 0 idx = 0
single_example = turiasaurus
single_example_chars ['t', 'u', 'r', 'i', 'a', 's', 'a', 'u', 'r', 'u', 's']
single_example_ix [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
X = [None, 20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19]
Y = [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19, 0]
Iteration: 0, Loss: 23.087336
Nkzxwtdmfqoeyhsqwasjkjvu
Kneb
Kzxwtdmfqoeyhsqwasjkjvu
Neb
Zxwtdmfqoeyhsqwasjkjvu
Eb
Xwtdmfqoeyhsqwasjkjvu
j = 1535 idx = 1535
j = 1536 idx = 0
Iteration: 2000, Loss: 27.884160
Liusskeomnolxeros
Hmdaairus
Hytroligoraurus
Lecalosapaus
Xusicikoraurus
Abalpsamantisaurus
Tpraneronxeros
Iteration: 4000, Loss: 25.901815
Mivrosaurus
Inee
Ivtroplisaurus
Mbaaisaurus
Wusichisaurus
Cabaselachus
Toraperlethosdarenitochusthiamamumamaon
Iteration: 6000, Loss: 24.608779
Onwusceomosaurus
Lieeaerosaurus
Lxussaurus
Oma
Xusteonosaurus
Eeahosaurus
Toreonosaurus
Iteration: 8000, Loss: 24.070350
Onxusichepriuon
Kilabersaurus
Lutrodon
Omaaerosaurus
Xutrcheps
Edaksoje
Trodiktonus
Iteration: 10000, Loss: 23.844446
Onyusaurus
Klecalosaurus
Lustodon
Ola
Xusodonia
Eeaeosaurus
Troceosaurus
Iteration: 12000, Loss: 23.291971
Onyxosaurus
Kica
Lustrepiosaurus
Olaagrraiansaurus
Yuspangosaurus
Eealosaurus
Trognesaurus
Iteration: 14000, Loss: 23.382338
Meutromodromurus
Inda
Iutroinatorsaurus
Maca
Yusteratoptititan
Ca
Troclosaurus
Iteration: 16000, Loss: 23.255630
Meustolkanolus
Indabestacarospceryradwalosaurus
Justolopinaveraterasauracoptelalenyden
Maca
Yusocles
Daahosaurus
Trodon
Iteration: 18000, Loss: 22.905483
Phytronn
Meicanstolanthus
Mustrisaurus
Pegalosaurus
Yuskercis
Egalosaurus
Tromelosaurus
Iteration: 20000, Loss: 22.873854
Nlyushanerohyisaurus
Loga
Lustrhigosaurus
Nedalosaurus
Yuslangosaurus
Elagosaurus
Trrangosaurus
Iteration: 22000, Loss: 22.710545
Onyxromicoraurospareiosatrus
Liga
Mustoffankeugoptardoros
Ola
Yusodogongterosaurus
Ehaerona
Trododongxernochenhus
Iteration: 24000, Loss: 22.604827
Meustognathiterhucoplithaloptha
Jigaadosaurus
Kurrodon
Mecaistheansaurus
Yuromelosaurus
Eiaeropeeton
Troenathiteritaus
Iteration: 26000, Loss: 22.714486
Nhyxosaurus
Kola
Lvrosaurus
Necalosaurus
Yurolonlus
Ejakosaurus
Troindronykus
Iteration: 28000, Loss: 22.647640
Onyxosaurus
Loceahosaurus
Lustleonlonx
Olabasicachudrakhurgawamosaurus
Ytrojianiisaurus
Eladon
Tromacimathoshargicitan
Iteration: 30000, Loss: 22.598485
Oryuton
Locaaesaurus
Lustoendosaurus
Olaahus
Yusaurus
Ehadopldarshuellus
Troia
Iteration: 32000, Loss: 22.211861
Meutronlapsaurus
Kracallthcaps
Lustrathus
Macairugeanosaurus
Yusidoneraverataus
Eialosaurus
Troimaniathonsaurus
Iteration: 34000, Loss: 22.447230
Onyxipaledisons
Kiabaeropa
Lussiamang
Pacaeptabalsaurus
Xosalong
Eiacoteg
Troia
###Markdown
** Expected Output**```Pythonj = 0 idx = 0single_example = turiasaurussingle_example_chars ['t', 'u', 'r', 'i', 'a', 's', 'a', 'u', 'r', 'u', 's']single_example_ix [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19] X = [None, 20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19] Y = [20, 21, 18, 9, 1, 19, 1, 21, 18, 21, 19, 0] Iteration: 0, Loss: 23.087336NkzxwtdmfqoeyhsqwasjkjvuKnebKzxwtdmfqoeyhsqwasjkjvuNebZxwtdmfqoeyhsqwasjkjvuEbXwtdmfqoeyhsqwasjkjvuj = 1535 idx = 1535j = 1536 idx = 0Iteration: 2000, Loss: 27.884160...Iteration: 34000, Loss: 22.447230OnyxipaledisonsKiabaeropaLussiamangPacaeptabalsaurusXosalongEiacotegTroia``` Conclusion: You can see that your algorithm has started to generate plausible dinosaur names towards the end of the training. At first, it was generating random characters, but towards the end you could see dinosaur names with cool endings. Feel free to run the algorithm even longer and play with hyperparameters to see if you can get even better results. Our implementation generated some really cool names like `maconucon`, `marloralus` and `macingsersaurus`. Your model hopefully also learned that dinosaur names tend to end in `saurus`, `don`, `aura`, `tor`, etc. If your model generates some non-cool names, don't blame the model entirely--not all actual dinosaur names sound cool. (For example, `dromaeosauroides` is an actual dinosaur name and is in the training set.) But this model should give you a set of candidates from which you can pick the coolest! This assignment used a relatively small dataset, so that you could train an RNN quickly on a CPU. Training a model of the English language requires a much bigger dataset, usually needs much more computation, and could run for many hours on GPUs. We ran our dinosaur name model for quite some time, and so far our favorite name is the great, undefeatable, and fierce: Mangosaurus! 4 - Writing like Shakespeare: The rest of this notebook is optional and is not graded, but we hope you'll do it anyway since it's quite fun and informative. A similar (but more complicated) task is to generate Shakespeare poems. Instead of learning from a dataset of dinosaur names, you can use a collection of Shakespearian poems. Using LSTM cells, you can learn longer-term dependencies that span many characters in the text--e.g., a character appearing somewhere in a sequence can influence what should be a different character much later in the sequence. These long-term dependencies were less important with dinosaur names, since the names were quite short. Let's become poets! We have implemented a Shakespeare poem generator with Keras. Run the following cell to load the required packages and models. This may take a few minutes.
###Code
from __future__ import print_function
from keras.callbacks import LambdaCallback
from keras.models import Model, load_model, Sequential
from keras.layers import Dense, Activation, Dropout, Input, Masking
from keras.layers import LSTM
from keras.utils.data_utils import get_file
from keras.preprocessing.sequence import pad_sequences
from shakespeare_utils import *
import sys
import io
###Output
Using TensorFlow backend.
###Markdown
To save you some time, we have already trained a model for ~1000 epochs on a collection of Shakespearian poems called [*"The Sonnets"*](shakespeare.txt). Let's train the model for one more epoch. When it finishes training for an epoch---this will also take a few minutes---you can run `generate_output`, which will prompt you for an input (fewer than 40 characters). The poem will start with your sentence, and our RNN-Shakespeare will complete the rest of the poem for you! For example, try "Forsooth this maketh no sense " (don't enter the quotation marks). Depending on whether you include the space at the end, your results might also differ--try it both ways, and try other inputs as well.
###Code
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y, batch_size=128, epochs=1, callbacks=[print_callback])
# Run this cell to try with different inputs without having to re-train the model
generate_output()
###Output
Write the beginning of your poem, the Shakespeare machine will complete it. Your input is: I wanna live this earth
Here is your poem:
I wanna live this earth,
wor ally mins thy bouty sheebling a secred,
and hamed be lold, well your saxf thee frien,
i wherefor whats bey shaloses might wear:
thee o boon' hish adete chell'n pyst berame,
arook some quectides mashide themed by prace,
like yous of faules tripur: whice romte me bore,
that wherevein beauty fich my miding ho did;
hiby live jawhered on me then sid more,
and thing refolgacuouns i with par' ukcom |
big-data/ampcamp6.ipynb | ###Markdown
AMPCamp 6: http://ampcamp.berkeley.edu/6 Preparation: This assumes that you're running the Bitnami Hadoop VM version 3.3.0 (October 2020). Before we can start with the AMPCamp 6 tutorial, we have to prepare our VM for PySpark. First, we need to install some additional dependencies:```shellsudo apt install python3-requests python3-notebook jupyter jupyter-core ```Also, we have to apply the following configuration:```shellexport PYSPARK_PYTHON=python3export PATH="$PATH:$HOME/stack/hadoop/spark/bin"sudo ufw allow 8888/tcpsudo ufw allow 4040/tcp```Now, to start a PySpark shell session:```shellpyspark```Or, to start [Jupyter](https://jupyter.org) with support for PySpark (copy the URL that appears into a browser on your host machine):```shellPYSPARK_DRIVER_PYTHON=jupyter PYSPARK_DRIVER_PYTHON_OPTS="notebook --ip=$(hostname -I)" pyspark```We continue by downloading the data used in the tutorial. Run the code from the following cells in your PySpark shell or notebook.
###Code
import gzip
import itertools
import requests
import os
import pyspark
import shutil
import subprocess
# ./data/pagecounts
base_url = 'https://archive.org/download/wikipedia_visitor_stats_200905'
filenames = ['pagecounts-20090505-000000.gz', 'pagecounts-20090507-000000.gz']
os.makedirs('data', exist_ok=True)
for filename in filenames:
with requests.get(f'{base_url}/{filename}', stream=True) as stream:
with open(filename, 'wb') as file:
shutil.copyfileobj(stream.raw, file)
#
filename_noext = os.path.splitext(filename)[0]
with gzip.open(filename, 'rb') as src, open(filename_noext, 'wb') as dest:
for chunk in iter(lambda : src.read(100 * 1024), b''):
dest.write(chunk)
#
datetime = filename_noext[11:].encode('UTF-8')
with open(filename_noext, 'rb') as src, open('data/pagecounts', 'ab') as dest:
for line in src:
dest.write(datetime + b' ' + line)
#
os.remove(filename)
os.remove(filename_noext)
# ./data/wiki.parquet
base_url = 'https://github.com/databricks/spark-training/raw/master/data/wiki_parquet/'
filenames = [
'_SUCCESS', '._metadata', 'part-r-1.parquet',
'part-r-2.parquet', 'part-r-3.parquet', 'part-r-4.parquet',
'part-r-5.parquet', 'part-r-6.parquet', 'part-r-7.parquet',
'part-r-8.parquet', 'part-r-9.parquet', 'part-r-10.parquet'
]
os.makedirs('data/wiki.parquet', exist_ok=True)
for filename in filenames:
with requests.get(f'{base_url}/{filename}', stream=True) as stream:
with open(f'data/wiki.parquet/{filename}', 'wb') as file:
shutil.copyfileobj(stream.raw, file)
###Output
_____no_output_____
###Markdown
As a last step, make sure the `sc` and `sqlContext` PySpark objects exist in your PySpark shell or notebook:
###Code
sc = pyspark.SparkContext.getOrCreate()
sqlContext = pyspark.sql.SparkSession.builder.getOrCreate()
###Output
_____no_output_____
###Markdown
Now your environment is ready to follow the AMPCamp 6 tutorial. Data Exploration Using Spark: http://ampcamp.berkeley.edu/6/exercises/data-exploration-using-spark.html
###Code
sc
pagecounts = sc.textFile("data/pagecounts")
pagecounts
for x in pagecounts.take(10):
print(x)
pagecounts.count()
enPages = pagecounts.filter(lambda x: x.split(" ")[1] == "en").cache()
enPages.count()
enTuples = enPages.map(lambda x: x.split(" "))
enKeyValuePairs = enTuples.map(lambda x: (x[0][:8], int(x[3])))
enKeyValuePairs.reduceByKey(lambda x, y: x + y, 1).collect()
enPages.map(lambda x: x.split(" ")).map(lambda x: (x[0][:8], int(x[3]))).reduceByKey(lambda x, y: x + y, 1).collect()
reduced = enPages.map(lambda x: x.split(" ")).map(lambda x: (x[2], int(x[3]))).reduceByKey(lambda x, y: x + y, 40)
filtered = reduced.filter(lambda x: x[1] > 200000).map(lambda x: (x[1], x[0]))
filtered.collect()
###Output
_____no_output_____
###Markdown
Data Exploration Using Spark SQL: http://ampcamp.berkeley.edu/6/exercises/data-exploration-using-spark-sql.html
###Code
sqlContext
wikiData = sqlContext.read.parquet("data/wiki.parquet")
wikiData.count()
wikiData.registerTempTable("wikiData")
result = sqlContext.sql("SELECT COUNT(*) AS pageCount FROM wikiData").collect()
result[0].pageCount
sqlContext.sql("SELECT username, COUNT(*) AS cnt FROM wikiData WHERE username <> '' GROUP BY username ORDER BY cnt DESC LIMIT 10").collect()
###Output
_____no_output_____ |
classification-fraud-detection/solution-1-services/1.0 data [explore].ipynb | ###Markdown
Credit Card Fraud Detection https://www.kaggle.com/mlg-ulb/creditcardfraud- - - The dataset contains transactions made by credit cards in September 2013 by European cardholders. This dataset presents transactions that occurred in two days, where we have 492 frauds out of 284,807 transactions. The dataset is highly unbalanced: the positive class (frauds) accounts for 0.172% of all transactions. Dataset description: The dataset contains only numerical input variables which are the result of a PCA transformation. Due to confidentiality issues, we cannot provide the original features or more background information about the data. * Features V1, V2, … V28 are the principal components obtained with PCA. * Feature 'Time' contains the seconds elapsed between each transaction and the first transaction in the dataset. * Feature 'Amount' is the transaction amount; this feature can be used for example-dependent cost-sensitive learning. * Feature 'Class' is the response variable and it takes value 1 in case of fraud and 0 otherwise.- - - Exploratory Data Analysis (EDA)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.read_csv('../dataset/creditcard.csv')
data.head()
data.describe()
###Output
_____no_output_____
###Markdown
Data balance
###Code
ax = data['Class'].value_counts().plot(kind='barh')
ax.set_xscale('log')
ax.grid('on')
ax.set_title('class balance');
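# Quick numeric companion to the plot above (added for illustration): the normalized
# counts should show the ~0.172% fraud rate mentioned in the dataset description.
print(data['Class'].value_counts(normalize=True))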
###Output
_____no_output_____
###Markdown
Spend distribution: What is the overlap between normal (Class=0) transactions and fraudulent transactions (Class=1)?
###Code
g = sns.catplot(x="Class", y="Amount", kind='boxen', data=data);
minimum_spend = 0.5
filtered_data = data[data['Amount'] > minimum_spend]
g = sns.catplot(x="Class", y="Amount", kind='boxen', data=filtered_data);
g.set(yscale="log");
###Output
_____no_output_____
###Markdown
Summary statistics
###Code
data.groupby('Class')['Amount'].describe().T
###Output
_____no_output_____
###Markdown
Correlation with target ('Class')
###Code
cmap = sns.diverging_palette(240, 10, n=10)
correlation = data.drop(columns=['Time','Class']).corrwith(data['Class'])
f, ax = plt.subplots(figsize=(15, 5))
xx = pd.DataFrame(correlation).reset_index()
xx.columns = ['Variable', 'Correlation']
sns.barplot(x='Variable', y='Correlation', data=xx, palette=cmap, ax=ax)
sns.despine()
###Output
_____no_output_____
###Markdown
A negative correlation means that as the target variable increases in value, the feature variable tends to decrease in value (and vice versa), in a linear sense.
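As a toy illustration (numbers made up, not taken from this dataset), a perfectly inverse linear pair has a correlation of -1:
```Python
import numpy as np
# Two toy series that move in exactly opposite directions
print(np.corrcoef([1, 2, 3, 4], [8, 6, 4, 2])[0, 1])  # -> -1.0
```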
###Code
x = data.groupby('Class').corr()
mask = np.triu(np.ones_like(x.loc[0], dtype=bool))
cmap = sns.diverging_palette(240, 10, as_cmap=True, n=3)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 12))
sns.heatmap(x.loc[0], mask=mask, cmap=cmap, vmax=1, vmin=-1, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .25}, ax=ax1)
ax1.set_title('Class (0)')
sns.heatmap(x.loc[1], mask=mask, cmap=cmap, vmax=1, vmin=-1, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .25}, ax=ax2)
ax2.set_title('Class (1)');
###Output
_____no_output_____ |
Perceptron_Eg2.ipynb | ###Markdown
Here's a simple version of a perceptron using Python and NumPy. It will take two inputs and learn to act like the logical OR function.
###Code
from random import choice
from numpy import array, dot, random
unit_step = lambda x: 0 if x < 0 else 1
###Output
_____no_output_____
###Markdown
The first two entries of the NumPy array in each tuple are the two input values. The second element of the tuple is the expected result. And the third entry of the array is a "dummy" input (also called the bias) which is needed to move the threshold (also known as the decision boundary) up or down as needed by the step function. Its value is always 1, so that its influence on the result can be controlled by its weight.
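In other words, the third weight acts as a bias in the weighted sum. A quick sketch of that arithmetic, using made-up weights (the real weights below are initialized randomly):
```Python
import numpy as np
x1, x2 = 1, 0                        # the two real inputs
w_demo = np.array([0.4, 0.7, -0.2])  # made-up weights; w_demo[2] multiplies the constant 1
s = np.dot(np.array([x1, x2, 1]), w_demo)  # 0.4*1 + 0.7*0 + (-0.2)*1 = 0.2
print(s, 0 if s < 0 else 1)          # the unit step fires -> 1
```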
###Code
training_data = [
(array([0,0,1]), 0),
(array([0,1,1]), 1),
(array([1,0,1]), 1),
(array([1,1,1]), 1),
]
###Output
_____no_output_____
###Markdown
As you can see, this training sequence maps exactly to the definition of the OR function:
###Code
from IPython.display import Image
Image("fig1.png")
# Choose 3 random numbers between 0 and 1 as the initial weights:
w = random.rand(3)
# Used to store the error values so that they can be plotted later on.
errors = []
# Controls the learning rate.
eta = 0.2
# Specifies the number of learning iterations.
n = 100
###Output
_____no_output_____
###Markdown
In order to find the ideal values for the weights w, we try to reduce the error magnitude to zero. In this simple case n = 100 iterations are enough; for a bigger and possibly "noisier" set of input data much larger numbers should be used. First we get a random input set from the training data. Then we calculate the dot product (sometimes also called scalar product or inner product) of the input and weight vectors. This is our (scalar) result, which we can compare to the expected value. If the expected value is bigger, we need to increase the weights, if it's smaller, we need to decrease them. This correction factor is calculated in the last line, where the error is multiplied with the learning rate (eta) and the input vector (x). It is then added to the weights vector, in order to improve the results in the next iteration.
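Before running the full loop, here is one hand-worked update step with made-up starting weights, just to show the arithmetic of the last line (the real loop below starts from random weights):
```Python
import numpy as np
eta_demo = 0.2
w_demo = np.array([0.1, 0.1, 0.1])            # made-up starting weights
x_demo, expected = np.array([0, 0, 1]), 0     # the (0, 0) -> 0 training case
result = np.dot(w_demo, x_demo)               # 0.1
error = expected - (0 if result < 0 else 1)   # 0 - 1 = -1
w_demo = w_demo + eta_demo * error * x_demo   # only the bias weight changes: 0.1 - 0.2 = -0.1
print(w_demo)                                 # [ 0.1  0.1 -0.1]
```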
###Code
for i in range(n):
x, expected = choice(training_data)
result = dot(w, x)
error = expected - unit_step(result)
errors.append(error)
w += eta * error * x
for x, _ in training_data:
result = dot(x, w)
print("{}: {} -> {}".format(x[:2], result,
unit_step(result)))
from pylab import plot, ylim
%matplotlib inline
ylim([-1,1])
plot(errors)
###Output
_____no_output_____ |
notebooks/Benchmark_Walkthrough.ipynb | ###Markdown
Imports
###Code
import numpy as np
import netCDF4 as nc
import pandas as pd
import datetime
import itertools
from sklearn.linear_model import Ridge
from sklearn import neighbors
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error
import csv
import sklearn.preprocessing
###Output
_____no_output_____
###Markdown
Load one of the files for solar radiation measurements; the last variable contains the measurement
###Code
X = list(nc.Dataset('../data/kaggle_solar/train/dswrf_sfc_latlon_subset_19940101_20071231.nc','r+').variables.values())
###Output
_____no_output_____
###Markdown
Values for X are 5113 time values, 9 latitudes, 16 longitudes, 11 ensemble models, and 5 measurement times. The values() method pulls the results out of the OrderedDict.
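If you want to confirm those dimensions yourself, a quick sketch (re-opening the same file as above) prints each variable's name and shape:
```Python
# Assumes the training file used above is present at this path
ds = nc.Dataset('../data/kaggle_solar/train/dswrf_sfc_latlon_subset_19940101_20071231.nc', 'r')
for name, var in ds.variables.items():
    print(name, var.shape)
```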
###Code
solar_rad = X[-1]
###Output
_____no_output_____
###Markdown
Some of the latitude and longitude measurements aren't relevant
###Code
solar_array = solar_rad[:,:,:,:,:]
solar_array.shape
reduced_solar = np.mean(solar_array,axis=1)
reduced_solar.shape
np.prod(reduced_solar.shape[0:2])
expand_solar = reduced_solar.reshape(np.prod(reduced_solar.shape[:1]),np.prod(reduced_solar.shape[1:]))
expand_solar
solar_df = pd.DataFrame(expand_solar)
solar_df
solar_df['time']= pd.to_datetime(X[1][:], format="%Y%m%d%H")
solar_df.set_index('time', inplace=True)
list_of_headers = [[str(int(val)) for val in (list(my_row))] for my_row in list(itertools.product(X[5][:],X[2][:],X[3][:]))]
col_names = ["dswrf_sfc_Time_%s_Lat_%s_Lon_%s"%(my_row[0],my_row[1],my_row[2]) for my_row in list_of_headers]
solar_df.rename(columns=dict(zip(solar_df.columns,col_names)),inplace=True)
solar_df.describe()
%matplotlib inline
solar_df.dswrf_sfc_Time_18_Lat_31_Lon_254.plot(kind='hist')
solar_df.dswrf_sfc_Time_24_Lat_31_Lon_268.plot(kind='hist')
import matplotlib.pyplot as plt
plt.scatter(solar_df.dswrf_sfc_Time_18_Lat_31_Lon_257,solar_df.dswrf_sfc_Time_18_Lat_39_Lon_268)
###Output
_____no_output_____
###Markdown
Need to label by the incoming data type, the time, the location, potentially the model
###Code
plt.scatter(solar_df.dswrf_sfc_Time_18_Lat_39_Lon_257,solar_df.dswrf_sfc_Time_18_Lat_39_Lon_268)
plt.scatter(solar_df.dswrf_sfc_Time_18_Lat_39_Lon_268,solar_df.dswrf_sfc_Time_24_Lat_39_Lon_268)
###Output
_____no_output_____
###Markdown
Add a second data file and go through the rest of the analysis. First, load the long-wave radiation file
###Code
X2 = list(nc.Dataset('../data/kaggle_solar/train/dlwrf_sfc_latlon_subset_19940101_20071231.nc','r+').variables.values())
solar_rad_long = X2[-1]
solar_array_long = solar_rad_long[:,:,:,:,:]
reduced_solar_long = np.mean(solar_array_long,axis=1)
expand_solar_long = reduced_solar_long.reshape(np.prod(reduced_solar_long.shape[:1]),np.prod(reduced_solar_long.shape[1:]))
solar_df_long = pd.DataFrame(expand_solar_long)
solar_df_long['time']= pd.to_datetime(X[1][:], format="%Y%m%d%H")
solar_df_long.set_index('time', inplace=True)
list_of_headers2 = [[str(int(val)) for val in (list(my_row))] for my_row in list(itertools.product(X2[5][:],X2[2][:],X2[3][:]))]
col_names2 = ["dlwrf_sfc_Time_%s_Lat_%s_Lon_%s"%(my_row[0],my_row[1],my_row[2]) for my_row in list_of_headers2]
solar_df_long.rename(columns=dict(zip(solar_df_long.columns,col_names2)),inplace=True)
solar_df_all = pd.concat([solar_df, solar_df_long], axis=1)
solar_df_all
solar_df_all.describe()
###Output
_____no_output_____
###Markdown
Load in the y values
###Code
import pandas as pd
y_values = pd.read_csv('../data/kaggle_solar/train.csv', parse_dates=[0])
y_values.set_index('Date', inplace=True)
y_values
model1 = Ridge(normalize=True)
model2 = neighbors.KNeighborsRegressor(10)
alphas = np.logspace(-4,2,20,base=10)
alphas
model1.alpha = alphas[10]
model2.alpha = alphas[10]
cv_splits = 10
X_train, X_cv, y_train, y_cv = train_test_split(solar_df_all, y_values, test_size=0.2, random_state=42)
model1.fit(X_train,y_train)
# Normalizer() expects a norm string ('l1'/'l2'), not the data itself; use normalize() to scale each sample
x_normed = sklearn.preprocessing.normalize(X_train)
x_normed
model2.fit(x_normed, y_train)
preds1 = model1.predict(X_cv)
preds2 = model2.predict(sklearn.preprocessing.normalize(X_cv))
mse1 = mean_squared_error(y_cv, preds1)
mse2 = mean_squared_error(y_cv,preds2)
mse1
mse2
###Output
_____no_output_____
###Markdown
Load the test data
###Code
testX1 = list(nc.Dataset('../data/kaggle_solar/test/dswrf_sfc_latlon_subset_20080101_20121130.nc','r+').variables.values())
solar_rad_test_short = testX1[-1]
solar_array_short_test = solar_rad_test_short[:,:,:,:,:]
reduced_solar_short_test = np.mean(solar_array_short_test,axis=1)
expand_solar_short_test = reduced_solar_short_test.reshape(np.prod(reduced_solar_short_test.shape[:1]),np.prod(reduced_solar_short_test.shape[1:]))
solar_df_short_test = pd.DataFrame(expand_solar_short_test)
solar_df_short_test['time']= pd.to_datetime(testX1[1][:], format="%Y%m%d%H")
solar_df_short_test.set_index('time', inplace=True)
list_of_headers_test1 = [[str(int(val)) for val in (list(my_row))] for my_row in list(itertools.product(testX1[5][:],testX1[2][:],testX1[3][:]))]
test_names1 = ["dlwrf_sfc_Time_%s_Lat_%s_Lon_%s"%(my_row[0],my_row[1],my_row[2]) for my_row in list_of_headers_test2]
solar_df_short_test.rename(columns=dict(zip(solar_df_short_test.columns,test_names1)),inplace=True)
testX2 = list(nc.Dataset('../data/kaggle_solar/test/dlwrf_sfc_latlon_subset_20080101_20121130.nc','r+').variables.values())
solar_rad_test_long = testX2[-1]
solar_array_long_test = solar_rad_test_long[:,:,:,:,:]
reduced_solar_long_test = np.mean(solar_array_long_test,axis=1)
expand_solar_long_test = reduced_solar_long_test.reshape(np.prod(reduced_solar_long_test.shape[:1]),np.prod(reduced_solar_long_test.shape[1:]))
solar_df_long_test = pd.DataFrame(expand_solar_long_test)
solar_df_long_test['time']= pd.to_datetime(testX2[1][:], format="%Y%m%d%H")
solar_df_long_test.set_index('time', inplace=True)
list_of_headers_test2 = [[str(int(val)) for val in (list(my_row))] for my_row in list(itertools.product(testX2[5][:],testX2[2][:],testX2[3][:]))]
test_names2 = ["dlwrf_sfc_Time_%s_Lat_%s_Lon_%s"%(my_row[0],my_row[1],my_row[2]) for my_row in list_of_headers_test2]
solar_df_long_test.rename(columns=dict(zip(solar_df_long_test.columns,test_names2)),inplace=True)
solar_df_test_all = pd.concat([solar_df_short_test, solar_df_long_test], axis=1)
preds = model1.predict(solar_df_test_all)
pred2 = model2.predict(sklearn.preprocessing.normalize(solar_df_test_all))
fexample = open('../data/kaggle_solar/sampleSubmission.csv')
fout = open('submission_two.csv', 'w', newline='')
fReader = csv.reader(fexample,delimiter=',',skipinitialspace=True)
fWriter = csv.writer(fout)
for i,row in enumerate(fReader):
if i == 0:
fWriter.writerow(row)
else:
row[1:] = pred2[i-1]
fWriter.writerow(row)
fexample.close()
fout.close()
###Output
_____no_output_____ |
notebooks/Uplink preference backup and restore/merakiUplinkPreferenceRestore.ipynb | ###Markdown
Meraki Python SDK Demo: Uplink Preference Restore*This notebook demonstrates using the Meraki Python SDK to restore Internet (WAN) and VPN traffic uplink preferences, as well as custom performance classes, from an Excel file. If you have hundreds of WAN/VPN uplink preferences, they can be a challenge to manipulate. This demo seeks to prove how using the Meraki API and Python SDK can substantially streamline such complex deployments.*If you haven't already, please consult the corresponding **Meraki Python SDK Demo: Uplink Preference Backup**.If an admin has backed up his Internet and VPN traffic uplink preferences and custom performance classes, this tool will restore them to the Dashboard from the Excel file backup. This is a more advanced demo, intended for intermediate to advanced Python programmers, but has been documented thoroughly with the intention that even a determined Python beginner can understand the concepts involved.If an admin can use the appropriate template Excel file and update it with the appropriate info, e.g. subnets, ports, and WAN preference, then this tool can push those preferences to the Dashboard for the desired network's MX appliance. With the Meraki Dashboard API, its SDK and Python, we can restore hundreds of preferences without using the GUI.--->NB: Throughout this notebook, we will print values for demonstration purposes. In a production Python script, the coder would likely remove these print statements to clean up the console output. In this first cell, we import the required `meraki` and `os` modules, and open the Dashboard API connection using the SDK. We also import `openpyxl` for working with Excel files, and `netaddr` for working with IP addresses.
###Code
# Install the relevant modules. If you are using a local editor (e.g. VS Code, rather than Colab) you can run these commands, without the preceding %, via a terminal. NB: Run `pip install meraki==` to find the latest version of the Meraki SDK. Uncomment these lines if you're using Google Colab.
#%pip install meraki
#%pip install openpyxl
# If you are using Google Colab, please ensure you have set up your environment variables as linked above, then delete the two lines of ''' to activate the following code:
'''
%pip install colab-env -qU
import colab_env
'''
# The Meraki SDK
import meraki
# The built-in OS module, to read environment variables
import os
# We're also going to import Python's built-in JSON module, but only to make the console output pretty. In production, you wouldn't need any of the printing calls at all, nor this import!
import json
# The openpyxl module, to manipulate Excel files
import openpyxl
# The datetime module, to generate timestamps
import datetime
# Treat your API key like a password. Store it in your environment variables as 'MERAKI_DASHBOARD_API_KEY' and let the SDK call it for you.
# Or, call it manually after importing Python's os module:
# API_KEY = os.getenv('MERAKI_DASHBOARD_API_KEY')
# Initialize the Dashboard connection.
dashboard = meraki.DashboardAPI(suppress_logging=True)
# We'll also create a few reusable strings for later interactivity.
string_constants = dict()
string_constants['CONFIRM'] = 'OK, are you sure you want to do this? This script does not have an "undo" feature.'
string_constants['CANCEL'] = 'OK. Operation canceled.'
string_constants['WORKING'] = 'Working...'
string_constants['COMPLETE'] = 'Operation complete.'
string_constants['NETWORK_SELECTED'] = 'Network selected.'
string_constants['NO_VALID_OPTIONS'] = 'There are no valid options. Please try again with an API key that has access to the appropriate resources.'
# Some of the parameters we'll work with are optional. This string defines what value will be put into a cell corresponding with a parameter that is not set on that rule.
string_constants['NOT_APPLICABLE'] = 'N/A'
# This script is interactive; user choices and data will be stored here.
user_choices = dict()
user_data = dict()
# Set the filename to use for the backup workbook
WORKBOOK_FILENAME = 'exampleBackups/downloaded_rules_workbook_2020-12-01 backup with wan and vpn uplink prefs and cpcs.xlsx'
###Output
_____no_output_____
###Markdown
A basic pretty print formatter, `printj()`. It will make reading JSON on the console easier, but won't be necessary in production scripts.
###Code
def printj(ugly_json_object):
# The json.dumps() method converts a JSON object into human-friendly formatted text
pretty_json_string = json.dumps(ugly_json_object, indent = 2, sort_keys = False)
return print(pretty_json_string)
###Output
_____no_output_____
###Markdown
We'll reuse the custom class we created in the backup script.
###Code
class UserChoice:
'A re-usable CLI option prompt.'
def __init__(self, options_list=[], subject_of_choice='available options', single_option_noun='option', id_parameter='id', name_parameter='name', action_verb='choose', no_valid_options_message='no valid options'):
self.options_list = options_list # options_list is a list of dictionaries containing attributes id_parameter and name_parameter
self.subject_of_choice = subject_of_choice # subject_of_choice is a string that names the subject of the user's choice. It is typically a plural noun.
self.single_option_noun = single_option_noun # single_option_noun is a string that is a singular noun corresponding to the subject_of_choice
self.id_parameter = id_parameter # id_parameter is a string that represents the name of the sub-parameter that serves as the ID value for the option in options_list. It should be a unique value for usability.
self.name_parameter = name_parameter # name_paraemter is a string that represents the name of the sub-parameter that serves as the name value for the option in options_list. It does not need to be unique.
self.action_verb = action_verb # action_verb is a string that represents the verb of the user's action. For example, to "choose"
self.no_valid_options_message = no_valid_options_message # no_valid_options_message is a string that represents an error message if options_list is empty
# Confirm there are options in the list
if len(self.options_list):
print(f'We found {len(self.options_list)} {self.subject_of_choice}:')
# Label each option and show the user their choices.
option_index = 0
for option in self.options_list:
print(f"{option_index}. {option[self.id_parameter]} with name {option[self.name_parameter]}")
option_index+=1
print(f'Which {self.single_option_noun} would you like to {self.action_verb}?')
self.active_option = int(input(f'Choose 0-{option_index-1}:'))
# Ask until the user provides valid input.
while self.active_option not in list(range(option_index)):
print(f'{self.active_option} is not a valid choice. Which {self.single_option_noun} would you like to {self.action_verb}?')
self.active_option = int(input(f'Choose 0-{option_index-1}:'))
print(f'Your {self.single_option_noun} is {self.options_list[self.active_option][self.name_parameter]}.')
# Assign the class id and name vars to the chosen item's
            self.id = self.options_list[self.active_option][self.id_parameter]
            self.name = self.options_list[self.active_option][self.name_parameter]
        else:
            # Without any options there is nothing to choose, so surface the error message
            print(self.no_valid_options_message)
###Output
_____no_output_____
###Markdown
Pulling organization and network IDs: Most API calls require passing values for the organization ID and/or the network ID. Remember the `UserChoice` class we created earlier? We'll call that and supply parameters defining what the user can choose. Notice how, having defined the class earlier, we can re-use it with only a single declaration.
###Code
# getOrganizations will return all orgs to which the supplied API key has access
user_choices['all_organizations'] = dashboard.organizations.getOrganizations()
# Prompt the user to pick an organization.
user_choices['organization'] = UserChoice(
options_list=user_choices['all_organizations'],
subject_of_choice='organizations',
single_option_noun='organization',
no_valid_options_message=string_constants['NO_VALID_OPTIONS']
)
###Output
_____no_output_____
###Markdown
Identify networks with MX appliances, and prompt the user to choose one. We want to: > Restore a backup of the uplink selection preferences, including custom performance classes. We can only run this on networks that have appliance devices, so we will find networks where `productTypes` contains `appliance`. Then we'll ask the user to pick one, and restore the uplink selection rules to it. Then let's ask the user which network they'd like to use.
###Code
user_choices['all_networks'] = dashboard.organizations.getOrganizationNetworks(organizationId=user_choices['organization'].id)
# Find the networks with appliances
user_choices['networks_with_appliances']= [network for network in user_choices['all_networks'] if 'appliance' in network['productTypes']]
# If any are found, let the user choose a network. Otherwise, let the user know that none were found. The logic for this class is defined in a cell above.
user_choices['network_choice'] = UserChoice(
options_list = user_choices['networks_with_appliances'],
subject_of_choice = 'networks with appliances',
single_option_noun = 'network'
)
###Output
_____no_output_____
###Markdown
Overall restore workflow Logical summaryThe restore workflow summarized is:1. Open a chosen Excel workbook that contains the backup information.2. Parse each worksheet into a Python object structured according to the API documentation.3. Restore the custom performance classes from the backup.4. Restore the WAN (Internet) and VPN uplink preferences from the backup. Code summaryTo structure the code, we'll break down the total functionality into discrete functions. These functions are where we define the operational logic for the restore. Most, if not all functions will return relevant information to be used by the next function in the restore procedure.1. The first function will ingest the Excel spreadsheet with the backup information and return the data as a Python object, structured according to the API documentation.2. Another function will restore the custom performance classes from the backup. This is a tricky operation for reasons you'll see below, but fully possible via Python methods and the Meraki SDK.3. Another function will, if necessary, update the loaded backup object with new ID assignments, in case restoring the custom performance classes backup resulted in new class IDs.4. Another function will restore the VPN preferences.5. Another function will restore the WAN preferencess.6. After definining each of those functions, we'll run them in succession to finalize the backup restoration.Once you understand the fundamentals, consider how you might improve this script, either with additional functionality or UX improvements!> NB: *Function* and *method* are used interchangeably. Python is an object-oriented language, and *method* is often preferred when discussing object-oriented programming.
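For orientation, the loader defined below returns a dictionary shaped like this sketch (every value here is made up; the real rules come from the workbook):
```Python
example_loaded_backup = {
    'customPerformanceClasses': [
        {'customPerformanceClassId': '123', 'name': 'VoIP', 'maxLatency': 100,
         'maxJitter': 30, 'maxLossPercentage': 2}
    ],
    'wanPrefs': [
        {'trafficFilters': [{'type': 'custom',
                             'value': {'protocol': 'any',
                                       'source': {'cidr': '192.168.1.0/24'},
                                       'destination': {'cidr': 'any'}}}],
         'preferredUplink': 'wan1'}
    ],
    'vpnPrefs': []  # same shape, plus optional failOverCriterion / performanceClass keys
}
```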
###Code
# Ingest an Excel spreadsheet with the appropriate worksheets, and create an object that can be pushed as a configuration API call
def load_uplink_prefs_workbook(workbook_filename):
# Create a workbook object out of the actual workbook file
loaded_workbook = openpyxl.load_workbook(workbook_filename, read_only=True)
# Create empty rule lists to which we can add the rules defined in the workbook
loaded_custom_performance_classes = []
loaded_wan_uplink_prefs = []
loaded_vpn_uplink_prefs = []
# Open the worksheets
loaded_custom_performance_classes_worksheet = loaded_workbook['customPerformanceClasses']
loaded_wan_prefs_worksheet = loaded_workbook['wanUplinkPreferences']
loaded_vpn_prefs_worksheet = loaded_workbook['vpnUplinkPreferences']
## CUSTOM PERFORMANCE CLASSES ##
# We'll also count the number of classes to help the user know that it's working.
performance_class_count = 0
# For reference, the expected column order is
# ID [0], Name [1], Max Latency [2], Max Jitter [3], Max Loss Percentage [4]
# Append each performance class loaded_custom_performance_classes
for row in loaded_custom_performance_classes_worksheet.iter_rows(min_row=2):
# Let's start with an empty rule dictionary to which we'll add the relevant parameters
performance_class = {}
# Append the values
performance_class['customPerformanceClassId'] = row[0].value
performance_class['name'] = row[1].value
performance_class['maxLatency'] = row[2].value
performance_class['maxJitter'] = row[3].value
performance_class['maxLossPercentage'] = row[4].value
# Append the performance class to the loaded_custom_performance_classes list
loaded_custom_performance_classes.append(performance_class)
performance_class_count += 1
print(f'Loaded {performance_class_count} custom performance classes.')
## WAN PREFERENCES ##
# We'll also count the number of rules to help the user know that it's working.
rule_count = 0
# For reference, the expected column order is
# Protocol [0], Source [1], Src port [2], Destination [3], Dst port [4], Preferred uplink [5]
# Append each WAN preference to loaded_wan_uplink_prefs
for row in loaded_wan_prefs_worksheet.iter_rows(min_row=2):
# Let's start with an empty rule dictionary to which we'll add the relevant parameters
rule = {}
# We know that there will always be a preferred uplink
rule_preferred_uplink = row[5].value.lower()
# The first column is Protocol
rule_protocol = row[0].value.lower()
# Source column is [1]
rule_source = row[1].value
# Destination column is [3]
rule_destination = row[3].value.lower()
# Assemble the rule into a single Python object that uses the syntax that the corresponding API call expects
if rule_protocol == 'any':
# Since protocol is 'any' then src and dst ports are also 'any'
rule_src_port = 'any'
rule_dst_port = 'any'
# Protocol is any, so leave out the port numbers
rule_value = {
'protocol': rule_protocol,
'source': {
'cidr': rule_source
},
'destination': {
'cidr': rule_destination
}
}
else:
# Since protocol is not 'any', we pass these as-is
rule_src_port = row[2].value
rule_dst_port = row[4].value
# Rule isn't any, so we need the port numbers
rule_value = {
'protocol': rule_protocol,
'source': {
'port': rule_src_port,
'cidr': rule_source
},
'destination': {
'port': rule_dst_port,
'cidr': rule_destination
}
}
# Append the trafficFilters param to the rule
rule['trafficFilters'] = [
{
'type': 'custom', # This worksheet doesn't have any Type column
'value': rule_value
}
]
# Append the preferredUplink param to the rule
rule['preferredUplink'] = rule_preferred_uplink
# Append the rule to the loaded_wan_uplink_prefs list
loaded_wan_uplink_prefs.append(rule)
rule_count += 1
print(f'Loaded {rule_count} WAN uplink preferences.')
## VPN PREFERENCES ##
# For reference, the expected column order is
# Type [0], Protocol or App ID [1], Source or App Name [2], Src port [3], Destination [4],
# Dst port [5], Preferred uplink [6], Failover criterion [7], Performance class type [8],
# Performance class name [9], Performance class ID [10]
# We'll also count the number of rules to help the user know that it's working.
rule_count = 0
# Append each WAN preference to loaded_wan_uplink_prefs
for row in loaded_vpn_prefs_worksheet.iter_rows(min_row=2):
# Since the parameters can change depending on the various options, we'll start with an empty dictionary or list depending on parameter type and then add keys along the way to correspond to the relevant values.
rule = {}
rule_traffic_filters = {}
# We know that there will always be a preferred uplink. We don't need any special logic to assign this one, so we'll keep it at the top.
rule_preferred_uplink = row[6].value
# Add it to the rule
rule['preferredUplink'] = rule_preferred_uplink
# The first column is Type, and the type will define the structure for other parameters.
        rule_type = row[0].value.lower() # Always lowercase
# If the rule type is application or applicationCategory then we're not concerned with destination, dst port or similar, and Protocol or App ID [1] will be 'id' not 'protocol', etc.
if 'application' in rule_type:
rule_application_id = row[1].value.lower() # Always lowercase
rule_application_name = row[2].value # Leave it capitalized
# Assign the rule value
rule_value = {
'id': rule_application_id,
'name': rule_application_name
}
else:
# Assign the rule Protocol [1]
rule_protocol = row[1].value.lower() # Always lowercase
# Regardless of protocol, we need to assign Source [2]
rule_source = row[2].value.lower() # Always lowercase
# Regardless of protocol, we need to assign Destination [4]
rule_destination = row[4].value.lower() # Always lowercase
# Assign the rule ports, if appropriate
if rule_protocol in ('any', 'icmp'):
# Since protocol is 'any' or 'icmp' then we leave out src and dst ports
rule_value = {
'protocol': rule_protocol,
'source': {
'cidr': rule_source
},
'destination': {
'cidr': rule_destination
}
}
else:
# Since protocol is not 'any', we pass these from the worksheet
rule_src_port = row[3].value # Always lowercase
rule_dst_port = row[5].value # Always lowercase
rule_value = {
'protocol': rule_protocol,
'source': {
'port': rule_src_port,
'cidr': rule_source
},
'destination': {
'port': rule_dst_port,
'cidr': rule_destination
}
}
# Assemble the rule_traffic_filters parameter
rule_traffic_filters['type'] = rule_type
rule_traffic_filters['value'] = rule_value
# Add it to the rule
rule_traffic_filters_list = [rule_traffic_filters]
rule['trafficFilters'] = rule_traffic_filters_list
# Assign the optional failOverCriterion
rule_failover_criterion = row[7].value # Leave it capitalized
if rule_failover_criterion not in (string_constants['NOT_APPLICABLE'], ''):
# Add it to the rule
rule['failOverCriterion'] = rule_failover_criterion
# Assign the optional performanceClass
rule_performance_class_type = row[8].value
rule_performance_class_name = row[9].value
rule_performance_class_id = row[10].value
if rule_performance_class_type not in (string_constants['NOT_APPLICABLE'], ''):
# Add it to the rule
rule['performanceClass'] = {}
rule['performanceClass']['type'] = rule_performance_class_type
# If the performance class type is custom, then we use customPerformanceClassId
if rule_performance_class_type == 'custom':
# Add it to the rule
rule['performanceClass']['customPerformanceClassId'] = rule_performance_class_id
# Otherwise, we use builtinPerformanceClassName
else:
# Add it to the rule
rule['performanceClass']['builtinPerformanceClassName'] = rule_performance_class_name
# Append the rule to the loaded_vpn_uplink_prefs list
loaded_vpn_uplink_prefs.append(rule)
rule_count += 1
print(f'Loaded {rule_count} VPN uplink preferences.')
return(
{
'wanPrefs': loaded_wan_uplink_prefs,
'vpnPrefs': loaded_vpn_uplink_prefs,
'customPerformanceClasses': loaded_custom_performance_classes
}
)
# We'll use the filename we specified at the top of the notebook.
# Load the workbook!
user_data['loaded_combined_uplink_prefs'] = load_uplink_prefs_workbook(WORKBOOK_FILENAME)
###Output
_____no_output_____
###Markdown
Let's take a look at those uplink preferences!
###Code
printj(user_data['loaded_combined_uplink_prefs']['wanPrefs'])
# How might we look at the other components of the loaded backup?
###Output
_____no_output_____
###Markdown
Restoring custom performance classes: Restoring custom performance classes can be tricky. IDs are unique, but a user might have changed the name or settings of a performance class after the last backup was taken, and that does not change the performance class's ID. Given this scenario, and many other hypotheticals, we will simplify the restore operation with a straightforward and predictable behavior. It is designed to be most in line with common expectations about what "restoring a backup" commonly means. First we will check if the backup's classes are a perfect match to the currently configured ones. If so, there's no need to restore anything. Otherwise, if the backup contains a performance class with the same ID as one that exists in the current Dashboard configuration, then we will overwrite that existing class with the settings (including name) from the backup. Otherwise, if the backup contains a performance class with a different ID but the same name as one that exists in the current Dashboard configuration, then we will overwrite that existing class with the settings from the backup, and we will return the new/old IDs in a list of key-value pairs so that we can update the corresponding `vpnUplinkPreference` to use this new performance class ID. If that happens, then when the uplink preferences are restored in a later function `update_vpn_prefs_performance_class_ids`, they will use the same performance class settings as were backed up, but the updated performance class ID. Finally, if the backup contains a performance class that doesn't match any of the existing classes by name or ID, then we'll create it new, and return the new/old IDs as described above for a later function `update_vpn_prefs_performance_class_ids`.
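Before defining the function, here is a toy illustration (IDs and names are made up) of the precedence just described, and of the new/old ID mapping the function returns when a class matches by name only or has to be created new:
```Python
# Backed-up classes vs. what Dashboard currently has (all values invented for illustration)
loaded_classes  = [{'customPerformanceClassId': '100', 'name': 'VoIP'},
                   {'customPerformanceClassId': '200', 'name': 'Video'}]
current_classes = [{'customPerformanceClassId': '100', 'name': 'VoIP-old-settings'},  # matched by ID -> overwritten in place
                   {'customPerformanceClassId': '999', 'name': 'Video'}]              # matched by name -> ID must be remapped

# The function below would return something like:
list_of_id_updates = [{'loaded_id': '200', 'current_id': '999'}]
# ...which update_vpn_prefs_performance_class_ids() later uses to rewrite any VPN rule
# whose performanceClass still points at the backup's old ID '200'.
```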
###Code
# A new function to compare current vs. loaded custom performance classes
# It will take as input the network ID and the list of the custom performance classes loaded from the backup
# If it has to update any VPN prefs' custom performance class IDs, it will do so, and return the new/old IDs
# in a list of key-value pairs. Otherwise, it will return None.
def restore_custom_performance_classes(*, listOfLoadedCustomPerformanceClasses, networkId):
# Let's first make a list of the custom performance classes currently configured in the dashboard
list_of_current_custom_performance_classes = dashboard.appliance.getNetworkApplianceTrafficShapingCustomPerformanceClasses(networkId=networkId)
# Let's compare the currently configured classes with those from the backup. If they're the same, there's no need to restore anything.
if list_of_current_custom_performance_classes == listOfLoadedCustomPerformanceClasses:
print('The backed up custom performance classes matched the existing config.')
return(None)
# Otherwise, we will make several lists that we will use to compare the current config with the backup (loaded) config to determine what needs to be changed. We'll use the copy() method of the list object to make a new copy rather than creating a new reference to the original data.
# We'll remove from this list as we find classes that match by either ID or name
list_of_orphan_loaded_performance_classes = listOfLoadedCustomPerformanceClasses.copy()
list_of_orphan_current_performance_classes = list_of_current_custom_performance_classes.copy()
# We'll subtract from this list as we find classes that match by ID
list_of_id_unmatched_current_performance_classes = list_of_current_custom_performance_classes.copy()
# For those instances where we match by name, we'll store the new:old IDs in a new list
list_of_id_updates = []
# First let's check for current classes that match by ID. If they do, we'll update them with the loaded backup config.
for loaded_performance_class in listOfLoadedCustomPerformanceClasses:
# Let's look through each of the currently configured performance classes for that match
for current_performance_class in list_of_current_custom_performance_classes:
# Check if the IDs match up
if loaded_performance_class['customPerformanceClassId'] == current_performance_class['customPerformanceClassId']:
print(f"Matched {loaded_performance_class['customPerformanceClassId']} by ID! Restoring it from the backup.")
# Restore that class from the loaded backup configuration
dashboard.appliance.updateNetworkApplianceTrafficShapingCustomPerformanceClass(
networkId=networkId,
customPerformanceClassId=current_performance_class['customPerformanceClassId'],
maxJitter=loaded_performance_class['maxJitter'],
maxLatency=loaded_performance_class['maxLatency'],
name=loaded_performance_class['name'],
maxLossPercentage=loaded_performance_class['maxLossPercentage']
)
# Remove each from its respective orphan list
list_of_orphan_loaded_performance_classes.remove(loaded_performance_class)
list_of_orphan_current_performance_classes.remove(current_performance_class)
# Let's next check the orphan lists for classes that match by name. If they do, we'll update them with the loaded backup config.
# If we find a match, we'll also add a reference object to the name-only match list tying the new ID to the respective one from
# the loaded backup. If we find a match, we'll also remove it from both orphan lists.
for orphan_loaded_performance_class in list_of_orphan_loaded_performance_classes:
# Let's look through each of the currently configured performance classes for that match
for orphan_current_performance_class in list_of_orphan_current_performance_classes:
# Check if the names match up
if orphan_loaded_performance_class['name'] == orphan_current_performance_class['name']:
print(f"Matched custom performance class with ID {orphan_loaded_performance_class['customPerformanceClassId']} by name {orphan_loaded_performance_class['name']}! Restoring it from the backup.")
# Restore that class from the loaded backup configuration
dashboard.appliance.updateNetworkApplianceTrafficShapingCustomPerformanceClass(
networkId=networkId,
customPerformanceClassId=orphan_current_performance_class['customPerformanceClassId'],
maxJitter=orphan_loaded_performance_class['maxJitter'],
maxLatency=orphan_loaded_performance_class['maxLatency'],
name=orphan_loaded_performance_class['name'],
maxLossPercentage=orphan_loaded_performance_class['maxLossPercentage']
)
# Add it to the name-only matches list, list_of_name_matches
list_of_id_updates.append(
{
'loaded_id': orphan_loaded_performance_class['customPerformanceClassId'],
'current_id': orphan_current_performance_class['customPerformanceClassId']
}
)
# Remove each from its respective orphan list
list_of_orphan_loaded_performance_classes.remove(orphan_loaded_performance_class)
list_of_orphan_current_performance_classes.remove(orphan_current_performance_class)
# If there are any orphans left, they have not matched by ID or name. Create them new.
if len(list_of_orphan_loaded_performance_classes):
print(f'{len(list_of_orphan_loaded_performance_classes)} new custom performance classes need to be created:')
print(f'{list_of_orphan_loaded_performance_classes}\n')
for orphan_loaded_performance_class in list_of_orphan_loaded_performance_classes:
# Re-create the loaded class from the backup and get its new ID
# We'll also add the old and new IDs to the reference object we've created for this purpose
new_performance_class = dashboard.appliance.createNetworkApplianceTrafficShapingCustomPerformanceClass(
networkId=networkId,
maxJitter=orphan_loaded_performance_class['maxJitter'],
maxLatency=orphan_loaded_performance_class['maxLatency'],
name=orphan_loaded_performance_class['name'],
maxLossPercentage=orphan_loaded_performance_class['maxLossPercentage']
)
print(f'Created new custom performance class {new_performance_class} from the backup\'s {orphan_loaded_performance_class}.\n')
# Add it to the name-only matches list, list_of_name_matches
list_of_id_updates.append(
{
'loaded_id': orphan_loaded_performance_class['customPerformanceClassId'],
'current_id': new_performance_class['customPerformanceClassId']
}
)
print('These IDs from the backup will need to be updated:')
print(f'{list_of_id_updates}\n\n')
# Return the list of updated IDs in key-value pairs for later processing
return(list_of_id_updates)
###Output
_____no_output_____
###Markdown
Restoring VPN uplink preferences: Now that the custom performance classes are restored, we can restore the VPN uplink preferences. Method to update the VPN prefs with updated performance class IDs: This nesting-doll of a function simply looks for `vpnUplinkPreferences` that use custom performance classes, and updates those IDs to match any corresponding new ones created, such as when a backed-up performance class was deleted after the backup.
###Code
def update_vpn_prefs_performance_class_ids(*, loaded_vpn_prefs, performance_class_id_updates):
vpn_prefs_updates = 0
# For each update in the ID updates list
for update in performance_class_id_updates:
# For each rule in loaded_vpn_prefs
for rule in loaded_vpn_prefs:
# If the rule's performance class type is set
if 'performanceClass' in rule.keys():
# If the rule's performance class type is custom
if rule['performanceClass']['type'] == 'custom':
# And if the rule's customPerformanceClassId matches one from our ID updates list
if rule['performanceClass']['customPerformanceClassId'] == update['loaded_id']:
# Then we update it with the new ID
rule['performanceClass']['customPerformanceClassId'] = update['current_id']
vpn_prefs_updates += 1
return(vpn_prefs_updates)
###Output
_____no_output_____
###Markdown
Method to restore the VPN preferences to Dashboard: Specify the `networkId` and provide the VPN uplink preferences as a list. This method is documented [here](https://developer.cisco.com/meraki/api-v1/!update-network-appliance-traffic-shaping-uplink-selection). > NB: Setting a variable `response` equal to the SDK method's return value is a common practice, because the SDK method will return the API's HTTP response. That information is useful to confirm that the operation was successful, but it is not strictly required.
###Code
# A new function to push VPN preferences
def restore_vpn_prefs(backup_vpn_prefs_list):
current_vpn_prefs_list = dashboard.appliance.getNetworkApplianceTrafficShapingUplinkSelection(
networkId=user_choices['network_choice'].id
)['vpnTrafficUplinkPreferences']
if current_vpn_prefs_list == backup_vpn_prefs_list:
print(f'The current VPN prefs list matches the backup. No VPN prefs changed.')
return(None)
else:
response = dashboard.appliance.updateNetworkApplianceTrafficShapingUplinkSelection(
networkId=user_choices['network_choice'].id,
vpnTrafficUplinkPreferences=backup_vpn_prefs_list
)
print(f'The VPN prefs list was restored.')
return(response)
###Output
_____no_output_____
###Markdown
Method to restore the WAN preferences to DashboardSpecify the `networkId` and provide the WAN uplink preferences as a list. This method is documented [here](https://developer.cisco.com/meraki/api-v1/!update-network-appliance-traffic-shaping-uplink-selection).> NB: Notice that this relies on the same SDK method as `restore_vpn_prefs()` above. Here we've split the restore into two functions to demonstrate that you can push only specific keyword arguments when the other keyword arguments are optional. Since it's the same method, we could consolidate this function, `restore_wan_prefs()`, and `restore_vpn_prefs()`, into a single function by passing both keyword arguments `vpnTrafficUplinkPreferences` and `wanTrafficUplinkPreferences` at the same time. This would then increase the amount of work accomplished by a single API call. We recommend following a best practice of accomplishing as much as possible with as few calls as possible, when appropriate.
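For illustration, a consolidated version might look like the sketch below. It is not used later in this notebook; it simply combines the two optional keyword arguments into a single call to the same SDK method.
```python
# Hypothetical consolidated restore (a sketch, not part of the original workflow):
# one API call pushes both the WAN and the VPN uplink preferences.
def restore_uplink_prefs(backup_wan_prefs_list, backup_vpn_prefs_list):
    response = dashboard.appliance.updateNetworkApplianceTrafficShapingUplinkSelection(
        networkId=user_choices['network_choice'].id,
        wanTrafficUplinkPreferences=backup_wan_prefs_list,
        vpnTrafficUplinkPreferences=backup_vpn_prefs_list
    )
    print('The WAN and VPN prefs lists were restored with a single call.')
    return(response)
```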
###Code
# A new function to push WAN preferences
def restore_wan_prefs(backup_wan_prefs_list):
current_wan_prefs_list = dashboard.appliance.getNetworkApplianceTrafficShapingUplinkSelection(
networkId=user_choices['network_choice'].id
)['wanTrafficUplinkPreferences']
if current_wan_prefs_list == backup_wan_prefs_list:
print(f'The current WAN prefs list matches the backup. No WAN prefs changed.')
return(None)
else:
response = dashboard.appliance.updateNetworkApplianceTrafficShapingUplinkSelection(
networkId=user_choices['network_choice'].id,
wanTrafficUplinkPreferences=backup_wan_prefs_list
)
print(f'The WAN prefs list was restored.')
return(response)
###Output
_____no_output_____
###Markdown
Wrapping up!We've now built functions to handle the discrete tasks required to restore the configuration for the three items:* WAN uplink preferences* VPN uplink preferences* Custom performance classesWe had to translate the Excel workbook into a Python object that was structured according to the API specifications.We found that some extra logic was required to properly restore custom performance classes and VPN uplink preferences that use them, and wrote custom functions to handle it. > NB: We handled this one way of potentially many. Can you think of any other ways you might handle the problem of missing custom performance class IDs, or overlapping names or IDs?However, we haven't actually called any of these functions, so the restore hasn't happened! To actually call those functions, we'll run them in the next cell. Restore the backup!
###Code
# Restore the custom performance classes
user_data['updated_performance_class_ids'] = restore_custom_performance_classes(
listOfLoadedCustomPerformanceClasses=user_data['loaded_combined_uplink_prefs']['customPerformanceClasses'],
networkId=user_choices['network_choice'].id
)
# Update the custom performance class IDs
if user_data['updated_performance_class_ids']:
update_vpn_prefs_performance_class_ids(
loaded_vpn_prefs=user_data['loaded_combined_uplink_prefs']['vpnPrefs'],
performance_class_id_updates=user_data['updated_performance_class_ids']
)
# Restore the VPN prefs
user_data['restored_vpn_prefs'] = restore_vpn_prefs(
user_data['loaded_combined_uplink_prefs']['vpnPrefs']
)
# Restore the WAN prefs
user_data['restored_wan_prefs'] = restore_wan_prefs(
user_data['loaded_combined_uplink_prefs']['wanPrefs']
)
###Output
_____no_output_____ |
airbnb_dataset_analysis.ipynb | ###Markdown
 Boston AirBNB Dataset Analysis Using the Boston AirBNB Dataset I will answer the following questions:1. What are the main characteristics of the AirBNB listings in Boston?2. Which variables can determine the price of an AirBNB listing?3. When is the best time of the year to find available properties in Boston? Business Understanding AirBNB is an American vacation rental online marketplace. This company acts as a broker, receiving a commission from each booking. Data Understanding Get the data
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error
import seaborn as sns
%matplotlib inline
listings = pd.read_csv('listings.csv')
calendar = pd.read_csv('calendar.csv')
reviews = pd.read_csv('reviews.csv')
sns.set(style = 'darkgrid')
###Output
_____no_output_____
###Markdown
 Once we have imported our datasets, let's take a quick look at each one of them in order to define how we can answer our three questions
###Code
## Take a first look into "listings" dataset
listings.head()
# How many rows and columns are present in the "listings" dataset?
listings.shape
## Review the names and datatypes of each column in the "listings" dataset
listings.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3585 entries, 0 to 3584
Data columns (total 95 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 id 3585 non-null int64
1 listing_url 3585 non-null object
2 scrape_id 3585 non-null int64
3 last_scraped 3585 non-null object
4 name 3585 non-null object
5 summary 3442 non-null object
6 space 2528 non-null object
7 description 3585 non-null object
8 experiences_offered 3585 non-null object
9 neighborhood_overview 2170 non-null object
10 notes 1610 non-null object
11 transit 2295 non-null object
12 access 2096 non-null object
13 interaction 2031 non-null object
14 house_rules 2393 non-null object
15 thumbnail_url 2986 non-null object
16 medium_url 2986 non-null object
17 picture_url 3585 non-null object
18 xl_picture_url 2986 non-null object
19 host_id 3585 non-null int64
20 host_url 3585 non-null object
21 host_name 3585 non-null object
22 host_since 3585 non-null object
23 host_location 3574 non-null object
24 host_about 2276 non-null object
25 host_response_time 3114 non-null object
26 host_response_rate 3114 non-null object
27 host_acceptance_rate 3114 non-null object
28 host_is_superhost 3585 non-null object
29 host_thumbnail_url 3585 non-null object
30 host_picture_url 3585 non-null object
31 host_neighbourhood 3246 non-null object
32 host_listings_count 3585 non-null int64
33 host_total_listings_count 3585 non-null int64
34 host_verifications 3585 non-null object
35 host_has_profile_pic 3585 non-null object
36 host_identity_verified 3585 non-null object
37 street 3585 non-null object
38 neighbourhood 3042 non-null object
39 neighbourhood_cleansed 3585 non-null object
40 neighbourhood_group_cleansed 0 non-null float64
41 city 3583 non-null object
42 state 3585 non-null object
43 zipcode 3547 non-null object
44 market 3571 non-null object
45 smart_location 3585 non-null object
46 country_code 3585 non-null object
47 country 3585 non-null object
48 latitude 3585 non-null float64
49 longitude 3585 non-null float64
50 is_location_exact 3585 non-null object
51 property_type 3582 non-null object
52 room_type 3585 non-null object
53 accommodates 3585 non-null int64
54 bathrooms 3571 non-null float64
55 bedrooms 3575 non-null float64
56 beds 3576 non-null float64
57 bed_type 3585 non-null object
58 amenities 3585 non-null object
59 square_feet 56 non-null float64
60 price 3585 non-null object
61 weekly_price 892 non-null object
62 monthly_price 888 non-null object
63 security_deposit 1342 non-null object
64 cleaning_fee 2478 non-null object
65 guests_included 3585 non-null int64
66 extra_people 3585 non-null object
67 minimum_nights 3585 non-null int64
68 maximum_nights 3585 non-null int64
69 calendar_updated 3585 non-null object
70 has_availability 0 non-null float64
71 availability_30 3585 non-null int64
72 availability_60 3585 non-null int64
73 availability_90 3585 non-null int64
74 availability_365 3585 non-null int64
75 calendar_last_scraped 3585 non-null object
76 number_of_reviews 3585 non-null int64
77 first_review 2829 non-null object
78 last_review 2829 non-null object
79 review_scores_rating 2772 non-null float64
80 review_scores_accuracy 2762 non-null float64
81 review_scores_cleanliness 2767 non-null float64
82 review_scores_checkin 2765 non-null float64
83 review_scores_communication 2767 non-null float64
84 review_scores_location 2763 non-null float64
85 review_scores_value 2764 non-null float64
86 requires_license 3585 non-null object
87 license 0 non-null float64
88 jurisdiction_names 0 non-null float64
89 instant_bookable 3585 non-null object
90 cancellation_policy 3585 non-null object
91 require_guest_profile_picture 3585 non-null object
92 require_guest_phone_verification 3585 non-null object
93 calculated_host_listings_count 3585 non-null int64
94 reviews_per_month 2829 non-null float64
dtypes: float64(18), int64(15), object(62)
memory usage: 2.6+ MB
###Markdown
 This dataset will help us answer questions 1 and 2. There are many columns with NaN values. However, we will handle these cases depending on how we analyze each variable later.
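As a quick sanity check (a minimal sketch, not part of the original analysis), we can count the missing values per column before deciding how to treat them:
```python
# Count NaN values per column and show the most incomplete columns
missing_counts = listings.isnull().sum().sort_values(ascending=False)
print(missing_counts.head(15))
print('Columns with at least one NaN:', (missing_counts > 0).sum())
```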
###Code
## Take a first look into the "calendar" dataset
calendar.head()
## Review the names and datatypes of each column in the "calendar" dataset
calendar.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1308890 entries, 0 to 1308889
Data columns (total 4 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 listing_id 1308890 non-null int64
1 date 1308890 non-null object
2 available 1308890 non-null object
3 price 643037 non-null object
dtypes: int64(1), object(3)
memory usage: 39.9+ MB
###Markdown
This dataset will help us answer question 3. We will not be using price to answer this question. So, there is no problem with the null values.
###Code
## Take a first look into "reviews" dataset
reviews.head()
## Review the names and datatypes of each column in the "reviews" dataset
reviews.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 68275 entries, 0 to 68274
Data columns (total 6 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 listing_id 68275 non-null int64
1 id 68275 non-null int64
2 date 68275 non-null object
3 reviewer_id 68275 non-null int64
4 reviewer_name 68275 non-null object
5 comments 68222 non-null object
dtypes: int64(3), object(3)
memory usage: 3.1+ MB
###Markdown
 We will not use the "reviews" dataset in this notebook. However, it provides very insightful information about the experiences of the customers in each listing. We could use this information in a future analysis by performing some sentiment analysis on this dataset. After taking a first look into each dataset, we will start answering each question Question 1: What are the main characteristics of the AirBNB listings in Boston? For this section we will only work with the "listing" dataset. First, we will do some data exploration to understand the information in this dataset and have a better perspective of the listings. Neighbourhoods: Which offer more listings and how are they distributed across Boston? Let's see first which are the neighbourhoods with the most supply
###Code
## Ranking listings by neighbourhood
listings['neighbourhood'].value_counts()
###Output
_____no_output_____
###Markdown
 We can see here that the leading category in "neighbourhood" is Allston-Brighton. However, this name is a combination of two different neighbourhoods. We should probably work with "neighbourhood_cleansed" to have better insights.
###Code
## Set an order to plot neighbourhood
n_order = listings['neighbourhood_cleansed'].value_counts().index
n_order
###Output
_____no_output_____
###Markdown
 Now that we have a better view of this, let's make a plot of this ranking.
###Code
## Plot neighbourhood_cleansed
plt.figure(figsize = (16,6))
sns.countplot(y = 'neighbourhood_cleansed', order = n_order, color = 'b', data = listings)
plt.title('What is the distribution of properties by neighbourhoods?')
plt.ylabel('')
plt.xlabel('# of properties');
###Output
_____no_output_____
###Markdown
 We can see that Jamaica Plain, South End, Back Bay, Fenway, Dorchester and Allston are neighbourhoods with a high supply of properties to rent. However, I wonder if these are the best neighbourhoods to stay in. Let's take a look at some of the main features of the listings: "property_type", "room_type" and "price". Type of properties
###Code
## Rank listings by type of property
listings['property_type'].value_counts()
## Define order to plot property types
pt_order = listings['property_type'].value_counts().index
## Plot property types
plt.figure(figsize = (16,6))
sns.countplot(x = 'property_type', order = pt_order, data = listings)
plt.title('What is the distribution of properties by type?')
plt.ylabel('# of properties')
plt.xlabel('');
###Output
_____no_output_____
###Markdown
 We can see that apartments and houses represent almost 90% of the types of properties available on AirBNB in Boston. Now, let's take a look at the listings grouped by "room_type"
###Code
## Take a look of listings by "room_type"
listings['room_type'].value_counts()
## Plot listings by "room_type"
plt.figure(figsize = (16,6))
sns.countplot(x = 'room_type', data = listings)
plt.title('What is the distribution of properties by type?')
plt.ylabel('# of properties')
plt.xlabel('Room type');
###Output
_____no_output_____
###Markdown
Prices in listings
###Code
## Check "price" values
listings['price']
###Output
_____no_output_____
###Markdown
 I will have to convert the data type from a string to a numeric value
###Code
## Convert price to a float
listings['price'] = pd.to_numeric(listings['price'].str.replace(r'[$,]', '', regex=True), errors='coerce')  # strip '$' and thousands separators
###Output
_____no_output_____
###Markdown
Now that we have the right data type for "price", let's group the listings by neighbourhood and find the distribution of prices in each of them.
###Code
## Get median neighbourhood prices
np_df = listings[['neighbourhood_cleansed','price']].dropna().groupby('neighbourhood_cleansed').median()
## Get neighbourhoods sorted by median
n_sorted = np_df.sort_values(by=['price'], ascending=False).index
n_sorted
## Plot neighbourhoods by median price
plt.figure(figsize = (16,6))
sns.boxplot(x = 'neighbourhood_cleansed', y = 'price', data = listings, order = n_sorted)
plt.title('Price distribution by neighbourhood')
plt.ylabel('Price USD')
plt.xlabel('Neighbourhood')
xt = plt.xticks(rotation=90);
###Output
_____no_output_____
###Markdown
 We can see that the neighbourhoods with a higher median price are concentrated near the downtown of Boston, a fact that is very common in many cities. Also, these kinds of neighbourhoods don't have many outliers in their prices compared with other neighbourhoods. Now, let's combine the number of listings per neighbourhood with two types of score that I find valuable when I book a new property in a city: location and value. Neighbourhoods with the best combination of location and value
###Code
## Create dataframe grouping averages of scores by neighbourhood
l_scores = listings.dropna(subset=['review_scores_location','review_scores_value']).groupby('neighbourhood_cleansed').agg({'review_scores_location':['mean','count'],'review_scores_value':['mean','count']}).reset_index()
l_scores.columns = [' '.join(col).strip() for col in l_scores.columns.values]
## Filter dataframe by neighbourhoods with more than 20 properties
l_scores = l_scores[l_scores['review_scores_value count'] > 20].reset_index()
## Create scatterplot
plt.figure(figsize = (16,8))
p1 = sns.scatterplot(x = 'review_scores_location mean', y = 'review_scores_value mean', alpha = 0.4,
size = 'review_scores_value count', sizes = (0,8000), legend = False, data = l_scores)
for line in range(0,l_scores.shape[0]):
p1.text(l_scores['review_scores_location mean'][line]+0.002, l_scores['review_scores_value mean'][line], l_scores['neighbourhood_cleansed'][line], horizontalalignment='left', size='medium', color='black', weight='semibold')
plt.ylim(8.55,9.7);
###Output
_____no_output_____
###Markdown
 Based on this chart, it would be convenient to look for apartments in "North End", "Beacon Hill" and "Back Bay". These zones have the best location scores, offer great value for the money paid, are not as expensive as the downtown, and still offer a good number of available properties to rent. Conclusions to answer question 1:After analyzing some of the main characteristics of the listings available in Boston we can answer the question with the following insights:- Almost 90% of the listings available in Boston are houses and apartments.- If someone is booking a property through AirBNB in Boston, they will find more options to book entire houses/apartments than rooms. However, the difference is not that big.- If someone is looking for a listing near the downtown, it will probably be more expensive than options a little farther away. However, these pricier neighbourhoods are compensated by a great location and a high score in the value received. Question 2: Which variables can determine the price of an AirBNB listing? Let's analyze which variables of the "listings" dataset can have an influence on price. First, we will review some of the numerical variables and how they correlate to price.
###Code
## Create dataframe for heatmap
h_df = listings[['price','square_feet','accommodates','bathrooms','bedrooms','beds','host_listings_count',
'number_of_reviews','review_scores_location','review_scores_value']]
plt.figure(figsize = (16,8))
sns.heatmap(h_df.corr(), annot=True, fmt='.2f');
###Output
_____no_output_____
###Markdown
 Price has a strong correlation with the "accommodates", "bedrooms", "beds" and "square feet" variables. However, there is also a correlation between them, particularly between "accommodates" and "bedrooms" and "beds". This makes sense because the number of people you can accommodate in a property is correlated with the number of bedrooms and beds available in that property. In order to avoid multicollinearity we will only use "accommodates" to start developing a regression model.
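To back up the multicollinearity claim, a quick check of the variance inflation factors (VIF) of the correlated features could look like this. This is a sketch that was not part of the original notebook; it relies on the statsmodels VIF helper:
```python
from statsmodels.stats.outliers_influence import variance_inflation_factor

# A VIF above roughly 5-10 is usually taken as a sign of problematic multicollinearity
vif_features = h_df[['accommodates', 'bedrooms', 'beds']].dropna()
X_vif = sm.add_constant(vif_features)
vif = pd.Series(
    [variance_inflation_factor(X_vif.values, i) for i in range(X_vif.shape[1])],
    index=X_vif.columns
)
print(vif)
```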
###Code
## Create Dataframe for model 1
m1 = h_df[['price','accommodates']].dropna()
## Simple Linear Regression
X = m1['accommodates'].values.reshape(-1,1)
y = m1['price'].values.reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .3, random_state = 42)
## instantiate
lm_model = LinearRegression(normalize=True)
## fit
lm_model.fit(X_train, y_train)
## predict test data
y_test_preds = lm_model.predict(X_test)
## score model on the test
r2_test = r2_score(y_test, y_test_preds)
print('Test score: ' + str(r2_test))
## Review statsmodel from the simple linear regression
X_stats = sm.add_constant(X_train)
model = sm.OLS(y_train, X_stats)
results = model.fit()
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.328
Model: OLS Adj. R-squared: 0.328
Method: Least Squares F-statistic: 1222.
Date: Fri, 28 Aug 2020 Prob (F-statistic): 2.82e-218
Time: 16:09:38 Log-Likelihood: -14861.
No. Observations: 2501 AIC: 2.973e+04
Df Residuals: 2499 BIC: 2.974e+04
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 59.3421 3.668 16.179 0.000 52.150 66.534
x1 36.2870 1.038 34.954 0.000 34.251 38.323
==============================================================================
Omnibus: 1078.015 Durbin-Watson: 2.022
Prob(Omnibus): 0.000 Jarque-Bera (JB): 10854.630
Skew: 1.764 Prob(JB): 0.00
Kurtosis: 12.577 Cond. No. 7.46
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
We can see that "accommodates" cannot predict by itself the price of a property. We will add more numerical features such as "host_listings_count" and "review_scores_location" and run a Multiple Linear Regression
###Code
## Create Dataframe for multiple linear regression
m3 = h_df[['price','accommodates','host_listings_count','review_scores_location']].dropna()
m3.shape
## Multiple Linear Regression
X = m3[['accommodates','host_listings_count','review_scores_location']]
y = m3['price'].values.reshape(-1,1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .3, random_state = 42)
## instantiate
lm_model = LinearRegression(normalize=True)
## fit
lm_model.fit(X_train, y_train)
## predict test data
y_test_preds = lm_model.predict(X_test)
## score model on the test
r2_test = r2_score(y_test, y_test_preds)
print('Test score: ' + str(r2_test))
## Review statsmodel from the multiple linear regression
X_stats = sm.add_constant(X_train)
model = sm.OLS(y_train, X_stats)
results = model.fit()
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.426
Model: OLS Adj. R-squared: 0.425
Method: Least Squares F-statistic: 477.3
Date: Fri, 28 Aug 2020 Prob (F-statistic): 6.84e-232
Time: 16:09:38 Log-Likelihood: -11292.
No. Observations: 1932 AIC: 2.259e+04
Df Residuals: 1928 BIC: 2.261e+04
Df Model: 3
Covariance Type: nonrobust
==========================================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------------------
const -169.4863 20.623 -8.218 0.000 -209.932 -129.040
accommodates 34.4503 1.035 33.279 0.000 32.420 36.480
host_listings_count 0.1355 0.014 9.807 0.000 0.108 0.163
review_scores_location 23.8577 2.158 11.053 0.000 19.624 28.091
==============================================================================
Omnibus: 693.760 Durbin-Watson: 2.037
Prob(Omnibus): 0.000 Jarque-Bera (JB): 4440.667
Skew: 1.541 Prob(JB): 0.00
Kurtosis: 9.758 Cond. No. 1.58e+03
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 1.58e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
###Markdown
 We can see that adding those additional variables improved the model. However, there are still some categorical variables I haven't included in the model that may have an impact on the price, such as "neighbourhood_cleansed", "property_type", "room_type", and "host_is_superhost". Include categorical variables in the analysis
###Code
## Prepare Dataframe for Multiple Linear Regression with numerical and categorical variables
## Numerical columns selected: 'accommodates','host_listings_count','review_scores_location'
## Categorical variables selected: 'neighbourhood_cleansed','property_type','room_type','host_is_superhost'
m_df = listings[['price','accommodates','host_listings_count','review_scores_location',
'neighbourhood_cleansed','property_type','room_type','host_is_superhost']].dropna().reset_index()
m_df.shape
###Output
_____no_output_____
###Markdown
 The column 'neighbourhood_cleansed' has too many categories, so we will group them by making the following assumptions:1. Price becomes higher when the property is closer to Downtown2. We will classify each neighbourhood based on its distance to the Downtown: - Less than 1 mile away from Downtown = 'Very Short' - Between 1 and 2 miles = 'Short' - Between 2 and 5 miles = 'Medium' - More than 5 miles = 'Far' *The distance from each neighbourhood to the Downtown was taken manually from Google Maps
###Code
## Create dataframe with the clasification of each neighbourhood
distance = {'neighbourhood': ['Jamaica Plain','South End','Back Bay','Fenway','Dorchester','Allston','Beacon Hill',
'Brighton','South Boston','Downtown','East Boston','Roxbury','North End','Mission Hill',
'Charlestown','South Boston Waterfront','Chinatown','Roslindale','West End',
'West Roxbury','Hyde Park','Mattapan','Bay Village','Longwood Medical Area',
'Leather District'],
'distance from downtown' : ['Far','Medium','Short','Medium','Far','Far','Short','Far','Medium',
'Very Short','Medium','Far','Very Short','Medium','Medium','Short','Far',
'Far','Short','Far','Far','Far','Short','Medium','Very Short']}
distance_df = pd.DataFrame(data = distance).set_index('neighbourhood')
distance_df
## Join the original Dataframe with the clasification by neighbourhood
m4 = m_df.join(distance_df, on = 'neighbourhood_cleansed', how = 'inner')
m4 = m4.drop(columns = ['neighbourhood_cleansed','index']).dropna()
m4.head()
## Set numerical and categorical columns
num_cols = ['accommodates','host_listings_count','review_scores_location']
cat_cols = ['property_type','room_type','host_is_superhost','distance from downtown']
fm_df = m4
for col in cat_cols:
fm_df = pd.concat([fm_df.drop(col, axis = 1),
pd.get_dummies(fm_df[col], prefix = col, prefix_sep = '_', dummy_na = True)], axis = 1)
fm_df.head()
## Run Multiple Linear Regression
y = fm_df['price']
X = fm_df.drop(['price'], axis = 1)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = .3, random_state = 42)
## instantiate
lm_model = LinearRegression(normalize=True)
## fit
lm_model.fit(X_train, y_train)
## predict test data
y_train_preds = lm_model.predict(X_train)
y_test_preds = lm_model.predict(X_test)
## score model on train
r2_train = r2_score(y_train, y_train_preds)
print('Train score: ' + str(r2_train))
## score model on the test
r2_test = r2_score(y_test, y_test_preds)
print('Test score: ' + str(r2_test))
## Review statsmodel from the multiple linear regression
X_stats = sm.add_constant(X_train)
model = sm.OLS(y_train, X_stats)
results = model.fit()
print(results.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: price R-squared: 0.554
Model: OLS Adj. R-squared: 0.549
Method: Least Squares F-statistic: 118.5
Date: Fri, 28 Aug 2020 Prob (F-statistic): 2.18e-316
Time: 16:09:39 Log-Likelihood: -11055.
No. Observations: 1930 AIC: 2.215e+04
Df Residuals: 1909 BIC: 2.227e+04
Df Model: 20
Covariance Type: nonrobust
=====================================================================================================
coef std err t P>|t| [0.025 0.975]
-----------------------------------------------------------------------------------------------------
const 9.9604 11.681 0.853 0.394 -12.949 32.870
accommodates 26.4866 1.109 23.873 0.000 24.311 28.663
host_listings_count 0.0918 0.013 7.176 0.000 0.067 0.117
review_scores_location 6.3037 2.132 2.956 0.003 2.122 10.486
property_type_Apartment -9.5211 12.827 -0.742 0.458 -34.677 15.635
property_type_Bed & Breakfast 9.3512 20.992 0.445 0.656 -31.818 50.520
property_type_Boat -4.6669 29.068 -0.161 0.872 -61.675 52.341
property_type_Condominium 7.9920 14.140 0.565 0.572 -19.740 35.724
property_type_Dorm -48.7164 69.755 -0.698 0.485 -185.521 88.088
property_type_Entire Floor -34.1536 69.791 -0.489 0.625 -171.028 102.721
property_type_Guesthouse 25.1558 69.815 0.360 0.719 -111.766 162.078
property_type_House 2.8546 13.313 0.214 0.830 -23.254 28.964
property_type_Loft 3.6572 19.949 0.183 0.855 -35.466 42.780
property_type_Other 32.4215 28.953 1.120 0.263 -24.361 89.204
property_type_Townhouse 17.3225 17.340 0.999 0.318 -16.685 51.330
property_type_Villa 8.2635 69.881 0.118 0.906 -128.788 145.315
property_type_nan 8.436e-15 1.2e-14 0.701 0.483 -1.52e-14 3.2e-14
room_type_Entire home/apt 48.0993 6.167 7.800 0.000 36.005 60.194
room_type_Private room -8.4592 5.530 -1.530 0.126 -19.304 2.386
room_type_Shared room -29.6798 9.655 -3.074 0.002 -48.615 -10.744
room_type_nan 3.123e-15 8.57e-15 0.365 0.716 -1.37e-14 1.99e-14
host_is_superhost_f -2.9992 5.912 -0.507 0.612 -14.595 8.596
host_is_superhost_t 12.9596 6.752 1.919 0.055 -0.282 26.202
host_is_superhost_nan 0 0 nan nan 0 0
distance from downtown_Far -36.0709 3.821 -9.440 0.000 -43.565 -28.577
distance from downtown_Medium -0.8181 3.962 -0.206 0.836 -8.589 6.953
distance from downtown_Short 39.4326 5.038 7.828 0.000 29.553 49.313
distance from downtown_Very Short 7.4167 5.747 1.291 0.197 -3.854 18.688
distance from downtown_nan 0 0 nan nan 0 0
==============================================================================
Omnibus: 794.080 Durbin-Watson: 2.078
Prob(Omnibus): 0.000 Jarque-Bera (JB): 6640.082
Skew: 1.714 Prob(JB): 0.00
Kurtosis: 11.415 Cond. No. 6.59e+16
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The smallest eigenvalue is 9.39e-27. This might indicate that there are
strong multicollinearity problems or that the design matrix is singular.
###Markdown
 We can see that adding categorical variables added robustness to the model. However, it is still not that good. Conclusions to answer question 2After the analysis we can draw the following conclusions:- The number of accommodates has a positive correlation with price. Thus, an increase in the number of people that can stay at a property increases its price.- Renting an entire home/apt is associated with a higher price of the property, which makes sense since this option offers bigger spaces and more privacy than private rooms or shared rooms.- Properties that are closer to the Downtown are associated with a higher price. Probably this is due to the convenience of their location for tourists. On the contrary, properties that are considered far from the Downtown are usually cheaper. Even though we have some conclusions based on this model, it could be improved by adding additional variables that may not be available in this dataset. In a future project, we could analyze the amenities offered by the properties and see if they have any sort of correlation with price. When is the best time to find more properties available in Boston? To solve this question we can use the calendar dataset and plot the availability by date (number of properties available on a given date / total number of properties). We will find the mean availability by date and then the mean availability grouped by month.
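As an aside on the amenities idea above, a first pass could one-hot encode the `amenities` column and check each indicator's correlation with price. This is only a sketch for that future analysis, and it assumes the usual AirBNB export format where `amenities` is a brace-wrapped, comma-separated string:
```python
# Parse strings like '{TV,"Wireless Internet",Kitchen}' into indicator columns
amenity_dummies = (
    listings['amenities']
    .str.strip('{}')
    .str.replace('"', '', regex=False)
    .str.get_dummies(sep=',')
)
# Correlation of each amenity indicator with price
amenity_corr = amenity_dummies.apply(lambda col: col.corr(listings['price']))
print(amenity_corr.sort_values(ascending=False).head(10))
```
With that noted, let's compute the availability from the calendar data.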
###Code
## Take a look at calendar dataset
calendar.head()
calendar.shape
calendar.info()
## Review data types
calendar.dtypes
## Change 'date' from object to datetime
calendar['date'] = pd.to_datetime(calendar['date'])
## Use "date" as index for the dataset
ts = calendar.set_index('date')
ts.head()
## Check if every listing has a similar number of entries
ts['listing_id'].value_counts()
## Drop the property with "listing_id" = 12898806
ts = ts[ts['listing_id'] != 12898806]
## Replace the values in "available" with 1 for 't' and 0 for 'f'
new_calendar = ts.replace({'f':0, 't':1})
new_calendar.available.value_counts()
## group the availability by date
ts1 = new_calendar.groupby(by = 'date').agg({'available':['count','sum','mean']})
ts1.columns = ts1.columns.get_level_values(1)
ts1.head()
###Output
_____no_output_____
###Markdown
Plot the availability by date
###Code
plt.figure(figsize = (16,8))
ts1['mean'].plot(linewidth = 1.5);
###Output
_____no_output_____
###Markdown
 This plot doesn't tell me too much; it is probably better to look at this information at a monthly level.
###Code
## Group results by month
new_calendar['month_number'] = pd.DatetimeIndex(new_calendar.index).month
new_calendar['month'] = pd.DatetimeIndex(new_calendar.index).month_name()
new_calendar.head()
## Drop "price" column and NaN values
new_calendar = new_calendar.reset_index().set_index('listing_id')
new_calendar = new_calendar[['date','available','month_number','month']].dropna()
new_calendar.head()
## Group ts2 by month and sort the information by month_number
ts2 = new_calendar.groupby(['month','month_number'])['available'].mean().reset_index()
ts2 = ts2.sort_values('month_number').set_index('month_number')
ts2.head()
## Plot the availability by month
plt.figure(figsize = (16,8))
sns.barplot(x = 'month', y = 'available', color = 'b', data = ts2)
plt.ylim(0,1);
###Output
_____no_output_____ |
notebooks/gtrees.ipynb | ###Markdown
 Goals: Separate the structure of a tree from the data of a tree. In other words, fitting a tree does two things: It creates the structure of a tree and it creates a mapping of each leaf to a value. Lookup therefore requires both finding the leaf node AND using the map to look up the value. The loss function optimized by the tree is configurable, as is the leaf prediction function. Terms: TreeA Tree is an object that takes input data and determines what leaf it ends up in. Unlike many tree implementations, the Tree itself doesn't store data about the value of a leaf. That is stored externally. loss_fnA loss_fn is a function that takes the predicted targets for a set of rows and the actual targets for those rows, and returns a single value that determines the "LOSS" or "COST" of that prediction (lower cost/loss is better)```def loss_fn(predicted_targets, actual_targets) -> float```A loss function must be additive (so, one should not apply a mean as a part of it) leaf_prediction_fnA leaf_prediction_fn is a function which takes the features that end up in a leaf and returns a Series of the predictions for each row ending up in that leaf. It is typically a constant function whose value is either the mean good rate in that leaf (among the actual targets) or the median target, but can be anything else```def leaf_prediction_fn(features) -> pd.Series``` leaf_prediction_builderA leaf_prediction_builder is a function which takes the features and actual targets that end up in a TRAINING leaf and returns a leaf_prediction_fn. This leaf_prediction_fn is used to predict the value of testing rows that end up in the same leaf.```def leaf_prediction_builder(features, actual_targets) -> leaf_prediction_fn``` leaf_prediction_mapA leaf_prediction_map is a map of leaf ids (e.g. their hash) to the leaf_prediction_fn for that leaf. One can only use a tree to score data if one has a leaf_prediction_map. This design allows one to use the same tree as a subset of another tree without having their leaf values become entangled. -------------- Test Tree Manipulation Functions
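Before the test code, here is a quick hypothetical sketch of how a loss_fn, a leaf_prediction_builder, and a leaf_prediction_map fit together. It is based only on the definitions above, not on code from the gtree module:
```python
import pandas as pd

def squared_error_loss(predicted_targets, actual_targets):
    # Additive loss: a sum, not a mean
    return ((predicted_targets - actual_targets) ** 2).sum()

def mean_leaf_prediction_builder(features, actual_targets):
    # Fit on the training rows that land in this leaf...
    leaf_value = actual_targets.mean()
    def leaf_prediction_fn(features):
        # ...and predict that constant for any rows scored in the same leaf
        return pd.Series(leaf_value, index=features.index)
    return leaf_prediction_fn

# A leaf_prediction_map pairs each leaf (keyed by its hash) with its leaf_prediction_fn,
# so the same tree structure can be reused with different leaf values, e.g.:
# leaf_map = {hash(leaf): mean_leaf_prediction_builder(leaf_features, leaf_targets)}
```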
###Code
%pdb off
t = gtree.BranchNode('A', 0.5, None, None)
t.left = gtree.LeafNode()
t.right = gtree.BranchNode('B', 0.9, None, None)
t.right.left = gtree.LeafNode()
t.right.right = gtree.LeafNode()
o = gtree.BranchNode('C', 0.1, None, None)
o.left = gtree.LeafNode()
o.right = gtree.LeafNode()
t.prn()
print '\n\n'
o.prn()
u = gtree.replace_branch_split(t, t.right, o)
u.prn()
print '\n\n'
t.prn()
v = gtree.replace_node(t, t.left, o)
v.prn()
print '\n\n'
t.prn()
t = gtree.BranchNode('A', 0.5, None, None)
t.left = gtree.LeafNode()
t.right = gtree.BranchNode('B', 0.9, None, None)
t.right.left = gtree.LeafNode()
t.right.right = gtree.BranchNode('C', 0.9, None, None)
t.right.right.right = gtree.LeafNode()
t.right.right.left = gtree.LeafNode()
t.prn()
print '\n\n'
gtree.prune(t, 2).prn()
print '\n\n'
t.prn()
data = pd.DataFrame({'A': [0.1, 10, .02],
'B': [10, 20, 30]},
index=['foo', 'bar', 'baz'])
class StaticLeaf(object):
def __init__(self, val):
self.val = val
def predict(self, df):
return np.array([self.val for _ in range(len(df))])
t = gtree.BranchNode('A', 0.5, None, None)
t.left = gtree.LeafNode() #'A', 0.5, 10, 20)
t.right = gtree.LeafNode() #'A', 0.5, 100, 0)
leaf_map = {hash(t.left): StaticLeaf(10),
hash(t.right): StaticLeaf(20)}
t.predict(data, leaf_map)
t
# Create a split on a DataFrame
df = pd.DataFrame({'foo': pd.Series([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])})
gtree._single_variable_best_split(
df,
'foo',
pd.Series([0, 0, 1, 0, 0, 1, 1, 0, 1, 1]),
loss='error_rate',
leaf_prediction='mean')
threshold = 0.5
truth = pd.Series([1, 0, 1], dtype=np.float32)
predicted = pd.Series([0, 1, 1], dtype=np.float32)
print gtree.loss(truth, predicted, type='error_rate')
print 1.0 - ((predicted >= threshold) == truth).mean() #+ (predicted < threshold) * (1 - truth)
###Output
_____no_output_____
###Markdown
Test Split Finding
###Code
df = pd.DataFrame({'A': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'B': [10, 20, 50, 30, 40, 50, 60, 50, 70, 90, 100, 110 ]}, dtype=np.float32)
target = pd.Series([0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0], dtype=np.float32)
tree, leaf_map = gtree.train_greedy_tree(df, target, loss='error_rate')
print '\nTree:\n'
tree.prn()
print leaf_map
gtree.calculate_leaf_map(tree, df, target)
gtree.random_node(tree)
print gtree.get_all_nodes(tree)
df = pd.DataFrame({'A': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12],
'B': [10, 20, 50, 30, 40, 50, 60, 50, 70, 90, 100, 110 ]}, dtype=np.float32)
target = pd.Series([0, 0, 1, 0, 0, 1, 1, 0, 1, 1, 0, 0], dtype=np.float32)
gtree._single_variable_best_split(df,'B', target,
loss='error_rate',
leaf_prediction='mean')
tree, leaf_map = gtree.train_greedy_tree(df, target,
loss='error_rate',
feature_sample_rate=.5,
row_sample_rate=.5)
print '\nTree:\n'
tree.prn()
print leaf_map
gtree.mate(tree, tree).prn()
#def make_hastie_sample(n_samples):
#
# features, targets = datasets.make_hastie_10_2(n_samples=n_samples)
#
# features = pd.DataFrame(features, columns=['feature_{}'.format(i) for i in range(features.shape[1])])
# targets = pd.Series(targets, name='target')
# targets = targets.map(lambda x: 1.0 if x > 0 else 0.0)
# return features, targets
#def make_kddcup(n_samples):
#
# features, targets = datasets.fetch_kddcup99(subset='smtp')
#
# features = pd.DataFrame(features, columns=['feature_{}'.format(i) for i in range(features.shape[1])])
# targets = pd.Series(targets, name='target')
# targets = targets.map(lambda x: 1.0 if x > 0 else 0.0)
#
# features = featurse.sample(n=n_samples)
#
# return features, targets.loc[features.index]
#def make_random_classification(n_samples, n_features=100):
# features, targets = datasets.make_classification(n_samples=n_samples,
# n_features=n_features,
# n_informative=8,
# n_classes=2,
# n_clusters_per_class=4)
#
# features = pd.DataFrame(features, columns=['feature_{}'.format(i) for i in range(features.shape[1])])
# targets = pd.Series(targets, name='target')
# targets = targets.map(lambda x: 1.0 if x > 0 else 0.0)
#
# return features, targets.loc[features.index]
###Output
_____no_output_____
###Markdown
Start the Test Analysis Here
###Code
#features, targets = make_hastie_sample(10000)
features, targets = tools.make_random_classification(10000)
features = pd.DataFrame(features, dtype=np.float32)
targets = pd.Series(targets, dtype=np.float32)
features.shape
targets.value_counts()
gtree.tree_logger.setLevel(logging.INFO)
tree, leaf_map = gtree.train_greedy_tree(features, targets,
loss='cross_entropy',
leaf_prediction='mean',
max_depth=7)
#gtree.cross_entropy_loss(targets[features.feature_45 < .2326])
#gtree._single_variable_best_split(features, 'feature_45', targets, None, None, None)
tree.prn()
set(pd.Series([1, 2, 3]))
tree.predict(features, leaf_map)
results = pd.DataFrame({'truth': targets, 'prediction': tree.predict(features, leaf_map)})
1.0 - gtree.error_rate_loss(results.prediction, results.truth) / len(targets)
results.plot(kind='scatter', x='prediction', y='truth')
fig = plt.figure(figsize=(12,8))
for label, grp in tree.predict(features, leaf_map).groupby(targets):
grp.hist(normed=True, alpha=0.5, label=str(label)) #, label=label)
plt.legend(loc='best')
None
###Output
_____no_output_____
###Markdown
Compare Methods
###Code
features, targets = tools.make_random_classification(5000)
features = pd.DataFrame(features, dtype=np.float32)
targets = pd.Series(targets, dtype=np.float32)
features_validation = features.sample(frac=.3)
targets_validation = targets.loc[features_validation.index]
features = features[~features.index.isin(features_validation.index)]
targets = targets.loc[features.index]
%pdb off
gtree.tree_logger.setLevel(logging.WARNING)
result, generations = gtree.evolve(features, targets,
loss='cross_entropy',
max_depth=3, min_to_split=10,
num_generations=15, num_survivors=10,
num_children=200, num_seed_trees=5)
leaf_map = gtree.calculate_leaf_map(result['tree'], features, targets, gtree.leaf_good_rate_split_builder)
print gtree.error_rate_loss(result['tree'].predict(features_validation, leaf_map), targets_validation)
generations[-1]['best_of_generation']['tree'].find_leaves(features).value_counts()
for gen in generations[-1]['generation']:
print '--------------------{:.4f}----------------------------'.format(gen['loss_testing'])
gen['tree'].prn()
for result in generations[-1]['generation']:
print '---------------------------------------------'
result['tree'].prn()
result = gtree.train_random_trees(features, targets, loss_fn=gtree.error_rate_loss,
max_depth=2,
min_to_split=10,
num_trees=10)
leaf_map = gtree.calculate_leaf_map(result['tree'], features, targets, gtree.leaf_good_rate_split_builder)
print gtree.error_rate_loss(result['tree'].predict(features_validation, leaf_map), targets_validation)
from sklearn import tree
from sklearn.model_selection import train_test_split
clf = tree.DecisionTreeClassifier(max_depth=2)
clf = clf.fit(features, targets)
predictions = pd.Series(clf.predict_proba(features_validation)[:, 1], index=features_validation.index)
gtree.error_rate_loss(predictions, targets_validation)
from sklearn.externals.six import StringIO
from sklearn import tree as sklearn_tree
import pydot
dot_data = StringIO()
sklearn_tree.export_graphviz(clf, out_file=dot_data)
graph = pydot.graph_from_dot_data(dot_data.getvalue())
graph.write_pdf("iris.pdf")
%alias_magic t timeit
sel = features[features['feature_3'] < 0].index
sel
%t features.loc[sel]
%t df.reindex_axis(sel, copy=False)
###Output
_____no_output_____
###Markdown
Evolve
###Code
# BC Dataset
#bc_info = datasets.load_breast_cancer()
features, target = datasets.make_hastie_10_2(n_samples=5000)
features = pd.DataFrame(features, dtype=np.float32)
target = pd.Series([1.0 if t == 1.0 else 0.0 for t in target], dtype=np.float32).dropna()
features = features.loc[target.index]
target.value_counts(dropna=False)
gtree.tree_logger.setLevel(logging.WARNING)
gtree.tree_logger.setLevel(logging.WARNING)
result, generations = gtree.evolve(features, target,
loss='cross_entropy',
leaf_prediction='logit',
max_depth=3,
min_to_split=10,
num_generations=5,
num_survivors=20,
num_children=50,
num_seed_trees=10)
generations
for gen in generations:
best = gen['best_of_generation']
print '=========================={:.4f} {:.4f}==============================\n'.format(
best['loss_training'],
best['loss_testing'])
best['tree'].prn()
generations[-1]['best_of_generation']
for k, v in generations[-1]['best_of_generation']['leaf_map'].iteritems():
print k, v.get_coeficients(), '\n'
generations[-1]['best_of_generation']['leaf_map']
gtree.tree_logger.setLevel(logging.WARNING)
result, generations = gtree.train_random_trees(features, target,
loss='cross_entropy',
leaf_prediction='logit',
max_depth=3,
min_to_split=10)
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
X
fns = {0: lambda x: (x[:,0] + x[:,1] + x[:,2]).reshape(len(x), 1), 1: lambda x: (-1*x[:,0]).reshape(len(x), 1)}
hashes = np.array([[0], [1], [0]])
hashes
predictions = np.zeros((len(X), 1))
predictions
zero = np.zeros((len(X), 1))
for i in [0, 1]:
#comparison = np.full((len(X), 1), i)
predictions[hashes.reshape(len(X))==i] = fns[i](X[hashes.reshape(len(X))==i, :]) # += np.where(hashes==i, fns[i](X), zero)
predictions
X[hashes==comparison, :]
X[:,0].reshape(3, 1)
X[hashes==np.array([1]).reshape((len(X), 1))]
hashes==1
X[np.array([True, False, True]), :]
import numpy as np
import statsmodels.discrete.discrete_model as sm
import statsmodels.tools.tools as sm_tools
X = np.array([[1, 2, 3],
[2, 7, 5],
[3, 10, 7],
[5, 18, 10],
[-10, 70, 3]
], dtype=np.float64)
y = np.array([[1], [0], [1], [0], [1]], dtype=np.float64)
logit = sm.Logit(y, sm_tools.add_constant(X))
fit = logit.fit_regularized(method='l1', alpha=1.0)
fit.params
logit.predict(fit.params, sm_tools.add_constant(X))
from sklearn.svm.base import _fit_liblinear
coef_, intercept_, n_iter = _fit_liblinear(
X, np.ravel(y), C=1.0, fit_intercept=True, intercept_scaling=1.0,
class_weight=None, penalty='l1', dual=False, verbose=True,
max_iter=5000, tol=1e-4, random_state=None,
sample_weight=None)
#n_iter = np.array([n_iter])
(coef_, intercept_, n_iter)
from scipy.special import expit
expit(coef_.dot(X.T) + intercept_)
###Output
_____no_output_____ |
8.Data Visualization with Python/5_Peer_Graded_Assignment_Questions.ipynb | ###Markdown
 Assignment* [Story](story)* [Components of the report items](components-of-the-report-items)* [Expected layout](expected-layout)* [Requirements to create the dashboard](requirements-to-create-the-dashboard)* [What is new in this exercise compared to other labs?](what-is-new-in-this-exercise-compared-to-other-labs?)* [Review](review)* [Hints to complete TODOs](hints-to-complete-todos)* [Application](application) Story:As a data analyst, you have been given a task to monitor and report US domestic airline flights performance. The goal is to analyze the performance of the reporting airline to improve flight reliability, thereby improving customer reliability.Below are the key report items,* Yearly airline performance report * Yearly average flight delay statistics*NOTE:* Year range is between 2005 and 2020. Components of the report items1. Yearly airline performance report For the chosen year provide, * Number of flights under different cancellation categories using bar chart. * Average flight time by reporting airline using line chart. * Percentage of diverted airport landings per reporting airline using pie chart. * Number of flights flying from each state using choropleth map. * Number of flights flying to each state from each reporting airline using treemap chart.2. Yearly average flight delay statistics For the chosen year provide, * Monthly average carrier delay by reporting airline for the given year. * Monthly average weather delay by reporting airline for the given year. * Monthly average national air system delay by reporting airline for the given year. * Monthly average security delay by reporting airline for the given year. * Monthly average late aircraft delay by reporting airline for the given year. *NOTE:* You have already created the same dashboard components in the `Flight Delay Time Statistics Dashboard` section. We will be reusing the same. Expected Layout Requirements to create the dashboard* Create a dropdown using the reference [here](https://dash.plotly.com/dash-core-components/dropdown?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)* Create two HTML divisions that can accommodate two components (in one division) side by side. One is an HTML heading and the other one is a dropdown.* Add graph components.* Callback function to compute data, create graph and return to the layout. What's new in this exercise compared to other labs?* Make sure the layout is clean without any default graphs or graph layouts. We will do this by 3 changes: 1. Add `app.config.suppress_callback_exceptions = True` right after `app = JupyterDash(__name__)`. 2. Have an empty html.Div and use the callback to Output the dcc.Graph as the Children of that Div. 3. Add a state variable in addition to the callback decorator input and output parameters. This will allow us to pass extra values without firing the callbacks. Here, we need to pass two inputs `chart type` and `year`. Input is read only after the user enters all the information.* Use the new html display style `flex` to arrange the dropdown menu with its description.* Update the app run step to avoid getting an error message before initiating the callback.*NOTE:* These steps are only for review. ReviewSearch/Look for the REVIEW comments to know how commands are used and computations are carried out.
 There are 7 review items.* REVIEW1: Clear the layout and do not display exception till callback gets executed.* REVIEW2: Dropdown creation.* REVIEW3: Observe how we add an empty division and provide an id that will be updated during callback.* REVIEW4: Holding output state till user enters all the form information. In this case, it will be chart type and year.* REVIEW5: Number of flights flying from each state using choropleth* REVIEW6: Return dcc.Graph component to the empty division* REVIEW7: This covers chart type 2 and we have completed this exercise under Flight Delay Time Statistics Dashboard section Hints to complete TODOs TODO1Reference [link](https://dash.plotly.com/dash-html-components/h1?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)* Provide the title of the dash application as `US Domestic Airline Flights Performance`.* Make the heading center aligned, set color as `#503D36`, and font size as `24`. Sample: style={'textAlign': 'left', 'color': '#000000', 'font-size': 0} TODO2Reference [link](https://dash.plotly.com/dash-core-components/dropdown?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)Create a dropdown menu and add two chart options to it.Parameters to be updated in `dcc.Dropdown`:* Set `id` as `input-type`.* Set `options` to a list containing dictionaries with key as `label` and the user provided value for labels in `value`. *1st dictionary* * label: Yearly Airline Performance Report * value: OPT1 *2nd dictionary* * label: Yearly Airline Delay Report * value: OPT2* Set placeholder to `Select a report type`.* Set width as `80%`, padding as `3px`, font size as `20px`, text-align-last as `center` inside the style parameter dictionary. Skeleton:``` dcc.Dropdown(id='....', options=[ {'label': '....', 'value': '...'}, {'label': '....', 'value': '...'} ], placeholder='....', style={....})``` TODO3Add a division with two empty divisions inside. For reference, observe how the code under `REVIEW` has been structured.Provide division ids as `plot4` and `plot5`. Display style as `flex`. Skeleton```html.Div([ html.Div([ ], id='....'), html.Div([ ], id='....') ], style={....})``` TODO4Our layout has 5 outputs so we need to create 5 output components. Review how input components are constructed to fill in the output components.It is a list with 5 output parameters with component id and property. Here, the component property will be `children` as we have created empty divisions and pass in `dcc.Graph` after computation.Component ids will be `plot1`, `plot2`, `plot3`, `plot4`, and `plot5`. Skeleton```[Output(component_id='plot1', component_property='children'), Output(....), Output(....), Output(....), Output(....)]``` TODO5Deals with creating line plots using returned dataframes from the above step using `plotly.express`.
 Link for reference is [here](https://plotly.com/python/line-charts/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)Average flight time by reporting airline* Set figure name as `line_fig`, data as `line_data`, x as `Month`, y as `AirTime`, color as `Reporting_Airline` and `title` as `Average monthly flight time (minutes) by airline`. Skeleton```carrier_fig = px.line(avg_car, x='Month', y='CarrierDelay', color='Reporting_Airline', title='Average carrier delay time (minutes) by airline')``` TODO6Deals with creating treemap plot using returned dataframes from the above step using `plotly.express`. Link for reference is [here](https://plotly.com/python/treemaps/?utm_medium=Exinfluencer&utm_source=Exinfluencer&utm_content=000026UJ&utm_term=10006555&utm_id=NA-SkillsNetwork-Channel-SkillsNetworkCoursesIBMDeveloperSkillsNetworkDV0101ENSkillsNetwork20297740-2021-01-01)Number of flights flying to each state from each reporting airline* Set figure name as `tree_fig`, data as `tree_data`, path as `['DestState', 'Reporting_Airline']`, values as `Flights`, color as `Flights`, color_continuous_scale as `'RdBu'`, and title as `'Flight count by airline to destination state'` Skeleton```tree_fig = px.treemap(data, path=['...', '...'], values='...', color='...', color_continuous_scale='...', title='...' )``` Application
###Code
# Import required libraries
import pandas as pd
import dash
from dash import dcc
from dash import html
from dash.dependencies import Input, Output, State
from jupyter_dash import JupyterDash
import plotly.graph_objects as go
import plotly.express as px
from dash import no_update
# Create a dash application
app = JupyterDash(__name__)
JupyterDash.infer_jupyter_proxy_config()
# REVIEW1: Clear the layout and do not display exception till callback gets executed
app.config.suppress_callback_exceptions = True
# Read the airline data into pandas dataframe
airline_data = pd.read_csv('https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DV0101EN-SkillsNetwork/Data%20Files/airline_data.csv',
encoding = "ISO-8859-1",
dtype={'Div1Airport': str, 'Div1TailNum': str,
'Div2Airport': str, 'Div2TailNum': str})
# List of years
year_list = [i for i in range(2005, 2021, 1)]
"""Compute graph data for creating yearly airline performance report
Function that takes airline data as input and create 5 dataframes based on the grouping condition to be used for plottling charts and grphs.
Argument:
df: Filtered dataframe
Returns:
Dataframes to create graph.
"""
def compute_data_choice_1(df):
# Cancellation Category Count
bar_data = df.groupby(['Month','CancellationCode'])['Flights'].sum().reset_index()
# Average flight time by reporting airline
line_data = df.groupby(['Month','Reporting_Airline'])['AirTime'].mean().reset_index()
# Diverted Airport Landings
div_data = df[df['DivAirportLandings'] != 0.0]
# Source state count
map_data = df.groupby(['OriginState'])['Flights'].sum().reset_index()
# Destination state count
tree_data = df.groupby(['DestState', 'Reporting_Airline'])['Flights'].sum().reset_index()
return bar_data, line_data, div_data, map_data, tree_data
"""Compute graph data for creating yearly airline delay report
This function takes in airline data and selected year as an input and performs computation for creating charts and plots.
Arguments:
df: Input airline data.
Returns:
Computed average dataframes for carrier delay, weather delay, NAS delay, security delay, and late aircraft delay.
"""
def compute_data_choice_2(df):
# Compute delay averages
avg_car = df.groupby(['Month','Reporting_Airline'])['CarrierDelay'].mean().reset_index()
avg_weather = df.groupby(['Month','Reporting_Airline'])['WeatherDelay'].mean().reset_index()
avg_NAS = df.groupby(['Month','Reporting_Airline'])['NASDelay'].mean().reset_index()
avg_sec = df.groupby(['Month','Reporting_Airline'])['SecurityDelay'].mean().reset_index()
avg_late = df.groupby(['Month','Reporting_Airline'])['LateAircraftDelay'].mean().reset_index()
return avg_car, avg_weather, avg_NAS, avg_sec, avg_late
# Application layout
app.layout = html.Div(children=[
    # TODO1: Add title to the dashboard
    html.H1('US Domestic Airline Flights Performance',
            style={'textAlign': 'center', 'color': '#503D36', 'font-size': 24}),
# REVIEW2: Dropdown creation
# Create an outer division
html.Div([
        # Add a division
html.Div([
            # Create a division for adding dropdown helper text for report type
html.Div(
[
html.H2('Report Type:', style={'margin-right': '2em'}),
]
),
            # TODO2: Add a dropdown
            dcc.Dropdown(id='input-type',
                         options=[
                             {'label': 'Yearly Airline Performance Report', 'value': 'OPT1'},
                             {'label': 'Yearly Airline Delay Report', 'value': 'OPT2'}
                         ],
                         placeholder='Select a report type',
                         style={'width': '80%', 'padding': '3px', 'font-size': '20px',
                                'text-align-last': 'center'}),
# Place them next to each other using the division style
], style={'display':'flex'}),
# Add next division
html.Div([
            # Create a division for adding dropdown helper text for choosing year
html.Div(
[
html.H2('Choose Year:', style={'margin-right': '2em'})
]
),
dcc.Dropdown(id='input-year',
# Update dropdown values using list comphrehension
options=[{'label': i, 'value': i} for i in year_list],
placeholder="Select a year",
style={'width':'80%', 'padding':'3px', 'font-size': '20px', 'text-align-last' : 'center'}),
# Place them next to each other using the division style
], style={'display': 'flex'}),
]),
# Add Computed graphs
# REVIEW3: Observe how we add an empty division and providing an id that will be updated during callback
html.Div([ ], id='plot1'),
html.Div([
html.Div([ ], id='plot2'),
html.Div([ ], id='plot3')
], style={'display': 'flex'}),
    # TODO3: Add a division with two empty divisions inside. See above division for example.
    html.Div([
        html.Div([ ], id='plot4'),
        html.Div([ ], id='plot5')
    ], style={'display': 'flex'})
])
# Callback function definition
# TODO4: Add 5 ouput components
@app.callback([Output(component_id='plot1', component_property='children'),
               Output(component_id='plot2', component_property='children'),
               Output(component_id='plot3', component_property='children'),
               Output(component_id='plot4', component_property='children'),
               Output(component_id='plot5', component_property='children')],
[Input(component_id='input-type', component_property='value'),
Input(component_id='input-year', component_property='value')],
# REVIEW4: Holding output state till user enters all the form information. In this case, it will be chart type and year
[State("plot1", 'children'), State("plot2", "children"),
State("plot3", "children"), State("plot4", "children"),
State("plot5", "children")
])
# Add computation to callback function and return graph
def get_graph(chart, year, children1, children2, c3, c4, c5):
# Select data
df = airline_data[airline_data['Year']==int(year)]
if chart == 'OPT1':
# Compute required information for creating graph from the data
bar_data, line_data, div_data, map_data, tree_data = compute_data_choice_1(df)
# Number of flights under different cancellation categories
bar_fig = px.bar(bar_data, x='Month', y='Flights', color='CancellationCode', title='Monthly Flight Cancellation')
# TODO5: Average flight time by reporting airline
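        # A possible line chart for TODO5 (sketch): average monthly air time per reporting airline,
        # built from the line_data frame computed above; the title text is an assumption.
        line_fig = px.line(line_data, x='Month', y='AirTime', color='Reporting_Airline',
                           title='Average monthly flight time (minutes) by airline')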
# Percentage of diverted airport landings per reporting airline
pie_fig = px.pie(div_data, values='Flights', names='Reporting_Airline', title='% of flights by reporting airline')
# REVIEW5: Number of flights flying from each state using choropleth
map_fig = px.choropleth(map_data, # Input data
locations='OriginState',
color='Flights',
hover_data=['OriginState', 'Flights'],
locationmode = 'USA-states', # Set to plot as US States
color_continuous_scale='GnBu',
range_color=[0, map_data['Flights'].max()])
map_fig.update_layout(
title_text = 'Number of flights from origin state',
geo_scope='usa') # Plot only the USA instead of globe
# TODO6: Number of flights flying to each state from each reporting airline
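        # A possible tree map for TODO6 (sketch): flights to each destination state, split by reporting
        # airline; the color scale and title are assumptions.
        tree_fig = px.treemap(tree_data, path=['DestState', 'Reporting_Airline'],
                              values='Flights', color='Flights',
                              color_continuous_scale='RdBu',
                              title='Flight count by airline to destination state')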
# REVIEW6: Return dcc.Graph component to the empty division
return [dcc.Graph(figure=tree_fig),
dcc.Graph(figure=pie_fig),
dcc.Graph(figure=map_fig),
dcc.Graph(figure=bar_fig),
dcc.Graph(figure=line_fig)
]
else:
# REVIEW7: This covers chart type 2 and we have completed this exercise under Flight Delay Time Statistics Dashboard section
# Compute required information for creating graph from the data
avg_car, avg_weather, avg_NAS, avg_sec, avg_late = compute_data_choice_2(df)
# Create graph
        carrier_fig = px.line(avg_car, x='Month', y='CarrierDelay', color='Reporting_Airline', title='Average carrier delay time (minutes) by airline')
weather_fig = px.line(avg_weather, x='Month', y='WeatherDelay', color='Reporting_Airline', title='Average weather delay time (minutes) by airline')
nas_fig = px.line(avg_NAS, x='Month', y='NASDelay', color='Reporting_Airline', title='Average NAS delay time (minutes) by airline')
sec_fig = px.line(avg_sec, x='Month', y='SecurityDelay', color='Reporting_Airline', title='Average security delay time (minutes) by airline')
late_fig = px.line(avg_late, x='Month', y='LateAircraftDelay', color='Reporting_Airline', title='Average late aircraft delay time (minutes) by airline')
return[dcc.Graph(figure=carrier_fig),
dcc.Graph(figure=weather_fig),
dcc.Graph(figure=nas_fig),
dcc.Graph(figure=sec_fig),
dcc.Graph(figure=late_fig)]
# Run the app
if __name__ == '__main__':
# REVIEW8: Adding dev_tools_ui=False, dev_tools_props_check=False can prevent error appearing before calling callback function
app.run_server(mode="inline", host="localhost", debug=False, dev_tools_ui=False, dev_tools_props_check=False)
###Output
_____no_output_____ |
01_GETDB.ipynb | ###Markdown
GetDB - Database GETDB> This notebook explains how to use the `RadiometryDB` class, which is responsible for opening and manipulating a radiometry database
###Code
from WaterClass2.Radiometry import RadiometryDB
from nbdev.showdoc import show_doc   # assumed import for the show_doc helper used below
import plotly.express as px          # assumed import; px is used for the scatter plot further down
show_doc(RadiometryDB)
db = RadiometryDB('D:/GET-RadiometryDB/')
fig = db.summary(plot=True)
show_fig(fig)
###Output
_____no_output_____
###Markdown
Load Mean RadiometriesThe load_summary function loads the mean radiometries. The results will be stored in the `.rdmtries` attribute. If the summary is not up to date, one should run the `db.create_summary_radiometries` function to reprocess it.
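For example, a refresh could look like this (a sketch -- `create_summary_radiometries` is only referenced above, and any arguments it may take are not shown here):
```Python
db.create_summary_radiometries()  # reprocess the summary radiometries
db.load_summary()                 # reload them into the .rdmtries attribute
```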
###Code
db.load_summary()
rrs = db.rdmtries['Rrs']
rrs.tail()
fig = px.scatter(rrs, x='665', y='SPM', color='Area')
show_fig(fig)
###Output
_____no_output_____
###Markdown
Plotting radiometries from an Area
###Code
area = rrs[rrs['Area']=='Maroni']
fig = plot_reflectances2(area, all_wls, hover_vars=['Area', 'SPM'])
show_fig(fig)
import nbdev
nbdev
###Output
_____no_output_____ |
python-tuts/0-beginner/2-Variables-Memory/08 - Shared References and Mutability.ipynb | ###Markdown
Shared References and Mutability The following sets up a shared reference between the variables my_var_1 and my_var_2
###Code
my_var_1 = 'hello'
my_var_2 = my_var_1
print(my_var_1)
print(my_var_2)
print(hex(id(my_var_1)))
print(hex(id(my_var_2)))
my_var_2 = my_var_2 + ' world!'
print(hex(id(my_var_1)))
print(hex(id(my_var_2)))
###Output
0x24c9144ca08
0x24c9144fab0
###Markdown
Be careful if the variable type is mutable!Here we create a list (*my_list_1*) and create a variable (*my_list_2*) referencing the same list object:
###Code
my_list_1 = [1, 2, 3]
my_list_2 = my_list_1
print(my_list_1)
print(my_list_2)
###Output
[1, 2, 3]
[1, 2, 3]
###Markdown
As we can see they have the same memory address (shared reference):
###Code
print(hex(id(my_list_1)))
print(hex(id(my_list_2)))
###Output
0x24c9144fc48
0x24c9144fc48
###Markdown
Now we modify the list referenced by *my_list_2*:
###Code
my_list_2.append(4)
###Output
_____no_output_____
###Markdown
*my_list_2* has been modified:
###Code
print(my_list_2)
###Output
[1, 2, 3, 4]
###Markdown
And since my_list_1 references the same list object, it has also changed:
###Code
print(my_list_1)
###Output
[1, 2, 3, 4]
###Markdown
As you can see, both variables still share the same reference:
###Code
print(hex(id(my_list_1)))
print(hex(id(my_list_2)))
###Output
0x24c9144fc48
0x24c9144fc48
###Markdown
Behind the scenes with Python's memory manager---- Recall from a few lectures back:
###Code
a = 10
b = 10
print(hex(id(a)))
print(hex(id(b)))
###Output
0x7559eaf0
0x7559eaf0
###Markdown
Same memory address!!This is safe for Python to do because integer objects are **immutable**. So, even though *a* and *b* initially shared the same memory address, we can never modify *a*'s value by "modifying" *b*'s value. The only way to change *b*'s value is to change its reference, which will never affect *a*.
###Code
b = 15
print(hex(id(a)))
print(hex(id(b)))
###Output
0x7559eaf0
0x7559eb90
###Markdown
However, for mutable objects, Python's memory manager does not do this, since that would **not** be safe.
###Code
my_list_1 = [1, 2, 3]
my_list_2 = [1, 2 , 3]
###Output
_____no_output_____
###Markdown
As you can see, although the two variables were assigned identical "contents", the memory addresses are not the same:
###Code
print(hex(id(my_list_1)))
print(hex(id(my_list_2)))
###Output
0x24c9146c5c8
0x24c913c6848
|
week2/Predicting house prices.ipynb | ###Markdown
Fire up graphlab create
###Code
import graphlab
###Output
_____no_output_____
###Markdown
Load some house sales dataDataset is from house sales in King County, the region where the city of Seattle, WA is located.
###Code
sales = graphlab.SFrame('home_data.gl/')
sales
###Output
_____no_output_____
###Markdown
Exploring the data for housing sales The house price is correlated with the number of square feet of living space.
###Code
graphlab.canvas.set_target('ipynb')
sales.show(view="Scatter Plot", x="sqft_living", y="price")
###Output
_____no_output_____
###Markdown
Create a simple regression model of sqft_living to price Split data into training and testing. We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
###Code
train_data,test_data = sales.random_split(.8,seed=0)
###Output
_____no_output_____
###Markdown
Build the regression model using only sqft_living as a feature
###Code
sqft_model = graphlab.linear_regression.create(train_data, target='price', features=['sqft_living'],validation_set=None)
###Output
_____no_output_____
###Markdown
Evaluate the simple model
###Code
print test_data['price'].mean()
print sqft_model.evaluate(test_data)
###Output
{'max_error': 4143550.8825285938, 'rmse': 255191.02870527358}
###Markdown
RMSE of about \$255,191! Let's show what our predictions look like. Matplotlib is a Python plotting library that is useful for visualizing our predictions. You can install it with: 'pip install matplotlib'
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(test_data['sqft_living'],test_data['price'],'.',
test_data['sqft_living'],sqft_model.predict(test_data),'-')
###Output
_____no_output_____
###Markdown
Above: blue dots are original data, green line is the prediction from the simple regression.Below: we can view the learned regression coefficients.
###Code
sqft_model.get('coefficients')
###Output
_____no_output_____
###Markdown
Explore other features in the dataTo build a more elaborate model, we will explore using more features.
###Code
my_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
sales[my_features].show()
sales.show(view='BoxWhisker Plot', x='zipcode', y='price')
###Output
_____no_output_____
###Markdown
Pull the bar at the bottom to view more of the data. 98039 is the most expensive zip code. Build a regression model with more features
###Code
my_features_model = graphlab.linear_regression.create(train_data,target='price',features=my_features,validation_set=None)
print my_features
###Output
['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']
###Markdown
Comparing the results of the simple model with adding more features
###Code
print sqft_model.evaluate(test_data)
print my_features_model.evaluate(test_data)
###Output
{'max_error': 4143550.8825285938, 'rmse': 255191.02870527358}
{'max_error': 3486584.509381705, 'rmse': 179542.4333126903}
###Markdown
The RMSE goes down from \$255,191 to \$179,542 with more features. Apply learned models to predict prices of 3 houses The first house we will use is considered an "average" house in Seattle.
###Code
house1 = sales[sales['id']=='5309101200']
house1
###Output
_____no_output_____
###Markdown
###Code
print house1['price']
print sqft_model.predict(house1)
print my_features_model.predict(house1)
###Output
[721918.9333272863]
###Markdown
In this case, the model with more features provides a worse prediction than the simpler model with only 1 feature. However, on average, the model with more features is better. Prediction for a second, fancier houseWe will now examine the predictions for a fancier house.
###Code
house2 = sales[sales['id']=='1925069082']
house2
###Output
_____no_output_____
###Markdown
###Code
print sqft_model.predict(house2)
print my_features_model.predict(house2)
###Output
[1446472.4690774973]
###Markdown
In this case, the model with more features provides a better prediction. This behavior is expected here, because this house is more differentiated by features that go beyond its square feet of living space, especially the fact that it's a waterfront house. Last house, super fancyOur last house is a very large one owned by a famous Seattleite.
###Code
bill_gates = {'bedrooms':[8],
'bathrooms':[25],
'sqft_living':[50000],
'sqft_lot':[225000],
'floors':[4],
'zipcode':['98039'],
'condition':[10],
'grade':[10],
'waterfront':[1],
'view':[4],
'sqft_above':[37500],
'sqft_basement':[12500],
'yr_built':[1994],
'yr_renovated':[2010],
'lat':[47.627606],
'long':[-122.242054],
'sqft_living15':[5000],
'sqft_lot15':[40000]}
###Output
_____no_output_____
###Markdown
###Code
print my_features_model.predict(graphlab.SFrame(bill_gates))
###Output
[13749825.525719076]
###Markdown
The model predicts a price of over $13M for this house! But we expect the house to cost much more. (There are very few samples in the dataset of houses that are this fancy, so we don't expect the model to capture a perfect prediction here.) My test
###Code
# first, the average of zipcode
aver_zipcode = sales[sales['zipcode'] == '98039']
print aver_zipcode['price'].mean()
sales
sq_ft2_4 = sales[2000 <= sales['sqft_living']]
sq_ft2_4 = sq_ft2_4[sq_ft2_4['sqft_living'] <= 4000]
print sq_ft2_4.shape[0]
print sales.shape[0]
print float(sq_ft2_4.shape[0]) / float(sales.shape[0])
advanced_feature = sales[0].keys()
print advanced_feature
advanced_model = graphlab.linear_regression.create(train_data, target='price', features=advanced_feature, validation_set=None)
print advanced_model.evaluate(test_data)
###Output
{'max_error': 4980250.086272331, 'rmse': 207072.16900266707}
|
dd_1/Part 3/Section 10 - Coding Exercises/Exercise 2 - Solution.ipynb | ###Markdown
Exercise 2 - Solution Suppose you have a list of all possible eye colors:
###Code
eye_colors = ("amber", "blue", "brown", "gray", "green", "hazel", "red", "violet")
###Output
_____no_output_____
###Markdown
Some other collection (say recovered from a database, or an external API) contains a list of `Person` objects that have an eye color property.Your goal is to create a dictionary that contains the number of people that have each eye color specified in `eye_colors`. The wrinkle here is that even if no one matches some eye color, say `amber`, your dictionary should still contain an entry `"amber": 0`. Here is some sample data:
###Code
class Person:
def __init__(self, eye_color):
self.eye_color = eye_color
from random import seed, choices
seed(0)
persons = [Person(color) for color in choices(eye_colors[2:], k = 50)]
###Output
_____no_output_____
###Markdown
As you can see we built up a list of `Person` objects, none of which should have `amber` or `blue` eye colors. Write a function that returns a dictionary with the correct counts for each eye color listed in `eye_colors`. We're going to use the `Counter` class for this problem.However, simply counting the eye colors in the `persons` list is not going to be quite enough:
###Code
from collections import Counter
counts = Counter(p.eye_color for p in persons)
counts
###Output
_____no_output_____
###Markdown
As you can see we do not have entries for `amber` and `blue` for example. We could approach this in one of two ways:1. add zero count key/value pairs after the counting has occurred2. or, pre-initialize the `Counter` object with all the possible eye colors set to a count of `0`. Let's try the first approach:
###Code
counts = Counter(p.eye_color for p in persons)
result = {color: counts.get(color, 0) for color in eye_colors}
result
###Output
_____no_output_____
###Markdown
And now the second approach, where we initialize our Counter object with zero counts for each eye color first, and **then** do the counting:
###Code
counts = Counter({color: 0 for color in eye_colors})
counts
###Output
_____no_output_____
###Markdown
As you can see we have each color with a count of zero - now we simply update the counter based on the results in the `persons` list:
###Code
counts.update(p.eye_color for p in persons)
counts
###Output
_____no_output_____
###Markdown
Finally, let's package up one of those solutions into a function:
###Code
def count_eye_colors(persons, possible_eye_colors):
counts = Counter({color: 0 for color in possible_eye_colors})
counts.update(p.eye_color for p in persons)
return counts
###Output
_____no_output_____
###Markdown
which we can then call like this:
###Code
count_eye_colors(persons, eye_colors)
###Output
_____no_output_____ |
3-ImageClassification/Convolution.ipynb | ###Markdown
Convolution
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Convolution in 1 dimension A convolution of a one-dimensional array with a kernel consists of taking the kernel, sliding it along the array, multiplying it by the elements of the array that overlap with the kernel at that location, and summing this product.
###Code
array = np.array([1, 0, 1, 0, 1, 0, 1, 0, 1, 0])
kernel = np.array([1, -1, 0])
conv = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])
# Output array
for ii in range(8):
conv[ii] = (kernel * array[ii:ii+3]).sum()
# Print conv
print(conv)
###Output
[ 1 -1 1 -1 1 -1 1 -1 0 0]
###Markdown
Convolution over an image
###Code
"""
Este código no funcionará hasta que carguemos una imagen en B/N en 'im'
"""
kernel = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
result = np.zeros(im.shape)
# Output array
for ii in range(im.shape[0] - 3):
for jj in range(im.shape[1] - 3):
result[ii, jj] = (im[ii:ii+3, jj:jj+3] * kernel).sum()
# Print result
print(result)
###Output
_____no_output_____
###Markdown
Kernels Remember that a kernel is like a window that slides across the image, and its product highlights features of the image depending on its "configuration".
As a rule of thumb, within that window the positive '1's represent the shape of the feature we want to find or highlight.
###Code
# Kernel to highlight vertical lines in an image
kernel_v = np.array([[-1, 1, -1],
                     [-1, 1, -1],
                     [-1, 1, -1]])
# Kernel to highlight horizontal lines
kernel_h = np.array([[-1, -1, -1],
                     [1, 1, 1],
                     [-1, -1, -1]])
# Kernel that highlights a bright point surrounded by dark pixels
kernel = np.array([[-1, -1, -1],
                   [-1, 1, -1],
                   [-1, -1, -1]])
# Kernel that finds a dark point surrounded by bright pixels
kernel = np.array([[1, 1, 1],
                   [1, -1, 1],
                   [1, 1, 1]])
###Output
_____no_output_____
###Markdown
Convolutional neural network for image classification
###Code
"""A este código le falta importar imágenes de MNIST u otro dataset"""
# Importamos los componentes necesarios de Keras
from keras.models import Sequential
from keras.layers import Dense, Conv2D, Flatten
# Inicializamos nuestro modelo secuencial
model = Sequential()
# Agregamos una capa convolucional
# 10 neuronas con un kernel de 3x3
model.add(Conv2D(10, kernel_size=3), activation='relu', input_shape=(img_rows, img_cols, 1))
# Aplanamos la salida de la capa convolucional.
# Este paso es importante porque nos permitirá traducir entre el procesamiento de la imagen
# y la parte de clasificación de la red.
model.add(Flatten())
# Agregamos una capa de salida para 3 categorías
model.add(Dense(3, activation='softmax'))
# Compilamos el modelo
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# Ajustamos el modelo al set de entrenamiento
model.fit(train_data, train_labels,
validation_split=0.2,
epochs=3, batch_size=10)
# Finalmente evaluamos
model.evaluate(test_data, test_labels, batch_size=10)
###Output
_____no_output_____
###Markdown
Adding padding
###Code
# Coming soon
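# A minimal sketch of what padding does (an assumption -- this section was left as "coming soon"):
# padding='same' pads the input so the convolution output keeps the same spatial size,
# while the default padding='valid' shrinks it. A 28x28 grayscale input is assumed here.
from keras.models import Sequential
from keras.layers import Conv2D
padded_model = Sequential()
padded_model.add(Conv2D(10, kernel_size=3, activation='relu', padding='same', input_shape=(28, 28, 1)))
padded_model.summary()  # the output feature maps stay 28x28 thanks to the padding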
###Output
_____no_output_____ |
session-2/Session_2_first.ipynb | ###Markdown
###Code
!pip install -U -q PyDrive
from pydrive.auth import GoogleAuth
from pydrive.drive import GoogleDrive
from google.colab import auth
from oauth2client.client import GoogleCredentials
import zipfile, os
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.mobilenet import preprocess_input
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.nn import relu,softmax
from tensorflow.keras.optimizers import SGD
# data = > https://drive.google.com/file/d/1GEKK8oRNntFyR0ZxPdcvPut-15b7CvrW/view?usp=sharing
# small Data => https://drive.google.com/file/d/1OHGNsTfvVZvWYQ7B29SYcxrLGVdeCoQb/view?usp=sharing
auth.authenticate_user()
gauth = GoogleAuth()
gauth.credentials = GoogleCredentials.get_application_default()
drive = GoogleDrive(gauth)
if not os.path.exists('MLIntroSmallData'):
os.makedirs('MLIntroSmallData')
# Download Zip
myzip = drive.CreateFile({'id': '1OHGNsTfvVZvWYQ7B29SYcxrLGVdeCoQb'})
myzip.GetContentFile('smallData.zip')
# 3. Unzip
zip_ref = zipfile.ZipFile('smallData.zip', 'r')
zip_ref.extractall('MLIntroSmallData/smallData')
zip_ref.close()
if os.path.exists('MLIntroSmallData'):
print(os.listdir("MLIntroSmallData/smallData/smallData"))
#default sizes
Image_Width = 100
Image_Height = 100
Image_Depth = 3
targetSize = (Image_Width,Image_Height)
targetSize_withdepth = (Image_Width,Image_Height,Image_Depth)
CLASSES_COUNT = 2;
epochs = 500
#define the sub folders for both training and test
training = os.path.join("MLIntroSmallData/smallData/smallData",'train')
#now the easiest way to load data is to use the ImageDataGenerator
train_data_generator = ImageDataGenerator(preprocessing_function=preprocess_input)
train_generator = train_data_generator.flow_from_directory(training,
batch_size=20,
target_size=targetSize,
#seed=12
)
model = Sequential()
model.add(Flatten(input_shape=targetSize_withdepth))
model.add(Dense(1024,activation=relu))
model.add(Dense(512,activation=relu))
model.add(Dense(CLASSES_COUNT,activation=softmax))
model.compile(optimizer=SGD(),loss='categorical_crossentropy',metrics=['accuracy'])
model.summary()
model.fit_generator(generator=train_generator,epochs=500)
from sklearn.metrics import confusion_matrix, classification_report
def test(generator, model):
predictions = model.predict_generator(generator)
row_index = predictions.argmax(axis=1)
target_names = generator.class_indices.keys()
print(target_names)
print(confusion_matrix(generator.classes, row_index))
print('Classification Report')
print(classification_report(generator.classes, row_index, target_names=target_names))
test_data_generator = ImageDataGenerator(preprocessing_function=preprocess_input)
test_generator = test_data_generator.flow_from_directory("MLIntroSmallData/smallData/smallData/train",
target_size=(100,100),
shuffle=False)
test(generator=test_generator, model=model)
test_data_generator = ImageDataGenerator(preprocessing_function=preprocess_input)
test_generator = test_data_generator.flow_from_directory("MLIntroSmallData/smallData/smallData/test",
target_size=(100,100),
shuffle=False)
test(generator=test_generator, model=model)
###Output
dict_keys(['bar_chart', 'pie_chart'])
[[15 4]
[ 4 14]]
Classification Report
precision recall f1-score support
bar_chart 0.79 0.79 0.79 19
pie_chart 0.78 0.78 0.78 18
accuracy 0.78 37
macro avg 0.78 0.78 0.78 37
weighted avg 0.78 0.78 0.78 37
|
docs/labs/lab01/cs109b_lab01_intro.ipynb | ###Markdown
CS109B Data Science 2: Advanced Topics in Data Science Lab 01 - Coding Environment Setup **Harvard University****Spring 2022****Instructors:** Pavlos Protopapas and Mark Glickman**Lab Instructor:** Eleni Kaxiras---
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Learning GoalsThe purpose of this lab is to help you set up the coding environment for CS109B 1. Getting class material Option 1A: Download directly from Ed * Use the >> to download. Option 1B: Cloning the class repo and then copying the contents in a different directory so you can make changes.You may access the code used in class by cloning the class repo: [https://github.com/Harvard-IACS/2022-CS109B](https://github.com/Harvard-IACS/2022-CS109B)* Open the Terminal in your computer and go to the Directory where you want to clone the repo. Then run `git clone https://github.com/Harvard-IACS/2022-CS109B.git`* If you have already cloned the repo, OR if new material is added (happens every day), go inside the '/2022-CS109B/' directory and run `git pull`* **Caution:** If you change the notebooks and then run `git pull` your changes will be overwritten. So create a `playground` folder and copy the folder with the notebook with which you want to work. 2. Running code: Option 2A: Using your local environment Use Virtual Environments: we cannot stress this enough!Isolating your projects inside specific environments helps you manage dependencies and therefore keep your sanity. You can recover from mess-ups by simply deleting an environment. Sometimes certain installation of libraries conflict with one another. The two most popular tools for setting up environments are:- `conda` (a package and environment manager)- `pip` (a Python package manager) with `virtualenv` (a tool for creating environments)We recommend using `conda` package installation and environments. `conda` installs packages from the Anaconda Repository and Anaconda Cloud, whereas `pip` installs packages from PyPI. Even if you are using `conda` as your primary package installer and are inside a `conda` environment, you can still use `pip install` for those rare packages that are not included in the `conda` ecosystem. See here for more details on how to manage [Conda Environments](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html). Use the cs109b.yml file to create an environment:``` $ conda env create -f cs109b.yml$ conda activate cs109b``` We have included the packages that you will need in the `cs109b.yml` file. Option 2B: Using Cloud Resources Using FAS OnDemand (supported by CS109b)FAS provides a platform, accessible via the `FAS OnDemand` menu link in **Canvas**. Most of the libraries such as keras, tensorflow, pandas, etc., are pre-installed. If a library is missing you may install it via the Terminal.**NOTE**: The AWS platform is funded by FAS for the purposes of the class. You are not allowed to use it for purposes not related to this course. Make sure you stop your instance as soon as you do not need it.Information on how to use the platform is displayed when you click the link. For more see [Fas OnDemand Guide](https://canvas.harvard.edu/courses/84598/pages/fas-ondemand-guide). Using Google Colab (on your own)Google's Colab platform [https://colab.research.google.com/](https://colab.research.google.com/) offers a GPU enviromnent to test your ideas, it's fast, free, with the only caveat that your files persist only for 12 hours (last time we checked). The solution is to keep your files in a repository and just clone it each time you use Colab. Using AWS in the Cloud (on your own)For those of you who want to have your own machines in the Cloud to run whatever you want, Amazon Web Services is a (paid) solution. 
For more see: [https://docs.aws.amazon.com/polly/latest/dg/setting-up.html](https://docs.aws.amazon.com/polly/latest/dg/setting-up.html)Remember, AWS is a paid service, so if you let your machine run for days you will get charged! 3. Ensuring everything is installed correctly Some of the packages we will need for this class - **Clustering**: - Sklearn - [https://scikit-learn.org/stable/](https://scikit-learn.org/stable/) - scipy - [https://www.scipy.org](https://www.scipy.org) - gap_statistic (by Miles Granger) - [https://anaconda.org/milesgranger/gap-statistic/notebook](https://anaconda.org/milesgranger/gap-statistic/notebook)- **Bayes**: - pymc3 - [https://docs.pymc.io](https://docs.pymc.io) - **Neural Networks**: - keras - [https://www.tensorflow.org/guide/keras](https://www.tensorflow.org/guide/keras) Exercise 1: Run the following cells to make sure these packages load correctly in our environment.
###Code
from sklearn import datasets
iris = datasets.load_iris()
digits = datasets.load_digits()
digits.target # you should see [0, 1, 2, ..., 8, 9, 8]
from scipy import misc
import matplotlib.pyplot as plt
face = misc.face()
face.shape, type(face)
face[1:3, 1:3]
plt.imshow(face)
plt.show() # you should see a racoon
import pymc3 as pm
print('Running PyMC3 v{}'.format(pm.__version__)) # you should see 'Running on PyMC3 v3.8'
# making sure you have gap_statistic
from gap_statistic import OptimalK
###Output
_____no_output_____
###Markdown
4. Plotting `matplotlib` and `seaborn`- `matplotlib` - [seaborn: statistical data visualization](https://seaborn.pydata.org/). `seaborn` works great with `pandas`. It can also be customized easily. Here is the basic `seaborn` tutorial: [Seaborn tutorial](https://seaborn.pydata.org/tutorial.html). Plotting a function of 2 variables using contoursIn optimization, our objective function will often be a function of two or more variables. While it's hard to visualize a function of more than 3 variables, it's very informative to plot a function of 2 variables. To do this we use contours. First we define the $x$ and $y$ variables and then construct their pairs using `meshgrid`. Plot the function $f(x,y) = \sqrt{x^2+y^2}$
###Code
import seaborn as sn
x = np.linspace(-0.1, 0.1, 50)
y = np.linspace(-0.1, 0.1, 100)
xx, yy = np.meshgrid(x, y)
z = np.sqrt(xx**2+yy**2)
plt.contour(x,y,z);
###Output
_____no_output_____
###Markdown
5. We will be using `keras` via `tensorflow` **[TensorFlow](https://www.tensorflow.org)** is a framework for representing complicated ML algorithms and executing them on any platform, from a phone to a distributed system using GPUs. Developed by Google Brain, TensorFlow is used very broadly today. **[Keras](https://keras.io/)** is a high-level API, created by François Chollet, and used for fast prototyping, advanced research, and production. `tf.keras` is now maintained by TensorFlow. Exercise 2: Run the following cells to make sure you have the basic libraries to do deep learning
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
# TensorFlow and tf.keras
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras import models
from tensorflow.keras.layers import Dense
from tensorflow.keras.models import Sequential
from tensorflow.keras.regularizers import l2
tf.keras.backend.clear_session() # For easy reset of notebook state.
# You should see a >=2.3.0 here!
# If you do not, upgrade your env to tensorflow==2.3.0
print(tf.__version__)
print(tf.keras.__version__)
# List the physical devices TensorFlow can see (any NVIDIA GPUs would show up here).
hasGPU = tf.config.list_physical_devices()
print(f'My computer has the following devices: {hasGPU}')
###Output
My computer has the following devices: [PhysicalDevice(name='/physical_device:CPU:0', device_type='CPU'), PhysicalDevice(name='/physical_device:XLA_CPU:0', device_type='XLA_CPU')]
|
assignment_3/Assignment_3_Tunably_Rugged_Landscapes_[solutions].ipynb | ###Markdown
Assignment 3: Tunably Rugged LandscapesIn our assignment last week we got our first hillclimber up and running, while in class this week we started to talk about fitness landscapes to begin thinking about search spaces, and population-based evolutionary algorithms to start complexifying how we traverse these search spaces. In this week's assignment, we'll start to put these two things together and begin toying around with the pandora's box of algorithmic experimentation.In particular, we'll explore the idea of generating parameterized fitness functions to begin to explore the relationship between the type of problem we're trying to solve, and what features our evolutionary algorithm should have to solve it. *Note*: I know this looks like a lot of coding! While we are building valuable infrastructure here, many of the solutions here are modifications of prior work (from earlier in this assignment or the last one), and can largely be copy-and-pasted here, or written once as a function to call again later. Despite this, it's still always a good idea to start in on assignments early (even if just reading through all the questions to estimate how long it might take you to complete)
###Code
# imports
import numpy as np
import copy
import matplotlib.pyplot as plt
plt.style.use('seaborn')
import scikits.bootstrap as bootstrap
import warnings
warnings.filterwarnings('ignore') # Danger, Will Robinson! (not a scalable hack, and may suppress other helpful warnings beyond those for ill-conditioned bootstrapped CI distributions)
import scipy.stats # for finding statistical significance
###Output
_____no_output_____
###Markdown
N-K LandscapeIn general, you'll be more likely to have a problem provided to you, rather than have to design a fitness function by hand. So in this week's assignment, I'll provide the full fitness-landscape-generating function for you. The below function implements Kauffman's N-K Landscape. While it's not entirely necessary for you to understand every implementation detail below, the N-K landscape idea is chosen because it's a particularly interesting toy problem -- and more reading on it can be found via many online resources (e.g. Kauffman and Weinberger's *The NK model of rugged fitness landscapes and its application to maturation of the immune response* -- included in the assignment zip folder as it is firewalled online).The main things to know about the NK model are that: It is a model of a tunably rugged fitness landscape, which means we have parameters that can affect the shape and ruggedness of the fitness landscape produced by this model. While there are many variations, here we follow the original (simplest) model that includes just two parameters: **N** defines the length of the binary bit string genome, while **K** defines the ruggedness of the landscape (in particular, how the fitness of each allele depends on other loci (nearby genes) in the genotype). *Note*: This is fully implemented and no action is needed from you, besides running the code block.
###Code
class Landscape:
""" N-K Fitness Landscape
"""
def __init__(self, n=10, k=2):
self.n = n # genome length
self.k = k # number of other loci interacting with each gene
self.gene_contribution_weight_matrix = np.random.rand(n,2**(k+1)) # for each gene, a lookup table for its fitness contribution, which depends on this gene's setting and also the setting of its interacting neighboring loci
# find values of interacting loci
def get_contributing_gene_values(self, genome, gene_num):
contributing_gene_values = ""
        for i in range(self.k+1): # for each interacting locus (including the location of this gene itself)
contributing_gene_values += str(genome[(gene_num+i)%self.n]) # for simplicity we'll define the interacting genes as the ones immediately following the gene in question. Get the values at each of these loci
return contributing_gene_values # return the string containing the values of all loci which affect the fitness of this gene
    # find the value of a particular genome
def get_fitness(self, genome):
gene_values = np.zeros(self.n) # the value of each gene in the genome
for gene_num in range(len(genome)): # for each gene
contributing_gene_values = self.get_contributing_gene_values(genome, gene_num) # get the values of the loci which affect it
gene_values[gene_num] = self.gene_contribution_weight_matrix[gene_num,int(contributing_gene_values,2)] # use the values of the interacting loci (converted from a binary string to base-10 index) to find the lookup table entry for this combination of genome settings
        return np.mean(gene_values) # define the fitness of the full genome as the average of the contribution of its genes (and return it for use in the evolutionary algorithm)
###Output
_____no_output_____
###Markdown
HillclimberBased on the hillclimber function from your last assignment (and informed by the posted solution, if you wish), copy and slightly modify the hillclimber to use this fitness function. For the sake of running multiple trials, also please modify the record keeping to return the solutions after the completion of the algorithm rather than printing them out during evolution. *Hint:* In Python, functions can be treated as objects (e.g. passed as an argument to another function), as illustrated in the short example below.
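For instance (a tiny illustration only, not part of the assignment code):
```Python
def square(x):
    return x * x

def apply_twice(f, x):   # 'f' is itself a function object
    return f(f(x))

apply_twice(square, 3)   # -> 81
```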
###Code
def hillclimber(total_generations = 100, bit_string_length = 10, num_elements_to_mutate= 1, fitness_function=None):
""" Basic hillclimber, copied from last assignment
parameters:
total_generations: (int) number of total iterations for stopping condition
        bit_string_length: (int) length of bit string genome to be evolved
        num_elements_to_mutate: (int) number of alleles to modify during mutation
        fitness_function: (callable function) that returns the fitness of a genome
                          given the genome as an input parameter (e.g. as defined in Landscape)
    returns:
        solution: (numpy array) best solution found
        solution_fitness: (float) fitness of returned solution
        solution_generation: (int) generation at which most fit solution was first discovered
"""
    # the initialization procedure
parent = np.random.randint(2, size = bit_string_length) #some initial candidate solution
parent_fitness = fitness_function(parent) # assign fitness based on fitness function given as argument
# initialize record keeping
solution = None # best genome so far
solution_fitness = 0 # fitness of best genome so far
solution_generation = 0 # time (generations) when solution was found
for generation_num in range(total_generations): # repeat
# the modification procedure
        child = copy.deepcopy(parent) # inheritance from parent to child solution
        element_to_mutate = np.random.randint(bit_string_length) # randomly select the location in the child bit string to mutate
        child[element_to_mutate] = (child[element_to_mutate] + 1) % 2 # flip the bit at the chosen location
        # the assessment procedure
child_fitness = fitness_function(child) # assign fitness to child
# selection procedure
if child_fitness > parent_fitness: # if child is better (positive mutation)
parent = child # the child will become the parent in the next generation
parent_fitness = child_fitness # we update fitness values for new parent here for ease of record keeping as well
# record keeping
if parent_fitness > solution_fitness: # if the new parent is the best found so far
solution = parent # update best solution records
solution_fitness = parent_fitness
solution_generation = generation_num
return solution, solution_fitness, solution_generation
###Output
_____no_output_____
###Markdown
Q1: Landscape Ruggedness's effect on HillclimbingIn class we discussed the potential for the fitness landscape to greatly affect a given search algorithm. Let's start by generating varyingly rugged landscapes, and investigating how this impacts the effectiveness of a standard hillclimber. For each value of `k = 0..14` and a genome length of `15`, please generate 100 unique fitness landscapes, and record the fitness value and time to convergence (when the most fit solution was found) for the hillclimber algorithm above on that landscape. Print out the mean results for each `k` as you go to keep track of progress. This output may look something like this:
###Code
# hyperparameters
n=15; max_k=15; repetitions = 100
# initialize array to record results over different settings of k and repeated trials
solutions_found = np.zeros((max_k,repetitions,n))
fitness_found = np.zeros((max_k,repetitions))
generation_found = np.zeros((max_k,repetitions))
# initilize output
print(' k mean fitness mean generation found')
print('-- ------------ ---------------------')
for k in range(0,max_k): # for many values of k
for i in range(repetitions): # for many repeated (independent -- make sure your results differ each run!) trials
landscape = Landscape(n=n, k=k) # generate a random fitness landscape with this level of ruggeddness
solution, fitness, solution_generation = hillclimber(total_generations = 100, bit_string_length = n, num_elements_to_mutate = 1, fitness_function=landscape.get_fitness) # run a hillclimber
# record outputs
solutions_found[k,i,:] = solution
fitness_found[k,i] = fitness
generation_found[k,i] = solution_generation
# print average results for all repitions of this k
print('{k:2d} {fit:10.3f} {gen:16.3f}'.format(k=k, fit=np.mean(fitness_found[k]), gen=np.mean(generation_found[k]))) # output to observe progress
###Output
k mean fitness mean generation found
-- ------------ --------------------
0 0.663 39.910
1 0.692 36.870
2 0.707 40.310
3 0.702 34.650
4 0.695 33.070
5 0.694 32.180
6 0.691 27.340
7 0.681 24.000
8 0.680 23.600
9 0.676 21.540
10 0.669 20.580
11 0.658 19.870
12 0.655 17.780
13 0.651 16.280
14 0.642 15.790
###Markdown
Let's also record this result in a nested dictionary to be able to recall it later (for comparison to other results). There is an implementation given below, but you're welcome to use `pandas` if you're more comfortable with that library for data manipulation and visualization.
###Code
experiment_results = {}
experiment_results["hillclimber"] = {"solutions_found":solutions_found, "fitness_found":fitness_found, "generation_found":generation_found}
###Output
_____no_output_____
###Markdown
Q2: Plotting ResultsPlease visualize the above terminal output in a figure (feel free to recycle code from previous assignments). You'll be generating this same plot many times (and even comparing multiple runs on a single figure), so you may want to invest in implementing this as a function at some point during this assignment -- but that is not strictly necessary now, so feel free to ignore the code stub below. In particular, please plot the `Time to Convergence (Generations)` and `Fitness` values (as you vary `K`) as two separate figures, as a single figure with multiple y-axes is messy and confusing. Please include 95% bootstrapped confidence intervals over your 100 repetitions for each `K`. Please also include the title of each experiment as a legend (for now just `hillclimber` is sufficient for this baseline case, and titles will make more sense in follow up experimental conditions).
###Code
def plot_mean_and_bootstrapped_ci(input_data = None, name = "change me", x_label = "K", y_label="change me", y_limit = None):
"""
parameters:
        input_data: (numpy array of shape (max_k, num_repetitions)) solution metric to plot
name: (string) name for legend
x_label: (string) x axis label
y_label: (string) y axis label
returns:
None
"""
fig, ax = plt.subplots() # generate figure and axes
if isinstance(name, str): name = [name]; input_data = [input_data]
for this_input_data, this_name in zip(input_data, name):
max_k = this_input_data.shape[0]
boostrap_ci_generation_found = np.zeros((2,max_k))
for k in range(max_k):
boostrap_ci_generation_found[:,k] = bootstrap.ci(this_input_data[k], np.mean, alpha=0.05)
        ax.plot(np.arange(max_k), np.mean(this_input_data,axis=1), label = this_name) # plot the mean value across repetitions at each K
        ax.fill_between(np.arange(max_k), boostrap_ci_generation_found[0,:], boostrap_ci_generation_found[1,:],alpha=0.3) # plot, and fill, the bootstrapped confidence interval at each K
ax.set_xlabel(x_label) # add axes labels
ax.set_ylabel(y_label)
if y_limit: ax.set_ylim(y_limit[0],y_limit[1])
plt.legend(loc='best'); # add legend
plot_mean_and_bootstrapped_ci(input_data = generation_found, name = "Hillclimber", x_label = "K", y_label = "Time to Convergence (Generations)")
plot_mean_and_bootstrapped_ci(input_data = fitness_found, name = "Hillclimber", x_label = "K", y_label = "Fitness")
###Output
_____no_output_____
###Markdown
Q3: Analysis of Hillclimber on Varying Ruggedness What do you notice about the trend line? Is this what you expected? Why or why not? **It makes sense that in a more rugged landscape, with a greater number of local optima, a hillclimber would converge faster -- not being able to escape a local optimum once it found it.** **It also makes sense that there is a general trend of fitness decreasing with increased ruggedness, as premature convergence to local optima increases. I'm curious as to everyone's hypotheses around the decrease in fitness for very small K. Perhaps a fitness function which samples fewer random fitness values is simply less likely to produce any very high fitness peaks? But if that were the case, one might expect higher variance in fitness at low values of K -- which doesn't appear to be the case. I'm not totally sure what the root cause is.** Q4: Random RestartsOne of the methods we talked about as a potential approach to escaping local optima in highly rugged fitness landscapes was to randomly restart search. Using the same number of total generations (`100`), please implement a function which restarts search from a new random initialization every `20` generations (passing this value as an additional parameter to your hillclimber function). Feel free to just copy and paste the hillclimber code block here to modify, for the sake of simplicity and easy gradability.
###Code
def hillclimber(total_generations = 100, bit_string_length = 10, num_elements_to_mutate= 1, fitness_function=None, restart_every = None):
""" Basic hillclimber, copied from last assignment
parameters:
total_generations: (int) number of total iterations for stopping condition
        bit_string_length: (int) length of bit string genome to be evolved
        num_elements_to_mutate: (int) number of alleles to modify during mutation
        fitness_function: (callable function) that returns the fitness of a genome
                          given the genome as an input parameter (e.g. as defined in Landscape)
        restart_every: (int) how frequently to randomly restart the hillclimber
    returns:
        solution: (numpy array) best solution found
        solution_fitness: (float) fitness of returned solution
        solution_generation: (int) generation at which most fit solution was first discovered
"""
    # the initialization procedure
parent = np.random.randint(2, size = bit_string_length) #some initial candidate solution
parent_fitness = fitness_function(parent) # assign fitness based on fitness function given as argument
# initialize record keeping
solution = None # best genome so far
solution_fitness = 0 # fitness of best genome so far
solution_generation = 0 # time (generations) when solution was found
for generation_num in range(total_generations): # repeat
# random restart
if restart_every and (generation_num+1)%restart_every == 0: # if turned on (value above 0) and time to rest (every x-number of generations)
            # the initialization procedure
parent = np.random.randint(2, size = bit_string_length) #some initial candidate solution
parent_fitness = fitness_function(parent) # assign fitness based on fitness function given as argument
# the modification procedure
        child = copy.deepcopy(parent) # inheritance from parent to child solution
        element_to_mutate = np.random.randint(bit_string_length) # randomly select the location in the child bit string to mutate
        child[element_to_mutate] = (child[element_to_mutate] + 1) % 2 # flip the bit at the chosen location
        # the assessment procedure
child_fitness = fitness_function(child) # assign fitness to child
# selection procedure
if child_fitness > parent_fitness: # if child is better (positive mutation)
parent = child # the child will become the parent in the next generation
parent_fitness = child_fitness # we update fitness values for new parent here for ease of record keeping as well
# record keeping
if parent_fitness > solution_fitness: # if the new parent is the best found so far
solution = parent # update best solution records
solution_fitness = parent_fitness
solution_generation = generation_num
return solution, solution_fitness, solution_generation
###Output
_____no_output_____
###Markdown
Q4b: Run ExperimentSlightly modify (feel free to copy and paste here) your experiment-running code block above to analyze the effect of modifying `K` on `Time to Convergence (Generations)` and `Fitness`, again printing progress and plotting results. Please also save these results (and subsequent new ones) to your `experiment_results` dictionary for later use.
###Code
# hyperparameters
n=15; max_k=15; repetitions = 100
# initialize array to record results over different settings of k and repeated trials
solutions_found = np.zeros((max_k,repetitions,n))
fitness_found = np.zeros((max_k,repetitions))
generation_found = np.zeros((max_k,repetitions))
# initilize output
print(' k mean fitness mean generation found')
print('-- ------------ --------------------')
for k in range(0,max_k): # for many values of k
for i in range(repetitions): # for many repeated (independent -- make sure your results differ each run!) trials
landscape = Landscape(n=n, k=k) # generate a random fitness landscape with this level of ruggeddness
solution, fitness, solution_generation = hillclimber(total_generations = 100, bit_string_length = n, num_elements_to_mutate = 1, fitness_function=landscape.get_fitness, restart_every=20) # run a hillclimber
# record outputs
solutions_found[k,i,:] = solution
fitness_found[k,i] = fitness
generation_found[k,i] = solution_generation
# print average results for all repitions of this k
print('{k:2d} {fit:10.3f} {gen:16.3f}'.format(k=k, fit=np.mean(fitness_found[k]), gen=np.mean(generation_found[k]))) # output to observe progress
experiment_results["restart every 20"] = {"solutions_found":solutions_found, "fitness_found":fitness_found, "generation_found":generation_found} # save results for later use
plot_mean_and_bootstrapped_ci(input_data = generation_found, name = "Restart Every 20", x_label = "K", y_label = "Time to Convergence (Generations)")
plot_mean_and_bootstrapped_ci(input_data = fitness_found, name = "Restart Every 20", x_label = "K", y_label = "Fitness")
###Output
_____no_output_____
###Markdown
Q5: Analysis of Random RestartsWhat trends do you see? Is this what you were expecting? How does this compare to the original hillclimber algorithm without random resets (please note any y-axis differences when comparing values/shapes of the curves)? **I'm not surprised that the time to convergence increased, but am a bit surprised that it didn't increase more, or that the trend didn't reverse itself from before, with more rugged landscapes showing later convergence times as the subsequent restarts find new peaks. From an exploration-exploitation perspective, I guess it does make sense that with all peaks being of random (and independent) sizes, the likelihood of finding a new better peak does drop over time (as fitness values from the previous best increase). I should also note here that the way I implemented this is as a serial hillclimber, such that each random restart doesn't reset the time to find it -- whereas a parallel hillclimber that was exploring all solutions simultaneously as different trials might have different implications for (wallclock or compute) time.** **I don't have a good intuition of the number of local optima (that could have been a question in this assignment -- but it takes a decent amount of compute to do the exhaustive search necessary for knowing the total number of optima on any reasonably sized search landscape). But I am somewhat surprised that there's still such a noticeable drop off in fitness as ruggedness increases, as with 5 attempts to drop a new random starting point over our 100 generations, I would have imagined it reasonably likely to land on a good peak at some point. It may also be the case that with such a high value of K, the fitness landscape becomes hierarchically rugged, in which case the value of each local optimum would NOT be independent and random, and many more peaks would need to be explored to find one of the highly fit ones.** Q6: Modifying mutation sizeWe've talked about a number of other potential modifications/complexifications to the original hillclimber algorithm in class, so let's experiment with some of them here. Here, please modify your above hillclimber (again, please just copy and paste the code block here) to mutate multiple loci when generating the child from a parent.*Hint*: Be careful of the difference between modifying multiple genes and modifying the same gene multiple times (a small illustration follows below).
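For example, one way to guarantee distinct loci is to sample indices without replacement (a small sketch; the solution below uses a `set` instead):
```Python
import numpy as np
loci = np.random.choice(15, size=5, replace=False)  # 5 distinct loci in a length-15 genome
```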
###Code
def hillclimber(total_generations = 100, bit_string_length = 10, num_elements_to_mutate= 1, fitness_function=None, restart_every = None):
""" Basic hillclimber, copied from last assignment
parameters:
total_generations: (int) number of total iterations for stopping condition
        bit_string_length: (int) length of bit string genome to be evolved
        num_elements_to_mutate: (int) number of alleles to modify during mutation
        fitness_function: (callable function) that returns the fitness of a genome
                          given the genome as an input parameter (e.g. as defined in Landscape)
        restart_every: (int) how frequently to randomly restart the hillclimber
    returns:
        solution: (numpy array) best solution found
        solution_fitness: (float) fitness of returned solution
        solution_generation: (int) generation at which most fit solution was first discovered
"""
    # the initialization procedure
parent = np.random.randint(2, size = bit_string_length) #some initial candidate solution
parent_fitness = fitness_function(parent) # assign fitness based on fitness function given as argument
# initialize record keeping
solution = None # best genome so far
solution_fitness = 0 # fitness of best genome so far
solution_generation = 0 # time (generations) when solution was found
for generation_num in range(total_generations): # repeat
# random restart
if restart_every and (generation_num+1)%restart_every == 0: # if turned on (value above 0) and time to rest (every x-number of generations)
            # the initialization procedure
parent = np.random.randint(2, size = bit_string_length) #some initial candidate solution
parent_fitness = fitness_function(parent) # assign fitness based on fitness function given as argument
# the modification procedure
        child = copy.deepcopy(parent) # inheritance from parent to child solution
        elements_to_mutate = set() # using a set, rather than a list, to keep track of the loci to mutate, as this will remove duplicate indices, meaning each mutation will be to a different locus
        while len(elements_to_mutate)<num_elements_to_mutate: # while (instead of for) also accounts for the potential of a randomly chosen locus not being a new index in the set
            elements_to_mutate.add(np.random.randint(bit_string_length)) # randomly select the location in the child bit string to mutate
        for this_element_to_mutate in elements_to_mutate:
            child[this_element_to_mutate] = (child[this_element_to_mutate] + 1) % 2 # flip the bit at the chosen location
        # the assessment procedure
child_fitness = fitness_function(child) # assign fitness to child
# selection procedure
if child_fitness > parent_fitness: # if child is better (positive mutation)
parent = child # the child will become the parent in the next generation
parent_fitness = child_fitness # we update fitness values for new parent here for ease of record keeping as well
# record keeping
if parent_fitness > solution_fitness: # if the new parent is the best found so far
solution = parent # update best solution records
solution_fitness = parent_fitness
solution_generation = generation_num
return solution, solution_fitness, solution_generation
###Output
_____no_output_____
###Markdown
Q6b: ExpectationsIn this experiment, let's set the number of elements to be mutated to `5` when generating a new child. Before running the code, what do (did) you expect the result to be, based on the results of the original hillclimber, the random restart condition, and the implications that a larger mutation rate may have? **I expect the performance to be worse -- I figure that the jumps in the fitness landscape with this high a mutation size would lead to a pseudo-random search.** Q7: Run experimentRun the experiment and visualize (similar to **Q4b**, and feel free to copy and paste here again) to analyze the effect of a larger mutation size on the relationship between `K` and `Time to Convergence (Generations)` / `Fitness`.
###Code
# hyperparameters
n=15; max_k=15; repetitions = 100
# initialize array to record results over different settings of k and repeated trials
solutions_found = np.zeros((max_k,repetitions,n))
fitness_found = np.zeros((max_k,repetitions))
generation_found = np.zeros((max_k,repetitions))
# initilize output
print(' k mean fitness mean generation found')
print('-- ------------ --------------------')
for k in range(0,max_k): # for many values of k
for i in range(repetitions): # for many repeated (independent -- make sure your results differ each run!) trials
landscape = Landscape(n=n, k=k) # generate a random fitness landscape with this level of ruggeddness
solution, fitness, solution_generation = hillclimber(total_generations = 100, bit_string_length = n, num_elements_to_mutate = 5, fitness_function=landscape.get_fitness) # run a hillclimber
# record outputs
solutions_found[k,i,:] = solution
fitness_found[k,i] = fitness
generation_found[k,i] = solution_generation
# print average results for all repitions of this k
print('{k:2d} {fit:10.3f} {gen:16.3f}'.format(k=k, fit=np.mean(fitness_found[k]), gen=np.mean(generation_found[k]))) # output to observe progress
experiment_results["mutate 5"] = {"solutions_found":solutions_found, "fitness_found":fitness_found, "generation_found":generation_found}
plot_mean_and_bootstrapped_ci(input_data = generation_found, name = "Mutate 5", x_label = "K", y_label = "Time to Convergence (Generations)")
plot_mean_and_bootstrapped_ci(input_data = fitness_found, name = "Mutate 5", x_label = "K", y_label = "Fitness")
###Output
_____no_output_____
###Markdown
Q7b: AnalysisIs this what you expected/predicted? If not, what is different and why might that be? **In terms of fitness, it looks like the values are lower for small values of K, but higher for large values of K. I didn't predict this, but in retrospect I could imagine extremely rugged landscapes where random search is the best solution (and didn't intuit these values of K to produce such a rugged landscape, but evidently that is the case). It's also interesting that with this pseudo-random search, the ruggedness of the landscape (outside of the very small values of K) does not seem to affect the final fitness value much.** Q8: Accepting Negative MutationsAnother way we might be able to get out of a local optimum is by taking steps downhill away from that optimum. Add another argument (`downhill_prob`) to your `hillclimber` function, which accepts a child with a negative mutation with that given probability.
###Code
def hillclimber(total_generations = 100, bit_string_length = 10, num_elements_to_mutate= 1, fitness_function=None, restart_every = None, downhill_prob=0):
""" Basic hillclimber, copied from last assignment
parameters:
total_generations: (int) number of total iterations for stopping condition
    bit_string_length: (int) length of bit string genome to be evolved
num_elements_to_mutate: (int) number of alleles to modify during mutation
    fitness_function: (callable function) that returns the fitness of a genome
given the genome as an input parameter (e.g. as defined in Landscape)
restart_every: (int) how frequently to randomly restart the hillclimber
downhill_prob: (float) proportion of times when a downhill mutation is accepted
returns:
solution: (numpy array) best solution found
solution_fitness: (float) fitness of returned solution
    solution_generation: (int) generation at which the most fit solution was first discovered
"""
    # the initialization procedure
parent = np.random.randint(2, size = bit_string_length) #some initial candidate solution
parent_fitness = fitness_function(parent) # assign fitness based on fitness function given as argument
# initialize record keeping
solution = None # best genome so far
solution_fitness = 0 # fitness of best genome so far
solution_generation = 0 # time (generations) when solution was found
for generation_num in range(total_generations): # repeat
if restart_every and (generation_num+1)%restart_every == 0:
            # the initialization procedure
parent = np.random.randint(2, size = bit_string_length) #some initial candidate solution
parent_fitness = fitness_function(parent) # assign fitness based on fitness function given as argument
# the modification procedure
        child = copy.deepcopy(parent) # inheritance from parent to child solution
elements_to_mutate = set()
while len(elements_to_mutate)<num_elements_to_mutate:
elements_to_mutate.add(np.random.randint(bit_string_length)) # randomly select the location in the child bit string to mutate
for this_element_to_mutate in elements_to_mutate:
child[this_element_to_mutate] = (child[this_element_to_mutate] + 1) % 2 # flip the bit at the chosen location
        # the assessment procedure
child_fitness = fitness_function(child) # assign fitness to child
# selection procedure
        if child_fitness > parent_fitness or np.random.rand() < downhill_prob: # accept if the child is better, or (with probability downhill_prob) accept a downhill move
parent = child # the child will become the parent in the next generation
parent_fitness = child_fitness # we update fitness values for new parent here for ease of record keeping as well
# record keeping
if parent_fitness > solution_fitness: # if the new parent is the best found so far
solution = parent # update best solution records
solution_fitness = parent_fitness
solution_generation = generation_num
return solution, solution_fitness, solution_generation
###Output
_____no_output_____
###Markdown
Q8b: Run the experimentSame as above (run and plot), but now investigating the effect of a `downhill_prob` of `0.1` (10% chance) on the relationship between ruggedness and performance.
###Code
# hyperparameters
n=15; max_k=15; repetitions = 100
# initialize array to record results over different settings of k and repeated trials
solutions_found = np.zeros((max_k,repetitions,n))
fitness_found = np.zeros((max_k,repetitions))
generation_found = np.zeros((max_k,repetitions))
# initialize output
print(' k mean fitness mean generation found')
print('-- ------------ --------------------')
for k in range(0,max_k): # for many values of k
for i in range(repetitions): # for many repeated (independent -- make sure your results differ each run!) trials
        landscape = Landscape(n=n, k=k) # generate a random fitness landscape with this level of ruggedness
solution, fitness, solution_generation = hillclimber(total_generations = 100, bit_string_length = n, num_elements_to_mutate = 1, fitness_function=landscape.get_fitness, restart_every=0, downhill_prob=0.1) # run a hillclimber
# record outputs
solutions_found[k,i,:] = solution
fitness_found[k,i] = fitness
generation_found[k,i] = solution_generation
    # print average results for all repetitions of this k
print('{k:2d} {fit:10.3f} {gen:16.3f}'.format(k=k, fit=np.mean(fitness_found[k]), gen=np.mean(generation_found[k]))) # output to observe progress
experiment_results["downhill probability 10"] = {"solutions_found":solutions_found, "fitness_found":fitness_found, "generation_found":generation_found}
plot_mean_and_bootstrapped_ci(input_data = generation_found, name = "Downhill Probability 10", x_label = "K", y_label = "Time to Convergence (Generations)")
plot_mean_and_bootstrapped_ci(input_data = fitness_found, name = "Downhill Probability 10", x_label = "K", y_label = "Fitness")
###Output
_____no_output_____
###Markdown
Q9: Visualizing Multiple RunsOn the same plot (which may require modifying or reimplementing your plotting function, if you made one above), please plot the curves for all 4 of our experiments above on a single plot (including bootstrapped confidence intervals for all). *Hint*: Legends are especially important here!*Hint*: It may be convenient to iterate over the dictionaries, turning them into lists before plotting (depending on your plotting script)
###Code
names_list = experiment_results.keys()
fitness_found_list = []
generation_found_list = []
for this_name in names_list:
fitness_found_list.append(experiment_results[this_name]["fitness_found"])
generation_found_list.append(experiment_results[this_name]["generation_found"])
plot_mean_and_bootstrapped_ci(input_data = fitness_found_list, name = names_list, x_label = "K", y_label = "Fitness")
plot_mean_and_bootstrapped_ci(input_data = generation_found_list, name = names_list, x_label = "K", y_label = "Time to Convergence (Generations)")
###Output
_____no_output_____
###Markdown
Q9b: Analyzing Multiple RunsDo any new relationships or questions occur to you as you view these? **I expected the solutions to escape local optima to outperform the naive hillclimber, but did not expect them to converge to such a similar value as K increases (for both fitness and time to convergence). I wonder how universal this is, as the hyperparameter values I selected for each of these approaches were decided upon quite arbitrarily and without any intention of creating such a convergence. It's nice to see that the flat (inelastic) scaling of fitness with ruggedness shown in the very high mutation rate setting wasn't also true with the other approaches (they do drop in fitness as landscapes become more rugged -- as I might expect), but it's interesting to see that convergence rate is pretty flat with respect to ruggedness across these different approaches.** Q10: Statistical SignificanceUsing the [`ranksums` test for significance](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ranksums.html), please compare the values for each algorithm at `K=14` using your saved `experiment_results`, reporting the p-value for each combination of the 4 experiments. Please do this for both the resulting fitness values, and the generation for which that solution was found. The output may look something like this:
###Code
k = 14
for metric in ["fitness_found","generation_found"]:
print (metric,"\n----------------")
for i in range(len(names_list)):
name_i = list(names_list)[i]
data_i = experiment_results[name_i][metric][k]
for j in range(i+1,len(names_list)):
name_j = list(names_list)[j]
data_j = experiment_results[name_j][metric][k]
_, p_val = scipy.stats.ranksums(data_i, data_j)
print('{name_i:>23} ({mean_i:3.2f}) <-> {name_j:>23} ({mean_j:3.2f}); p-val={p_val:5.3f}'.format(name_i=name_i ,mean_i=np.mean(data_i), name_j=name_j, mean_j=np.mean(data_j), p_val=p_val))
print ('')
###Output
fitness_found
----------------
hillclimber (0.64) <-> restart every 20 (0.68); p-val=0.000
hillclimber (0.64) <-> mutate 5 (0.68); p-val=0.000
hillclimber (0.64) <-> downhill probability 10 (0.67); p-val=0.000
restart every 20 (0.68) <-> mutate 5 (0.68); p-val=0.874
restart every 20 (0.68) <-> downhill probability 10 (0.67); p-val=0.403
mutate 5 (0.68) <-> downhill probability 10 (0.67); p-val=0.509
generation_found
----------------
hillclimber (15.79) <-> restart every 20 (44.06); p-val=0.000
hillclimber (15.79) <-> mutate 5 (45.91); p-val=0.000
hillclimber (15.79) <-> downhill probability 10 (45.66); p-val=0.000
restart every 20 (44.06) <-> mutate 5 (45.91); p-val=0.537
restart every 20 (44.06) <-> downhill probability 10 (45.66); p-val=0.569
mutate 5 (45.91) <-> downhill probability 10 (45.66); p-val=0.890
###Markdown
Q11: Hyperparameter SearchIt's cool to see the differences that these approaches have over the baseline hillclimber, but the values for each parameter that we've asked you to investigate are totally arbitrarily chosen. For example, who's to say that doing random resets every `20` generations is ideal? So let's find out! Please modify the code above for which you varied `K` to see the effect on `Fitness` and `Time to Convergence (Generations)`, to now keep a constant `K=14` and vary how frequently you do random resets within the fixed `100` generations of evolution. Explore this relationship for values of resets ranging from never (`0`) up to every `29` generations.
###Code
# hyperparameters
n=15; k=14; max_restart_every=30; repetitions = 100
# initialize array to record results over different settings of restart_every and repeated trials
solutions_found = np.zeros((max_restart_every,repetitions,n))
fitness_found = np.zeros((max_restart_every,repetitions))
generation_found = np.zeros((max_restart_every,repetitions))
# initialize output
print(' restart every mean fitness mean generation found')
print('-------------- ------------ --------------------')
for this_restart_every in range(0,max_restart_every): # for many values of restart_every
for i in range(repetitions): # for many repeated (independent -- make sure your results differ each run!) trials
        landscape = Landscape(n=n, k=k) # generate a random fitness landscape with this level of ruggedness
solution, fitness, solution_generation = hillclimber(total_generations = 100, bit_string_length = n, num_elements_to_mutate = 1, fitness_function=landscape.get_fitness, restart_every=this_restart_every) # run a hillclimber
# record outputs
solutions_found[this_restart_every,i,:] = solution
fitness_found[this_restart_every,i] = fitness
generation_found[this_restart_every,i] = solution_generation
    # print average results for all repetitions of this restart_every setting
print('{this_restart_every:8d} {fit:15.3f} {gen:17.3f}'.format(this_restart_every=this_restart_every, fit=np.mean(fitness_found[this_restart_every]), gen=np.mean(generation_found[this_restart_every]))) # output to observe progress
###Output
restart every mean fitness mean generation found
-------------- ------------ --------------------
0 0.643 15.770
1 0.699 51.570
2 0.694 53.450
3 0.690 45.570
4 0.687 48.970
5 0.683 50.230
6 0.687 51.350
7 0.685 50.950
8 0.688 50.150
9 0.685 47.620
10 0.687 48.740
11 0.686 48.690
12 0.683 50.810
13 0.685 45.760
14 0.676 47.530
15 0.680 52.940
16 0.676 53.470
17 0.682 47.420
18 0.678 51.300
19 0.681 45.270
20 0.675 48.620
21 0.679 51.250
22 0.676 46.400
23 0.677 45.840
24 0.677 45.900
25 0.675 50.660
26 0.677 48.840
27 0.672 53.370
28 0.677 44.430
29 0.668 47.280
###Markdown
Q11b: VisualizationSimilar to before (with `K`), please plot `Fitness` and `Time to Convergence (Generations)` as a function of how frequently we apply random restarts (`Restart Every`)
###Code
plot_mean_and_bootstrapped_ci(input_data = fitness_found, name = "hillclimber", x_label = "Restart Every", y_label = "Fitness")
plot_mean_and_bootstrapped_ci(input_data = generation_found, name = "hillclimber", x_label = "Restart Every", y_label = "Time to Convergence (Generations)")
###Output
_____no_output_____
###Markdown
Q11c: The effect of ruggednessThe above plots are for a single value of `K`=14. Repeat this same experiment below, just changing the value of `K` to `0`, to see what this experiment looks like on a less-rugged landscape.
###Code
# hyperparameters
n=15; k=0; max_restart_every=30; repetitions = 100
# initialize array to record results over different settings of restart_every and repeated trials
solutions_found = np.zeros((max_restart_every,repetitions,n))
fitness_found = np.zeros((max_restart_every,repetitions))
generation_found = np.zeros((max_restart_every,repetitions))
# initialize output
print(' restart every mean fitness mean generation found')
print('-- ------------ --------------------')
for this_restart_every in range(0,max_restart_every): # for many values of restart_every
for i in range(repetitions): # for many repeated (independent -- make sure your results differ each run!) trials
        landscape = Landscape(n=n, k=k) # generate a random fitness landscape with this level of ruggedness
solution, fitness, solution_generation = hillclimber(total_generations = 100, bit_string_length = n, num_elements_to_mutate = 1, fitness_function=landscape.get_fitness, restart_every=this_restart_every) # run a hillclimber
# record outputs
solutions_found[this_restart_every,i,:] = solution
fitness_found[this_restart_every,i] = fitness
generation_found[this_restart_every,i] = solution_generation
    # print average results for all repetitions of this restart_every setting
print('{this_restart_every:2d} {fit:10.3f} {gen:16.3f}'.format(this_restart_every=this_restart_every, fit=np.mean(fitness_found[this_restart_every]), gen=np.mean(generation_found[this_restart_every]))) # output to observe progress
# experiment_results["restart every 20"] = {"solutions_found":solutions_found, "fitness_found":fitness_found, "generation_found":generation_found}
plot_mean_and_bootstrapped_ci(input_data = fitness_found, name = "hillclimber", x_label = "Restart Every", y_label = "Fitness")
plot_mean_and_bootstrapped_ci(input_data = generation_found, name = "hillclimber", x_label = "Restart Every", y_label = "Time to Convergence (Generations)")
###Output
_____no_output_____ |
Model-Study/mlModelsRandomForestKFoldLooRepeated.ipynb | ###Markdown
RandomForest
###Code
steps = [('scaler', StandardScaler()),(('rf', RandomForestClassifier(n_estimators=200, max_features=8, max_depth=12)))]
pipeline = Pipeline(steps)
# scaling is not really needed for RF, but it is kept in the pipeline here
random_scaled = pipeline.fit(X_train, y_train)
# y_pred = pipeline.predict(X_test)
# accuracy_score(y_test, y_pred)
# y_pred_prob = pipeline.predict_proba(X_test)[:,1]
# plotRoc(y_test, y_pred_prob)
# printcfm(y_test, y_pred,title='confusion matrix')
###Output
_____no_output_____
###Markdown
Positive Predictive Value (PPV)$$Precision=\frac{TP}{TP+FP}$$Sensitivity, Hit Rate, True Positive Rate$$Recall=\frac{TP}{TP+FN}$$Harmonic mean between Precision and Recall$$F1 Score=2 * \frac{Precision * Recall}{Precision + Recall}$$
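As a quick numerical illustration of these formulas (a self-contained sketch with made-up labels, not taken from this dataset), the counts can be computed directly and checked against scikit-learn:
```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # toy labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # toy predictions

tp = np.sum((y_true == 1) & (y_pred == 1))    # true positives
fp = np.sum((y_true == 0) & (y_pred == 1))    # false positives
fn = np.sum((y_true == 1) & (y_pred == 0))    # false negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)

print(precision, recall, f1)                  # 0.75 0.75 0.75
print(precision_score(y_true, y_pred),
      recall_score(y_true, y_pred),
      f1_score(y_true, y_pred))               # same values from sklearn
```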
###Code
# print(classification_report(y_test, y_pred))
###Output
_____no_output_____
###Markdown
Fine-tuning the model. To turn on fine-tuning, define ft = 1
###Code
ft = 0
###Output
_____no_output_____
###Markdown
1 - Grid Search
###Code
if ft == 1 :
rf = RandomForestClassifier(n_jobs=-1, random_state=42)
parameters = {'n_estimators' : [400, 500, 600],
'min_samples_split': np.arange(2, 5),
'max_features': ['auto', 'sqrt', 'log2'],
'max_depth' : [4,7,15],
'bootstrap': [True,False],
'warm_start': [True,False],
'criterion' :['entropy']
}
cv = GridSearchCV(rf, param_grid=parameters, verbose=3, n_jobs=-1, cv=5)
#"max_depth": np.arange(1, 50),
#"max_features": [1, 3, 10],
#"min_samples_leaf": np.arange(1, 10),
#"criterion": ["gini", "entropy"]
cv.fit(X_train, y_train);
###Output
_____no_output_____
###Markdown
[Parallel(n_jobs=-1)]: Done 24 tasks | elapsed: 17.1s [Parallel(n_jobs=-1)]: Done 120 tasks | elapsed: 1.3min[Parallel(n_jobs=-1)]: Done 280 tasks | elapsed: 2.9min[Parallel(n_jobs=-1)]: Done 504 tasks | elapsed: 5.2min[Parallel(n_jobs=-1)]: Done 792 tasks | elapsed: 8.2min[Parallel(n_jobs=-1)]: Done 1144 tasks | elapsed: 11.8min[Parallel(n_jobs=-1)]: Done 1560 tasks | elapsed: 16.1min[Parallel(n_jobs=-1)]: Done 2040 tasks | elapsed: 21.0min[Parallel(n_jobs=-1)]: Done 2584 tasks | elapsed: 26.4min[Parallel(n_jobs=-1)]: Done 3192 tasks | elapsed: 32.3min[Parallel(n_jobs=-1)]: Done 3864 tasks | elapsed: 39.0min[Parallel(n_jobs=-1)]: Done 4600 tasks | elapsed: 46.5min[Parallel(n_jobs=-1)]: Done 5400 out of 5400 | elapsed: 54.9min finished
###Code
if ft == 1:
print("Best params: ", cv.best_params_,)
print("Best Score: %3.3f" %(cv.best_score_))
    y_pred = cv.predict(X_train)  # predict on the same (unscaled) training features used for fitting
final_model =cv.best_estimator_
print(final_model)
###Output
_____no_output_____
###Markdown
Best Model Result (11/2019)RandomForestClassifier(bootstrap=False, class_weight=None, criterion='entropy', max_depth=100, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=1000, n_jobs=-1, oob_score=False, random_state=42, verbose=0, warm_start=False) Best Model Result (11/2018_v2)RandomForestClassifier(bootstrap=False, class_weight=None, criterion='entropy', max_depth=15, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=3, min_weight_fraction_leaf=0.0, n_estimators=500, n_jobs=-1, oob_score=False, random_state=42, verbose=0, warm_start=True) Best Model Result (11/2018)RandomForestClassifier(bootstrap=False, class_weight=None, criterion='entropy', max_depth=15, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=3, min_weight_fraction_leaf=0.0, n_estimators=500, n_jobs=-1, oob_score=False, random_state=42, verbose=0, warm_start=True) Best Model Result (09/2018)RandomForestClassifier(bootstrap=False, class_weight=None, criterion='entropy', max_depth=7, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=1, min_samples_split=2, min_weight_fraction_leaf=0.0, n_estimators=525, n_jobs=-1, oob_score=False, random_state=42, verbose=0, warm_start=True) Regularizating the model Fill max_depth value
###Code
max_depth=5
final_model = RandomForestClassifier(bootstrap=False, class_weight=None,
criterion='entropy', max_depth=max_depth, max_features='auto',
max_leaf_nodes=None, min_impurity_decrease=0.0,
min_impurity_split=None, min_samples_leaf=1,
min_samples_split=2, min_weight_fraction_leaf=0.0,
n_estimators=1000, n_jobs=-1, oob_score=False, random_state=42,
verbose=0, warm_start=False)
###Output
_____no_output_____
###Markdown
Predicting the Classes in the Training Set
###Code
final_model.fit(X_train, y_train)
y_pred = final_model.predict(X_train)
y_pred_prob = final_model.predict_proba(X_train)[:,1]
plotRoc(y_train, y_pred_prob)
auc_train = roc_auc_score(y_train, y_pred_prob)
cv_scores = cross_val_score(final_model, X_train, y_train, cv=5)
print(cv_scores)
printcfm(y_train, y_pred, title='confusion matrix')
print(classification_report(y_train, y_pred))
###Output
precision recall f1-score support
0 0.96 0.92 0.94 60
1 0.92 0.97 0.94 60
avg / total 0.94 0.94 0.94 120
###Markdown
Evaluating the model with Cross-Validation
###Code
y_pred_prob = final_model.predict_proba(X_train)[:,1]
y_scores = cross_val_predict(final_model, X_train, y_train, cv=3, verbose=3, method='predict_proba')
y_train_pred = cross_val_predict(final_model, X_train, y_train, cv=3, verbose=3)
# hack to work around issue #9589 in Scikit-Learn 0.19.0
if y_scores.ndim == 2:
y_scores = y_scores[:, 1]
# print(y_scores)
# print(np.mean(y_scores))
plotRoc(y_train, y_scores)
auc_cv = roc_auc_score(y_train, y_scores)
printcfm(y_train, y_train_pred, title='confusion matrix')
print(classification_report(y_train, y_train_pred))
###Output
precision recall f1-score support
0 0.96 0.82 0.88 60
1 0.84 0.97 0.90 60
avg / total 0.90 0.89 0.89 120
###Markdown
Evaluating the model with LOO
###Code
loo = LeaveOneOut()
loo.get_n_splits(X_train)
for train, test in loo.split(X_train):
print("%s %s" % (train, test))
cv=loo
#y_pred_prob = final_model.predict_proba(X_train)[:,1]
y_scores = cross_val_predict(final_model, X_train, y_train, cv=cv, verbose=10, method='predict_proba', n_jobs=-1)
y_train_pred = cross_val_predict(final_model, X_train, y_train, cv=cv, verbose=10)
# hack to work around issue #9589 in Scikit-Learn 0.19.0
if y_scores.ndim == 2:
y_scores = y_scores[:, 1]
# print(y_scores)
# print(np.mean(y_scores))
plotRoc(y_train, y_scores)
auc_LoO = roc_auc_score(y_train, y_scores)
printcfm(y_train, y_train_pred, title='confusion matrix')
print(classification_report(y_train, y_train_pred))
###Output
precision recall f1-score support
0 0.94 0.83 0.88 60
1 0.85 0.95 0.90 60
avg / total 0.90 0.89 0.89 120
###Markdown
Evaluating the model with Repeated K fold
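The cell below hand-rolls the repeated K-fold loop so that per-repetition ROC curves can be plotted. As a quick cross-check of the aggregate number only, scikit-learn's `RepeatedKFold` splitter can be combined with `cross_val_score` (a sketch reusing `final_model`, `X_train` and `y_train` from above; it shuffles differently, so the numbers will not match exactly):
```python
from sklearn.model_selection import RepeatedKFold, cross_val_score

rkf = RepeatedKFold(n_splits=4, n_repeats=45, random_state=1)
auc_rkf = cross_val_score(final_model, X_train, y_train, cv=rkf, scoring='roc_auc', n_jobs=-1)
print("AUC: %0.2f (+/- %0.2f)" % (auc_rkf.mean(), auc_rkf.std()))
```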
###Code
def perform_repeated_cv(X, y , model):
#set random seed for repeatability
    np.random.seed(1)  # numpy's RNG drives np.random.shuffle below, so seed it for repeatability
#set the number of repetitions
n_reps = 45
# perform repeated cross validation
accuracy_scores = np.zeros(n_reps)
precision_scores= np.zeros(n_reps)
recall_scores = np.zeros(n_reps)
auc_scores = np.zeros(n_reps)
#result_pred = pd.DataFrame(index=np.arange(30))
result_pred = y
##############################
tprs = []
aucs = []
mean_fpr = np.linspace(0, 1, 100)
fig = plt.figure(figsize=(20, 10))
###############################
for u in range(n_reps):
#randomly shuffle the dataset
indices = np.arange(X.shape[0])
np.random.shuffle(indices)
# X = X[indices]
# y = y[indices] #dataset has been randomly shuffled
X = X.iloc[indices]
y = y.iloc[indices] #dataset has been randomly shuffled
#initialize vector to keep predictions from all folds of the cross-validation
y_predicted = np.zeros(y.shape)
probas = np.zeros(y.shape)
#perform 10-fold cross validation
        kf = KFold(n_splits=4)  # the data is already shuffled above, so no shuffle/random_state is needed here
for train, test in kf.split(X):
#split the dataset into training and testing
# X_train = X[train]
# X_test = X[test]
# y_train = y[train]
# y_test = y[test]
X_train = X.iloc[train]
X_test = X.iloc[test]
y_train = y.iloc[train]
y_test = y.iloc[test]
# #standardization
# scaler = preprocessing.StandardScaler().fit(X_train)
# X_train = scaler.transform(X_train)
# X_test = scaler.transform(X_test)
#train model
clf = model
clf.fit(X_train, y_train)
#make predictions on the testing set
y_predicted[test] = clf.predict(X_test)
# print(y_predicted[test],y_test,type(y_predicted))
#y_train_pred_array = np.append(y_train_pred_array,y_train_pred)
# print(result_pred)
###############################plot
# probas_ = clf.predict_proba(X_test)
probas[test] = clf.predict_proba(X_test)[:, 1]
# print(probas[test], type(probas), probas.size)
# print(y,y_predicted)
#result_pred = y
df_pred = pd.DataFrame(y_predicted, index=y.index,columns=[u])
result_pred = pd.concat([result_pred, df_pred], axis=1)
# Compute ROC curve and area the curve
fpr, tpr, thresholds = roc_curve(y, probas)
tprs.append(interp(mean_fpr, fpr, tpr))
tprs[-1][0] = 0.0
#roc_auc = auc(fpr, tpr) - Change to obtain AUC by predict proba
#06/11 - 23:26 roc_auc = roc_auc_score(y, y_predicted)
roc_auc = roc_auc_score(y, probas)
aucs.append(roc_auc)
plt.plot(fpr, tpr, lw=1, alpha=0.3,
label='ROC fold %d (AUC = %0.2f)' % (u, roc_auc))
################################
#record scores
accuracy_scores[u] = accuracy_score(y, y_predicted)
precision_scores[u] = precision_score(y, y_predicted)
recall_scores[u] = recall_score(y, y_predicted)
#06/11 - 18:39 auc_scores[u] = roc_auc_score(y, y_predicted)
auc_scores[u] = roc_auc_score(y, probas)
###############################plot
# print(result_pred)
plt.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r',
label='Chance', alpha=.8)
mean_tpr = np.mean(tprs, axis=0)
mean_tpr[-1] = 1.0
# mean_auc = auc(mean_fpr, mean_tpr)
mean_auc = np.mean(aucs)
std_auc = np.std(aucs)
plt.plot(mean_fpr, mean_tpr, color='b',
label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc),
lw=2, alpha=.8)
std_tpr = np.std(tprs, axis=0)
tprs_upper = np.minimum(mean_tpr + std_tpr, 1)
tprs_lower = np.maximum(mean_tpr - std_tpr, 0)
plt.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2,
label=r'$\pm$ 1 std. dev.')
plt.xlim([-0.05, 1.05])
plt.ylim([-0.05, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Receiver operating characteristic example')
#plt.legend(loc="lower right")
plt.legend(loc='upper center', bbox_to_anchor=(0.5, -0.05),
fancybox=True, shadow=True, ncol=5)
plt.show()
################################
#return all scores
return accuracy_scores, precision_scores, recall_scores, auc_scores, result_pred
# classifier = RandomForestClassifier(bootstrap=False, class_weight=None,
# criterion='entropy', max_depth=6, max_features='auto',
# max_leaf_nodes=None, min_impurity_decrease=0.0,
# min_impurity_split=None, min_samples_leaf=1,
# min_samples_split=2, min_weight_fraction_leaf=0.0,
# n_estimators=600, n_jobs=-1, oob_score=False, random_state=42,
# verbose=0, warm_start=False)
# # classifier = RandomForestClassifier(bootstrap=False, class_weight=None,
# # criterion='entropy', max_depth=6, max_features='auto',
# # max_leaf_nodes=None, min_impurity_decrease=0.0,
# # min_impurity_split=None, min_samples_leaf=1,
# # min_samples_split=2, min_weight_fraction_leaf=0.0,
# # n_estimators=1000, n_jobs=-1, oob_score=False, random_state=42,
# # verbose=0, warm_start=True)
accuracy_scores, precision_scores, recall_scores, auc_scores, result_pred = perform_repeated_cv(X_train, y_train, final_model)
print(accuracy_scores, accuracy_scores.size)
print(precision_scores, recall_scores)
print(auc_scores, auc_scores.size)
fig = plt.figure(figsize=(20, 10))
plt.plot(auc_scores, '--o')
plt.legend(loc='lower right')
plt.ylabel('AUC', fontsize=20);
plt.xlabel('Repetitions', fontsize=20);
plt.tick_params(axis='both', which='major', labelsize=20);
plt.tick_params(axis='both', which='minor', labelsize=18);
#plt.xlim([0, 18])
#plt.ylim([0.5, 1])
plt.legend(('AUC',), loc='lower right', prop={'size': 20})
plt.show()
auc_scores.mean()
auc_scores.std()
print("AUC: %0.2f (+/- %0.2f)" % (np.mean(auc_scores), np.std(auc_scores)))
#result_pred.to_csv('result_kfold_RF.csv', encoding='utf-8')
###Output
_____no_output_____
###Markdown
Predicting the Classes in Test Set
###Code
final_model.fit(X_train, y_train)
y_pred = final_model.predict(X_test)
y_pred_prob = final_model.predict_proba(X_test)[:,1]
plotRoc(y_test, y_pred_prob)
auc_test = roc_auc_score(y_test, y_pred_prob)
printcfm(y_test, y_pred, title='confusion matrix')
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.82 0.72 0.77 25
1 0.50 0.64 0.56 11
avg / total 0.72 0.69 0.70 36
###Markdown
Varying the Threshold for test set
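The cell below uses a cut-off of `0.0`, which labels every sample as positive, so it mainly illustrates the mechanics of thresholding `predict_proba`. A small sweep over intermediate thresholds (a sketch reusing `y_pred_prob` and `y_test` from the previous cell; scikit-learn may warn when a threshold produces no positive predictions) shows the precision/recall trade-off more clearly:
```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score

for thr in [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]:
    preds = np.where(y_pred_prob > thr, 1, 0)
    print('threshold=%.1f  acc=%.2f  precision=%.2f  recall=%.2f' % (
        thr,
        accuracy_score(y_test, preds),
        precision_score(y_test, preds),
        recall_score(y_test, preds)))
```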
###Code
predict_mine = np.where(y_pred_prob > .0, 1, 0)
printcfm(y_test, predict_mine, title='confusion matrix')
print(classification_report(y_test, predict_mine))
###Output
precision recall f1-score support
0 0.00 0.00 0.00 25
1 0.31 1.00 0.47 11
avg / total 0.09 0.31 0.14 36
###Markdown
Results
###Code
print("max_depth: ", max_depth)
print("AUC Train: %3.3f" % (auc_train))
print("AUC Repeated k-fold: %0.2f (+/- %0.2f)" % (np.mean(auc_scores), np.std(auc_scores)))
print("AUC LoO: %3.3f" % (auc_LoO))
print("AUC test: %3.3f" % (auc_test))
print("AUC cv: %3.3f" % (auc_cv))
#print("Accuracy Train: %3.2f%%" % (acc_train*100))
#print("Accuracy Test %3.2f%%" % (acc_test*100))
###Output
max_depth: 5
AUC Train: 0.997
AUC Repeated k-fold: 0.94 (+/- 0.02)
AUC LoO: 0.960
AUC test: 0.847
AUC cv: 0.971
###Markdown
Draft
###Code
# X=np.concatenate((X_train),axis=0)
# y=np.append(y_train)
X=X_train
y=y_train
# validation curve off
vc = 0
if vc == 1:
print(__doc__)
param_range = np.arange(1, 800, 20)
train_scores, test_scores = validation_curve(
final_model, X, y, param_name="n_estimators", param_range=param_range,
cv=10, scoring="roc_auc", n_jobs=-1)
train_scores_mean = np.mean(train_scores, axis=1)
train_scores_std = np.std(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
test_scores_std = np.std(test_scores, axis=1)
plt.title("Validation Curve with RF")
plt.xlabel("$\gamma$")
plt.ylabel("AUC")
#plt.ylim(0.0, 1.1)
#plt.xlim(-1, 22)
lw = 2
plt.semilogx(param_range, train_scores_mean, label="Training score",
color="darkorange", lw=lw)
plt.fill_between(param_range, train_scores_mean - train_scores_std,
train_scores_mean + train_scores_std, alpha=0.2,
color="darkorange", lw=lw)
plt.semilogx(param_range, test_scores_mean, label="Cross-validation score",
color="navy", lw=lw)
plt.fill_between(param_range, test_scores_mean - test_scores_std,
test_scores_mean + test_scores_std, alpha=0.2,
color="navy", lw=lw)
plt.legend(loc="best")
plt.show()
###Output
_____no_output_____
###Markdown
Export results
###Code
export = 0
rf_df = pd.concat([X_test, y_test], axis=1) # features and actual
rf_df['Predicted'] = y_pred # adds a Predicted column to rf_df, so it now holds features, actual, and predicted
rf_df
if export == 1:
rf_df.to_csv('rf_results.csv', encoding='utf-8')
###Output
_____no_output_____ |
examples/12-tutorial-stray-field.ipynb | ###Markdown
Calculating a stray field using an airbox methodIn order to calculate the stray field outside the sample, we have to define an "airbox" which is going to contain our sample. In this example we define a box with 100 nm edge length as a mesh which then contains a cubic magnetic sample with 50 nm edges. We achieve this by implementing a Python function for defining Ms (`norm_fun`). Outside our sample the value of saturation magnetisation is zero.
###Code
import discretisedfield as df
import micromagneticmodel as mm
import oommfc as oc
region = df.Region(p1=(-100e-9, -100e-9, -100e-9), p2=(100e-9, 100e-9, 100e-9))
mesh = df.Mesh(region=region, cell=(5e-9, 5e-9, 5e-9))
def norm_fun(pos):
x, y, z = pos
if -50e-9 <= x <= 50e-9 and -50e-9 <= y <= 50e-9 and -50e-9 <= z <= 50e-9:
return 8e5
else:
return 0
system = mm.System(name='airbox_method')
system.energy = mm.Exchange(A=1e-12) + mm.Demag()
system.dynamics = mm.Precession(gamma0=mm.consts.gamma0) + mm.Damping(alpha=1)
system.m = df.Field(mesh, dim=3, value=(0, 0, 1), norm=norm_fun)
###Output
_____no_output_____
###Markdown
We can now plot the norm to confirm our definition.
###Code
system.m.norm.plane('z').mpl()
###Output
_____no_output_____
###Markdown
In the next step, we can relax the system and show its magnetisation.
###Code
md = oc.MinDriver()
md.drive(system)
system.m.plane('z').mpl(figsize=(10, 10))
###Output
Running OOMMF (ExeOOMMFRunner)[2022/02/25 18:06]... (7.5 s)
###Markdown
Stray field can now be calculated as an effective field for the demagnetisation energy.
###Code
stray_field = oc.compute(system.energy.demag.effective_field, system)
###Output
Running OOMMF (ExeOOMMFRunner)[2022/02/25 18:06]... (1.1 s)
###Markdown
`stray_field` is a `df.Field` and all operations characteristic to vector fields can be performed.
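For example, the stray field can also be inspected numerically, by sampling it at a point or averaging it over the mesh. The following is a small sketch (the exact accessors may differ slightly between `discretisedfield` versions):
```python
# value of the stray field at a point outside the magnetic cube but inside the airbox
print(stray_field((0, 0, 75e-9)))

# spatial average of the stray field over the whole airbox
print(stray_field.average)

# plot only the z-component
stray_field.z.plane('z').mpl()
```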
###Code
stray_field.plane('z').mpl(figsize=(8, 8), vector_kw={'scale': 1e6})
###Output
_____no_output_____ |
deepke-master/tutorial-notebooks/GCN.ipynb | ###Markdown
Relation Extraction in Practice> Tutorial author: Yu Haiyang (余海阳, [email protected]). In this demo we use a `gcn` model to perform Chinese relation extraction. The aim is to illustrate the principles and common methods of triple extraction used when building a knowledge graph. The demo runs on `python3`. Dataset: for this example we sampled a few Chinese sentences and extract the triples they contain. sentence|relation|head|tail:---:|:---:|:---:|:---:孔正锡在2005年以一部温馨的爱情电影《长腿叔叔》敲开电影界大门。|导演|长腿叔叔|孔正锡《伤心的树》是吴宗宪的音乐作品,收录在《你比从前快乐》专辑中。|所属专辑|伤心的树|你比从前快乐2000年8月,「天坛大佛」荣获「香港十大杰出工程项目」第四名。|所在城市|天坛大佛|香港- train.csv: 6 training triples; each line of the file is one triple, ordered as sentence, relation, head entity, tail entity and separated by `,`.- valid.csv: 3 validation triples; each line is one triple in the same order, separated by `,`.- test.csv: 3 test triples; each line is one triple in the same order, separated by `,`.- relation.csv: the 4 relation types; each line is one relation type, ordered as head-entity type, tail-entity type, relation, index and separated by `,`. A quick review of how the GCN works: the sentence is represented by word embeddings and position embeddings, together with an adjacency matrix (adj_matrix) derived from the dependency/syntax tree. Each node of the adjacency matrix is a word token, and edges connect words that are linked in the syntax tree. These inputs are passed through a stack of graph-convolution layers (usually 2–3; more layers do not noticeably improve results), then max pooling followed by a fully connected layer yields the relation prediction for the sentence.
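For reference, one graph-convolution layer simply computes H' = σ(A · H · W), where A is the (batched) adjacency matrix. Below is a minimal PyTorch sketch of a single layer (the shapes are chosen arbitrarily for illustration and are not part of this demo's pipeline); the `torch.bmm` pattern is exactly what the `GCN` class further down uses:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

batch, seq_len, in_dim, out_dim = 2, 5, 10, 20
H = torch.randn(batch, seq_len, in_dim)      # token representations (word + position embeddings)
A = torch.eye(seq_len).repeat(batch, 1, 1)   # adjacency matrix; only self-loops here for simplicity
W = nn.Linear(in_dim, out_dim)               # the layer's trainable weights

H_next = F.leaky_relu(W(torch.bmm(A, H)))    # one GCN layer: sigma(A @ H @ W)
print(H_next.shape)                          # torch.Size([2, 5, 20])
```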
###Code
# the neural network runs on PyTorch; make sure the packages below are installed before running
!pip install torch
!pip install matplotlib
!pip install transformers
# import the modules used below
import os
import csv
import math
import pickle
import logging
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import matplotlib.pyplot as plt
from torch import optim
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
from torch.utils.data import Dataset,DataLoader
from sklearn.metrics import precision_recall_fscore_support
from typing import List, Tuple, Dict, Any, Sequence, Optional, Union
from transformers import BertTokenizer, BertModel
logger = logging.getLogger(__name__)
# configuration object holding the model hyperparameters
class Config(object):
model_name = 'gcn' # ['cnn', 'gcn', 'lm']
use_pcnn = True
min_freq = 1
pos_limit = 20
out_path = 'data/out'
batch_size = 2
word_dim = 10
pos_dim = 5
dim_strategy = 'sum' # ['sum', 'cat']
out_channels = 20
intermediate = 10
kernel_sizes = [3, 5, 7]
activation = 'gelu'
pooling_strategy = 'max'
dropout = 0.3
epoch = 10
num_relations = 4
learning_rate = 3e-4
    lr_factor = 0.7 # learning-rate decay factor
    lr_patience = 3 # number of epochs to wait before decaying the learning rate
    weight_decay = 1e-3 # L2 regularization
early_stopping_patience = 6
train_log = True
log_interval = 1
show_plot = True
only_comparison_plot = False
plot_utils = 'matplot'
lm_file = 'bert-base-chinese'
lm_num_hidden_layers = 2
rnn_layers = 2
cfg = Config()
# build a token-to-index vocabulary; the indices are later fed to the embedding layer to obtain the word matrix
# by convention index 0 is padding and index 1 is unknown
class Vocab(object):
def __init__(self, name: str = 'basic', init_tokens = ["[PAD]", "[UNK]"]):
self.name = name
self.init_tokens = init_tokens
self.trimed = False
self.word2idx = {}
self.word2count = {}
self.idx2word = {}
self.count = 0
self._add_init_tokens()
def _add_init_tokens(self):
for token in self.init_tokens:
self._add_word(token)
def _add_word(self, word: str):
if word not in self.word2idx:
self.word2idx[word] = self.count
self.word2count[word] = 1
self.idx2word[self.count] = word
self.count += 1
else:
self.word2count[word] += 1
def add_words(self, words: Sequence):
for word in words:
self._add_word(word)
def trim(self, min_freq=2, verbose: Optional[bool] = True):
        '''Remove words from the vocabulary when their frequency is below min_freq
        Args:
            param min_freq: minimum word frequency to keep
'''
assert min_freq == int(min_freq), f'min_freq must be integer, can\'t be {min_freq}'
min_freq = int(min_freq)
if min_freq < 2:
return
if self.trimed:
return
self.trimed = True
keep_words = []
new_words = []
for k, v in self.word2count.items():
if v >= min_freq:
keep_words.append(k)
new_words.extend([k] * v)
if verbose:
before_len = len(keep_words)
after_len = len(self.word2idx) - len(self.init_tokens)
logger.info('vocab after be trimmed, keep words [{} / {}] = {:.2f}%'.format(before_len, after_len, before_len / after_len * 100))
# Reinitialize dictionaries
self.word2idx = {}
self.word2count = {}
self.idx2word = {}
self.count = 0
self._add_init_tokens()
self.add_words(new_words)
# helper functions used during preprocessing
Path = str
def load_csv(fp: Path, is_tsv: bool = False, verbose: bool = True) -> List:
if verbose:
logger.info(f'load csv from {fp}')
dialect = 'excel-tab' if is_tsv else 'excel'
with open(fp, encoding='utf-8') as f:
reader = csv.DictReader(f, dialect=dialect)
return list(reader)
def load_pkl(fp: Path, verbose: bool = True) -> Any:
if verbose:
logger.info(f'load data from {fp}')
with open(fp, 'rb') as f:
data = pickle.load(f)
return data
def save_pkl(data: Any, fp: Path, verbose: bool = True) -> None:
if verbose:
logger.info(f'save data in {fp}')
with open(fp, 'wb') as f:
pickle.dump(data, f)
def _handle_relation_data(relation_data: List[Dict]) -> Dict:
rels = dict()
for d in relation_data:
rels[d['relation']] = {
'index': int(d['index']),
'head_type': d['head_type'],
'tail_type': d['tail_type'],
}
return rels
def _add_relation_data(rels: Dict,data: List) -> None:
for d in data:
d['rel2idx'] = rels[d['relation']]['index']
d['head_type'] = rels[d['relation']]['head_type']
d['tail_type'] = rels[d['relation']]['tail_type']
def _convert_tokens_into_index(data: List[Dict], vocab):
unk_str = '[UNK]'
unk_idx = vocab.word2idx[unk_str]
for d in data:
d['token2idx'] = [vocab.word2idx.get(i, unk_idx) for i in d['tokens']]
def _add_pos_seq(train_data: List[Dict], cfg):
for d in train_data:
d['head_offset'], d['tail_offset'], d['lens'] = int(d['head_offset']), int(d['tail_offset']), int(d['lens'])
entities_idx = [d['head_offset'], d['tail_offset']] if d['head_offset'] < d['tail_offset'] else [d['tail_offset'], d['head_offset']]
d['head_pos'] = list(map(lambda i: i - d['head_offset'], list(range(d['lens']))))
d['head_pos'] = _handle_pos_limit(d['head_pos'], int(cfg.pos_limit))
d['tail_pos'] = list(map(lambda i: i - d['tail_offset'], list(range(d['lens']))))
d['tail_pos'] = _handle_pos_limit(d['tail_pos'], int(cfg.pos_limit))
if cfg.use_pcnn:
d['entities_pos'] = [1] * (entities_idx[0] + 1) + [2] * (entities_idx[1] - entities_idx[0] - 1) +\
[3] * (d['lens'] - entities_idx[1])
def _handle_pos_limit(pos: List[int], limit: int) -> List[int]:
for i, p in enumerate(pos):
if p > limit:
pos[i] = limit
if p < -limit:
pos[i] = -limit
return [p + limit + 1 for p in pos]
def seq_len_to_mask(seq_len: Union[List, np.ndarray, torch.Tensor], max_len=None, mask_pos_to_true=True):
"""
    Convert a 1-d array of sequence lengths into a 2-d mask; by default padded positions are set to 1 (True).
    Turns a 1-d seq_len into a 2-d mask.
    :param list, np.ndarray, torch.LongTensor seq_len: shape (B,)
    :param int max_len: pad the mask to this length. By default (None) the longest length in seq_len is used; under
        nn.DataParallel different devices may see different seq_len, so pass max_len to pad every mask to the same length.
    :return: np.ndarray or torch.Tensor of shape (B, max_length), with bool / torch.uint8 elements
"""
if isinstance(seq_len, list):
seq_len = np.array(seq_len)
if isinstance(seq_len, np.ndarray):
seq_len = torch.from_numpy(seq_len)
if isinstance(seq_len, torch.Tensor):
assert seq_len.dim() == 1, logger.error(f"seq_len can only have one dimension, got {seq_len.dim()} != 1.")
batch_size = seq_len.size(0)
max_len = int(max_len) if max_len else seq_len.max().long()
broad_cast_seq_len = torch.arange(max_len).expand(batch_size, -1).to(seq_len.device)
if mask_pos_to_true:
mask = broad_cast_seq_len.ge(seq_len.unsqueeze(1))
else:
mask = broad_cast_seq_len.lt(seq_len.unsqueeze(1))
else:
raise logger.error("Only support 1-d list or 1-d numpy.ndarray or 1-d torch.Tensor.")
return mask
def _lm_serialize(data: List[Dict], cfg):
logger.info('use bert tokenizer...')
tokenizer = BertTokenizer.from_pretrained(cfg.lm_file)
for d in data:
sent = d['sentence'].strip()
sent = sent.replace(d['head'], d['head_type'], 1).replace(d['tail'], d['tail_type'], 1)
sent += '[SEP]' + d['head'] + '[SEP]' + d['tail']
d['token2idx'] = tokenizer.encode(sent, add_special_tokens=True)
d['lens'] = len(d['token2idx'])
# preprocessing pipeline
logger.info('load raw files...')
train_fp = os.path.join('data/train.csv')
valid_fp = os.path.join('data/valid.csv')
test_fp = os.path.join('data/test.csv')
relation_fp = os.path.join('data/relation.csv')
train_data = load_csv(train_fp)
valid_data = load_csv(valid_fp)
test_data = load_csv(test_fp)
relation_data = load_csv(relation_fp)
for d in train_data:
d['tokens'] = eval(d['tokens'])
for d in valid_data:
d['tokens'] = eval(d['tokens'])
for d in test_data:
d['tokens'] = eval(d['tokens'])
logger.info('convert relation into index...')
rels = _handle_relation_data(relation_data)
_add_relation_data(rels, train_data)
_add_relation_data(rels, valid_data)
_add_relation_data(rels, test_data)
logger.info('verify whether use pretrained language models...')
if cfg.model_name == 'lm':
logger.info('use pretrained language models serialize sentence...')
_lm_serialize(train_data, cfg)
_lm_serialize(valid_data, cfg)
_lm_serialize(test_data, cfg)
else:
logger.info('build vocabulary...')
vocab = Vocab('word')
train_tokens = [d['tokens'] for d in train_data]
valid_tokens = [d['tokens'] for d in valid_data]
test_tokens = [d['tokens'] for d in test_data]
sent_tokens = [*train_tokens, *valid_tokens, *test_tokens]
for sent in sent_tokens:
vocab.add_words(sent)
vocab.trim(min_freq=cfg.min_freq)
logger.info('convert tokens into index...')
_convert_tokens_into_index(train_data, vocab)
_convert_tokens_into_index(valid_data, vocab)
_convert_tokens_into_index(test_data, vocab)
logger.info('build position sequence...')
_add_pos_seq(train_data, cfg)
_add_pos_seq(valid_data, cfg)
_add_pos_seq(test_data, cfg)
logger.info('save data for backup...')
os.makedirs(cfg.out_path, exist_ok=True)
train_save_fp = os.path.join(cfg.out_path, 'train.pkl')
valid_save_fp = os.path.join(cfg.out_path, 'valid.pkl')
test_save_fp = os.path.join(cfg.out_path, 'test.pkl')
save_pkl(train_data, train_save_fp)
save_pkl(valid_data, valid_save_fp)
save_pkl(test_data, test_save_fp)
if cfg.model_name != 'lm':
vocab_save_fp = os.path.join(cfg.out_path, 'vocab.pkl')
vocab_txt = os.path.join(cfg.out_path, 'vocab.txt')
save_pkl(vocab, vocab_save_fp)
logger.info('save vocab in txt file, for watching...')
with open(vocab_txt, 'w', encoding='utf-8') as f:
f.write(os.linesep.join(vocab.word2idx.keys()))
# custom PyTorch Dataset
class Tree(object):
def __init__(self):
self.parent = None
self.num_children = 0
self.children = list()
def add_child(self, child):
child.parent = self
self.num_children += 1
self.children.append(child)
def size(self):
s = getattr(self, '_size', -1)
if s != -1:
return self._size
else:
count = 1
for i in range(self.num_children):
count += self.children[i].size()
self._size = count
return self._size
def __iter__(self):
yield self
for c in self.children:
for x in c:
yield x
def depth(self):
d = getattr(self, '_depth', -1)
if d != -1:
return self._depth
else:
count = 0
if self.num_children > 0:
for i in range(self.num_children):
child_depth = self.children[i].depth()
if child_depth > count:
count = child_depth
count += 1
self._depth = count
return self._depth
def head_to_adj(head, directed=False, self_loop=True):
"""
Convert a sequence of head indexes to an (numpy) adjacency matrix.
"""
seq_len = len(head)
head = head[:seq_len]
root = None
nodes = [Tree() for _ in head]
for i in range(seq_len):
h = head[i]
setattr(nodes[i], 'idx', i)
if h == 0:
root = nodes[i]
else:
nodes[h - 1].add_child(nodes[i])
assert root is not None
ret = np.zeros((seq_len, seq_len), dtype=np.float32)
queue = [root]
idx = []
while len(queue) > 0:
t, queue = queue[0], queue[1:]
idx += [t.idx]
for c in t.children:
ret[t.idx, c.idx] = 1
queue += t.children
if not directed:
ret = ret + ret.T
if self_loop:
for i in idx:
ret[i, i] = 1
return ret
def collate_fn(cfg):
def collate_fn_intra(batch):
batch.sort(key=lambda data: int(data['lens']), reverse=True)
max_len = int(batch[0]['lens'])
def _padding(x, max_len):
return x + [0] * (max_len - len(x))
def _pad_adj(adj, max_len):
adj = np.array(adj)
pad_len = max_len - adj.shape[0]
for i in range(pad_len):
adj = np.insert(adj, adj.shape[-1], 0, axis=1)
for i in range(pad_len):
adj = np.insert(adj, adj.shape[0], 0, axis=0)
return adj
x, y = dict(), []
word, word_len = [], []
head_pos, tail_pos = [], []
pcnn_mask = []
adj_matrix = []
for data in batch:
word.append(_padding(data['token2idx'], max_len))
word_len.append(int(data['lens']))
y.append(int(data['rel2idx']))
if cfg.model_name != 'lm':
head_pos.append(_padding(data['head_pos'], max_len))
tail_pos.append(_padding(data['tail_pos'], max_len))
if cfg.model_name == 'gcn':
head = eval(data['dependency'])
adj = head_to_adj(head, directed=True, self_loop=True)
adj_matrix.append(_pad_adj(adj, max_len))
if cfg.use_pcnn:
pcnn_mask.append(_padding(data['entities_pos'], max_len))
x['word'] = torch.tensor(word)
x['lens'] = torch.tensor(word_len)
y = torch.tensor(y)
if cfg.model_name != 'lm':
x['head_pos'] = torch.tensor(head_pos)
x['tail_pos'] = torch.tensor(tail_pos)
if cfg.model_name == 'gcn':
x['adj'] = torch.tensor(adj_matrix)
if cfg.model_name == 'cnn' and cfg.use_pcnn:
x['pcnn_mask'] = torch.tensor(pcnn_mask)
return x, y
return collate_fn_intra
class CustomDataset(Dataset):
"""默认使用 List 存储数据"""
def __init__(self, fp):
self.file = load_pkl(fp)
def __getitem__(self, item):
sample = self.file[item]
return sample
def __len__(self):
return len(self.file)
# embedding layer
class Embedding(nn.Module):
def __init__(self, config):
"""
        word embedding: index 0 is usually padding
        pos embedding: index 0 is usually padding
        dim_strategy: [cat, sum] -- whether the embeddings are concatenated or summed
"""
super(Embedding, self).__init__()
# self.xxx = config.xxx
self.vocab_size = config.vocab_size
self.word_dim = config.word_dim
self.pos_size = config.pos_limit * 2 + 2
self.pos_dim = config.pos_dim if config.dim_strategy == 'cat' else config.word_dim
self.dim_strategy = config.dim_strategy
self.wordEmbed = nn.Embedding(self.vocab_size,self.word_dim,padding_idx=0)
self.headPosEmbed = nn.Embedding(self.pos_size,self.pos_dim,padding_idx=0)
self.tailPosEmbed = nn.Embedding(self.pos_size,self.pos_dim,padding_idx=0)
def forward(self, *x):
word, head, tail = x
word_embedding = self.wordEmbed(word)
head_embedding = self.headPosEmbed(head)
tail_embedding = self.tailPosEmbed(tail)
if self.dim_strategy == 'cat':
return torch.cat((word_embedding,head_embedding, tail_embedding), -1)
elif self.dim_strategy == 'sum':
            # in this case pos_dim == word_dim
return word_embedding + head_embedding + tail_embedding
else:
raise Exception('dim_strategy must choose from [sum, cat]')
# GCN model
class GCN(nn.Module):
def __init__(self, cfg):
super(GCN, self).__init__()
self.embedding = Embedding(cfg)
self.fc1 = nn.Linear(10, 20)
self.fc2 = nn.Linear(20, 20)
self.fc3 = nn.Linear(20, cfg.num_relations)
self.dropout = nn.Dropout(cfg.dropout)
def forward(self, x):
word, adj, head_pos, tail_pos = x['word'], x['adj'], x['head_pos'], x['tail_pos']
inputs = self.embedding(word, head_pos, tail_pos)
AxW = F.leaky_relu(self.fc1(torch.bmm(adj,inputs)))
AxW = self.dropout(AxW)
AxW = F.leaky_relu(self.fc2(torch.bmm(adj,AxW)))
AxW = self.dropout(AxW)
output = self.fc3(torch.bmm(adj,AxW))
output = torch.max(output, dim=1)[0]
return output
# precision / recall / f1 metrics
class PRMetric():
def __init__(self):
"""
        For now this simply wraps the sklearn metrics
"""
self.y_true = np.empty(0)
self.y_pred = np.empty(0)
def reset(self):
self.y_true = np.empty(0)
self.y_pred = np.empty(0)
def update(self, y_true:torch.Tensor, y_pred:torch.Tensor):
y_true = y_true.cpu().detach().numpy()
y_pred = y_pred.cpu().detach().numpy()
y_pred = np.argmax(y_pred,axis=-1)
self.y_true = np.append(self.y_true, y_true)
self.y_pred = np.append(self.y_pred, y_pred)
def compute(self):
p, r, f1, _ = precision_recall_fscore_support(self.y_true,self.y_pred,average='macro',warn_for=tuple())
_, _, acc, _ = precision_recall_fscore_support(self.y_true,self.y_pred,average='micro',warn_for=tuple())
return acc,p,r,f1
# one training epoch
def train(epoch, model, dataloader, optimizer, criterion, cfg):
model.train()
metric = PRMetric()
losses = []
for batch_idx, (x, y) in enumerate(dataloader, 1):
optimizer.zero_grad()
y_pred = model(x)
loss = criterion(y_pred, y)
loss.backward()
optimizer.step()
metric.update(y_true=y, y_pred=y_pred)
losses.append(loss.item())
data_total = len(dataloader.dataset)
data_cal = data_total if batch_idx == len(dataloader) else batch_idx * len(y)
if (cfg.train_log and batch_idx % cfg.log_interval == 0) or batch_idx == len(dataloader):
            # p, r, f1 are macro-averaged; with micro averaging all three coincide, so that value is reported as acc
acc,p,r,f1 = metric.compute()
print(f'Train Epoch {epoch}: [{data_cal}/{data_total} ({100. * data_cal / data_total:.0f}%)]\t'
f'Loss: {loss.item():.6f}')
print(f'Train Epoch {epoch}: Acc: {100. * acc:.2f}%\t'
f'macro metrics: [p: {p:.4f}, r:{r:.4f}, f1:{f1:.4f}]')
if cfg.show_plot and not cfg.only_comparison_plot:
if cfg.plot_utils == 'matplot':
plt.plot(losses)
plt.title(f'epoch {epoch} train loss')
plt.show()
return losses[-1]
# one validation/evaluation pass
def validate(epoch, model, dataloader, criterion,verbose=True):
model.eval()
metric = PRMetric()
losses = []
for batch_idx, (x, y) in enumerate(dataloader, 1):
with torch.no_grad():
y_pred = model(x)
loss = criterion(y_pred, y)
metric.update(y_true=y, y_pred=y_pred)
losses.append(loss.item())
loss = sum(losses) / len(losses)
acc,p,r,f1 = metric.compute()
data_total = len(dataloader.dataset)
if verbose:
print(f'Valid Epoch {epoch}: [{data_total}/{data_total}](100%)\t Loss: {loss:.6f}')
print(f'Valid Epoch {epoch}: Acc: {100. * acc:.2f}%\tmacro metrics: [p: {p:.4f}, r:{r:.4f}, f1:{f1:.4f}]\n\n')
return f1,loss
# load the datasets
train_dataset = CustomDataset(train_save_fp)
valid_dataset = CustomDataset(valid_save_fp)
test_dataset = CustomDataset(test_save_fp)
train_dataloader = DataLoader(train_dataset, batch_size=cfg.batch_size, shuffle=True, collate_fn=collate_fn(cfg))
valid_dataloader = DataLoader(valid_dataset, batch_size=cfg.batch_size, shuffle=True, collate_fn=collate_fn(cfg))
test_dataloader = DataLoader(test_dataset, batch_size=cfg.batch_size, shuffle=True, collate_fn=collate_fn(cfg))
# vocab_size is only known after loading the preprocessed data
vocab = load_pkl(vocab_save_fp)
vocab_size = vocab.count
cfg.vocab_size = vocab_size
# main entry point: define the optimizer, the loss function, etc.
# then start the epoch loop
# the validation loss is used for early stopping: when it stops decreasing, the model generalizes best
model = GCN(cfg)
print(model)
optimizer = optim.Adam(model.parameters(), lr=cfg.learning_rate, weight_decay=cfg.weight_decay)
scheduler = optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=cfg.lr_factor, patience=cfg.lr_patience)
criterion = nn.CrossEntropyLoss()
best_f1, best_epoch = -1, 0
es_loss, es_f1, es_epoch, es_patience, best_es_epoch, best_es_f1, = 1000, -1, 0, 0, 0, -1
train_losses, valid_losses = [], []
logger.info('=' * 10 + ' Start training ' + '=' * 10)
for epoch in range(1, cfg.epoch + 1):
train_loss = train(epoch, model, train_dataloader, optimizer, criterion, cfg)
valid_f1, valid_loss = validate(epoch, model, valid_dataloader, criterion)
scheduler.step(valid_loss)
train_losses.append(train_loss)
valid_losses.append(valid_loss)
if best_f1 < valid_f1:
best_f1 = valid_f1
best_epoch = epoch
    # use the validation loss as the early-stopping criterion
if es_loss > valid_loss:
es_loss = valid_loss
es_f1 = valid_f1
best_es_f1 = valid_f1
es_epoch = epoch
best_es_epoch = epoch
es_patience = 0
else:
es_patience += 1
if es_patience >= cfg.early_stopping_patience:
best_es_epoch = es_epoch
best_es_f1 = es_f1
if cfg.show_plot:
if cfg.plot_utils == 'matplot':
plt.plot(train_losses, 'x-')
plt.plot(valid_losses, '+-')
plt.legend(['train', 'valid'])
plt.title('train/valid comparison loss')
plt.show()
print(f'best(valid loss quota) early stopping epoch: {best_es_epoch}, '
f'this epoch macro f1: {best_es_f1:0.4f}')
print(f'total {cfg.epoch} epochs, best(valid macro f1) epoch: {best_epoch}, '
f'this epoch macro f1: {best_f1:.4f}')
test_f1, _ = validate(0, model, test_dataloader, criterion,verbose=False)
print(f'after {cfg.epoch} epochs, final test data macro f1: {test_f1:.4f}')
###Output
_____no_output_____ |
Finance in Python Part 1 - Technical Indicators.ipynb | ###Markdown
Created on Sat Aug 21 04:43:30 2021 @author: Shannan Purpose The goal of this analysis is primarily to learn more about how to compute and visualise the signals provided by technical indicators. A secondary aim is to make money in the stock market! Technical Indicators used: **RSI, Bollinger Bands, SMA** Program
###Code
import pandas as pd
import numpy as np
# will help with grabbing data from yahoo finance API
import pandas_datareader.data as web
import matplotlib.pyplot as plt
import matplotlib
from matplotlib import style
import os
import datetime as dt
import sklearn
import bs4
from bs4 import BeautifulSoup
# Configuring chart settings
matplotlib.rcParams['figure.dpi'] = 200
style.use('ggplot')
# =============================================================================
# Sourcing the relevant data from Yahoo Finance
# =============================================================================
start = dt.datetime(2020,12,1) # Invested around here
end = dt.datetime(2021,8,24) # Most recent Date
# on the first run, fetch the data from Yahoo Finance and cache it locally:
# df = web.DataReader('0327.HK','yahoo',start,end)
# df.to_csv('pax global_info.csv') # convert the dataframe into a csv
df = pd.read_csv('pax global_2020_Q3.csv',parse_dates=True,index_col = 0)
df.head()
# df[['Adj Close','Open','High']].plot().line(x = df.index)
# =============================================================================
# Technical indicators
#
# Compute the following technical indicators
# (1) simple moving averages (SMAs)
# (2) RSI
# (3) BollingerBands
# =============================================================================
def calc_sma(df,col_name,periods=50):
"""
Parameters
df: dataframe containing relevant stock information
col_name: column for which you want the SMA to be computed
periods: the number of periods over which you want to compute the SMA
Returns
No return value
Adds a new column to your dataframe with the Simple Moving Average for
the relevant column
"""
df[f'{col_name}_sma{periods}'] = df[col_name].rolling(window = periods,min_periods=periods).mean()
def calc_bollinger_bands(df,col_name,num_std,periods=20):
"""
Parameters
df: dataframe containing relevant stock information
col_name: column for which you want the SMA to be computed
num_std: number of standard deviations you want the bollinger bands to capture
periods: the number of periods over which you want to compute the SMA
Returns
Adds 2 new columns for the upper and lower bollinger bands and 1 for the SMA
you want to compute for the relevant column
"""
calc_sma(df,col_name,periods)
    std = df[col_name].rolling(window = periods,min_periods=periods).std()  # use the same column as the SMA
df[f'{col_name}_ubb_{periods}_{num_std}'] = df[f'{col_name}_sma{periods}'] + std * num_std
df[f'{col_name}_lbb_{periods}_{num_std}'] = df[f'{col_name}_sma{periods}'] - std * num_std
def calc_rsi(df, col_name, ema = True,periods = 14):
"""
Parameters
df: dataframe containing relevant stock information
col_name: column for which you want the RSI to be computed
ema: if True, compute RSI using EMA; else use SMA
periods: defaults to 14, but in general is the number of periods over which you want to compute
Returns a pd.Series with the relative strength index
"""
delta = df[col_name].diff()
up = delta.clip(lower=0)
down = -1 * delta.clip(upper=0)
if ema == True:
ma_up = up.ewm(com = periods - 1, adjust=True, min_periods = periods).mean()
ma_down = down.ewm(com = periods - 1, adjust=True, min_periods = periods).mean()
else:
        ma_up = up.rolling(window = periods, min_periods = periods).mean()
        ma_down = down.rolling(window = periods, min_periods = periods).mean()
rsi = ma_up / ma_down
rsi = 100 - (100/(1 + rsi))
return rsi
# =============================================================================
# Strategy Implementation
# RSI
# Bollinger Bands (BB)
# SMA
# RSI/BB
# =============================================================================
# =============================================================================
# Implement a Relative Strength Index Strategy
#
# BUY: if the RSI crosses below the lower (oversold) bound
# SELL: if the RSI crosses above the upper (overbought) bound
# =============================================================================
def implement_rsi_strategy(col_name,rsi,lb_rsi,ub_rsi):
buy_price = []
sell_price = []
rsi_signal = []
signal = 0
for i in range(0, len(rsi)):
        if rsi[i-1] > lb_rsi and rsi[i] < lb_rsi:  # RSI crosses below the oversold bound
if signal != 1:
buy_price.append(col_name[i])
sell_price.append(np.nan)
signal = 1
rsi_signal.append(signal)
else:
buy_price.append(np.nan)
sell_price.append(np.nan)
rsi_signal.append(0)
        elif rsi[i-1] < ub_rsi and rsi[i] > ub_rsi:  # RSI crosses above the overbought bound
if signal != -1:
buy_price.append(np.nan)
sell_price.append(col_name[i])
signal = -1
rsi_signal.append(signal)
else:
buy_price.append(np.nan)
sell_price.append(np.nan)
rsi_signal.append(0)
else:
buy_price.append(np.nan)
sell_price.append(np.nan)
rsi_signal.append(0)
return buy_price, sell_price, rsi_signal
# =============================================================================
# Implement a Bollinger Band Strategy
#
# BUY: if the adj close price crosses below the lower bollinger band
# SELL: if the adj close price crosses above the upper bollinger band
# =============================================================================
def implement_bb_strategy(col_name,lbb,ubb):
    buy_price = []
    sell_price = []
    bb_signal = []
    signal = 0

    for i in range(0, len(col_name)):
        # buy signal: price crosses below the lower bollinger band
        if col_name[i-1] > lbb[i-1] and col_name[i] < lbb[i]:
            if signal != 1:
                buy_price.append(col_name[i])
                sell_price.append(np.nan)
                signal = 1
                bb_signal.append(signal)
            else:
                buy_price.append(np.nan)
                sell_price.append(np.nan)
                bb_signal.append(0)
        # sell signal: price crosses above the upper bollinger band
        elif col_name[i-1] < ubb[i-1] and col_name[i] > ubb[i]:
            if signal != -1:
                buy_price.append(np.nan)
                sell_price.append(col_name[i])
                signal = -1
                bb_signal.append(signal)
            else:
                buy_price.append(np.nan)
                sell_price.append(np.nan)
                bb_signal.append(0)
        # do nothing if no signal
        else:
            buy_price.append(np.nan)
            sell_price.append(np.nan)
            bb_signal.append(0)

    return buy_price, sell_price, bb_signal
def implement_sma_strategy(col_name,sma_ST,sma_LT):
buy_price = []
sell_price = []
sma_signal = []
signal = 0
for i in range(0, len(sma_LT)):
if (sma_ST[i-1] < sma_LT[i-1] and sma_ST[i] > sma_LT[i]):
if signal != 1:
buy_price.append(col_name[i])
sell_price.append(np.nan)
signal = 1
sma_signal.append(signal)
else:
buy_price.append(np.nan)
sell_price.append(np.nan)
sma_signal.append(np.nan)
# sell signal
elif (sma_ST[i-1] > sma_LT[i-1] and sma_ST[i] < sma_LT[i]):
if signal != -1:
buy_price.append(np.nan)
sell_price.append(col_name[i])
signal = -1
sma_signal.append(signal)
else:
buy_price.append(np.nan)
sell_price.append(np.nan)
sma_signal.append(np.nan)
# do nothing if not signal
else:
buy_price.append(np.nan)
sell_price.append(np.nan)
sma_signal.append(np.nan)
    return buy_price, sell_price, sma_signal
# =============================================================================
# Implement a Compound Strategy involving RSI and Bollinger Bands
#
# BUY: if the adj close price (1) crosses below the lower bollinger band AND
# (2) RSI crosses above the RSI lower bound
# SELL: if the adj close price (1) crosses above the upper bollinger band AND
# (2) RSI crosses below the RSI upper bound
# =============================================================================
def implement_bb_rsi_strategy(col_name,lbb,ubb,rsi,lb_rsi,ub_rsi):
buy_price = []
sell_price = []
bb_rsi_signal = []
signal = 0
for i in range(0, len(col_name)):
# buy signal
if (col_name[i-1] > lbb[i-1] and col_name[i] < lbb[i] and
rsi[i-1] < lb_rsi and rsi[i] > lb_rsi):
if signal != 1:
buy_price.append(col_name[i])
sell_price.append(np.nan)
signal = 1
bb_rsi_signal.append(signal)
else:
buy_price.append(np.nan)
sell_price.append(np.nan)
bb_rsi_signal.append(np.nan)
# sell signal
elif (col_name[i-1] < ubb[i-1] and col_name[i] > ubb[i] and
rsi[i-1] > ub_rsi and rsi[i] < ub_rsi):
if signal != -1:
buy_price.append(np.nan)
sell_price.append(col_name[i])
signal = -1
bb_rsi_signal.append(signal)
else:
buy_price.append(np.nan)
sell_price.append(np.nan)
bb_rsi_signal.append(np.nan)
# do nothing if not signal
else:
buy_price.append(np.nan)
sell_price.append(np.nan)
bb_rsi_signal.append(np.nan)
return buy_price, sell_price, bb_rsi_signal
# =============================================================================
# implement_bb_rsi_sma_strategy
#
# BUY: if the adj close price (1) crosses below the lower bollinger band AND
# (2) RSI crosses above the lower bound AND
# (3) SMA50 crosses above SMA200
#
# SELL: if the adj close price (1) crosses above the upper bollinger band AND
# (2) RSI crosses below the upper bound AND
# (3) SMA50 crosses below SMA200
#
# TRACK:
# (1) number of trades
# (2) number of buys and sells
# (3) average win from buy and sell
# =============================================================================
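# The compound BB/RSI/SMA strategy described above is not implemented in the original notebook.
# Below is a minimal sketch of what it could look like (an illustration only): it assumes that
# short- and long-term SMA series (e.g. SMA50/SMA200 computed with calc_sma) are passed in as
# sma_ST and sma_LT, and it tracks the trade counts mentioned in the comment block.
def implement_bb_rsi_sma_strategy(col_name, lbb, ubb, rsi, lb_rsi, ub_rsi, sma_ST, sma_LT):
    buy_price, sell_price, combo_signal = [], [], []
    signal, n_buys, n_sells = 0, 0, 0
    for i in range(0, len(col_name)):
        buy = (col_name[i-1] > lbb[i-1] and col_name[i] < lbb[i] and
               rsi[i-1] < lb_rsi and rsi[i] > lb_rsi and
               sma_ST[i-1] < sma_LT[i-1] and sma_ST[i] > sma_LT[i])
        sell = (col_name[i-1] < ubb[i-1] and col_name[i] > ubb[i] and
                rsi[i-1] > ub_rsi and rsi[i] < ub_rsi and
                sma_ST[i-1] > sma_LT[i-1] and sma_ST[i] < sma_LT[i])
        if buy and signal != 1:
            buy_price.append(col_name[i])
            sell_price.append(np.nan)
            signal = 1
            n_buys += 1
            combo_signal.append(signal)
        elif sell and signal != -1:
            buy_price.append(np.nan)
            sell_price.append(col_name[i])
            signal = -1
            n_sells += 1
            combo_signal.append(signal)
        else:
            buy_price.append(np.nan)
            sell_price.append(np.nan)
            combo_signal.append(np.nan)
    print(f'Trades: {n_buys + n_sells} (buys: {n_buys}, sells: {n_sells})')
    return buy_price, sell_price, combo_signal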
df = pd.read_csv('pax global_info.csv',parse_dates=['Date'],index_col = ['Date'])
df.head()
calc_sma(df,'Close',25)
calc_sma(df,'Close',50)
calc_sma(df,'Close',100)
calc_bollinger_bands(df,'Close',2,20)
df['rsi'] = calc_rsi(df,'Close',ema=True,periods=14)
df.head()
buy_price_rsi, sell_price_rsi, rsi_signal = implement_rsi_strategy(df['Close'],df['rsi'], 30, 70)
buy_price_bb, sell_price_bb, bb_signal = implement_bb_strategy(df['Close'], df['Close_lbb_20_2'], df['Close_ubb_20_2'])
buy_price_sma, sell_price_sma, sma_signal = implement_sma_strategy(df['Close'],df['Close_sma25'],df['Close_sma50'])
buy_price_bb_rsi, sell_price_bb_rsi, bb_rsi_signal = implement_bb_rsi_strategy(df['Close'], df['Close_lbb_20_2'], df['Close_ubb_20_2'],df['rsi'], lb_rsi=30, ub_rsi=70)
# =============================================================================
# graphing with matplotlib
# each matplotlib graphic frame can have multiple subplots
# subplots are referred to as axes (ax)
# =============================================================================
# =============================================================================
# create subplots
# subplot2grid creates an axis (subplot) in location inside a regular grid
# ax1 will span 5 rows and be located in column 1 of our grid space
# ax2 will span 1 row and be located in column 1 of our grid space
# =============================================================================
# =============================================================================
# plot the RSI buy and sell signals along with stock trading volume and the RSI graph
# =============================================================================
df['buy_price_rsi'] = buy_price_rsi
df['sell_price_rsi'] = sell_price_rsi
ax1 = plt.subplot2grid((7,1),(0,0),rowspan = 5,colspan = 1)
ax2 = plt.subplot2grid((7,1),(5,0),rowspan = 1,colspan = 1,sharex=ax1)
ax3 = plt.subplot2grid((7,1),(6,0),rowspan = 1,colspan = 1,sharex=ax1)
ax1.plot(df.index,df['Close'],color = 'black',linewidth=1)
ax1.scatter(df.index[df['buy_price_rsi'].isnull()==False],
y = df['buy_price_rsi'][df['buy_price_rsi'].isnull()==False],
marker = '^', color = 'green', label = 'BUY', s = 50)
ax1.scatter(df.index[df['sell_price_rsi'].isnull()==False],
y = df['sell_price_rsi'][df['sell_price_rsi'].isnull()==False],
marker = 'v', color = 'red', label = 'SELL', s = 50)
ax1.legend(['Close','Buy','Sell'])
ax2.bar(df.index,df['Volume'])
ax2.legend(['Volume'])
ax3.plot(df.index,df['rsi'],color='black',linewidth=0.5)
ax3.axhline(y=70,linewidth=0.5)
ax3.axhline(y=30,linewidth=0.5)
ax3.legend(['RSI'])
# =============================================================================
# plot the Bollinger Band buy and sell signals along with stock trading volume
# =============================================================================
df['buy_price_bb'] = buy_price_bb
df['sell_price_bb'] = sell_price_bb
ax1 = plt.subplot2grid((6,1),(0,0),rowspan = 5,colspan = 1)
ax2 = plt.subplot2grid((6,1),(5,0),rowspan = 1,colspan = 1)
# plot stuff on the subplots
ax1.plot(df.index,df['Close'],color = 'blue',linewidth=1)
ax1.plot(df.index,df['Close_sma20'],linestyle='--', linewidth = 1,color = 'black')
ax1.plot(df.index,df[['Close_ubb_20_2','Close_lbb_20_2']],
linestyle = '--',linewidth=1,color = 'grey')
ax1.scatter(df.index[df['buy_price_bb'].isnull()==False],
y = df['buy_price_bb'][df['buy_price_bb'].isnull()==False],
marker = '^', color = 'green', label = 'BUY', s = 50)
ax1.scatter(df.index[df['sell_price_bb'].isnull()==False],
y = df['sell_price_bb'][df['sell_price_bb'].isnull()==False],
marker = 'v', color = 'red', label = 'SELL', s = 50)
ax1.legend(['Close','SMA(20)','Upper Bollinger Band','Lower Bollinger Band','Buy','Sell'])
ax2.bar(df.index,df['Volume'])
ax2.legend(['Volume'])
# =============================================================================
# plot the SMA 25, 50 buy and sell signals along with stock trading volume
# =============================================================================
df['buy_price_sma'] = buy_price_sma
df['sell_price_sma'] = sell_price_sma
ax1 = plt.subplot2grid((6,1),(0,0),rowspan = 5,colspan = 1)
ax2 = plt.subplot2grid((6,1),(5,0),rowspan = 1,colspan = 1)
ax1.plot(df.index,df['Close'],color = 'black',linewidth=1)
ax1.plot(df.index,df['Close_sma25'],linestyle='--', linewidth = 1,color = 'purple')
ax1.plot(df.index,df['Close_sma50'],linestyle='--', linewidth = 1,color = 'gold')
ax1.scatter(df.index[df['buy_price_sma'].isnull()==False],
            y = df['buy_price_sma'][df['buy_price_sma'].isnull()==False],
            marker = '^', color = 'green', label = 'BUY', s = 50)
ax1.scatter(df.index[df['sell_price_sma'].isnull()==False],
            y = df['sell_price_sma'][df['sell_price_sma'].isnull()==False],
            marker = 'v', color = 'red', label = 'SELL', s = 50)
ax1.legend(['Close','SMA(25)','SMA(50)','Buy','Sell'])
ax2.bar(df.index,df['Volume'])
ax2.legend(['Volume'])
# =============================================================================
# plot the Bollinger Band & RSI buy and sell signals along with stock trading volume
# =============================================================================
ax1 = plt.subplot2grid((7,1),(0,0),rowspan = 5,colspan = 1)
ax2 = plt.subplot2grid((7,1),(5,0),rowspan = 1,colspan = 1)
ax3 = plt.subplot2grid((7,1),(6,0),rowspan = 1,colspan = 1)
df['buy_price_bb_rsi'] = buy_price_bb_rsi
df['sell_price_bb_rsi'] = sell_price_bb_rsi
# plot stuff on the subplots
ax1.plot(df.index,df['Close'],color = 'skyblue',linewidth=1)
ax1.plot(df.index,df['Close_sma20'],linestyle='--', linewidth = 1,color = 'black')
ax1.plot(df.index,df[['Close_ubb_20_2','Close_lbb_20_2']],
linestyle = '--',linewidth=1,color = 'grey')
ax1.scatter(df.index[df['buy_price_bb_rsi'].isnull()==False],
y = df['buy_price_bb_rsi'][df['buy_price_bb_rsi'].isnull()==False],
marker = '^', color = 'green', label = 'BUY', s = 50)
ax1.scatter(df.index[df['sell_price_bb_rsi'].isnull()==False],
y = df['sell_price_bb_rsi'][df['sell_price_bb_rsi'].isnull()==False],
marker = 'v', color = 'red', label = 'SELL', s = 50)
ax1.legend(['Close','SMA(20)','Upper Bollinger Band',
'Lower Bollinger Band','Buy','Sell'])
ax2.bar(df.index,df['Volume'])
ax2.legend(['Volume'])
ax3.plot(df.index,df['rsi'],color='black',linewidth=0.5)
ax3.axhline(y=70,linewidth=0.5)
ax3.axhline(y=30,linewidth=0.5)
ax3.legend(['RSI'])
###Output
_____no_output_____ |
linear_model/logistic-regression.ipynb | ###Markdown
Linear Model [Logistic Regression] Import dependencies
###Code
import os
import sys
import pickle
from datetime import datetime as dt
import tensorflow as tf
import numpy as np
from tqdm import tqdm
import matplotlib.pyplot as plt
from matplotlib import style
style.use('ggplot')
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load in the data
###Code
from tensorflow.examples.tutorials.mnist import input_data
save_dir = '../saved/logistic-regression'
data_dir = '../datasets/MNIST'
saved_data = os.path.join(save_dir, f'data/{os.path.basename(data_dir)}.pkl')
if not os.path.isfile(saved_data):
start = dt.now()
data = input_data.read_data_sets(data_dir, one_hot=True)
print(f'Took {dt.now() - start}')
if not os.path.exists(os.path.dirname(saved_data)):
os.makedirs(os.path.dirname(saved_data))
pickle.dump(file=open(saved_data, 'wb'), obj=data)
print('\nCached data for future use.')
else:
start = dt.now()
data = pickle.load(file=open(saved_data, 'rb'))
print('Loaded cached data.')
print(f'Took {dt.now() - start}')
del start
print('Training data = {:,}'.format(len(data.train.labels)))
print('Testing data = {:,}'.format(len(data.test.labels)))
print('Validation data = {:,}'.format(len(data.validation.labels)))
###Output
Training data = 55,000
Testing data = 10,000
Validation data = 5,000
###Markdown
Hyperparameters
###Code
# Input
image_size = 28
image_shape = (image_size, image_size)
image_size_flat = image_size * image_size
num_classes = 10
# Training
save_step = 500
num_iter = 0
val_batch = 50
test_batch = 70
train_batch = 200
learning_rate = 1e-3
###Output
_____no_output_____
###Markdown
Building the _Computational Graph_ Placeholder variables
###Code
X = tf.placeholder(tf.float32, [None, image_size_flat])
y = tf.placeholder(tf.float32, [None, num_classes])
y_cls = tf.argmax(y, axis=1)
###Output
_____no_output_____
###Markdown
Trainable variables
###Code
W = tf.Variable(tf.truncated_normal(shape=[image_size_flat, num_classes]))
b = tf.Variable(tf.zeros(shape=[num_classes]))
###Output
_____no_output_____
###Markdown
Forward Propagation
###Code
logits = tf.add(tf.matmul(X, W), b)
y_pred = tf.nn.softmax(logits)
y_pred_cls = tf.argmax(y_pred, axis=1)
###Output
_____no_output_____
###Markdown
Cost function and Gradient Descent
###Code
xentropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)
loss = tf.reduce_mean(xentropy)
# Optimizer
global_step = tf.Variable(0, trainable=False)
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_step = optimizer.minimize(loss, global_step=global_step)
###Output
_____no_output_____
###Markdown
Performance measure
###Code
correct = tf.equal(y_pred_cls, y_cls)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
###Output
_____no_output_____
###Markdown
Running the Computational Graph Define `tf.Session`
###Code
sess = tf.Session()
###Output
_____no_output_____
###Markdown
_initializing model/global variables_
###Code
def init():
global num_iter
__init = tf.global_variables_initializer()
sess.run(__init)
num_iter = 0
init() # Initialize to start with
tensorboard_dir = os.path.join(save_dir, 'tensorboard')
logdir = os.path.join(tensorboard_dir, 'log')
model_dir = os.path.join(save_dir, 'models')
model_path = os.path.join(model_dir, 'model.ckpt')
# Histograms
tf.summary.histogram('W', values=W, family='params')
tf.summary.histogram('b', values=b, family='params')
# Scalars
tf.summary.scalar('loss', tensor=loss, family='evaluation')
tf.summary.scalar('accuracy', tensor=accuracy, family='evaluation')
# merged...
merged = tf.summary.merge_all()
# Savers and writers
saver = tf.train.Saver()
writer = tf.summary.FileWriter(logdir=logdir, graph = sess.graph)
###Output
_____no_output_____
###Markdown
Maybe restore checkpoint
###Code
if tf.gfile.Exists(model_dir):
try:
print('INFO: Attempting to restore last checkpoint')
last_ckpt = tf.train.latest_checkpoint(model_dir)
saver.restore(sess=sess, save_path=last_ckpt)
print(f'INFO: Successfully restored last checkpoint {last_ckpt}')
except Exception as e:
sys.stderr.write(f'ERR: Could not restore checkpoint. {e}')
sys.stderr.flush()
else:
tf.gfile.MakeDirs(model_dir)
print(f'INFO: Created checkpoint dir @ {model_dir}')
###Output
INFO: Attempting to restore last checkpoint
INFO:tensorflow:Restoring parameters from ../saved/logistic-regression/models/model.ckpt-20903
INFO: Successfully restored last checkpoint ../saved/logistic-regression/models/model.ckpt-20903
###Markdown
Define Some Helper functions _Train the model_
###Code
def train(iterations=1000):
global num_iter
start = dt.now()
for _ in range(iterations):
# Increment the iteration counter.
num_iter += 1
# Get batches
X_batch, y_batch = data.train.next_batch(train_batch)
feed_dict = {X: X_batch, y: y_batch}
_, i_global = sess.run([train_step, global_step], feed_dict=feed_dict)
if num_iter % save_step == 0:
saver.save(sess=sess, save_path=model_path, global_step=global_step)
summary = sess.run(merged, feed_dict=feed_dict)
writer.add_summary(summary, global_step=i_global)
sys.stdout.write(f'\rIter: {num_iter:,}\tGlobal iter: {i_global:,}'
f'\tTime taken: {dt.now() - start}')
print()
print(f"{80*'='}")
print(f'\tCompleted {num_iter:,} iterations.')
print(f"{80*'='}")
###Output
_____no_output_____
###Markdown
Run accuracy
###Code
def score(test=True, validation=False, use_batch=True):
print(80*'=')
print('Accuracy after {:,} iterations'.format(num_iter))
feed_dict = None
if test:
if use_batch:
X_batch, y_batch = data.test.next_batch(test_batch)
feed_dict = {X: X_batch, y: y_batch}
else:
feed_dict = {X: data.test.images, y: data.test.labels}
acc = sess.run(accuracy, feed_dict=feed_dict)
print('Accuracy on test set: {:.02%}'.format(acc))
if validation:
if use_batch:
X_batch, y_batch = data.validation.next_batch(val_batch)
feed_dict = {X: X_batch, y: y_batch}
else:
feed_dict = {X: data.validation.images, y: data.validation.labels}
acc = sess.run(accuracy, feed_dict=feed_dict)
print('Accuracy on validation set: {:.02%}'.format(acc))
print(80*'=')
###Output
_____no_output_____
###Markdown
Training the model
###Code
train(iterations=10)
score(test=True, use_batch=False)
train(iterations=90)
score(test=True, use_batch=False)
train(iterations=900)
score(test=True, use_batch=True)
train(iterations=9000)
score(test=True, validation=True, use_batch=True)
###Output
Iter: 10,000 Global iter: 30,903 Time taken: 0:00:22.784164
================================================================================
Completed 10,000 iterations.
================================================================================
================================================================================
Accuracy after 10,000 iterations
Accuracy on test set: 90.00%
Accuracy on validation set: 92.00%
================================================================================
###Markdown
Clear cached files
###Code
import shutil
# Clear saved mnist `data`
shutil.rmtree(os.path.dirname(saved_data))
# sess.close()
###Output
_____no_output_____ |
estimation_unit_test.ipynb | ###Markdown
Test get_donor_tensor methodDesign of get_donor_tensor method:- brute force: iterate over all possible combinations of units- for each combination of units, calculate number of overlapping timestamps- find the set of units and timestamps with the largest min(num_donor_units, num_overlapping_timestamps)
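For reference, here is a minimal brute-force sketch of that idea, operating on the flattened (unit x time*intervention) matrix shown in the sanity-check printouts below. It is an illustration only -- the tested implementation lives in `estimation.utils.estimation_utils.get_donor_tensor`, and its signature and return values may differ.
```Python
from itertools import combinations
import numpy as np

def brute_force_donor_search(matrix, unit, column):
    """matrix: 2D array (units x measurements) with NaN marking missing entries."""
    other_units = [u for u in range(matrix.shape[0]) if u != unit]
    best, best_score = None, -1
    for k in range(1, len(other_units) + 1):
        for donors in combinations(other_units, k):
            donors = list(donors)
            # every donor unit must observe the target column
            if np.isnan(matrix[donors, column]).any():
                continue
            # columns (other than the target) observed by the target unit and by every donor
            observed = ~np.isnan(matrix[donors]).any(axis=0) & ~np.isnan(matrix[unit])
            observed[column] = False
            score = min(len(donors), observed.sum())
            if score > best_score:
                best, best_score = (tuple(donors), np.where(observed)[0]), score
    return best
```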
###Code
import numpy as np
from scipy.sparse import random

import estimation.utils.estimation_utils
from importlib import reload
estimation.utils.estimation_utils = reload(estimation.utils.estimation_utils)
# prepare example 1
num_units = 5
num_times = 3
num_interventions = 2
num_outcomes = 1
tensor = (random(num_units*num_times*num_interventions*num_outcomes, 1, density=0.9, random_state=0).A.\
reshape(num_units, num_times, num_interventions, num_outcomes)*100).astype('int8').astype('f')
tensor[tensor == 0] = np.nan
print("------Original tensor------")
print(tensor)
# print out for sanity check
unit = 1
time = 0
intervention = 0
outcome = 0
tensor[unit, time, intervention, outcome] = np.nan
print("------Sanity check printout------")
print("Unit: %d Time: %d Intervention: %d Outcome: %d" % (unit, time, intervention, outcome))
print("Missing value: (%d, %d)" % (unit, time*num_interventions + intervention))
print(tensor[:, :, :, outcome].reshape(num_units, -1))
# find donor tensor
estimation.utils.estimation_utils.get_donor_tensor(tensor[:, :, :, outcome], unit, time, intervention)
# prepare example 2
num_units = 4
num_times = 6
num_interventions = 2
num_outcomes = 1
tensor = (random(num_units*num_times*num_interventions*num_outcomes, 1, density=0.9, random_state=0).A.\
reshape(num_units, num_times, num_interventions, num_outcomes)*100).astype('int8').astype('f')
tensor[tensor == 0] = np.nan
print("------Original tensor------")
print(tensor)
# print out for sanity check
unit = 2
time = 4
intervention = 1
outcome = 0
tensor[unit, time, intervention, outcome] = np.nan
print("------Sanity check printout------")
print("Unit: %d Time: %d Intervention: %d Outcome: %d" % (unit, time, intervention, outcome))
print("Missing value: (%d, %d)" % (unit, time*num_interventions + intervention))
print(tensor[:, :, :, outcome].reshape(num_units, -1))
# find donor tensor
estimation.utils.estimation_utils.get_donor_tensor(tensor[:, :, :, outcome], unit, time, intervention)
# prepare example 3
num_units = 10
num_times = 3
num_interventions = 4
num_outcomes = 2
tensor = (random(num_units*num_times*num_interventions*num_outcomes, 1, density=0.8, random_state=0).A.\
reshape(num_units, num_times, num_interventions, num_outcomes)*100).astype('int8').astype('f')
tensor[tensor == 0] = np.nan
print("------Original tensor------")
print(tensor)
# print out for sanity check
unit = 6
time = 1
intervention = 0
outcome = 0
tensor[unit, time, intervention, outcome] = np.nan
print("------Sanity check printout------")
print("Unit: %d Time: %d Intervention: %d Outcome: %d" % (unit, time, intervention, outcome))
print("Missing value: (%d, %d)" % (unit, time*num_interventions + intervention))
print(tensor[:, :, :, outcome].reshape(num_units, -1))
# find donor tensor
estimation.utils.estimation_utils.get_donor_tensor(tensor[:, :, :, outcome], unit, time, intervention)
###Output
------Original tensor------
[[[[86. 18.]
[29. 51.]
[25. 93.]
[ 8. 45.]]
[[ 3. nan]
[98. 26.]
[73. 11.]
[58. 6.]]
[[68. 37.]
[17. 94.]
[87. nan]
[61. 13.]]]
[[[71. nan]
[37. 6.]
[ 7. nan]
[39. 64.]]
[[nan 75.]
[79. 69.]
[nan 72.]
[58. nan]]
[[86. 28.]
[69. 34.]
[20. 69.]
[96. nan]]]
[[[33. 77.]
[48. 58.]
[20. 37.]
[90. 39.]]
[[14. 28.]
[nan 84.]
[18. 51.]
[63. 99.]]
[[ 5. 37.]
[42. nan]
[67. 90.]
[nan 45.]]]
[[[nan 73.]
[86. 62.]
[52. nan]
[72. nan]]
[[55. nan]
[nan 19.]
[24. 8.]
[10. nan]]
[[nan 57.]
[27. 5.]
[43. 18.]
[97. 96.]]]
[[[ 2. 58.]
[13. nan]
[ 1. 16.]
[68. nan]]
[[67. 77.]
[ 2. 21.]
[97. 69.]
[53. 92.]]
[[63. 9.]
[30. nan]
[58. nan]
[49. 45.]]]
[[[78. 42.]
[23. 78.]
[24. 94.]
[ 1. nan]]
[[ 4. 94.]
[48. 77.]
[77. 27.]
[22. 92.]]
[[35. 47.]
[94. 26.]
[nan 23.]
[nan 5.]]]
[[[73. 22.]
[61. nan]
[nan 18.]
[82. 95.]]
[[72. 21.]
[31. 95.]
[ 1. 1.]
[38. 33.]]
[[51. 74.]
[27. 57.]
[19. nan]
[31. nan]]]
[[[83. 66.]
[25. 96.]
[nan 36.]
[nan nan]]
[[46. nan]
[76. 62.]
[86. 29.]
[13. 51.]]
[[34. nan]
[45. 14.]
[23. 49.]
[ 5. 8.]]]
[[[nan nan]
[17. nan]
[ 3. nan]
[27. 53.]]
[[nan 18.]
[ 5. 72.]
[30. 14.]
[87. 25.]]
[[nan 79.]
[ 2. nan]
[nan 58.]
[56. 87.]]]
[[[nan 97.]
[21. 37.]
[35. 70.]
[nan 1.]]
[[55. nan]
[13. 85.]
[nan 22.]
[79. 32.]]
[[nan 66.]
[79. 94.]
[nan 40.]
[ 7. 89.]]]]
------Sanity check printout------
Unit: 6 Time: 1 Intervention: 0 Outcome: 0
Missing value: (6, 4)
[[86. 29. 25. 8. 3. 98. 73. 58. 68. 17. 87. 61.]
[71. 37. 7. 39. nan 79. nan 58. 86. 69. 20. 96.]
[33. 48. 20. 90. 14. nan 18. 63. 5. 42. 67. nan]
[nan 86. 52. 72. 55. nan 24. 10. nan 27. 43. 97.]
[ 2. 13. 1. 68. 67. 2. 97. 53. 63. 30. 58. 49.]
[78. 23. 24. 1. 4. 48. 77. 22. 35. 94. nan nan]
[73. 61. nan 82. nan 31. 1. 38. 51. 27. 19. 31.]
[83. 25. nan nan 46. 76. 86. 13. 34. 45. 23. 5.]
[nan 17. 3. 27. nan 5. 30. 87. nan 2. nan 56.]
[nan 21. 35. nan 55. 13. nan 79. nan 79. nan 7.]]
(0, 2, 3, 4, 5) [1 3 6 7 9]
X train shape: (5, 5)
y train shape: (5,)
X test shape: (5, 1)
Donor units: (0, 2, 3, 4, 5)
Overlapping measurements of donor units: [1 3 6 7 9]
|
doc/tutorials/vta.ipynb | ###Markdown
VTA test
###Code
import logging
from mxnet.gluon.model_zoo.vision import get_model
from tvm import relay
from tvm.driver import tvmc
from tvm.driver.tvmc import TVMCModel, TVMCPackage, TVMCResult
pretrained = True
shape_dict = {'data': (1, 3, 224, 224)}
model_name = 'mobilenet1.0'
out_dir = 'outputs'
logging.basicConfig(filename=f'{out_dir}/{model_name}.log')
logger = logging.getLogger(name='logger')
logger.setLevel(logging.DEBUG)
model = get_model(model_name, pretrained=pretrained)
mod, params = relay.frontend.from_mxnet(model, shape_dict)
model = TVMCModel(mod, params)
tvmc.compile(model, target="llvm", package_path="whatever")
new_package = TVMCPackage(package_path="whatever")
result = tvmc.run(new_package, device='cpu') #Step 3: Run
logger.info(model.mod['main'].astext())  # log the mod
print(result.format_times())
result.save(f'{out_dir}/{model_name}_resluts')
model.save(f"{out_dir}/{model_name}.params")
tuning_records = tvmc.tune(model, target="llvm")
tvmc_package = tvmc.compile(model, target="llvm", tuning_records=tuning_records)
result = tvmc.run(tvmc_package, device="cpu")
print(result)
###Output
Execution time summary:
mean (ms) median (ms) max (ms) min (ms) std (ms)
22.4732 20.9165 54.9719 9.5562 13.0234
Output Names:
['output_0']
|
.ipynb_checkpoints/Data_exploration_student-checkpoint.ipynb | ###Markdown
Data Carpentry Reproducible Research Workshop - Data Exploration Learning objectivesUse the Python Pandas library in the Jupyter Notebook to:* Assess the structure and cleanliness of a dataset, including the size and shape of the data, and the number of variables of each type.* Describe findings, translate results from code to text using Markdown comments in the Jupyter Notebook, summarizing your thought process in a narrative.* Modify raw data to prepare a clean data set -- including copying data, removing or replacing missing and incoherent data, dropping columns, removing duplicates.* Assess whether data is “Tidy” and identify appropriate steps and write and execute code to arrange it into a tidy format - including merging, reshaping, subsetting, grouping, sorting, and making appropriate new columns.* Identify several relevant summary measures* Illustrate data in plots and determine the need for repeated or further analysis.* Justify these decisions in Markdown in the Jupyter Notebook. Setting up the notebook About Libraries in PythonA library in Python contains a set of tools (called functions) that perform tasks on our data. Importing a library is like getting a piece of lab equipment out of a storage locker and setting it up on the bench for use in a project. Once a library is imported, it can be used or called to perform many tasks.Python doesn’t load all of the libraries available to it by default. We have to add an import statement to our code in order to use library functions. To import a library, we use the syntax `import libraryName`. If we want to give the library a nickname to shorten the command, we can add `as nickNameHere`. An example of importing the Pandas library using the common nickname `pd` is below.**`import`** `pandas` **`as`** `pd` matplotlib and other plotting librariesmatplotlib is the most widely used Python library for plotting. We can run it in the notebook using the magic command `%matplotlib inline`. If you do not use `%matplotlib inline`, your plots will be generated outside of the notebook and may be difficult to find. See [the IPython docs](http://ipython.readthedocs.io/en/stable/interactive/plotting.html) for other IPython magics commands.In this lesson, we will only use matplotlib and Seaborn, another package that works in tandem with matplotlib to make nice graphics. There is a whole range of graphics packages in Python, ranging from basic visualizations to fancy, interactive graphics like [Bokeh](http://bokeh.pydata.org/en/latest/) and [Plotly](https://plot.ly/python/). We encourage you to explore on your own! Chances are, if you can imagine a plot you'd like to make, somebody else has written a package to do it. MarkdownText can be added to Jupyter Notebooks using Markdown cells. Markdown is a popular markup language that is a superset of HTML. To learn more, see [Jupyter's Markdown guide](http://jupyter-notebook.readthedocs.io/en/latest/examples/Notebook/Working%20With%20Markdown%20Cells.html) or revisit the [Reproducible Research lesson on Markdown](https://github.com/Reproducible-Science-Curriculum/introduction-RR-Jupyter/blob/master/notebooks/Navigating%20the%20notebook%20-%20instructor%20script.ipynb). The Pandas LibraryOne of the best options for working with tabular data in Python is the Python Data Analysis Library (a.k.a. Pandas). The Pandas library is built on top of the NumPy package (another Python library). 
Pandas provides data structures, produces high quality plots with matplotlib, and integrates nicely with other libraries that use NumPy arrays. Those familiar with spreadsheets should become comfortable with Pandas data structures.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Each time we call a function that’s in a library, we use the syntax `LibraryName.FunctionName`. Adding the library name with a `.` before the function name tells Python where to find the function. In the example above, we have imported Pandas as `pd`. This means we don’t have to type out `pandas` each time we call a Pandas function.See this free [Pandas cheat sheet](https://www.datacamp.com/community/blog/python-pandas-cheat-sheet) from DataCamp for the most common Pandas commands. Getting data into the notebookWe will begin by locating and reading our data which are in a table format as a tab-delimited file. We will use Pandas’ `read_table` function to pull the file directly into a `DataFrame`. What’s a `DataFrame`?A `DataFrame` is a 2-dimensional data structure that can store in columns data of different types (including characters, integers, floating point values, factors and more). It is similar to a spreadsheet or a SQL table or data.frame in R. A `DataFrame` always has an index (0-based). An index refers to the position of an element in the data structure.Note that we use `pd.read_table`, not just `read_table` or `pandas.read_table`, because we imported Pandas as `pd`.In our original file, the columns in the data set are separated by a TAB. We need to tell the `read_table` function in Pandas that that is the delimiter with `sep = ‘\t’`.
###Code
url = "https://raw.githubusercontent.com/Reproducible-Science-Curriculum/data-exploration-RR-Jupyter/master/gapminderDataFiveYear_superDirty.txt"
#You can also read your table in from a file directory
gapminder = pd.read_table(url, sep = "\t")
###Output
_____no_output_____
###Markdown
The first thing to do when loading data into the notebook is to actually "look" at it. How many rows and columns are there? What types of variables are in it and what values can they take?There are usually too many rows to print to the screen. By default, when you type the name of the `DataFrame` and run a cell, Pandas knows to not print the whole thing. Instead, you will see the first and last few rows with dots in between. A neater way to see a preview of the dataset is the `head()` method. Calling `dataset.head()` will display the first 5 rows of the data. You can specify how many rows you want to see as an argument, like `dataset.head(10)`. The `tail()` method does the same with the last rows of the `DataFrame`.
###Code
#head
#tail
#gapminder
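# For example (one possible way, using the gapminder DataFrame read in above):
gapminder.head(10)   # first 10 rows; gapminder.tail() shows the last 5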
###Output
_____no_output_____
###Markdown
Sometimes the table has too many columns to print on screen. Calling `df.columns.values` will print all the column names in an array.
###Code
#columns
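# For example:
gapminder.columns.values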
###Output
_____no_output_____
###Markdown
Assess the structure and cleanliness How many rows and columns are in the data?We often want to know how many rows and columns are in the data -- what is the "shape" of the `DataFrame`. Shape is an attribute of the `DataFrame`. Pandas has a convenient way for getting that information by using `DataFrame.shape` (using `DataFrame` here as a generic name for your `DataFrame`). This returns a tuple (immutable values separated by commas) representing the dimensions of the `DataFrame` (rows, columns).To get the shape of the gapminder `DataFrame`:
###Code
#shape
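# For example:
gapminder.shape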
###Output
_____no_output_____
###Markdown
We can learn even more about our `DataFrame`. The `info()` method gives a few useful pieces of information, including the shape of the `DataFrame`, the variable type of each column, and the amount of memory stored.The output from `info()` displayed below shows that the fields ‘year’ and ‘pop’ (population) are represented as ‘float’ (that is: numbers with a decimal point). This is not appropriate: year and population should be integers or whole numbers. We can change the data-type with the function `astype()`. The code for `astype()` is shown below; however, we will change the data types later in this lesson.
###Code
#info
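# For example:
gapminder.info()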
###Output
_____no_output_____
###Markdown
The `describe()` method will take the numeric columns and provide a summary of their values. This is useful for getting a sense of the ranges of values and seeing if there are any unusual or suspicious numbers.
###Code
#describe
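# For example:
gapminder.describe()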
###Output
_____no_output_____
###Markdown
The `DataFrame` function `describe()` just blindly looks at all numeric variables. We wouldn't actually want to take the mean year. Additionally, we obtain ‘NaN’ values for our quartiles. This suggests we might have missing data which we can (and will) deal with shortly when we begin to clean our data.For now, let's pull out only the columns that are truly continuous numbers (i.e. ignore the description for ‘year’). This is a preview of selecting columns from the data; we'll talk more about how to do it later in the lesson.
###Code
#describe continuous
###Output
_____no_output_____
###Markdown
We can also extract one specific variable metric at a time if we wish:
###Code
#min, max, mean, std, count
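# For example, one summary value at a time for a single column:
gapminder['pop'].mean()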
###Output
_____no_output_____
###Markdown
Values in columns Next, let's say you want to see all the unique values for the `region` column. One way to do this is:
###Code
#unique
###Output
_____no_output_____
###Markdown
This output is useful, but it looks like there may be some formatting issues causing the same region to be counted more than once. Let's take it a step further and find out to be sure. As mentioned previously, the command `value_counts()` gives you a first global idea of your categorical data such as strings. In this case that is the column `region`. Run the code below.
###Code
# How many unique regions are in the data?
# use len
# How many times does each unique region occur?
# region counts
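# For example:
print(len(gapminder['region'].unique()))
gapminder['region'].value_counts()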
###Output
_____no_output_____
###Markdown
The table reveals some problems in our data set. The data set covers 12 years, so each ‘region’ should appear 12 times, but some regions appear more than 12 times and others fewer than 12 times. We also see inconsistencies in the region names (string variables are very susceptible to those), for instance:Asia_china vs. Asia_ChinaAnother type of problem we see is the various names of 'Congo'. In order to analyze this dataset appropriately we need to take care of these issues. We will fix them in the next section on data cleaning. ExercisesAre there other columns in our `DataFrame` that have categorical variables? If so, run some code to list the categories below. Save your list to a variable and count the number of unique categories using `len`. What is the outcome when you run `value_counts()`? Data cleaning Referencing objects vs copying objectsBefore we get started with cleaning our data, let's practice good data hygiene by first creating a copy of our original data set. Often, you want to leave the original data untouched. To protect your original, you can make a copy of your data (and save it to a new `DataFrame` variable) before operating on the data or a subset of the data. This will ensure that a new version of the original data is created and your original is preserved. Why this is importantSuppose you take a subset of your `DataFrame` and store it in a new variable, like `gapminder_early = gapminder[gapminder['year'] < 1970]`. Doing this does not actually create a new object. Instead, you have just given a name to that subset of the original data: `gapminder_early`. This subset still points to the original rows of `gapminder`. Any changes you make to the new `DataFrame` `gapminder_early` will appear in the corresponding rows of your original `gapminder` `DataFrame` too.
###Code
gapminder = pd.read_table(url, sep = "\t")
gapminder_copy = gapminder.copy()
gapminder_copy.head()
###Output
_____no_output_____
###Markdown
Handling Missing DataMissing data (often denoted as 'NaN'- not a number- in Pandas, or as 'null') is an important issue to handle because Pandas cannot compute on rows or columns with missing data. 'NaN' or 'null' does not mean the value at that position is zero, it means that there is no information at that position. Ignoring missing data doesn't make it go away. There are different ways of dealing with it which include:* analyzing only the available data (i.e. ignore the missing data)* input the missing data with replacement values and treating these as though they were observed* input the missing data and account for the fact that these were inputed with uncertainty (ex: create a new boolean variable so you know that these values were not actually observed)* use statistical models to allow for missing data--make assumptions about their relationships with the available data as necessaryFor our purposes with the dirty gapminder data set, we know our missing data is excess (and unnecessary) and we are going to choose to analyze only the available data. To do this, we will simply remove rows with missing values.This is incredibly easy to do because Pandas allows you to either remove all instances with null data or replace them with a particular value.`df = df.dropna()` drops rows with any column having NA/null data. `df = df.fillna(value)` replaces all NA/null data with the argument `value`.For more fine-grained control of which rows (or columns) to drop, you can use `how` or `thresh`. These are more advanced topics and are not covered in this lesson; you are encouraged to explore them on your own.
###Code
# drop na
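# For example, keep only the rows without missing values:
gapminder_copy = gapminder_copy.dropna()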
###Output
_____no_output_____
###Markdown
Changing Data TypesWe can change the data-type with the function `astype()`. The code for `astype()` is shown below.
###Code
#astype()
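# For example, store year and pop as integers (assumes the missing values were dropped above):
gapminder_copy['year'] = gapminder_copy['year'].astype(int)
gapminder_copy['pop'] = gapminder_copy['pop'].astype(int)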
###Output
_____no_output_____
###Markdown
Handling (Unwanted) Repetitive DataYou can identify which observations are duplicates.The call `df.duplicated()` will return boolean values for each row in the `DataFrame` telling you whether or not a row is repeated.In cases where you don’t want repeated values (we wouldn’t--we only want each country to be represented once for every relevant year), you can easily drop such duplicate rows with the call `df.drop_duplicates()`.
###Code
# duplicated() #shows we have a repetition within the first __ rows
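# For example, count the rows that are exact duplicates:
gapminder_copy.duplicated().sum()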
###Output
_____no_output_____
###Markdown
Let's look at the first five rows of our data set again (remember we removed the NaNs):
###Code
# How do we look at the first 5 rows?
###Output
_____no_output_____
###Markdown
Our statement from above is correct, rows 1 & 2 are duplicated. Let's fix that:
###Code
# df.drop_duplicates()
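# For example:
gapminder_copy = gapminder_copy.drop_duplicates()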
###Output
_____no_output_____
###Markdown
Reindexing with `reset_index()`Now we have 1704 rows, but our indexes are off because we removed duplicate rows. We can reset our indices easily with the call `reset_index(drop=True)`. Remember, Python is 0-indexed so our indices will be valued 0-1703.The concept of reindexing is important. When we removed some of the messier, unwanted data, we had "gaps" in our index values. By correcting this, we can improve our search functionality and our ability to perform iterative functions on our cleaned data set.
###Code
# reset_index()
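# For example:
gapminder_copy = gapminder_copy.reset_index(drop=True)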
###Output
_____no_output_____
###Markdown
Handling Inconsistent DataThe `region` column is a bit too messy for what we'd like to do.The `value_counts()` operation above revealed some issues that we can solve with several different techniques. String manipulationsCommon problems with string variables are leading and trailing white space and upper case vs. lower case in the same data set.The following three commands remove all such lingering spaces (left and right) and put everything in lowercase. If you prefer, the three commands can be written in one single line (which is a concept called chaining).
###Code
gapminder_copy['region'] = gapminder_copy['region'].str.lstrip() # Strip white space on left
gapminder_copy['region'] = gapminder_copy['region'].str.rstrip() # Strip white space on right
gapminder_copy['region'] = gapminder_copy['region'].str.lower() # Convert to lowercase
gapminder_copy['region'].value_counts() # How many times does each unique region occur?
# We could have done this in one line!
# gapminder_copy['region'] = gapminder_copy['region'].str.lstrip().str.rstrip().str.lower()
###Output
_____no_output_____
###Markdown
regex + `replace()`A regular expression, a.k.a. regex, is a sequence of characters that define a search pattern. In a regular expression, the symbol “*” matches the preceding character 0 or more times, whereas “+” matches the preceding character 1 or more times. “.” matches any single character. Writing “x|y” means to match either ‘x’ or ‘y’.For more regex shortcuts (cheatsheet): https://www.shortcutfoo.com/app/dojos/regex/cheatsheetTo play "regex golf," check out this [tutorial by Peter Norvig](https://www.oreilly.com/learning/regex-golf-with-peter-norvig) (you may need an O'Reilly or social media account to play).Pandas allows you to use `regex` in its `replace()` function -- when a regex term is found in an element, the element is then replaced with the specified replacement term. In order for it to appropriately correct elements, both regex and inplace variables need to be set to `True` (as their defaults are False). This ensures that the initial input string is read as a regular expression and that the elements will be modified in place.For more documentation on the replace method: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.replace.htmlHere's an incorrect regex example: we create a temporary `DataFrame` in which a regex pulls all values that contain the term “congo”. Unfortunately, this creates 24 instances of the Democratic Republic of the Congo -- this is an error in our cleaning! We can revert back to the non-temporary `DataFrame` and correctly modify our regex to isolate only the Democratic Republic instances (as opposed to including the Republic as well).
###Code
# This gives a problem -- 24 values of the congo!
temp = gapminder_copy['region'].replace(".*congo.*", "africa_dem rep congo", regex=True)
temp.value_counts()
# What happened? This shows all the rows that have congo in the name.
gapminder_copy[gapminder_copy["region"].str.contains('congo')]
###Output
_____no_output_____
###Markdown
Using regex to correctly consolidate the Congo regions...As noted above, regular expressions (often simply "regex") provide a powerful tool for fixing errors that arise in strings. In order to correctly label the two different countries that include the word "congo", we need to design and use (via `pd.df.replace()`) a regex that correctly differentiates between the two countries.Recall that the "." is the wildcard (matching any single character); combining this with "*" allows us to match any number of single characters an unspecified number of times. By combining these characters with substrings corresponding to variations in the naming of the Democratic Republic of the Congo, we can correctly normalize the name.If you feel that the use of regex is not particularly straightforward, you are correct -- appropriately using these tools takes a great deal of time to master.When designing regex for these sorts of tasks, you might find the following prototyper helpful: https://regex101.com/
###Code
gapminder_copy['region'].replace(".*congo, dem.*", "africa_dem rep congo", regex=True, inplace=True)
gapminder_copy['region'].replace(".*_democratic republic of the congo", "africa_dem rep congo", regex=True, inplace=True)
gapminder_copy['region'].value_counts() # Now it's fixed.
###Output
_____no_output_____
###Markdown
Exercise (regex):Now that we've taken a close look at how to properly design and use regex to clean string entries in our data, let's try to normalize the naming of a few other countries. Using the pandas code we constructed above as a template, construct similar code (using `pd.df.replace()`) to set the naming of the Ivory Coast and Canada to "africa_cote d'ivoire" and "americas_canada", respectively.
###Code
# Try this on your own
###Output
_____no_output_____
###Markdown
Tidy dataHaving what is called a "_Tidy_ data set" can make cleaning, analyzing, and visualizing your data much easier. You should aim for having Tidy data when cleaning and preparing your data set for analysis. Two of the important aspects of Tidy data are:* every variable has its own column* every observation has its own row(There are other aspects of Tidy data, here is a good blog post about Tidy data in Python: http://www.jeannicholashould.com/tidy-data-in-python.html)Currently the gapminder dataset has a single column for continent and country (the ‘region’ column). We can split that column into two, by using the underscore that separates continent from country.We can create a new column in the `DataFrame` by naming it before the = sign:`gapminder['country'] = `The following commands use the function `split()` to split the string at the underscore (the first argument), which results in a list of two elements: before and after the \_. The second argument tells `split()` that the split should take place only at the first occurrence of the underscore.
###Code
gapminder_copy['country']=gapminder_copy['region'].str.split('_', 1).str[1]
gapminder_copy['continent']=gapminder_copy['region'].str.split('_', 1).str[0]
gapminder_copy.head()
###Output
_____no_output_____
###Markdown
Removing and renaming columnsWe have now added the columns `country` and `continent`, but we still have the old `region` column as well. In order to remove that column we use the `drop()` command. The first argument of the `drop()` command is the name of the element to be dropped. The second argument is the *axis* number: *0 for row, 1 for column*.
###Code
# drop()
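# For example, drop the now-redundant region column (axis=1 refers to columns):
gapminder_copy = gapminder_copy.drop('region', axis=1)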
###Output
_____no_output_____
###Markdown
Finally, it is a good idea to look critically at your column names. Use lowercase for all column names to avoid confusing `gdppercap` with `gdpPercap` or `GDPpercap`. Avoid spaces in column names to simplify manipulating your data - look out for lingering white space at the beginning or end of your column names. The following code turns all column names to lowercase.
###Code
# str.lower()
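# For example, strip stray whitespace and lowercase every column name:
gapminder_copy.columns = gapminder_copy.columns.str.strip().str.lower()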
###Output
_____no_output_____
###Markdown
We also want to remove the space from the `life exp` column name. We can do that with Pandas `rename` method. It takes a dictionary as its argument, with the old column names as keys and new column names as values.If you're unfamiliar with dictionaries, they are a very useful data structure in Python. You can read more about them [here](https://docs.python.org/3/tutorial/datastructures.htmldictionaries).
###Code
# rename columns
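# For example (assuming the raw column is named 'life exp'):
gapminder_copy = gapminder_copy.rename(columns={'life exp': 'lifeexp'})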
###Output
_____no_output_____
###Markdown
Merging dataOften we have more than one `DataFrame` that contains parts of our data set and we want to put them together. This is known as merging the data.Our advisor now wants us to add a new country called The People's Republic of Berkeley to the gapminder data set that we have cleaned up. Our goal is to get this new data into the same `DataFrame` in the same format as the gapminder data and, in this case, we want to concatenate (add) it onto the end of the gapminder data.Concatentating is a simple form of merging, there are many useful (and more complicated) ways to merge data. If you are interested in more information, the [Pandas Documentation](http://pandas.pydata.org/pandas-docs/stable/merging.html) is useful.
###Code
PRB = pd.read_table('https://raw.githubusercontent.com/Reproducible-Science-Curriculum/data-exploration-RR-Jupyter/master/PRB_data.txt', sep = "\t")
PRB.head()
## bring in PRB data (no major problems) and make it conform to the gapminder at this point
# clean the data to look like the current gapminder
# double check that the gapminder is the same
# combine the data sets with concat
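# For example (assumes PRB has been cleaned to the same columns as gapminder_copy):
gapminder_comb = pd.concat([gapminder_copy, PRB])
gapminder_comb.tail()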
###Output
_____no_output_____
###Markdown
Now that the `DataFrames` have been concatenated, notice that the index is funky. It repeats the numbers 0 - 11 in the `peoples republic of berkeley data`. **Exercise:** fix the index.
###Code
# our code for fixing index
###Output
_____no_output_____
###Markdown
Subsetting and sortingThere are many ways in which you can manipulate a Pandas `DataFrame` - here we will discuss two approaches: subsetting and sorting. SubsettingWe can subset (or slice) by giving the numbers of the rows you want to see between square brackets.*REMINDER:* Python uses 0-based indexing. This means that the first element in an object is located at position 0. this is different from other tools like R and Matlab that index elements within objects starting at 1.
###Code
#Select the first 15 rows
# Use a different way to select the first 15 rows
#Select the last 10 rows
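# For example (one possible way, using the combined DataFrame from above):
gapminder_comb[:15]        # first 15 rows
gapminder_comb.head(15)    # another way to get the first 15 rows
gapminder_comb[-10:]       # last 10 rows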
###Output
_____no_output_____
###Markdown
Exercise*What does the negative number (in the third cell) mean?* Answer: *What happens when you leave the space before or after the colon empty?* Answer: Subsetting can also be done by selecting for a particular column or for a particular value in a column; for instance select the rows that have ‘africa’ in the column ‘continent. Note the double equal sign: single equal signs are used in Python to assign something to a variable. The double equal sign is a comparison: the variable to the left has to be exactly equal to the string to the right.**There other ways of subsetting that are worth knowing about. Do an independent reading of using .loc/.iloc with `DataFrames`**
###Code
#Select for a particular column
#this syntax, calling the column as an attribute, gives you the same output
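# For example, rows where continent is 'africa', then two ways to pull a single column:
gapminder_comb[gapminder_comb['continent'] == 'africa']
gapminder_comb['continent']
gapminder_comb.continent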
###Output
_____no_output_____
###Markdown
We can also create a new object that contains the data within the `continent` column SortingSorting may help to further organize and inspect your data. The command `sort_values()` takes a number of arguments; the most important ones are `by` and `ascending.` The following command will sort your `DataFrame` by year, beginning with the most recent.
###Code
#sort_values()
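# For example, most recent years first:
gapminder_comb.sort_values(by='year', ascending=False)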
###Output
_____no_output_____
###Markdown
ExerciseOrganize your data set by country, from ‘Afghanistan’ to ‘Zimbabwe’. Summarize and plotSummaries (but can’t *say* statistics…)* Sort data* Basic summariesPlots * of subsets * single variables* pairs of variables* Matplotlib syntax (w/ Seaborn for defaults (prettier, package also good for more analysis later...))Exploring is often iterative - summarize, plot, summarize, plot, etc. - sometimes it branches…
###Code
gapminder_comb.info()
###Output
_____no_output_____
###Markdown
We also saw above that the `describe()` method will take the numeric columns and give a summary of their values. We have to remember that we changed the column names and this time it shouldn't have NaNs.
###Code
gapminder_comb[['pop', 'lifeexp', 'gdppercap']].describe()
###Output
_____no_output_____
###Markdown
More summariesWhat if we just want a single value, like the mean of the population? We can call mean on a single column this way: What if we want to know the mean population by _continent_? Then we need to use the Pandas `groupby()` method and tell it which column we want to group by. What if we want to know the median population by continent? Or the number of entries (rows) per continent? Sometimes we don't want a whole `DataFrame`. Here is another way to do this that produces a `Series` that tells us number of entries (rows) as opposed to a `DataFrame`. We can also look at the mean GDP per capita of each country: What if we wanted a new `DataFrame` that just contained these summaries? This could be a table in a report, for example.
###Code
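# Quick examples of the summaries discussed above, before building the summary table:
gapminder_comb['pop'].mean()                               # mean population overall
gapminder_comb.groupby(by='continent')['pop'].mean()       # mean population by continent
gapminder_comb.groupby(by='continent')['pop'].median()     # median population by continent
gapminder_comb.groupby(by='continent')['country'].count()  # rows per continent, as a Series
gapminder_comb.groupby(by='country')['gdppercap'].mean()   # mean GDP per capita by country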
continent_mean_pop = gapminder_comb[['continent', 'pop']].groupby(by='continent').mean()
continent_mean_pop = continent_mean_pop.rename(columns = {'pop':'meanpop'})
continent_row_ct = gapminder_comb[['continent', 'country']].groupby(by='continent').count()
continent_row_ct = continent_row_ct.rename(columns = {'country':'nrows'})
continent_median_pop = gapminder_comb[['continent', 'pop']].groupby(by='continent').median()
continent_median_pop = continent_median_pop.rename(columns = {'pop':'medianpop'})
gapminder_summs = pd.concat([continent_row_ct,continent_mean_pop,continent_median_pop], axis=1)
gapminder_summs = gapminder_summs.rename(columns = {'y':'year'})
gapminder_summs
###Output
_____no_output_____
###Markdown
Visualization with `matplotlib`Recall that [matplotlib](http://matplotlib.org) is Python's main visualization library. It provides a range of tools for constructing plots and numerous high-level plotting libraries (e.g., [Seaborn](http://seaborn.pydata.org)) are built with matplotlib in mind. When we were in the early stages of setting up our analysis, we loaded these libraries like so:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
sns.set()
###Output
_____no_output_____
###Markdown
*Consider the above three commands to be essential practice for plotting (as essential as **`import`** `pandas` **`as`** `pd` is for data munging).*Now, let's turn to data visualization. In order to get a feel for the properties of the data set we are working with, data visualization is key. While we will focus only on the essentials of how to properly construct plots in univariate and bivariate settings here, it's worth noting that both matplotlib and Seaborn support a diversity of plots: [matplotlib gallery](http://matplotlib.org/gallery.html), [Seaborn gallery](http://seaborn.pydata.org/examples/). --- Single variables* __Histograms__ - provide a quick way of visualizing the distribution of numerical data, or the frequencies of observations for categorical variables.
###Code
#import numpy as np
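# For example, a histogram of life expectancy (assumes the cleaned 'lifeexp' column created above):
gapminder_comb['lifeexp'].hist(bins=30)
plt.xlabel('life expectancy')
plt.show()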
###Output
_____no_output_____
###Markdown
* __Boxplots__ - provide a way of comparing the summary measures (e.g., max, min, quartiles) across variables in a data set. Boxplots can be particularly useful with larger data sets.--- Pairs of variables* __Scatterplots__ - visualization of relationships across two variables...
###Code
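# For example, a boxplot comparing life expectancy across continents (one possible view):
gapminder_copy.boxplot(column='lifeexp', by='continent')
plt.show()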
# scatter plot goes here
plt.scatter(gapminder_copy['gdppercap'], gapminder_copy['lifeexp'])
plt.xlabel('gdppercap')
plt.ylabel('lifeexp')
# let's try plotting the log of x
plt.scatter(gapminder_copy['gdppercap'], gapminder_copy['lifeexp'])
plt.xscale('log')
plt.xlabel('gdppercap')
plt.ylabel('lifeexp')
# Try creating a plot on your own
###Output
_____no_output_____ |
ml_repo/Feature Selection/Feature Selection.ipynb | ###Markdown
Univariate Selection
###Code
import pandas as pd
import numpy as np
from sklearn.feature_selection import chi2
from sklearn.feature_selection import SelectKBest
data = pd.read_csv("mobile-price-classification/train.csv")
data.head()
X = data.iloc[:, 0:20]
y = data.iloc[:, -1]
# apply SelectKBest top 10 features
best_features = SelectKBest(score_func=chi2, k =10 )
fit = best_features.fit(X, y)
fit.scores_
dfscores = pd.DataFrame(fit.scores_)
dfcolumns = pd.DataFrame(X.columns)
featureScores = pd.concat([dfcolumns, dfscores], axis= 1)
featureScores.columns = ['Features', 'Score']
ten_features = featureScores.sort_values(by='Score', ascending= False).head(10)['Features'].values
###Output
_____no_output_____
###Markdown
Feature Importance
###Code
from sklearn.ensemble import RandomForestClassifier
import matplotlib.pyplot as plt
model = RandomForestClassifier()
model.fit(X, y)
model.feature_importances_
featureImportance = pd.DataFrame(model.feature_importances_, index=X.columns, columns=['Importance'])
featureImportance = featureImportance.sort_values(by="Importance", ascending= False)
featureImportance
plt.figure(figsize=(20,10))
plt.bar(featureImportance.index, featureImportance['Importance'])
plt.show()
###Output
_____no_output_____
###Markdown
Correlation Matrix
###Code
import seaborn as sns
data_corr = data.corr()
data_corr
plt.figure(figsize=(20,20))
sns.heatmap(data_corr, annot=True)
plt.show()
###Output
_____no_output_____
###Markdown
Check the model Performance
###Code
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
rfc = RandomForestClassifier(max_depth=10)
scores = cross_val_score(rfc, X, y, cv = 10)
scores.mean()
X[ten_features]
scores_new = cross_val_score(rfc, X[ten_features], y, cv = 10)
scores_new.mean()
###Output
_____no_output_____ |
scripts/advanced_report/20Replica.ipynb | ###Markdown
Replica of tutorial 20, built using Python
###Code
import numpy as np
import pygsti
from pygsti.construction import std1Q_XYI
#The usual GST setup: we're going to run GST on the standard XYI 1-qubit gateset
gs_target = std1Q_XYI.gs_target
fiducials = std1Q_XYI.fiducials
germs = std1Q_XYI.germs
maxLengths = [1,2,4,8]
listOfExperiments = pygsti.construction.make_lsgst_experiment_list(
gs_target.gates.keys(), fiducials, fiducials, germs, maxLengths)
#Create some datasets for analysis
gs_datagen1 = gs_target.depolarize(gate_noise=0.1, spam_noise=0.001)
gs_datagen2 = gs_target.depolarize(gate_noise=0.05, spam_noise=0.01).rotate(rotate=0.01)
ds1 = pygsti.construction.generate_fake_data(gs_datagen1, listOfExperiments, nSamples=1000,
sampleError="binomial", seed=1234)
ds2 = pygsti.construction.generate_fake_data(gs_datagen2, listOfExperiments, nSamples=1000,
sampleError="binomial", seed=1234)
ds3 = ds1.copy_nonstatic(); ds3.add_counts_from_dataset(ds2); ds3.done_adding_data()
#Run GST on all three datasets
gs_target.set_all_parameterizations("TP")
results1 = pygsti.do_long_sequence_gst(ds1, gs_target, fiducials, fiducials, germs, maxLengths, verbosity=0)
results2 = pygsti.do_long_sequence_gst(ds2, gs_target, fiducials, fiducials, germs, maxLengths, verbosity=0)
results3 = pygsti.do_long_sequence_gst(ds3, gs_target, fiducials, fiducials, germs, maxLengths, verbosity=0)
#make some shorthand variable names for later
tgt = results1.estimates['default'].gatesets['target']
ds1 = results1.dataset
ds2 = results2.dataset
ds3 = results3.dataset
gs1 = results1.estimates['default'].gatesets['go0']
gs2 = results2.estimates['default'].gatesets['go0']
gs3 = results3.estimates['default'].gatesets['go0']
gss = results1.gatestring_structs['final']
###Output
_____no_output_____
###Markdown
After running GST, a `Workspace` object can be used to interpret the results:
###Code
from pygsti.report import workspace
ws = workspace.Workspace()
ws.init_notebook_mode(connected=False, autodisplay=True)
ws.GatesVsTargetTable(gs1, tgt)
ws.SpamVsTargetTable(gs2, tgt)
ws.ColorBoxPlot(("chi2","logl"), gss, ds1, gs1, boxLabels=True)
ws.FitComparisonTable(gss.Ls, results1.gatestring_structs['iteration'],
results1.estimates['default'].gatesets['iteration estimates'], ds1)
ws.FitComparisonTable(["GS1","GS2","GS3"], [gss, gss, gss], [gs1,gs2,gs3], ds1, Xlabel="GateSet")
ws.ChoiTable(gs3, display=('matrix','barplot'))
ws.GateMatrixPlot(gs1['Gx'],scale=1.5, boxLabels=True)
ws.GateMatrixPlot(pygsti.tools.error_generator(gs1['Gx'], tgt['Gx']), scale=1.5)
ws.ErrgenTable(gs3,tgt)
ws.PolarEigenvaluePlot([np.linalg.eigvals(gs2['Gx'])],["purple"],scale=1.5)
ws.GateEigenvalueTable(gs2, display=('evals','polar'))
###Output
_____no_output_____ |
nn_cnn/Ch05/05_01/05_01 start.ipynb | ###Markdown
Introduction to Convolutional Neural Networks Import the libraries
###Code
from keras.layers import Conv2D, MaxPooling2D, Flatten,Dense
from keras.models import Sequential
from keras.datasets import mnist
from keras.utils import to_categorical
###Output
_____no_output_____
###Markdown
Load the data
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
###Output
_____no_output_____
###Markdown
Pre-processingOur MNIST images only have a depth of 1. We must explicitly declare that.
###Code
num_classes = 10
epochs = 3
X_train = X_train.reshape(60000,28,28,1)
X_test = X_test.reshape(10000,28,28,1)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255.0
X_test /= 255.0
y_train = to_categorical(y_train,num_classes)
y_test = to_categorical(y_test, num_classes)
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
###Output
_____no_output_____ |
.ipynb_checkpoints/demo-CNN-checkpoint.ipynb | ###Markdown
Long sentences are truncated (to `maxlen`); short sentences are padded with 0.
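A minimal, self-contained illustration of that behaviour (a sketch only; it assumes Keras' `pad_sequences`, which is presumably what produced `data` upstream): by default, sequences longer than `maxlen` are truncated from the front, and shorter ones are left-padded with zeros.
###Code
from keras.preprocessing.sequence import pad_sequences
# a 5-token and a 1-token sequence, both forced to length 3
pad_sequences([[1, 2, 3, 4, 5], [6]], maxlen=3)
# expected result:
# array([[3, 4, 5],
#        [0, 0, 6]])
###Output
_____no_output_____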
###Code
print(data.shape)
print(maxlen)
word_index = tokenizer.word_index
type(word_index)
[str(key)+": "+str(value) for key, value in word_index.items()][:5]
labels = np.array(df.sentiment)
labels[:5]
indices = np.arange(data.shape[0])
np.random.shuffle(indices)
data = data[indices]
labels = labels[indices]
training_samples = int(len(indices) * .8)
validation_samples = len(indices) - training_samples
training_samples
validation_samples
X_train = data[:training_samples]
y_train = labels[:training_samples]
X_valid = data[training_samples: training_samples + validation_samples]
y_valid = labels[training_samples: training_samples + validation_samples]
X_train[:5,:5]
# !pip install gensim
from gensim.models import KeyedVectors
# myzip = mypath / 'zh.zip'
# !unzip $myzip
zh_model = KeyedVectors.load_word2vec_format('zh.vec')
zh_model.vectors.shape
zh_model.vectors[0].shape
list(iter(zh_model.vocab))[:5]
embedding_dim = len(zh_model[next(iter(zh_model.vocab))])
embedding_dim
print('max value: ', zh_model.vectors.max())
print('min value: ', zh_model.vectors.min())
embedding_matrix = np.random.uniform(zh_model.vectors.min(), zh_model.vectors.max(), [max_words, embedding_dim])
###Output
_____no_output_____
###Markdown
For the random numbers, see https://stackoverflow.com/questions/11873741/sampling-random-floats-on-a-range-in-numpy
###Code
embedding_matrix = (embedding_matrix - 0.5) * 2
zh_model.get_vector("的").shape
zh_model.get_vector("李").shape
for word, i in word_index.items():
if i < max_words:
try:
embedding_vector = zh_model.get_vector(word)
embedding_matrix[i] = embedding_vector
except:
            pass  # if no corresponding word vector is available, simply skip it and keep the default random vector
###Output
_____no_output_____
###Markdown
This is also why, earlier, we tried to make the two distributions as consistent as possible.
###Code
embedding_matrix.shape
from keras.models import Sequential
from keras.layers import Embedding, Flatten, Dense, LSTM, Dropout, SpatialDropout1D
from keras.layers.convolutional import Conv1D, MaxPooling1D
LSTM_units = 16
model = Sequential()
model.add(Embedding(max_words, embedding_dim))
model.add(SpatialDropout1D(0.2))
model.add(Conv1D(filters=32, kernel_size=3, padding='same', activation='relu'))
# don't make filters and kernel_size too large, otherwise the model overfits easily
model.add(MaxPooling1D(pool_size=2))
# don't make pool_size too large, otherwise the model overfits easily
model.add(LSTM(LSTM_units))
model.add(Dense(1, activation='sigmoid'))
model.summary()
model.layers[0].set_weights([embedding_matrix])
# model.layers[0].trainable = False  # uncomment to freeze the layer and use the pre-trained embeddings as-is
model.compile(optimizer='rmsprop',
loss='binary_crossentropy',
metrics=['acc'])
history = model.fit(X_train, y_train,
epochs=10,
batch_size=32,
validation_data=(X_valid, y_valid))
# supervised learning
model.save("sentiment_model-CNN-v.1.0.0.h5")
###Output
D:\install\miniconda\lib\site-packages\tensorflow_core\python\framework\indexed_slices.py:424: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
###Markdown
Reference: https://github.com/chen0040/keras-sentiment-analysis-web-api/blob/master/keras_sentiment_analysis/library/cnn_lstm.py
###Code
import matplotlib.pyplot as plt
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.legend(['training', 'validation'], loc='upper left')
plt.title('Training and validation accuracy')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.legend(['training', 'validation'], loc='upper left')
plt.title('Training and validation loss')
plt.show()
###Output
_____no_output_____ |
exercises/Ex1-Dice_Simulation_empty.ipynb | ###Markdown
Dice SimulationIn this exercise, we want to simulate the outcome of rolling dice. We will walk through several levels of building up functionality. Single DieLet's create a function that will return a random value between one and six, emulating the outcome of the roll of one die. Python has a random number package called `random`.
###Code
import random
def single_die():
"""Outcome of a single die roll"""
    return random.randint(1, 6)  # one possible implementation: a uniform integer from 1 to 6, inclusive
###Output
_____no_output_____
###Markdown
CheckTo check our function, let's call it 50 times and print the output. We should see numbers between 1 and 6.
###Code
for _ in range(50):
print(single_die(),end=' ')
###Output
_____no_output_____
###Markdown
Multiple Dice RollNow let's make a function that returns the sum of N 6-sided dice being rolled.
###Code
def dice_roll(dice_count):
"""Outcome of a rolling dice_count dice
Args:
dice_count (int): number of dice to roll
Returns:
int: sum of face values of dice
"""
    return sum(single_die() for _ in range(dice_count))  # one possible implementation: add up dice_count single rolls
###Output
_____no_output_____
###Markdown
CheckLet's perform the same check with 100 values and make sure we see values in the range of 2 to 12.
###Code
for _ in range(100):
print(dice_roll(2), end=' ')
###Output
_____no_output_____
###Markdown
Capture the outcome of multiple rollsWrite a function that will return a list of values for many dice rolls
###Code
def dice_rolls(dice_count, rolls_count):
"""Return list of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
Returns:
list: list of dice roll values.
"""
    return [dice_roll(dice_count) for _ in range(rolls_count)]  # one possible implementation: one sum per roll
print(dice_rolls(2,100))
###Output
_____no_output_____
###Markdown
Plot ResultMake a function that plots the histogram of the dice values.
###Code
import pylab as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10, 4)
def dice_histogram(dice_count, rolls_count, bins):
"""Plots outcome of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
bins (int): number of histogram bins
"""
    # one possible implementation
    rolls = dice_rolls(dice_count, rolls_count)
    plt.hist(rolls, bins=bins)
    plt.show()
dice_histogram(2, 10000, 200)
###Output
_____no_output_____
###Markdown
AsideThe summed outcomes follow a symmetric, bell-shaped distribution. As the number of dice increases, this distribution approaches a Gaussian distribution due to the Central Limit Theorem (CLT). Try making a histogram with 100 dice. The resulting plot is a "Bell Curve" that represents the Gaussian distribution.
###Code
dice_histogram(100, 10000, 200)
###Output
_____no_output_____
###Markdown
Slow?That seemed slow. How do we time it?
###Code
import time
start = time.time()
dice_histogram(100, 10000, 200)
print(time.time()-start, 'seconds')
###Output
_____no_output_____
###Markdown
Seems like a long time... Can we make it faster? Yes! Optimize w/ NumpyUsing lots of loops in python is not usually the most efficient way to accomplish numeric tasks. Instead, we should use numpy. With numpy we can "vectorize" operations and under the hood numpy is doing the computation with C code that has a python interface. We don't have to worry about anything under the hood. 2-D Array of ValuesStart by checking out [numpy's randint function](https://docs.scipy.org/doc/numpy/reference/generated/numpy.random.randint.html). Let's rewrite `dice_rolls` using numpy functions and no loops. To do this, we are going to use `np.random.randint` to create a 2-D array of random dice rolls. That array will have `dice_count` rows and `rolls_count` columns--ie, the size of the array is `(dice_count, rolls_count)`.
###Code
import numpy as np
np.random.randint(1,7,(2,10))
###Output
_____no_output_____
###Markdown
The result is a `np.array` object which is like a list, but better. The most notable difference is that we can do element-wise math operations on numpy arrays easily. Column sumTo find the roll values, we need to sum up the 2-D array by each column.
###Code
np.sum(np.random.randint(1,7,(2,10)),axis=0)
###Output
_____no_output_____
###Markdown
Let's use this knowledge to rewrite `dice_rolls`
###Code
def dice_rolls_np(dice_count, rolls_count):
"""Return list of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
Returns:
np.array: list of dice roll values.
"""
    return np.sum(np.random.randint(1, 7, (dice_count, rolls_count)), axis=0)  # one possible implementation: column sums are the roll totals
print(dice_rolls(2,100))
###Output
_____no_output_____
###Markdown
Histogram and timeit
###Code
def dice_histogram_np(dice_count, rolls_count, bins):
"""Plots outcome of many dice rolls
Args:
dice_count (int): number of dice to roll
rolls_count (int): number of rolls to do
bins (int): number of histogram bins
"""
    # one possible implementation
    rolls = dice_rolls_np(dice_count, rolls_count)
    plt.hist(rolls, bins=bins)
    plt.show()
start = time.time()
dice_histogram_np(100, 10000, 200)
print(time.time()-start, 'seconds')
###Output
_____no_output_____
###Markdown
That is way faster! `%timeit`Jupyter has a magic function to time function execution. Let's try that:
###Code
%timeit dice_rolls_np(100, 1000)
%timeit dice_rolls(100, 1000)
###Output
_____no_output_____
###Markdown
The improvement in the core function call was two orders of magnitude, but when we timed it initially, we were also waiting for the plot to render which consumed the majority of the time. Risk Game SimulationIn risk two players roll dice in each battle to determine how many armies are lost on each side. Here are the rules:- The attacking player rolls three dice- The defending player rolls two dice- The defending player wins dice ties- The dice are matched in sorted order- The outcome is a measure of the net increase in armies for the the attacking player with values of -2, -1, 0, 1, 2Let's make a function that simulates the outcome of one Risk battle and outputs the net score. The functions we created in the first part of this tutorial are not useful for this task.
###Code
def risk_battle():
"""Risk battle simulation"""
# get array of three dice values
    attacking_dice = np.random.randint(1, 7, 3)
    # get array of two dice values
    defending_dice = np.random.randint(1, 7, 2)
    # sort both sets in descending order and take the top two values
    attacking_dice_sorted = np.sort(attacking_dice)[::-1]
    defending_dice_sorted = np.sort(defending_dice)[::-1]
    # are the attacking values strictly greater? (the defender wins ties)
    attack_wins = attacking_dice_sorted[:2] > defending_dice_sorted[:2]
    # convert boolean values to -1, +1
    attack_wins_pm = attack_wins*2 - 1
    # sum up these outcomes
    return attack_wins_pm.sum()
for _ in range(50):
print(risk_battle(), end=' ')
###Output
_____no_output_____
###Markdown
HistogramLet's plot the histogram. Instead of making a function, let's just use list comprehension to make a list and then plot.
###Code
outcomes = [risk_battle() for _ in range(10000)]
plt.hist(outcomes)
plt.show()
###Output
_____no_output_____
###Markdown
Expected MarginIf we run many simulations, how many armies do we expect the attacker to be ahead by on average?
###Code
np.mean([risk_battle() for _ in range(10000)])
###Output
_____no_output_____ |
eda/skeleton_notebook-run2.ipynb | ###Markdown
Library
###Code
import pandas as pd
import sys
print("Can you see this?")
!{sys.executable} -m pip install PyAthena
from pyathena import connect
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer
from sklearn.cluster import KMeans,DBSCAN, SpectralClustering, MiniBatchKMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score
from sklearn.impute import SimpleImputer
from sklearn.decomposition import SparsePCA
from scipy import sparse as sp
import scipy
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Getting Data
###Code
### UNCOMMENT THE LOGIC BELOW ON THE FIRST RUN###
# conn = connect(s3_staging_dir='s3://athena-results-c7fhgh8/',
# region_name='us-east-1')
# df = pd.read_sql("select * from \"millionsongdataset-intermediate\".songdata;", conn)
# %store df
%store -r df
df.shape
df.isna().sum()
###Output
_____no_output_____
###Markdown
Preprocessing Create Clean Frame* Filter out year 0 and years > 2022. ==> Note that this SIGNIFICANTLY reduces the number of records we can work with, so we may choose not to do it.* Select a subset of columns
###Code
# filtered_df = df[(df['year']>0)&(df['year']<=2022)][['loudness','tempo','artist_hotttness'
# ,'artist_familiarity','genre','song_hotttness'
# ,'track_id','song_id','artist_id'
# ,'artist_name','title']].copy()
filtered_df_2 = df[[
'loudness',
'tempo',
'artist_familiarity'
]].copy()
###Output
_____no_output_____
###Markdown
Pipeline for Feature Selection
###Code
scaler_step = Pipeline([
("imputer", SimpleImputer(strategy='constant', fill_value=0)),
("scaler", StandardScaler())
])
encoder_step = Pipeline([
("encoder", OneHotEncoder())
])
transformers = ColumnTransformer([
("scaler_process", scaler_step, ['loudness',
'tempo',
'artist_familiarity'
])
# ,
# ("encoder_process", encoder_step, ['genre'])
])
feature_pipeline = Pipeline([
("processor", transformers),
("kmeans_modeller", MiniBatchKMeans(random_state=1))
])
###Output
_____no_output_____
###Markdown
Split the data
###Code
train_indices = np.random.choice(filtered_df_2.index, size=int(filtered_df_2.shape[0]*0.8), replace=False)
test_df_2 = filtered_df_2[~filtered_df_2.index.isin(train_indices)]
train_df_2 = filtered_df_2[filtered_df_2.index.isin(train_indices)]
###Output
_____no_output_____
###Markdown
Feed the train set to feature pipeline
###Code
feature_pipeline.fit(train_df_2)
###Output
_____no_output_____
###Markdown
Extract model
###Code
feat_selection_kmeans = feature_pipeline['kmeans_modeller']
###Output
_____no_output_____
###Markdown
Extract transformed dataframe
###Code
transformed_train_df_2 = feature_pipeline['processor'].fit_transform(train_df_2)
###Output
_____no_output_____
###Markdown
Scoring the clustering methods Kmeans
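Before scoring the single fitted model in the cells below, note that the same silhouette metric can also drive a choice of the number of clusters. The following is only a rough sketch (the range of k values and the 10% subsample size are assumptions, and it reuses `transformed_train_df_2` and the sklearn imports from above):
###Code
# hypothetical sweep over k; scoring on a 10% subsample keeps this affordable
for k in range(2, 9):
    km = MiniBatchKMeans(n_clusters=k, random_state=1).fit(transformed_train_df_2)
    score = silhouette_score(transformed_train_df_2, km.labels_,
                             sample_size=int(transformed_train_df_2.shape[0] * 0.1),
                             random_state=1)
    print(k, round(score, 4))
###Output
_____no_output_____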
###Code
silhouette_score(transformed_train_df_2, feat_selection_kmeans.labels_, metric='euclidean',sample_size=int(train_df_2.shape[0]*0.3))
calinski_harabasz_score(transformed_train_df_2, feat_selection_kmeans.labels_)
kmeans_classes = np.unique(feat_selection_kmeans.labels_)
kmeans_classes
centers=feat_selection_kmeans.cluster_centers_
### UNCOMMENT LOGIC BELOW ON FIRST RUN###
%store centers
# %store -r centers
centers.shape
centers
###Output
_____no_output_____ |
wikipedia-question-5.ipynb | ###Markdown
Copyright (C) 2016-2020 Ben Lewis, Morten Wang, Benjamin Mako HillLicensed under the MIT license, see ../LICENSE Question 8: How many views did Panama_Papers have… the day it was created?
###Code
import requests
from urllib.parse import quote
###Output
_____no_output_____
###Markdown
Notes:1. Documentation is online at: https://wikimedia.org/api/rest_v1/?doc 2. There is no view data for 20160403, bug?
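As a quick aside (a sketch only, not part of the original question): the same REST endpoint accepts a date range, so one way to see which days around the article's creation actually have data is to request the first week and list the returned timestamps. The date range below is an assumption.
###Code
import requests
from urllib.parse import quote

# build a range request for the first week after the article was created
url = ('https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/'
       'en.wikipedia/all-access/all-agents/'
       + quote('Panama Papers', safe='') + '/daily/20160403/20160410')
[(item['timestamp'], item['views']) for item in requests.get(url).json()['items']]
###Output
_____no_output_____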
###Code
ENDPOINT = 'https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/'
wp_code = 'en.wikipedia'
access = 'all-access'
agents = 'all-agents'
page_title = 'Panama Papers'
period = 'daily'
start_date = '20160404'
end_date = '20160404'
wp_call = requests.get(ENDPOINT + wp_code + '/' + access + '/' + agents + '/' + quote(page_title, safe='') + '/' + period + '/' + start_date + '/' + end_date)
response = wp_call.json()
response
print(page_title + ' had ' + str(response['items'][0]['views']) + ' page views the first day')
###Output
_____no_output_____ |
docs/tutorials/notebooks/R4ML_Data_Preprocessing_and_Dimension_Reduction.ipynb | ###Markdown
R4ML: Data Preparation and Dimensionality Reduction (Part 2) [Alok Singh](https://github.com/aloknsingh/) Contents 1. Introduction 2. Data Preparation and Pre-processing 2.1. Why do data preparation and pre-processing? 2.2. Typical time spent by a Data Scientist. 2.3. How does R4ML Address Data Preparation and Pre-Processing? 2.4. An Example of Data Pre-Processing. 2.5. Next steps for Advanced Users. 2.6. Summary of Data Preparation 3. Dimensionality Reduction 3.1. What is Dimensionality Reduction? 3.2. Why is Dimensionality Reduction Useful? 3.3. R4ML for Dimensionality Reduction of Big Data, with an example. 3.4. Summary of Dimensionality Reduction. 4. Summary and next steps ... 1. IntroductionIn our first notebook we received an introduction to R4ML and conducted some exploratory data analysis with it. In this notebook, we will go over the typical operations a data scientist performs while cleaning and pre-processing the input data, and then look into dimensionality reduction, starting with the boilerplate code from Part I for loading the data. The following code is copy-pasted from Part I:
###Code
library(R4ML)
library(SparkR)
r4ml.session()
# read the airline dataset
airt <- airline
# testing, we just use the small dataset
airt <- airt[airt$Year >= "2007",]
air_hf <- as.r4ml.frame(airt)
airs <- r4ml.sample(air_hf, 0.1)[[1]]
#
total_feat <- c(
"Year", "Month", "DayofMonth", "DayOfWeek", "DepTime","CRSDepTime","ArrTime",
"CRSArrTime", "UniqueCarrier", "FlightNum", "TailNum", "ActualElapsedTime",
"CRSElapsedTime", "AirTime", "ArrDelay", "DepDelay", "Origin", "Dest",
"Distance", "TaxiIn", "TaxiOut", "Cancelled", "CancellationCode",
"Diverted", "CarrierDelay", "WeatherDelay", "NASDelay", "SecurityDelay",
"LateAircraftDelay")
#categorical features
#"Year", "Month", "DayofMonth", "DayOfWeek",
cat_feat <- c("UniqueCarrier", "FlightNum", "TailNum", "Origin", "Dest",
"CancellationCode", "Diverted")
numeric_feat <- setdiff(total_feat, cat_feat)
# these features have no predictive power as it is uniformly distributed i.e less information
unif_feat <- c("Year", "Month", "DayofMonth", "DayOfWeek")
# these are the constant features and we can ignore without much difference in output
const_feat <- c("WeatherDelay", "NASDelay", "SecurityDelay", "LateAircraftDelay")
col2rm <- c(unif_feat, const_feat, cat_feat)
rairs <- SparkR::as.data.frame(airs)
airs_names <- names(rairs)
rairs2_names <- setdiff(airs_names, col2rm)
rairs2 <- rairs[, rairs2_names]
# first we will create the imputation maps
# we impute all the columns
airs_ia <- total_feat
# imputation methods
airs_im <- lapply(total_feat,
function(feat) {
v= if (feat %in% numeric_feat) {"global_mean"} else {"constant"}
v
})
# convert to vector
airs_im <- sapply(airs_im, function(e) {e})
#imputation values
airs_iv <- setNames(as.list(rep("CAT_NA", length(cat_feat))), cat_feat)
na.cols<- setdiff(total_feat, airs_ia)
# we cache the output so that not to re-exe DAG
dummy <- cache(airs)
###Output
Warning message:
“WARN[R4ML]: Reloading SparkR”Loading required namespace: SparkR
_______ _ _ ____ ____ _____
|_ __ \ | | | | |_ \ / _||_ _|
| |__) || |__| |_ | \/ | | |
| __ / |____ _| | |\ /| | | | _
_| | \ \_ _| |_ _| |_\/_| |_ _| |__/ |
|____| |___| |_____||_____||_____||________|
[R4ML]: version 0.8.0
Warning message:
“no function found corresponding to methods exports from ‘R4ML’ for: ‘collect’”
Attaching package: ‘SparkR’
The following object is masked from ‘package:R4ML’:
predict
The following objects are masked from ‘package:stats’:
cov, filter, lag, na.omit, predict, sd, var, window
The following objects are masked from ‘package:base’:
as.data.frame, colnames, colnames<-, drop, endsWith, intersect,
rank, rbind, sample, startsWith, subset, summary, transform, union
Warning message:
“WARN[R4ML]: driver.memory not defined. Defaulting to 2G”Spark package found in SPARK_HOME: /usr/local/src/spark21master/spark
###Markdown
2. Data Preparation and Pre-processing 2.1. Why do data preparation and pre-processing? * Real-life data has many missing values.* Outliers are present, which affect the model's predictive power.* Data may be corrupted.* Certain features (columns) are expressed as strings, but most ML algorithms work with doubles and integers. * Without pre-processing, the analytical power of the models is reduced significantly. 2.2. Typical time spent by a Data Scientist.Here is the detail; as we can see, there is a need for a unified API for data preparation to simplify the data scientist's journey. 2.3. How does R4ML Address Data Preparation and Pre-Processing? * R4ML provides a unified API to run the various data pre-processing tasks from one entry point. * The user can also run the individual data pre-processing steps via the r4ml.frame and r4ml.matrix APIs. * Its API is closer to the model of [R's caret](http://caret.r-forge.r-project.org/) package. The function is called **r4ml.ml.preprocess** and it supports all the major pre-processing tasks and feature transformation routines:
| **Method** | **R4ML options** | **Description** |
| :----------------- |:-------------|:-------------|
| **NA Removal** | imputationMethod, imputationValues, omit.na | this option lets one remove missing data, or substitute it with a constant or with the mean (if the column is numeric)|
| **Binning** | binningAttrs, numBins | a typical use case of binning: say we have people's heights in feet and inches but only care about three top-level categories (short, medium and tall); the user can use this option |
| **Scaling and Centering** | scalingAttrs | most algorithms' predictive power, or their results, improve if the data is normalized, i.e. subtract the mean and then divide by the standard deviation |
| **Encoding (Recode)** | recodeAttrs | since most machine learning is matrix-based linear algebra, and categorical columns often have an inherent order (like height or shirt_size), this option lets the user encode those string columns into ordinal numeric values |
| **Encoding (OneHot or DummyCoding)** | dummyCodeAttrs | when categorical columns have no inherent order (like a person's race or state of residence), we would instead like to one-hot encode them |
2.4. An Example of Data Pre-Processing.We already have the airs r4ml.frame and we will first cache it (to improve performance we need to cache before the heavy lifting). In the next cell we regroup the various columns into the above methods, and in the cell after that we run r4ml.ml.preprocess.
###Code
airs_t_res <- r4ml.ml.preprocess(
airs,
dummycodeAttrs = c(),
binningAttrs = c(), numBins=4,
missingAttrs = airs_ia,
imputationMethod = airs_im,
imputationValues = airs_iv,
omit.na=na.cols, # we remove all the na, this is just the dummy
recodeAttrs=cat_feat # recode all the categorical features
)
# the transformed data frame
airs_t <- airs_t_res$data
# the relevant metadata list
airs_t_mdb <- airs_t_res$metadata
# cache the transformed data
dummy <- cache(airs_t)
showDF(airs_t, n = 2)
###Output
INFO[r4ml.ml.preprocess]: running proxy.omit.na
INFO[r4ml.ml.preprocess]: running proxy.impute
INFO[r4ml.ml.preprocess]: running proxy.normalize
INFO[r4ml.ml.preprocess]: running proxy.binning
INFO[r4ml.ml.preprocess]: running proxy.recode
INFO[r4ml.ml.preprocess]: running proxy.onehot
###Markdown
2.5. Next steps for Advanced Users.All the values are numeric now and we are ready for the next steps of the pipeline. However, we want to end this topic with a note that, using custom DML (explained later), an advanced user can write advanced data pre-processing steps independently. Other data pre-processing steps that are typically done:
**Input data transformation (Box-Cox)** We saw in the previous section that log-transforming the input data gave us a better feature. That was a special case of the Box-Cox transformation. Alternatively, statistical methods can be used to empirically identify an appropriate transformation. Box and Cox propose a family of transformations indexed by a parameter lambda (this feature will be available in the future). Practically, one can calculate the kurtosis and skewness for various values of lambda, or run significance tests to check which lambda gives a better result.
**Outlier Detection**
| **Method** | **Description** |
| :----------------- |:-------------|
| **Box plot (univariate stats)** | One can visually detect outliers in each dimension, but this does not tell the whole story.|
| **Mahalanobis distance (for multivariate stats)** | This is like calculating the number of units of variance in each dimension after rotating the axes in the direction of maximum variance.|
| **Use a classifier like logistic regression** | Use the Mahalanobis distance to calculate a threshold (e.g. a 3-sigma distance), label anything beyond it as Y = 1, and then run logistic regression to find out which other variables drive the outliers.|
| **Spatial sign transformation** | If a model is considered to be sensitive to outliers, one data transformation that can minimize the problem is the spatial sign. This procedure projects the predictor values onto a multidimensional sphere. This has the effect of making all the samples the same distance from the center of the sphere. Mathematically, each sample is divided by its squared norm.|
2.6. Summary of Data PreparationIn this subsection, we saw how R4ML helps simplify the roughly 60% of their time that data scientists spend on data preparation by providing a unified and expandable API for pre-processing. We also saw a small example of R4ML in action and gave a few advanced techniques for power users.
3. Dimensionality Reduction 3.1. What is Dimensionality Reduction?Dimensionality reduction is choosing a basis or mathematical representation within which you can describe most but not all of the variance within your data, thereby retaining the relevant information while reducing the amount of information necessary to represent it. There are a variety of techniques for doing this, including but not limited to PCA, ICA, and matrix feature factorization. These take existing data and reduce it to the most discriminative components. All of these allow you to represent most of the information in your dataset with fewer, more discriminative features.
3.2. Why is Dimensionality Reduction Useful?* In terms of performance, having data of high dimensionality is problematic because: * It can mean a high computational cost to perform learning and inference, and * It often leads to [overfitting](https://en.wikipedia.org/wiki/Overfitting) when learning a model, which means that the model will perform well on the training data but poorly on test data.
Dimensionality reduction addresses both of these problems, while (hopefully) preserving most of the relevant information in the data needed to learn accurate, predictive models. Also note that, in general, visualization of lower-dimensional data and its interpretation are more straightforward, and it could be used for getting insights into the data.
3.3. R4ML for Dimensionality Reduction of Big DataR4ML provides the built-in r4ml.pca function for dimensionality reduction on big datasets. It allows one to convert higher-dimensional data into lower dimensions. For example, see the following illustration: PCA transformationPrincipal component analysis (PCA) rotates the original data space such that the axes of the new coordinate system point into the directions of highest variance of the data. The axes or new variables are termed principal components (PCs) and are ordered by variance: the first component, PC 1, represents the direction of the highest variance of the data. The direction of the second component, PC 2, represents the highest of the remaining variance orthogonal to the first component. This can be naturally extended to obtain the required number of components, which together span a component space covering the desired amount of variance. Since components describe specific directions in the data space, each component depends on a certain fraction of each of the original variables: each component is a linear combination of all original variables.
Dimensionality reductionLow variance can often be assumed to represent undesired background noise. The dimensionality of the data can therefore be reduced, without loss of relevant information, by extracting a lower-dimensional component space covering the highest variance. Using a lower number of principal components instead of the high-dimensional original data is a common pre-processing step that often improves results of subsequent analyses such as classification. For visualization, the first and second components can be plotted against each other to obtain a two-dimensional representation of the data that captures most of the variance (assumed to be most of the relevant information), useful for analyzing and interpreting the structure of a data set.
An Example:We will use r4ml.pca and experiment to see whether there are any dependencies or correlations between the scale (regression, or continuous) variables, and reduce the dimensions.
###Code
# since from the exploratory data analysis, we knew that certain
# variables can be removed from the analysis, we will perform the pca
# on the smaller set of features
airs_t3_tmp <- select(airs_t, names(rairs2)) # recall rairs2 from before
airs_t3 <- as.r4ml.matrix(airs_t3_tmp)
# do the PCA analysis with 12 components
airs_t3.pca <- r4ml.pca(airs_t3, center=T, scale=T, projData=F, k=12)
# the eigen values for each of the components which is the square of
# stddev and is equivalent to the variance
airs_t3.pca@eigen.values
# the corresponding eigenvectors are
airs_t3.pca@eigen.vectors
###Output
_____no_output_____
###Markdown
How many eigenvectors do we need?Now the question is how many eigenvalues (and corresponding eigenvectors) we need, so that we can explain most of the variance (say 90%) of the input data. We will try to find it out in the following code:
###Code
sorted_evals <- sort(airs_t3.pca@eigen.values$EigenValues1, decreasing = T)
csum_sorted_evals <- cumsum(sorted_evals)
# find the cut-off i.e the number of principal component which capture
# 90% of variance
evals_ratio <- csum_sorted_evals/max(csum_sorted_evals)
evals_ratio
pca.pc.count <- which(evals_ratio > 0.9)[1]
pca.pc.count
###Output
_____no_output_____
###Markdown
Analytically we can see that we need the first six principal components. Let's also verify it intuitively or graphically. Here we will plot the PCA variance and see the overall area spanned.
###Code
library(ggplot2)
pca.plot.df <- data.frame(
index=1:length(sorted_evals),
PrincipalComponent=sprintf("PC-%02d", 1:length(sorted_evals)),
Variances = sorted_evals)
pca.g1 <- ggplot(data=pca.plot.df,
aes(x=PrincipalComponent, y=Variances, group=1, colour=Variances))
pca.g1 <- pca.g1 + geom_point() + geom_line()
# highlight the area containing the 90% variances
#subset region and plot
pca.g_data <- ggplot_build(pca.g1)$data[[1]]
#plot the next shaded graph
pca.g2 <- pca.g1 + geom_area(data=subset(pca.g_data, x<=pca.pc.count),
aes(x=x, y=y), fill="red4", inherit.aes=F)
pca.g2 <- pca.g2 + geom_point() + geom_line()
pca.g2
###Output
_____no_output_____ |
Lab 4 - How to work with open data.ipynb | ###Markdown
Lab 4 - How to work with open data*© 2020 Colin Conrad*Welcome to Week 4 of INFO 6270! Last week we covered basic data cleaning and analysis using lists and dictionaries. This week we are making our way to our final lesson on basic Python: libraries and external files. This week we will do a few things that will be more relatable (and useful!) to most of you. We will start by learning how to navigate files in our Python environment before making our way to work with CSV and PDF files. This week's work covers a **lot** of ground from [Sweigart (2020)](https://automatetheboringstuff.com/). Instead of going through these chapters in depth, we will borrow some of the material from Chapters 15 and 16 throughout and apply it in a way that is more relevant to our context. If you are interested (and have the time) it is also helpful to have read Chapters 6 and 7 on string manipulation and regular expressions. The latter is quite complex however and we will not cover it in this course; if you want to be a data science expert though, you should definitely learn regular expressions!**This week, we will achieve the following objectives:**- Locate files using Python- Retrieve CSV data- Analyze and write CSV data- Retrieve and analyze PDF data- Write PDF filesWeekly reading: Sweigart (2014) Ch. 15 and 16. Case: The 2016 Canadian CensusThe Canadian Census Program is a data collection program conducted by Statistics Canada every five years and has occurred regularly since 1851, before confederation. Census records are maintained by two federal government agencies based on their date of record. Census records prior to 1926 are curated by [Library and Archives Canada](https://www.bac-lac.gc.ca/eng/census/Pages/census.aspx), while records after 1926 are maintained by [Statistics Canada](https://www12.statcan.gc.ca/census-recensement/index-eng.cfm). These records represent the most comprehensive data on the Canadian population and are (for the most part) publicly available.In addition to census profiles on various regions throughout the country, the Census Program provides [detailed data tables](https://www12.statcan.gc.ca/census-recensement/2016/dp-pd/dt-td/index-eng.cfm) on a variety of topics. Housing is one such topic and the data table titled ["Tenure including presence of mortgage payments and subsidized housing"](https://www12.statcan.gc.ca/census-recensement/2016/dp-pd/dt-td/Rp-eng.cfm?TABID=2&LANG=E&A=R&APATH=3&DETAIL=0&DIM=0&FL=A&FREE=0&GC=01&GL=-1&GID=1341679&GK=1&GRP=1&O=D&PID=110574&PRID=10&PTYPE=109445&S=0&SHOWALL=0&SUB=0&Temporal=2017&THEME=121&VID=0&VNAMEE=&VNAMEF=&D1=0&D2=0&D3=0&D4=0&D5=0&D6=0) is particularly relevant to understanding housing affordability among Canadians because it provides the number of Canadian households which reported unaffordable housing or whether their households were in need of repairs. In this last lab in a series related to housing security, we will retrieve and analyze the tables provided by the census to generate insights about housing needs in Canada and Halifax specifically, if so desired. Objective 1: Locate files using PythonBefore we can get started with data science in earnest, we need to know more about one last core Python feature: *libraries*. As mentioned in class, one of the main advantages of Python versus other programming languages is that it is *high level and highly portable*. This is to say that you can do a lot with Python in a few lines of code. 
One of the main things that makes this possible is Python's libraries.In programming, a library is a collection of pre-defined routines that you can import into your code without writing them. These greatly reduce the time that it takes to finish a programming task, and in some cases, save you years of work. Just like libraries designed for humans, Python programming libraries are generally curated by groups of people who ensure that the library is usable. Because Python is a free and open-source programming language, experienced developers will often create and curate libraries for free, sometimes at great expense of their time!Let's start by using the `pathlib` library. This library is provided in all Python distributions and is maintained by the Python Software Foundation. Similarly to Sweigart in Chapter 9, we can import the `Path` method from the `pathlib` library, which can help us locate our files.
###Code
from pathlib import Path # imports the Path function into our environment
###Output
_____no_output_____
###Markdown
The code above will import a function for us called `Path` which will help Python navigate throughout our computer's operating system. If you are taking this course, chances are that you would find it difficult to write a function that can interface between our Python environment and the operating system. Fortunately, more experienced developers have created this function for us to use. Let's try executing the `Path` method.
###Code
Path() # imports this Python file's path in the Windows or Mac operating system
###Output
_____no_output_____
###Markdown
This is probably not that exciting to you yet. However, what `Path()` reveals is that Python is actively listening to your local directory. We can get more context by asking the Path what its current directory is. We can do that using the `cwd()` ("**c**urrent **w**orking **d**irectory") method.
###Code
Path.cwd() # gets the full current working directory; yours is probably different from Colin's
###Output
_____no_output_____
###Markdown
How this works is complicated and well beyond the scope of this course. Fortunately, however, we do not have to understand how it works in order to use the `Path` method. This will be an ongoing theme from this point onwards. Libraries enhance our ability to do things--we don't need to know *how* they work for now, just that they do! Navigating your local folderThough the `Path` function is handy, its usefulness is not really evident until we pair it with another library. The `os` library is Python's library for navigating **o**perating **s**ystems, such as Windows or Mac OS. While `Path()` gives us paths, `os` allows Python to send commands to your computer. This is very handy if you want to change your directory or make new ones. Let's again start by importing os.
###Code
import os # import the os library
###Output
_____no_output_____
###Markdown
Let's see this library in action. One `os` method is called `listdir()`, which will list the files and folders in our current path.
###Code
os.listdir() # lists the files in the current directory
###Output
_____no_output_____
###Markdown
Chances are high that you downloaded this file as well as the `img` folder and placed them together. If so, you should see a series of files and folders, including Lab 4 and img. This is very handy for figuring out the names of files that we download! We can also take this one step further by navigating to the `img` subfolder. Let's create a new path containing our current path as well as the `img` subfolder. The following line will combine this file's current path and the `img` folder.
###Code
img_path = Path("img") # the path to the img subfolder
###Output
_____no_output_____
###Markdown
The os library also contains a `chdir()` method which allows us to **c**hange **d**irectories. We can now combine the `img_path` with the os library to navigate to the subfolder.
###Code
os.chdir(img_path) # change to the image path
###Output
_____no_output_____
###Markdown
If we now run `listdir()` again, we should see the contents of the `img` subfolder instead.
###Code
os.listdir()
###Output
_____no_output_____
###Markdown
Great work! Let's navigate back to the original path. We can do this by changing to the folder above using the ".." string. This is a feature of operating systems for moving up a level. This should bring us back to where we started.
###Code
first_path = Path("..") # denotes the directory above the current one
os.chdir(first_path) # change to the above directory
###Output
_____no_output_____
###Markdown
*Challenge Question 1 (2 points)*The `os` library will be very helpful for many future tasks. It is also very important to refer to the documentation for the various libraries that we will use. Using the [documentation for os](https://docs.python.org/3/library/os.html) look up the `mkdir()` command. Using the `mkdir` method of the os library, create a subdirectory (a.k.a. a subfolder) in your current folder called `data`. We will use this folder later to place our csv files.
###Code
# insert code here
###Output
_____no_output_____
###Markdown
Objective 2: Retrieve CSV dataIt's now time to apply this skill to something practical. One of the main reasons why we would want to navigate the operating system in our Python environment is so that we can read and write files. When working with open data in Python you will have to retrieve files from the internet and process them.The first step with any data project is to source and attain the data that you plan to analyze. Statistics Canada's census portal provides a (somewhat complicated) interface for attaining the data that we are interested in. By default, the portal gives a user the ability to create a custom table containing only the data that a user is interested in. Visit the page provided at [this link](https://www12.statcan.gc.ca/census-recensement/2016/dp-pd/dt-td/Rp-eng.cfm?TABID=2&LANG=E&A=R&APATH=3&DETAIL=0&DIM=0&FL=A&FREE=0&GC=01&GL=-1&GID=1341679&GK=1&GRP=1&O=D&PID=110574&PRID=10&PTYPE=109445&S=0&SHOWALL=0&SUB=0&Temporal=2017&THEME=121&VID=0&VNAMEE=&VNAMEF=&D1=0&D2=0&D3=0&D4=0&D5=0&D6=0) and click on the Download tab. Select `CSV (comma-separated values) file` from the Download interface. This will download a CSV file into your downloads folder.Rename this file to `w4_canada_housing.csv` and move it into the `/data` subdirectory that you created earlier. You can do this by right clicking on the file that you downloaded and selecting rename, and then by dragging it to the relevant folder. Once your data is in the relevant folder, you are ready to interpret it. The CSV libraryYou will probably be unsurprised to learn that we will again leverage a library to read this file; in this case, the `csv` library. This library is the bread and butter of basic data science and we will come back to it almost every class from here on out. Python's [csv library](https://docs.python.org/3/library/csv.html) is a fantastic resource for reading and writing csv files. This one takes a little getting used to, so it is better to simply give an example of its basic use and then explain it. The following cell gives code for reading the file that you downloaded from Statistics Canada.
###Code
import csv
with open('data/w4_canada_housing.csv', newline='') as csvfile: # tells Python which file to read
housing_reader = csv.reader(csvfile, delimiter=',') # draws on the reader object to read the file
for row in housing_reader:
print(row)
###Output
_____no_output_____
###Markdown
As you can see, the file is quite messy. The code really only has two unfamiliar parts to it. The first is the `with open('data/w4_canada_housing.csv', newline='') as csvfile:`. The `with open()` statement is the way that you command Python to open an external file. In this line of code, you are telling Python to open this file and call its contents `csvfile` in our environment.The second unfamiliar piece of code is `housing_reader = csv.reader(csvfile, delimiter=',')`. The `csv.reader` is an object contained in the `csv` library which is designed to read csv files. In the `(csvfile, delimiter=',')` bit, we commanded the csv reader to read the opened `csvfile` and that each data in that file was separated (delimited) by the character `,`. CSV (comma separated values) files are simply a series of data separated by commas. In this line of code, we thus created a reader called `housing_reader` which reads the data inside of the csv file.The reader object consists of a series of rows for each row in the csv file. We can loop through the rows using a for loop, just like with other data! Unfortunately, Statistics Canada's CSV files contain a lot of data which are not useful for this task. What we need is a way to clean the data efficiently. Fortunately, we learned this skill in Week 2; let's apply it to CSV files! *Challenge Question 2 (2 points)*Currently the Statistics Canada table is structured poorly for Python analysis. Fortunately most of the unusable data are systematically structured similarly, each being placed on a single column row. Modify the code below to do the following:- check to see if the row contains too few columns- append the rows that have the useful data to a listDoing this will give us a "list of lists" (a.k.a. two dimensional list) which we can use for analysis later.
###Code
import csv
housing = []
with open('data/w4_canada_housing.csv', newline='') as csvfile:
housing_reader = csv.reader(csvfile, delimiter=',')
for row in housing_reader:
# add some logic to filter out rows that have too few items
# add some logic to append the row to the housing list
###Output
_____no_output_____
###Markdown
Sample Test Should return:`[['Housing indicators (5)', 'Total - Tenure including presence of mortgage payments and subsidized housing [4]', ' Owner', ' With mortgage', ' Without mortgage', ' Renter', ' Subsidized housing', ' Not subsidized housing', ' '], ['Total - Housing indicators [5]', '13798300', '9357290', '5680655', '3676630', '4441020', '575830', '3865190 '], [' Adequacy: major repairs needed', '867565', '516640', '337990', '178645', '350925', '54300', '296625 '], [' Suitability: not suitable', '670735', '253560', '199985', '53575', '417175', '48835', '368335 '], [' Affordability: 30% or more of household income is spent on shelter costs', '3325950', '1550380', '1308780', '241600', '1775570', '238825', '1536740 '], [' Adequacy, suitability or affordability: major repairs needed, or not suitable, or 30% or more of household income is spent on shelter costs [6]', '4373550', '2140660', '1694325', '446335', '2232895', '304675', '1928215 ']]`
###Code
print(housing)
###Output
_____no_output_____
###Markdown
Objective 3: Analyze and write CSV data In addition to reading CSV files, the CSV library helps us write files. One of the most tangible, practical uses for Python in an office setting is that you can clean such files and return them accordingly. Let's start by retrieving the current `housing` list.
###Code
for h in housing: # print each line separately for readability
print(h)
###Output
_____no_output_____
###Markdown
It would be desirable to reduce the length of the long titles, such as `Adequacy: major repairs needed`, and replace them with something more digestible. This would be a pain to do in a dedicated business analytics program. Cleaning your CSV data An effective way to clean csv data is to create a function that iterates through the list. For instance, we already decided to save the contents of our csv file in a list called `housing`. We could create the `cleanCSV` function, which removes the colons (and everything after them), as follows.
###Code
def cleanCSV(csvfile):
new_file = []
i = 0
while i < len(csvfile): # the length of the number of rows
new_list = [] # a placeholder for cleaned row strings
j = 0
while j < len(csvfile[i]): # the number of values in this row
if ":" in csvfile[i][j]:
colon_index = csvfile[i][j].index(":") #retrieve the index of the colon
new_list.append(csvfile[i][j][:colon_index]) # [:colon_index] retrieves the string characters to the left of the index.
else:
new_list.append(csvfile[i][j]) # if there is no colon in the value, append it to the placeholder
j += 1
new_file.append(new_list) # append the cleaned list to the first level list
i += 1
return(new_file) # returns the cleaned file
###Output
_____no_output_____
###Markdown
We can then create a `new_housing` list which contains the cleaned version of the `housing` data.
###Code
new_housing = cleanCSV(housing) # apply the function
for h in new_housing: # print each line separately for readability
print(h)
###Output
_____no_output_____
###Markdown
This is a bit better. Some of the unwieldy titles have changed. We can then use this cleaned data to write a CSV file. Writing a CSV file Similarly to the `reader`, the Python csv library has a `writer`. The writer uses the same `with open(` structure, though with the 'w' (write) option. It can create new rows and is designed to be iterated. Try executing the following code--you will be left with a cleaned data file called `w4_canada_housing_cleaned.csv` which you can open in Excel.
###Code
with open('data/w4_canada_housing_cleaned.csv', 'w', newline='') as csvfile:
housing_writer = csv.writer(csvfile, delimiter=',')
for row in new_housing:
housing_writer.writerow(row)
###Output
_____no_output_____
###Markdown
*Challenge Question 3 (2 points)*Though the `cleanCSV()` function currently strips out the colons and everything to their right (keeping only the text on the left), there is still data which can be further cleaned. Modify the function to clean the data further. There are many ways to answer this question; you will be evaluated based on whether you:- modify the cleanCSV function- apply it to create an even cleaner csv file Modify the cleanCSV function here!
###Code
def cleanCSV(csvfile):
new_file = []
i = 0
while i < len(csvfile): # the length of the number of rows
new_list = [] # a placeholder for cleaned row strings
j = 0
while j < len(csvfile[i]): # the number of values in this row
if ":" in csvfile[i][j]:
colon_index = csvfile[i][j].index(":") #retrieve the index of the colon
new_list.append(csvfile[i][j][:colon_index]) # [:colon_index] retrieves the string characters to the left of the index.
else:
new_list.append(csvfile[i][j]) # if there is no colon in the value, append it to the placeholder
j += 1
new_file.append(new_list) # append the cleaned list to the first level list
i += 1
return(new_file) # returns the cleaned file
###Output
_____no_output_____
###Markdown
Apply the function
###Code
new_housing = cleanCSV(housing) # apply the function
###Output
_____no_output_____
###Markdown
Write the file
###Code
import csv
with open('data/w4_canada_housing_cleaned.csv', 'w', newline='') as csvfile:
housing_writer = csv.writer(csvfile, delimiter=',')
for row in new_housing:
housing_writer.writerow(row)
###Output
_____no_output_____
###Markdown
Objective 4: Retrieve and analyze PDF data In addition to csv files, Python can often be used to process files generated by everyday business applications such as Adobe Acrobat and Microsoft Word. Unlike the csv library however, Python does not have built-in libraries for processing these types of files by default and we must install new libraries to add this functionality to our environment. Installing librariesThe easiest way to install Python libraries is by using the `pip` tool. Pip is a recursive acronym which stands for "**p**ip **i**nstalls **p**ackages". It is a package management tool which indexes Python libraries and makes it easy to install them in your Python environment.In order to complete this step you must go outside of your Jupyter environment. Look for a tool called the **Anaconda Prompt** which should have been installed on your computer when you installed Anaconda. This is a shell (a.k.a. command line) interface that uses your Anaconda Python installation rather than your system's Python. This comes with the advantage that we do not need to be administrators in order to install new software.In the Anaconda Prompt write the following command: `pip install PyPDF2`.This will install the PyPDF2 library in your Anaconda environment. You can similarly install other Python libraries which you find interesting. PDF documentsPortable Document Format (PDF) documents are employed virtually everywhere in the working world. Frustratingly, organizations will occasionally post data tables in this format, which makes it difficult to employ analytical tools such as Excel or Tableau. Fortunately, these documents are files like any other, and programming languages such as Python can be used to interpret their data, though with some added difficulty.In this week's data folder you will find a list of Halifax city councillors which was provided [online in pdf format](https://www.halifax.ca/sites/default/files/documents/city-hall/districts-councillors/CouncillorsExternalContactList.pdf). I speculate that this was an effort to prevent web crawlers from creating spam. We will now learn why this is futile.Let's start by importing the PyPDF2 library and reading the document. The following code should bring the file's data into your Python environment.
###Code
import PyPDF2 # imports the PyPDF2 library
pdfFileObj = open('data/halifax_councillors_list.pdf', 'rb') # creates a PDF file object which can be read by Python
pdfReader = PyPDF2.PdfFileReader(pdfFileObj) # creates a Reader object which can read the PDF file
###Output
_____no_output_____
###Markdown
We can now extract text from the document. As Sweigart points out in Chapter 15, the PyPDF library does not extract text perfectly, though it generally does a good job. The following code will set the target page and print the text contents.
###Code
pageObj = pdfReader.getPage(0) # set the page that we want to extract text from
print(pageObj.extractText()) # extract the text
###Output
_____no_output_____
###Markdown
With a method for extracting PDF in hand, we can also opt to write PDF text in a more Python-friendly format, such as plain text (a.k.a. .txt). Python is equipped to write .txt files by default and we can write one using the same `open()` method that we used with csv files. The following code writes a text file with the city councillor's information and places it in the data folder.
###Code
councillors_text = open('data/councillors.txt','w') # opens a new write file called councillors.txt in the data folder
councillors_text.write(pageObj.extractText()) # writes the PDF contents in the txt file
councillors_text.close() # closes the txt file
###Output
_____no_output_____
###Markdown
*Challenge Question 4 (2 points)*As Sweigart points out in Chapter 15, PDF documents are difficult to work with. You have been provided with a second file called `halifax_sorting_guide.pdf`. This document has a few tables and text scattered across four pages. Using Python, extract the data from the third page which concerns the communities in Area I. Your script should:- Open the `halifax_sorting_guide.pdf` as a pdfFileObj- Retrieve the third page- Retrieve the part of the string that corresponds to the list of communities in Area I- Print your substring- **Hint:** `pageObj.extractText()` will retrieve the data as a string. You may wish to use the `.index()` function to retrieve the location of the "AREA I" and "Area II" substrings. You don't need to do this a fancy way -- you are welcome to locate the range of the substring with trial and error if it makes more sense to you!
###Code
import PyPDF2
# insert code here!
###Output
_____no_output_____
###Markdown
Objective 5: Write PDF filesFinally, we can also use Python to write, and even combine, PDF files! Similarly to the PDF reader, PyPDF2 also provides a PDF writer. As Sweigart points out, however, this library is limited in the sense that it cannot modify PDF pages, just write pre-existing pages. This has a few obvious uses however, such as- Reducing redundant PDF pages- Combining select pages from various PDF documents- Merging PDF documentsConsider reading through Sweigart Chapter 15 to learn more about how the PDF writer works! *Challenge Question 5 (2 points)*Using the PyPDF2 library we can retrieve the city councillors page and merge it with the garbage collection guide. Write a script that achieves the following:- Open the Halifax Sorting Guide- Open the Halifax Councillors List- Create a writer instance- Loop through Sorting Guide pages and add them to the writer- Loop through the Councillors List pages and add them to the writer- Write a new file**Hint:** Sweigart gives a useful example in Chapter 15. You are welcome to borrow from his example as long as you cite the reference at the end of your document. A generic sketch of the writer pattern is shown below for reference.
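The following is only a minimal, generic sketch of the PyPDF2 writer pattern (copying every page of one document into a new file); the output filename is made up, and this is not the challenge solution.
###Code
import PyPDF2
guide_reader = PyPDF2.PdfFileReader(open('data/halifax_sorting_guide.pdf', 'rb'))
pdf_writer = PyPDF2.PdfFileWriter()            # a writer starts out empty
for page_num in range(guide_reader.numPages):  # loop over every page in the source
    pdf_writer.addPage(guide_reader.getPage(page_num))
with open('data/sorting_guide_copy.pdf', 'wb') as out_file:
    pdf_writer.write(out_file)                 # write the queued pages to disk
###Output
_____no_output_____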
###Code
# insert code here!
###Output
_____no_output_____ |
numberline.ipynb | ###Markdown
Notes1. Create a number line with `matplotlib` in `colab`.1. Convert the notebook (`ipynb`) to Markdown (`md`) and paste it into Hatena or similar.1. How should the image be handled? What will happen to it?
###Code
import matplotlib.pyplot as plt
import numpy as np
ax=plt.figure(figsize=(12,6)).add_subplot(xlim=(-4,4), ylim=(0, 1.0))
plt.arrow(-3.5, 0.5, 7, 0, head_width=0.05, head_length=0.15, linewidth=4, color='b', length_includes_head=True)
plt.arrow(3.5, 0.5, -7, 0, head_width=0.05, head_length=0.15, linewidth=4, color='b', length_includes_head=True)
x = [-3, -2, -1, 0, 1, 2, 3]
y = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
data_name = ["-3", "-2", "-1", "0", "1", "2", "3"]
plt.plot(x, y, 'r|', ms="40")
for (i, j, name) in zip (x, y, data_name) :
plt.text(i, j, name, fontsize=15, position=(i-0.05, j-0.2))
plt.savefig("numberline.svg")
plt.show()
!cat numberline.svg
%%svg
<!-- Full matplotlib-generated markup of numberline.svg omitted for brevity: it draws a horizontal
     blue double-headed arrow from roughly -3.5 to 3.5, red "|" tick markers at the integers -3
     through 3 with matching text labels beneath them, on axes spanning -4 to 4 (x) and 0.0 to 1.0 (y). -->
###Output
_____no_output_____ |
1. Natural Language Processing with Classification and Vector Spaces/Week 1/NLP_C1_W1_lecture_nb_01.ipynb | ###Markdown
Preprocessing

In this lab, we will be exploring how to preprocess tweets for sentiment analysis. We will provide a function for preprocessing tweets during this week's assignment, but it is still good to know what is going on under the hood. By the end of this lecture, you will see how to use the [NLTK](http://www.nltk.org) package to perform a preprocessing pipeline for Twitter datasets.

Setup

You will be doing sentiment analysis on tweets in the first two weeks of this course. To help with that, we will be using the [Natural Language Toolkit (NLTK)](http://www.nltk.org/howto/twitter.html) package, an open-source Python library for natural language processing. It has modules for collecting, handling, and processing Twitter data, and you will be acquainted with them as we move along the course.

For this exercise, we will use a Twitter dataset that comes with NLTK. This dataset has been manually annotated and serves to establish baselines for models quickly. Let us import them now as well as a few other libraries we will be using.
###Code
import nltk # Python library for NLP
from nltk.corpus import twitter_samples # sample Twitter dataset from NLTK
import matplotlib.pyplot as plt # library for visualization
import random # pseudo-random number generator
###Output
_____no_output_____
###Markdown
About the Twitter dataset

The sample dataset from NLTK is separated into positive and negative tweets. It contains 5000 positive tweets and 5000 negative tweets exactly. The exact match between these classes is not a coincidence. The intention is to have a balanced dataset. That does not reflect the real distributions of positive and negative classes in live Twitter streams. It is just because balanced datasets simplify the design of most computational methods that are required for sentiment analysis. However, it is better to be aware that this balance of classes is artificial.

The dataset is already downloaded in the Coursera workspace. On a local computer, however, you can download the data by doing:
###Code
# downloads sample twitter dataset. uncomment the line below if running on a local machine.
# nltk.download('twitter_samples')
###Output
_____no_output_____
###Markdown
We can load the text fields of the positive and negative tweets by using the module's `strings()` method like this:
###Code
# select the set of positive and negative tweets
all_positive_tweets = twitter_samples.strings('positive_tweets.json')
all_negative_tweets = twitter_samples.strings('negative_tweets.json')
###Output
_____no_output_____
###Markdown
Next, we'll print a report with the number of positive and negative tweets. It is also essential to know the data structure of the datasets
###Code
print('Number of positive tweets: ', len(all_positive_tweets))
print('Number of negative tweets: ', len(all_negative_tweets))
print('\nThe type of all_positive_tweets is: ', type(all_positive_tweets))
print('The type of a tweet entry is: ', type(all_negative_tweets[0]))
###Output
Number of positive tweets: 5000
Number of negative tweets: 5000
The type of all_positive_tweets is: <class 'list'>
The type of a tweet entry is: <class 'str'>
###Markdown
We can see that the data is stored in a list and as you might expect, individual tweets are stored as strings.

You can make a more visually appealing report by using Matplotlib's [pyplot](https://matplotlib.org/tutorials/introductory/pyplot.html) library. Let us see how to create a [pie chart](https://matplotlib.org/3.2.1/gallery/pie_and_polar_charts/pie_features.html#sphx-glr-gallery-pie-and-polar-charts-pie-features-py) to show the same information as above. This simple snippet will serve you in future visualizations of this kind of data.
###Code
# Declare a figure with a custom size
fig = plt.figure(figsize=(5, 5))
# labels for the two classes
labels = 'Positives', 'Negative'
# Sizes for each slide
sizes = [len(all_positive_tweets), len(all_negative_tweets)]
# Declare pie chart, where the slices will be ordered and plotted counter-clockwise:
plt.pie(sizes, labels=labels, autopct='%1.1f%%',
shadow=True, startangle=90)
# Equal aspect ratio ensures that pie is drawn as a circle.
plt.axis('equal')
# Display the chart
plt.show()
###Output
_____no_output_____
###Markdown
Looking at raw texts

Before anything else, we can print a couple of tweets from the dataset to see how they look. Understanding the data is responsible for 80% of the success or failure in data science projects. We can use this time to observe aspects we'd like to consider when preprocessing our data.

Below, you will print one random positive and one random negative tweet. We have added a color mark at the beginning of the string to further distinguish the two. (Warning: This is taken from a public dataset of real tweets and a very small portion has explicit content.)
###Code
# print positive in green
print('\033[92m' + all_positive_tweets[random.randint(0,5000)])
# print negative in red
print('\033[91m' + all_negative_tweets[random.randint(0,5000)])
###Output
[92m@Fearnecotton good luck and cherish what you have an amazing gift of life :)
[91m@The_Lie_Lama back to Delhi :(
###Markdown
One observation you may have is the presence of [emoticons](https://en.wikipedia.org/wiki/Emoticon) and URLs in many of the tweets. This info will come in handy in the next steps.

Preprocess raw text for Sentiment analysis

Data preprocessing is one of the critical steps in any machine learning project. It includes cleaning and formatting the data before feeding into a machine learning algorithm. For NLP, the preprocessing steps are comprised of the following tasks:

* Tokenizing the string
* Lowercasing
* Removing stop words and punctuation
* Stemming

The videos explained each of these steps and why they are important. Let's see how we can do these to a given tweet. We will choose just one and see how this is transformed by each preprocessing step.
###Code
# Our selected sample. Complex enough to exemplify each step
tweet = all_positive_tweets[2277]
print(tweet)
###Output
My beautiful sunflowers on a sunny Friday morning off :) #sunflowers #favourites #happy #Friday off… https://t.co/3tfYom0N1i
###Markdown
Let's import a few more libraries for this purpose.
###Code
# download the stopwords from NLTK
nltk.download('stopwords')
import re # library for regular expression operations
import string # for string operations
from nltk.corpus import stopwords # module for stop words that come with NLTK
from nltk.stem import PorterStemmer # module for stemming
from nltk.tokenize import TweetTokenizer # module for tokenizing strings
###Output
_____no_output_____
###Markdown
Remove hyperlinks, Twitter marks and styles

Since we have a Twitter dataset, we'd like to remove some substrings commonly used on the platform like the hashtag, retweet marks, and hyperlinks. We'll use the [re](https://docs.python.org/3/library/re.html) library to perform regular expression operations on our tweet. We'll define our search pattern and use the `sub()` method to remove matches by substituting with an empty character (i.e. `''`)
###Code
print('\033[92m' + tweet)
print('\033[94m')
# remove old style retweet text "RT"
tweet2 = re.sub(r'^RT[\s]+', '', tweet)
# remove hyperlinks
tweet2 = re.sub(r'https?:\/\/.*[\r\n]*', '', tweet2)
# remove hashtags
# only removing the hash # sign from the word
tweet2 = re.sub(r'#', '', tweet2)
print(tweet2)
###Output
[92mMy beautiful sunflowers on a sunny Friday morning off :) #sunflowers #favourites #happy #Friday off… https://t.co/3tfYom0N1i
[94m
My beautiful sunflowers on a sunny Friday morning off :) sunflowers favourites happy Friday off…
###Markdown
Tokenize the string

To tokenize means to split the strings into individual words without blanks or tabs. In this same step, we will also convert each word in the string to lower case. The [tokenize](https://www.nltk.org/api/nltk.tokenize.html#module-nltk.tokenize.casual) module from NLTK allows us to do these easily:
###Code
print()
print('\033[92m' + tweet2)
print('\033[94m')
# instantiate tokenizer class
tokenizer = TweetTokenizer(preserve_case=False, strip_handles=True,
reduce_len=True)
# tokenize tweets
tweet_tokens = tokenizer.tokenize(tweet2)
print()
print('Tokenized string:')
print(tweet_tokens)
###Output
[92mMy beautiful sunflowers on a sunny Friday morning off :) sunflowers favourites happy Friday off…
[94m
Tokenized string:
['my', 'beautiful', 'sunflowers', 'on', 'a', 'sunny', 'friday', 'morning', 'off', ':)', 'sunflowers', 'favourites', 'happy', 'friday', 'off', '…']
###Markdown
Remove stop words and punctuations

The next step is to remove stop words and punctuation. Stop words are words that don't add significant meaning to the text. You'll see the list provided by NLTK when you run the cells below.
###Code
#Import the english stop words list from NLTK
stopwords_english = stopwords.words('english')
print('Stop words\n')
print(stopwords_english)
print('\nPunctuation\n')
print(string.punctuation)
###Output
Stop words
['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've", "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn', "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn', "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", 'won', "won't", 'wouldn', "wouldn't"]
Punctuation
!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~
###Markdown
We can see that the stop words list above contains some words that could be important in some contexts. These could be words like _i, not, between, because, won, against_. You might need to customize the stop words list for some applications. For our exercise, we will use the entire list.

For the punctuation, we saw earlier that certain groupings like ':)' and '...' should be retained when dealing with tweets because they are used to express emotions. In other contexts, like medical analysis, these should also be removed.

Time to clean up our tokenized tweet!
###Code
print()
print('\033[92m')
print(tweet_tokens)
print('\033[94m')
tweets_clean = []
for word in tweet_tokens: # Go through every word in your tokens list
if (word not in stopwords_english and # remove stopwords
word not in string.punctuation): # remove punctuation
tweets_clean.append(word)
print('removed stop words and punctuation:')
print(tweets_clean)
###Output
[92m
['my', 'beautiful', 'sunflowers', 'on', 'a', 'sunny', 'friday', 'morning', 'off', ':)', 'sunflowers', 'favourites', 'happy', 'friday', 'off', '…']
[94m
removed stop words and punctuation:
['beautiful', 'sunflowers', 'sunny', 'friday', 'morning', ':)', 'sunflowers', 'favourites', 'happy', 'friday', '…']
###Markdown
Please note that the words **happy** and **sunny** in this list are correctly spelled.

Stemming

Stemming is the process of converting a word to its most general form, or stem. This helps in reducing the size of our vocabulary.

Consider the words:
* **learn**
* **learn**ing
* **learn**ed
* **learn**t

All these words are stemmed from their common root **learn**. However, in some cases, the stemming process produces words that are not correct spellings of the root word. For example, **happi** and **sunni**. That's because it chooses the most common stem for related words. For example, we can look at the set of words that comprises the different forms of happy:

* **happ**y
* **happi**ness
* **happi**er

We can see that the prefix **happi** is more commonly used. We cannot choose **happ** because it is the stem of unrelated words like **happen**.

NLTK has different modules for stemming and we will be using the [PorterStemmer](https://www.nltk.org/api/nltk.stem.html#module-nltk.stem.porter) module which uses the [Porter Stemming Algorithm](https://tartarus.org/martin/PorterStemmer/). Let's see how we can use it in the cell below.
###Code
print()
print('\033[92m')
print(tweets_clean)
print('\033[94m')
# Instantiate stemming class
stemmer = PorterStemmer()
# Create an empty list to store the stems
tweets_stem = []
for word in tweets_clean:
stem_word = stemmer.stem(word) # stemming word
tweets_stem.append(stem_word) # append to the list
print('stemmed words:')
print(tweets_stem)
###Output
[92m
['beautiful', 'sunflowers', 'sunny', 'friday', 'morning', ':)', 'sunflowers', 'favourites', 'happy', 'friday', '…']
[94m
stemmed words:
['beauti', 'sunflow', 'sunni', 'friday', 'morn', ':)', 'sunflow', 'favourit', 'happi', 'friday', '…']
###Markdown
That's it! Now we have a set of words we can feed into the next stage of our machine learning project.

process_tweet()

As shown above, preprocessing consists of multiple steps before you arrive at the final list of words. We will not ask you to replicate these, however. In the week's assignment, you will use the function `process_tweet(tweet)` available in _utils.py_. We encourage you to open the file and you'll see that this function's implementation is very similar to the steps above.

To obtain the same result as in the previous code cells, you will only need to call the function `process_tweet()`. Let's do that in the next cell.
###Code
from utils import process_tweet # Import the process_tweet function
# choose the same tweet
tweet = all_positive_tweets[2277]
print()
print('\033[92m')
print(tweet)
print('\033[94m')
# call the imported function
tweets_stem = process_tweet(tweet); # Preprocess a given tweet
print('preprocessed tweet:')
print(tweets_stem) # Print the result
###Output
[92m
My beautiful sunflowers on a sunny Friday morning off :) #sunflowers #favourites #happy #Friday off… https://t.co/3tfYom0N1i
[94m
preprocessed tweet:
['beauti', 'sunflow', 'sunni', 'friday', 'morn', ':)', 'sunflow', 'favourit', 'happi', 'friday', '…']
|
module4/Mike_Xie_assignment_applied_modeling_4_1.ipynb | ###Markdown
Lambda School Data Science

*Unit 2, Sprint 3, Module 1*

---

Define ML problems

You will use your portfolio project dataset for all assignments this sprint.

Assignment

Complete these tasks for your project, and document your decisions.

- [ ] Choose your target. Which column in your tabular dataset will you predict?
- [ ] Is your problem regression or classification?
- [ ] How is your target distributed?
    - Classification: How many classes? Are the classes imbalanced?
    - Regression: Is the target right-skewed? If so, you may want to log transform the target.
- [ ] Choose your evaluation metric(s).
    - Classification: Is your majority class frequency >= 50% and < 70% ? If so, you can just use accuracy if you want. Outside that range, accuracy could be misleading. What evaluation metric will you choose, in addition to or instead of accuracy?
    - Regression: Will you use mean absolute error, root mean squared error, R^2, or other regression metrics?
- [ ] Choose which observations you will use to train, validate, and test your model.
    - Are some observations outliers? Will you exclude them?
    - Will you do a random split or a time-based split?
- [ ] Begin to clean and explore your data.
- [ ] Begin to choose which features, if any, to exclude. Would some features "leak" future information?

If you haven't found a dataset yet, do that today. [Review requirements for your portfolio project](https://lambdaschool.github.io/ds/unit2) and choose your dataset.

Lambda School Data Science

*Unit 2, Sprint 3, Module 2*

---

Wrangle ML datasets

- [ ] Continue to clean and explore your data.
- [ ] For the evaluation metric you chose, what score would you get just by guessing?
- [ ] Can you make a fast, first model that beats guessing?

**We recommend that you use your portfolio project dataset for all assignments this sprint.**

**But if you aren't ready yet, or you want more practice, then use the New York City property sales dataset for today's assignment.** Follow the instructions below, to just keep a subset for the Tribeca neighborhood, and remove outliers or dirty data. [Here's a video walkthrough](https://youtu.be/pPWFw8UtBVg?t=584) you can refer to if you get stuck or want hints!

- Data Source: [NYC OpenData: NYC Citywide Rolling Calendar Sales](https://data.cityofnewyork.us/dataset/NYC-Citywide-Rolling-Calendar-Sales/usep-8jbt)
- Glossary: [NYC Department of Finance: Rolling Sales Data](https://www1.nyc.gov/site/finance/taxes/property-rolling-sales-data.page)

Lambda School Data Science

*Unit 2, Sprint 3, Module 3*

---

Permutation & Boosting

You will use your portfolio project dataset for all assignments this sprint.

Assignment

Complete these tasks for your project, and document your work.

- [ ] If you haven't completed assignment 1, please do so first.
- [ ] Continue to clean and explore your data. Make exploratory visualizations.
- [ ] Fit a model. Does it beat your baseline?
- [ ] Try xgboost.
- [ ] Get your model's permutation importances.

You should try to complete an initial model today, because the rest of the week, we're making model interpretation visualizations.

But, if you aren't ready to try xgboost and permutation importances with your dataset today, that's okay. You can practice with another dataset instead. You may choose any dataset you've worked with previously.

The data subdirectory includes the Titanic dataset for classification and the NYC apartments dataset for regression. You may want to choose one of these datasets, because example solutions will be available for each.
Reading

Top recommendations in _**bold italic:**_

Permutation Importances
- _**[Kaggle / Dan Becker: Machine Learning Explainability](https://www.kaggle.com/dansbecker/permutation-importance)**_
- [Christoph Molnar: Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/feature-importance.html)

(Default) Feature Importances
- [Ando Saabas: Selecting good features, Part 3, Random Forests](https://blog.datadive.net/selecting-good-features-part-iii-random-forests/)
- [Terence Parr, et al: Beware Default Random Forest Importances](https://explained.ai/rf-importance/index.html)

Gradient Boosting
- [A Gentle Introduction to the Gradient Boosting Algorithm for Machine Learning](https://machinelearningmastery.com/gentle-introduction-gradient-boosting-algorithm-machine-learning/)
- _**[A Kaggle Master Explains Gradient Boosting](http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/)**_
- [_An Introduction to Statistical Learning_](http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf) Chapter 8
- [Gradient Boosting Explained](http://arogozhnikov.github.io/2016/06/24/gradient_boosting_explained.html)
- _**[Boosting](https://www.youtube.com/watch?v=GM3CDQfQ4sw) (2.5 minute video)**_
###Code
%%capture
import sys
if 'google.colab' in sys.modules:
# Install packages in Colab
!pip install category_encoders==2.*
!pip install pandas-profiling==2.*
!wget https://github.com/bowswung/voobly-scraper/raw/master/data/MatchData/20190208/matchDump.csv.zip
!unzip matchDump.csv.zip
!head matchDump.csv
import pandas as pd
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline
pd.options.display.max_rows = 999
pd.options.display.max_columns = 100
df = pd.read_csv('matchDump.csv', header=0, engine='python')
cols_to_keep = ['MatchId','MatchMods','MatchMap','MatchLadder','MatchDuration','MatchPlayerTeam','MatchPlayerCivId','MatchPlayerCivName','MatchPlayerWinner','MatchPlayerPreRating']
df = df[cols_to_keep]
df.shape
df.head(5)
###Output
_____no_output_____
###Markdown
Choose Target
###Code
# I want to predict which player wins
target = 'MatchPlayerWinner'
# This is 50/50 since odds are P1 and evens are P2 for a baseline
df[target].describe()
df['MatchLadder'].value_counts()
# only do 1v1 on random map game mode
rm_1v1 = df[df['MatchLadder'] == 'RM - 1v1']
# remove ones with errors
rm_1v1 = rm_1v1[rm_1v1.MatchPlayerPreRating != '*VooblyErrorPlayerNotFound*']
# drop matchladder now since it's all the same
rm_1v1 = rm_1v1.drop(labels=['MatchLadder','MatchPlayerCivName','MatchPlayerTeam'], axis=1)
rm_1v1.head()
rm_1v1.dtypes
rm_1v1['MatchPlayerCivId'] = rm_1v1['MatchPlayerCivId'].astype(int)
rm_1v1['MatchPlayerWinner'] = rm_1v1['MatchPlayerWinner'].astype(int)
rm_1v1['MatchPlayerPreRating'] = rm_1v1['MatchPlayerPreRating'].astype(int)
rm_1v1.dtypes
rm_1v1['MatchMods'].describe()
rm_1v1['MatchMap'].describe()
rm_1v1.tail()
y = rm_1v1[target]
y.nunique()
y.value_counts() # baseline is 50/50 since only P1 or P2 can win a 1v1
# ask Shriphani how to drop Voobly Errors
# do this later to feature engineer relative rating by getting P1 and P2 ratings on same row after merge
first_players = rm_1v1[::2]
second_players = rm_1v1[1::2]
first_players.shape, second_players.shape
first_players.head()
second_players.head()
p2_names = {
"MatchPlayerCivId" : "MatchPlayerCivId2",
"MatchPlayerPreRating" : "MatchPlayerPreRating2",
"MatchPlayerPostRating" : "MatchPlayerPostRating2"
}
second_players = second_players.rename(columns=p2_names)
drop_cols = ['MatchMods','MatchDuration','MatchPlayerWinner', 'MatchMap']
second_players = second_players.drop(labels=drop_cols, axis=1)
second_players.head()
rm_1v1 = first_players.merge(second_players, on='MatchId')
# set matchID as index
rm_1v1 = rm_1v1.set_index('MatchId')
# unfortunately all of the entries are formatted such that P1 is the match winner so we need to shuffle some
# half of it
from sklearn.model_selection import train_test_split
fst, snd = train_test_split(rm_1v1, train_size=0.5, test_size=0.5)
fst.shape, snd.shape
snd_names = {
"MatchPlayerCivId" : "MatchPlayerCivId2",
"MatchPlayerPreRating" : "MatchPlayerPreRating2",
"MatchPlayerPostRating" : "MatchPlayerPostRating2",
"MatchPlayerCivId2" : "MatchPlayerCivId",
"MatchPlayerPreRating2" : "MatchPlayerPreRating",
"MatchPlayerPostRating2" : "MatchPlayerPostRating"
}
snd = snd.rename(columns=snd_names)
snd['MatchPlayerWinner'] = snd['MatchPlayerWinner'].map({1:0})
snd = snd.fillna(0)
snd.head()
fst.head()
rm_1v1 = snd.append(fst, sort=True)
###Output
_____no_output_____
###Markdown
Feature Engineering
###Code
rm_1v1['EloDifference'] = rm_1v1['MatchPlayerPreRating'] - rm_1v1['MatchPlayerPreRating2']
rm_1v1.head()
rm_1v1.columns
rm_1v1['EloBin'] = pd.qcut(rm_1v1['EloDifference'], 6, labels=[1,2,3,4,5,6])
sns.barplot(x='EloBin', y='MatchPlayerWinner', data=rm_1v1);
rm_1v1['TimeBin'] = pd.qcut(rm_1v1['MatchDuration'], 6, labels=[1,2,3,4,5,6])
sns.barplot(x='TimeBin', y='MatchPlayerWinner', data=rm_1v1);
sns.barplot(x='MatchPlayerCivId', y='MatchPlayerWinner', data=rm_1v1);
# ok prob bin some of the less popular ones together
rm_1v1['MatchPlayerCivId'].value_counts()
rm_1v1.columns
rm_1v1.head()
###Output
_____no_output_____
###Markdown
FIT MODEL
###Code
rm_1v1.describe()
from sklearn.model_selection import train_test_split
train, val = train_test_split(rm_1v1, train_size = 0.8, test_size = 0.2)
train.shape, val.shape
import seaborn as sns
sns.barplot(x='EloDifference', y='MatchPlayerWinner', data=rm_1v1);
# run a model better than baseline
target = 'MatchPlayerWinner'
features = [
'MatchMods',
'MatchMap',
'MatchDuration',
# 'MatchPlayerCivId',
# 'MatchPlayerPreRating',
# 'MatchPlayerCivId2',
# 'MatchPlayerPreRating2',
'EloDifference',
'EloBin']
# looks sparse need to make a lot of feature engineering later
X_train = train[features]
y_train = train[target]
X_val = val[features]
y_val = val[target]
###Output
_____no_output_____
###Markdown
Train Accuracy 1.0 Still Not Sure Why
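A plausible explanation: an unconstrained `DecisionTreeClassifier` keeps splitting until it classifies every training row perfectly, so a training accuracy of 1.0 usually just means the tree memorized the training set rather than learned a generalizable pattern. One quick check (a diagnostic sketch that reuses the imports and train/validation split defined above; `max_depth=5` is an arbitrary cap) is to limit the tree depth and compare the two scores:

```python
# Diagnostic sketch: cap the tree depth and compare train vs. validation accuracy
capped = make_pipeline(
    ce.OrdinalEncoder(),
    DecisionTreeClassifier(max_depth=5, random_state=42)
)
capped.fit(X_train, y_train)
print('Capped train accuracy', capped.score(X_train, y_train))
print('Capped validation accuracy', capped.score(X_val, y_val))
```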
###Code
pipeline = make_pipeline(
ce.OrdinalEncoder(),
#RandomForestClassifier(n_estimators = 100, n_jobs=-1)
DecisionTreeClassifier(random_state=42)
)
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Train Accuracy', pipeline.score(X_train, y_train))
# well, that's, not good
print('Validation Accuracy', pipeline.score(X_val, y_val))
# fail
rm_1v1['EloDifference'].describe()
###Output
_____no_output_____
###Markdown
Visualization of Random Forest See Why It Fails
###Code
# import graphviz
# from sklearn.tree import export_graphviz
# model = pipeline.named_steps['decisiontreeclassifier']
# encoder = pipeline.named_steps['ordinalencoder']
# encoded_columns = encoder.transform(X_val).columns
# dot_data = export_graphviz(model,
# out_file=None,
# max_depth=3,
# feature_names=encoded_columns,
# class_names=model.classes_,
# impurity=False,
# filled=True,
# proportion=True,
# rounded=True)
# display(graphviz.Source(dot_data))
# # not sure why suddenly broken
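# One common culprit (a guess, not verified here): export_graphviz expects class_names
# to be strings, while model.classes_ holds the integers 0 and 1, which can raise a
# TypeError when the node labels are built. Casting them to str is worth trying,
# assuming the graphviz package itself is installed and importable.
import graphviz
from sklearn.tree import export_graphviz
model = pipeline.named_steps['decisiontreeclassifier']
encoded_columns = pipeline.named_steps['ordinalencoder'].transform(X_val).columns
dot_data = export_graphviz(model, out_file=None, max_depth=3,
                           feature_names=encoded_columns,
                           class_names=[str(c) for c in model.classes_],
                           impurity=False, filled=True, proportion=True, rounded=True)
display(graphviz.Source(dot_data))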
###Output
_____no_output_____
###Markdown
Wednesday XGBoost
###Code
rf = pipeline.named_steps['decisiontreeclassifier']
importances = pd.Series(rf.feature_importances_, X_train.columns)
%matplotlib inline
import matplotlib.pyplot as plt
n=20
plt.figure(figsize=(10,n/2))
plt.title(f'Top {n} features')
importances.sort_values()[-n:].plot.barh(color='grey');
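# The assignment also asks for permutation importances (as opposed to the default
# impurity-based ones plotted above). A sketch using sklearn.inspection -- n_repeats
# is an arbitrary choice, and older scikit-learn versions may require passing already
# encoded features instead of the raw X_val DataFrame:
from sklearn.inspection import permutation_importance
perm = permutation_importance(pipeline, X_val, y_val, n_repeats=5, random_state=42)
perm_importances = pd.Series(perm.importances_mean, X_val.columns)
print(perm_importances.sort_values(ascending=False))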
###Output
_____no_output_____
###Markdown
Thursday PDP and Shapley Plots
###Code
sns.distplot(y_train);
import category_encoders as ce
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
lr = make_pipeline(
ce.TargetEncoder(),
LinearRegression()
)
lr.fit(X_train, y_train)
print('Linear Regression R^2', lr.score(X_val, y_val))
# oh god these coefficients are so low
from sklearn.metrics import r2_score
from xgboost import XGBRegressor
gb = make_pipeline(
ce.OrdinalEncoder(),
XGBRegressor(n_estimators=200, objective='reg:squarederror', n_jobs=-1)
)
gb.fit(X_train, y_train)
y_pred = gb.predict(X_val)
print('Gradient Boosting R^2', r2_score(y_val, y_pred))
# increase the dots per inch (double it), so the text isn't so fuzzy
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 72
if 'google.colab' in sys.modules:
!pip install pdpbox
!pip install shap
from pdpbox.pdp import pdp_isolate, pdp_plot
feature = 'EloDifference'
isolated = pdp_isolate(
model=gb,
dataset=X_val,
model_features=X_val.columns,
feature=feature
)
pdp_plot(isolated, feature_name=feature)
# Plot PDP with 100 ICE curves
# PDP: Partial Dependence Plot
# ICE: Individual Conditional Expectation
pdp_plot(isolated, feature_name=feature, plot_lines=True,
frac_to_plot=0.01)
plt.xlim(-1000,1000);
###Output
_____no_output_____ |
Deep Learning-SEMICOLON/.ipynb_checkpoints/Numpy and matplotlib-checkpoint.ipynb | ###Markdown
Numpy
###Code
import numpy as np
x=[1,2,3.5,4.8]
a=np.array(x)
print(a)
print (a*2)
x=np.arange(1,10)
print (x)
x=np.arange(3.2)
y=np.arange(0.1,1.2,0.1)
print (x)
print (y)
# Numpy arrays
a=np.array([1,2,3,4])
b=np.array([[1,1],[2,2]])
print (a)
print (a.ndim)
print (b)
print (b.ndim)
np.shape(b)
np.shape(a)
#indexing
print (b[0])
print (b[0][1])
print (a[0:3])
#generating random number
a=np.random.randn(6,4)
print (a)
###Output
[[-1.34246676 -0.5807131 0.41730698 -0.88350931]
[ 0.38956869 -1.53343747 0.93098748 0.10891967]
[ 1.22380434 0.36496217 1.05097715 1.87054314]
[ 0.97132701 -0.5949714 -1.23374735 -1.18094145]
[ 0.72971215 0.30939061 0.45729465 0.24109221]
[-1.87087538 -0.45913391 -0.74520383 -0.82751107]]
###Markdown
Matplotlib pyplot
###Code
import matplotlib.pyplot as plt
k=[1.1,2.3,4.5,10.11]
plt.plot(k)
plt.ylabel("Y axis")
plt.xlabel("X axis")
plt.show()
plt.plot([1,2,3,4], [1,4,9,16], 'ro')
plt.axis([0, 5, 0, 16])
plt.show()
plt.plot(a,a**2,'ro',a,a**3,'b^')
plt.axis([0,5,0,5])
plt.show()
a
###Output
_____no_output_____ |
small_problems/notebooks/fibonacci.ipynb | ###Markdown
The Fibonacci sequence

The Fibonacci sequence is a sequence of numbers such that any number, except for the first and second, is the sum of the previous two:

```
0, 1, 1, 2, 3, 5, 8, 13, 21...
```

The value of the first Fibonacci number in the sequence is 0. The value of the fourth Fibonacci number is 2. It follows that to get the value of any Fibonacci number, n, in the sequence, one can use the formula

```python
fib(n) = fib(n - 1) + fib(n - 2)
```

A first recursive attempt

The preceding formula for computing a number in the Fibonacci sequence is a form of pseudocode that can be trivially translated into a recursive Python function. (A recursive function is a function that calls itself.) This mechanical translation will serve as our first attempt at writing a function to return a given value of the Fibonacci sequence.
###Code
def fib1(n: int) -> int:
return fib1(n - 1) + fib1(n - 2)
print(fib1(5))
###Output
_____no_output_____
###Markdown
Uh-oh! If we try to run `fib1`, we generate an error:

```
RecursionError: maximum recursion depth exceeded
```

The issue is that fib1() will run forever without returning a final result. Every call to fib1() results in another two calls of fib1() with no end in sight. We call such a circumstance infinite recursion, and it is analogous to an infinite loop.

Utilizing base cases

Notice that until you run fib1(), there is no indication from your Python environment that there is anything wrong with it. It is the duty of the programmer to avoid infinite recursion, not the compiler or the interpreter. The reason for the infinite recursion is that we never specified a base case. In a recursive function, a base case serves as a stopping point.

In the case of the Fibonacci function, we have natural base cases in the form of the special first two sequence values, 0 and 1. Neither 0 nor 1 is the sum of the previous two numbers in the sequence. Instead, they are the special first two values. Let's try specifying them as base cases.
###Code
def fib2(n: int) -> int:
if n < 2: #basecase
return n
return fib2(n - 2) + fib2(n - 1) # recursive case
print(fib2(5))
print(fib2(10))
###Output
5
55
###Markdown
> **NOTE** The fib2() version of the Fibonacci function returns 0 as the zeroth number (fib2(0)), rather than the first number, as in our original proposition. In a programming context, this kind of makes sense because we are used to sequences starting with a zeroth element.

Do not try calling fib2(50). It will never finish executing! Why? Every call to fib2() results in two more calls to fib2() by way of the recursive calls fib2(n - 1) and fib2(n - 2). In other words, the call tree grows exponentially.

Memoization to the rescue

Memoization is a technique in which you store the results of computational tasks when they are completed so that when you need them again, you can look them up instead of needing to compute them a second (or millionth) time.
###Code
from typing import Dict
memo: Dict[int, int] = {0: 0, 1: 1} # our base cases
def fib3(n: int) -> int:
if n not in memo:
memo[n] = fib3(n - 1) + fib3(n - 2) # memoization
return memo[n]
print(fib3(5))
print(fib3(50))
###Output
5
12586269025
###Markdown
A call to fib3(20) will result in just 39 calls of fib3() as opposed to the 21,891 of fib2() resulting from the call fib2(20). memo is pre-filled with the earlier base cases of 0 and 1, saving fib3() from the complexity of another if statement.

Automatic memoization

fib3() can be further simplified. Python has a built-in decorator for memoizing any function automagically. In fib4(), the decorator @functools.lru_cache() is used with the same exact code as we used in fib2(). Each time fib4() is executed with a novel argument, the decorator causes the return value to be cached. Upon future calls of fib4() with the same argument, the previous return value of fib4() for that argument is retrieved from the cache and returned.
###Code
from functools import lru_cache
@lru_cache(maxsize=None)
def fib4(n: int) -> int: # same definition as fib2()
if n < 2: #basecase
return n
return fib4(n - 2) + fib4(n - 1) # recursive case
print(fib4(5))
print(fib4(50))
###Output
5
12586269025
###Markdown
> **Note** that we are able to calculate fib4(50) instantly, even though the body of the Fibonacci function is the same as that in fib2(). @lru_cache's maxsize property indicates how many of the most recent calls of the function it is decorating should be cached. Setting it to None indicates that there is no limit.

Keep it simple, Fibonacci

There is an even more performant option. We can solve Fibonacci with an old-fashioned iterative approach.
###Code
def fib5(n: int) -> int:
if n == 0:
return n # special case
last: int = 0 # initially set to fib(0)
next: int = 1 # initially set to fib(1)
for _ in range(1, n):
last, next = next, last + next
return next
print(fib5(5))
print(fib5(50))
###Output
5
12586269025
###Markdown
> **WARNING** The body of the for loop in fib5() uses tuple unpacking in perhaps a bit of an overly clever way. Some may feel that it sacrifices readability for conciseness. Others may find the conciseness in and of itself more readable. The gist is, last is being set to the previous value of next, and next is being set to the previous value of last plus the previous value of next. This avoids the creation of a temporary variable to hold the old value of next after last is updated but before next is updated. Using tuple unpacking in this fashion for some kind of variable swap is common in Python.

With this approach, the body of the for loop will run a maximum of n - 1 times. In other words, this is the most efficient version yet. Compare 19 runs of the for loop body to 21,891 recursive calls of fib2() for the 20th Fibonacci number. That could make a serious difference in a real-world application!

In the recursive solutions, we worked backward. In this iterative solution, we work forward. Sometimes recursion is the most intuitive way to solve a problem. For example, the meat of fib1() and fib2() is pretty much a mechanical translation of the original Fibonacci formula. However, naive recursive solutions can also come with significant performance costs. Remember, any problem that can be solved recursively can also be solved iteratively.

Generating Fibonacci numbers with a generator

So far, we have written functions that output a single value in the Fibonacci sequence. What if we want to output the entire sequence up to some value instead? It is easy to convert fib5() into a Python generator using the yield statement. When the generator is iterated, each iteration will spew a value from the Fibonacci sequence using a yield statement.
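The same `last, next = next, last + next` update appears again inside fib6() below. For readers less familiar with the tuple-unpacking idiom described in the warning above, here is a tiny standalone illustration (not part of the original listing):

```python
a, b = 1, 2
a, b = b, a + b   # the right-hand tuple (2, 3) is built from the old values first, then unpacked
print(a, b)       # prints: 2 3
```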
###Code
from typing import Generator
def fib6(n: int) -> Generator[int, None, None]:
yield 0 # special case
if n > 0: yield 1 # special case
last: int = 0 # initially set to fib(0)
next: int = 1 # initially set to fib(1)
for _ in range(1, n):
last, next = next, last + next
yield next # main generation step
for i in fib6(50):
print(i)
###Output
0
1
1
2
3
5
8
13
21
34
55
89
144
233
377
610
987
1597
2584
4181
6765
10946
17711
28657
46368
75025
121393
196418
317811
514229
832040
1346269
2178309
3524578
5702887
9227465
14930352
24157817
39088169
63245986
102334155
165580141
267914296
433494437
701408733
1134903170
1836311903
2971215073
4807526976
7778742049
12586269025
|
intermediate_notebooks/benchmarks/rapids_decomposition.ipynb | ###Markdown
Compare RAPIDS Decomposition Algorithms

Heavily influenced by https://umap-learn.readthedocs.io/en/latest/auto_examples/plot_algorithm_comparison.html#sphx-glr-auto-examples-plot-algorithm-comparison-py

Necessary pip installs
###Code
!pip install seaborn
!pip install matplotlib
###Output
_____no_output_____
###Markdown
Import Libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import time
import dask_cudf
import cudf
import cuml
import pandas as pd
from sklearn import datasets, decomposition, manifold, preprocessing
from colorsys import hsv_to_rgb
###Output
Requirement already satisfied: seaborn in /conda/envs/rapids/lib/python3.7/site-packages (0.9.0)
Requirement already satisfied: pandas>=0.15.2 in /conda/envs/rapids/lib/python3.7/site-packages (from seaborn) (0.23.4)
Requirement already satisfied: numpy>=1.9.3 in /conda/envs/rapids/lib/python3.7/site-packages (from seaborn) (1.15.4)
Requirement already satisfied: scipy>=0.14.0 in /conda/envs/rapids/lib/python3.7/site-packages (from seaborn) (1.2.1)
Requirement already satisfied: matplotlib>=1.4.3 in /conda/envs/rapids/lib/python3.7/site-packages (from seaborn) (3.0.3)
Requirement already satisfied: python-dateutil>=2.5.0 in /conda/envs/rapids/lib/python3.7/site-packages (from pandas>=0.15.2->seaborn) (2.8.0)
Requirement already satisfied: pytz>=2011k in /conda/envs/rapids/lib/python3.7/site-packages (from pandas>=0.15.2->seaborn) (2018.9)
Requirement already satisfied: cycler>=0.10 in /conda/envs/rapids/lib/python3.7/site-packages (from matplotlib>=1.4.3->seaborn) (0.10.0)
Requirement already satisfied: pyparsing!=2.0.4,!=2.1.2,!=2.1.6,>=2.0.1 in /conda/envs/rapids/lib/python3.7/site-packages (from matplotlib>=1.4.3->seaborn) (2.4.0)
Requirement already satisfied: kiwisolver>=1.0.1 in /conda/envs/rapids/lib/python3.7/site-packages (from matplotlib>=1.4.3->seaborn) (1.0.1)
Requirement already satisfied: six>=1.5 in /conda/envs/rapids/lib/python3.7/site-packages (from python-dateutil>=2.5.0->pandas>=0.15.2->seaborn) (1.12.0)
Requirement already satisfied: setuptools in /conda/envs/rapids/lib/python3.7/site-packages (from kiwisolver>=1.0.1->matplotlib>=1.4.3->seaborn) (40.8.0)
###Markdown
Prepare and size your data

Things to note about the datasets:
- Blobs: A set of five gaussian blobs in 10 dimensional space. This should be a prototypical example of something that should clearly separate even in a reduced dimension space.
- Iris: a classic small dataset with one distinct class and two classes that are not clearly separated.
- Digits: handwritten digits – ideally different digit classes should form distinct groups. Due to the nature of handwriting digits may have several forms (crossed or uncrossed sevens, capped or straight line oes, etc.)
- Wine: wine characteristics ideally used for a toy regression. Ultimately the data is essentially one dimensional in nature.
- Swiss Roll: data is essentially a rectangle, but has been "rolled up" like a swiss roll in three dimensional space. Ideally a dimension reduction technique should be able to "unroll" it. The data has been coloured according to one dimension of the rectangle, so should form a rectangle of smooth color variation.
- Sphere: the two dimensional surface of a three dimensional sphere. This cannot be represented accurately in two dimensions without tearing. The sphere has been coloured with hue around the equator and black to white from the south to north pole.
###Code
## Just for kicks, we 10xed the samples compared to the original source code. Cause we can.
sns.set(context="paper", style="white")
blobs, blob_labels = datasets.make_blobs(
n_samples=5000, n_features=10, centers=5, random_state=42
)
iris = datasets.load_iris()
digits = datasets.load_digits(n_class=10)
wine = datasets.load_wine()
swissroll, swissroll_labels = datasets.make_swiss_roll(
n_samples=10000, noise=0.1, random_state=42
)
sphere = np.random.normal(size=(600, 3))
sphere = preprocessing.normalize(sphere)
sphere_hsv = np.array(
[
(
(np.arctan2(c[1], c[0]) + np.pi) / (2 * np.pi),
np.abs(c[2]),
min((c[2] + 1.1), 1.0),
)
for c in sphere
]
)
sphere_colors = np.array([hsv_to_rgb(*c) for c in sphere_hsv])
###Output
_____no_output_____
###Markdown
Call your algorithms and define your iterations
###Code
## Change your parameters so that you can see how it affects your results
nc = 2
rs = 42
nn = 30
## Iterate through our decomposition algorithms TSVD, PCA, and UMAP
reducers = [
(cuml.TruncatedSVD(n_components=nc,algorithm='full', random_state=rs)),
(cuml.TruncatedSVD(n_components=nc,algorithm='jacobi', random_state=rs)),
(cuml.PCA(n_components=nc,svd_solver='full',whiten=False, random_state=rs)),
(cuml.PCA(n_components=nc,svd_solver='jacobi',whiten=False, random_state=rs)),
(cuml.UMAP(n_neighbors=nn, init="spectral"))
]
## Iterate through your datasets
test_data = [
(blobs, blob_labels),
(iris.data, iris.target),
(digits.data, digits.target),
(wine.data, wine.target),
(swissroll, swissroll_labels),
(sphere, sphere_colors),
]
## Name your data
dataset_names = ["Blobs", "Iris", "Digits", "Wine", "Swiss Roll", "Sphere"]
## Helper variables
n_rows = len(test_data)
n_cols = len(reducers)
ax_index = 1
ax_list = []
## Size your plots
plt.rcParams["figure.figsize"] = [20,20]
plt.subplots_adjust(
left=.2, right=10, bottom=.001, top=.96, wspace=.05, hspace=.1
)
###Output
_____no_output_____
###Markdown
Run your tests
###Code
for data, labels in test_data:
gdf=cudf.DataFrame.from_records(data) # sklearn data is a numpy ndarray, so we can just use "from_records" to put it into a cudf dataframe
for reducer in reducers:
start_time = time.time()
embedding = reducer.fit_transform(gdf)
elapsed_time = time.time() - start_time
ax = plt.subplot(n_rows, n_cols, ax_index)
#print(embedding.T)
embedding_numpy = embedding.to_pandas().values
#pdb.set_trace()
if isinstance(labels[0], tuple):
ax.scatter(*embedding_numpy.T, s=10, c=labels, alpha=0.5)
else:
ax.scatter(
*embedding_numpy.T, s=10, c=labels, cmap="Spectral", alpha=0.5
)
ax.text(
0.99,
0.01,
"{:.2f} s".format(elapsed_time),
transform=ax.transAxes,
size=14,
horizontalalignment="right",
)
ax_list.append(ax)
ax_index += 1
plt.setp(ax_list, xticks=[], yticks=[])
for i in np.arange(n_rows) * n_cols:
ax_list[i].set_ylabel(dataset_names[i // n_cols], size=16)
for i in range(n_cols):
ax_list[i].set_xlabel(repr(reducers[i]).split(".")[2], size=16)
ax_list[i].xaxis.set_label_position("top")
#plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
01-code-scripts/concatenate_vnp46a1.ipynb | ###Markdown
Introduction

Concatenates already-preprocessed VNP46A1 GeoTiff files that are spatially adjacent in the longitudinal direction and exports single GeoTiff files containing the concatenated data. Used in cases when a study area bounding box intersects two VNP46A1 grid cells (e.g. `VNP46A1.A2020001.h30v05.001.2020004003738.h5` and `VNP46A1.A2020001.h31v05.001.2020004003738.h5` for raw files and `vnp46a1-a2020001-h30v05-001-2020004003738.tif` and `vnp46a1-a2020001-h31v05-001-2020004003841.tif` for already-preprocessed files).

This Notebook uses the following folder structure:

```
├── 01-code-scripts
│   ├── clip_vnp46a1.ipynb
│   ├── clip_vnp46a1.py
│   ├── concatenate_vnp46a1.ipynb
│   ├── concatenate_vnp46a1.py
│   ├── download_laads_order.ipynb
│   ├── download_laads_order.py
│   ├── preprocess_vnp46a1.ipynb
│   ├── preprocess_vnp46a1.py
│   └── viirs.py
├── 02-raw-data
├── 03-processed-data
├── 04-graphics-outputs
└── 05-papers-writings
```

Running the Notebook from the `01-code-scripts/` folder works by default. If the Notebook runs from a different folder, the paths in the environment setup section may have to be changed.

This notebook uses files that have already been preprocessed and saved to GeoTiff files.

Environment Setup
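The concatenation itself is performed by `viirs.concatenate_preprocessed_vnp46a1()`, which is defined in `viirs.py` and not shown in this notebook. Purely as a hedged illustration of what joining two longitudinally adjacent GeoTiffs can look like (this sketch uses `rasterio` and may differ from the actual implementation):

```python
# Illustrative sketch only -- not the code used by viirs.concatenate_preprocessed_vnp46a1()
import rasterio
from rasterio.merge import merge

def concatenate_adjacent_geotiffs(west_path, east_path, output_path):
    """Mosaic two spatially adjacent GeoTiffs into a single output file."""
    with rasterio.open(west_path) as west, rasterio.open(east_path) as east:
        mosaic, transform = merge([west, east])   # stitch the rasters together
        profile = west.profile.copy()
        profile.update(
            height=mosaic.shape[1], width=mosaic.shape[2], transform=transform
        )
    with rasterio.open(output_path, "w", **profile) as dst:
        dst.write(mosaic)
```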
###Code
# Load Notebook formatter
%load_ext nb_black
# %reload_ext nb_black
# Import packages
import os
import warnings
import glob
import viirs
# Set options
warnings.simplefilter("ignore")
# Set working directory
os.chdir("..")
###Output
_____no_output_____
###Markdown
User-Defined Variables
###Code
# Define path to folder containing preprocessed VNP46A1 GeoTiff files
geotiff_input_folder = os.path.join(
"03-processed-data", "raster", "south-korea", "vnp46a1-grid"
)
# Define path to output folder to store concatenated, exported GeoTiff files
geotiff_output_folder = os.path.join(
"03-processed-data", "raster", "south-korea", "vnp46a1-grid-concatenated"
)
# Set start date and end date for processing
start_date, end_date = "2020-01-01", "2020-04-09"
###Output
_____no_output_____
###Markdown
Data Preprocessing
###Code
# Concatenate and export adjacent images that have the same acquisition date
dates = viirs.create_date_range(start_date=start_date, end_date=end_date)
geotiff_files = glob.glob(os.path.join(geotiff_input_folder, "*.tif"))
concatenated_dates = 0
skipped_dates = 0
processed_dates = 0
total_dates = len(dates)
for date in dates:
adjacent_images = []
for file in geotiff_files:
if date in viirs.extract_date_vnp46a1(geotiff_path=file):
adjacent_images.append(file)
adjacent_images_sorted = sorted(adjacent_images)
if len(adjacent_images_sorted) == 2:
viirs.concatenate_preprocessed_vnp46a1(
west_geotiff_path=adjacent_images_sorted[0],
east_geotiff_path=adjacent_images_sorted[1],
output_folder=geotiff_output_folder,
)
concatenated_dates += 1
else:
skipped_dates += 1
processed_dates += 1
print(f"Processed dates: {processed_dates} of {total_dates}\n\n")
print(
f"Concatenated dates: {concatenated_dates}, Skipped dates: {skipped_dates}"
)
###Output
_____no_output_____ |
preprocessing/BLink_visualization_networks_capital_new.ipynb | ###Markdown
read data
###Code
import numpy as np
import pandas as pd
import geopandas as gpd
from tqdm.notebook import tqdm
from copy import deepcopy
import time
import networkx as nx
import os
import sys
import matplotlib.pyplot as plt
#os.environ['PROJ_LIB'] = os.path.dirname(sys.argv[0])
# 55.6s
#file_path = 'data/blink_data/BLinkDataPt2_2005.csv'
file_path = 'data/blink_data/BLinkDataPt2_2005.csv'
col_names = ['c{0:02d}'.format(i) for i in range(32) ]
col_names = ['mesh_code', 'map_level','area_code', 'link_id', 'diff',
'road_type_code','lane_num_code',
'start_trans_id', 'end_tras_id',
'link_length', 'link_type_code',
'oneway_limit_flag','stop_way_limit_flag',
'start_lon','start_lat', 'latlng_diff_num'] + col_names
df = pd.read_csv(file_path, encoding='shift_jis', names=col_names, skiprows=3)
df = df[['mesh_code', 'map_level','area_code', 'link_id', 'diff',
'road_type_code','lane_num_code',
'start_trans_id', 'end_tras_id',
'link_length', 'link_type_code',
'oneway_limit_flag','stop_way_limit_flag',
'start_lon','start_lat', 'latlng_diff_num']]
df
df.dtypes
print('Unique area codes: ', df.area_code.unique())
print('No. of unique road types: ', df.road_type_code.nunique())
print('No. of unique link_ids: ', df.link_id.nunique())
print('No. of duplicated link_ids:', df.link_id.duplicated().sum())
blink = pd.read_csv('data/blink_data/BlinkData.csv', encoding='shift_jis')
blink = blink[blink.roadname.notnull()]
blink.columns = ['area_code', 'link_id', 'diff', 'roadname', 'routenumber',
'roadtype', 'link_type_code', 'tollroad', 'motorway', 'hwy', 'oneway_limit_flag',
'speedlimit', 'hwyupdown']
blink
# for i,roadname in enumerate(blink.roadname.unique()):
# print(i," ",roadname)
print('Unique Area Codes:', blink.area_code.unique())
print('No. of unique roadnames:', blink.roadname.nunique())
print('No. of unique route numbers:', blink.routenumber.nunique())
print('No. of unique link_ids:', blink.link_id.nunique())
print('No. of duplicated link_ids:', blink.link_id.duplicated().sum())
print('No. of blink link_id not present in the main df:', len(blink) - blink.link_id.isin(df.link_id).sum())
#Merging of both dataframes - df and blink (on common columns)
blinkmerged = pd.merge(blink, df[['mesh_code', 'area_code', 'link_id', 'diff', 'road_type_code', 'lane_num_code',
'start_trans_id', 'end_tras_id', 'link_length', 'link_type_code', 'start_lon', 'start_lat']],
how='left', on=['area_code', 'link_id', 'diff'])
#Check 10s
tmp = [1 for x,y in (zip(blinkmerged[blinkmerged.mesh_code.isna()].link_id.values,
blinkmerged[blinkmerged.mesh_code.isna()].area_code.values)) if x not in df.link_id.values]
#if y not in df[df.link_id==x].link_type_code.values]
sum(tmp)
blinkmerged.isna().sum()
# len=25
capital_road_list =[
"首都高速湾岸線",
"首都高速神奈川3号狩場線",
"首都高速神奈川2号三ツ沢線",
"首都高速神奈川1号横羽線",
"首都高速1号羽田線",
"首都高速神奈川5号大黒線",
"首都高速神奈川6号川崎線",
"首都高速3号渋谷線",
"首都高速2号目黒線",
"首都高速4号新宿線",
"首都高速中央環状線",
"首都高速都心環状線",
"首都高速1号上野線",
"首都高速11号台場線",
"首都高速10号晴海線",
"首都高速9号深川線",
"首都高速5号池袋線",
"首都高速埼玉大宮線",
"首都高速八重洲線",
"首都高速6号向島線",
"首都高速7号小松川線",
"首都高速6号三郷線",
"首都高速川口線",
"首都高速埼玉新都心線",
"首都高速神奈川7号横浜北線"
]
blinkmerged_capital=blinkmerged.loc[blinkmerged["roadname"].isin(capital_road_list)]
blinkmerged_capital
### Restrict to the bounding box: 35.36 <= latitude <= 35.90 and 139.537 <= longitude <= 139.947 (coordinates are stored as degrees * 128 * 3600)
blinkmerged_capital_new= blinkmerged_capital[blinkmerged_capital['start_lon'].between(139.537*128*3600, 139.947*128*3600, inclusive=True)& blinkmerged_capital['start_lat'].between(35.36*128*3600, 35.90*128*3600, inclusive=True)]
blinkmerged_capital_new
###Output
_____no_output_____
###Markdown
Analyse the data for the longest path. Step 0: create connect_dic {link_id: [link_id1, link_id2, ...]}
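The next cell builds this successor mapping with nested loops over the dataframe. As a hedged aside (not part of the original notebook), an equivalent mapping can usually be built with a single pandas self-join on the transition IDs; the sketch below assumes the `blinkmerged_capital_new` dataframe and column names defined above, and the order of successors per link may differ from the loop version:

```python
# Hypothetical vectorized alternative to the loop in the next cell
pairs = blinkmerged_capital_new.merge(
    blinkmerged_capital_new,
    left_on="end_tras_id",      # end transition of the current link
    right_on="start_trans_id",  # start transition of the successor link
    suffixes=("", "_next"),
)
connect_dic_fast = (
    pairs.groupby("link_id")["link_id_next"]
    .apply(lambda s: s.unique().tolist())
    .to_dict()
)
```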
###Code
connect_dic={}
for linkid in tqdm(blinkmerged_capital_new["link_id"]):
for _end_crs_id in blinkmerged_capital_new[blinkmerged_capital_new['link_id']==linkid]['end_tras_id'].values:
for end_link in blinkmerged_capital_new[blinkmerged_capital_new['start_trans_id']==_end_crs_id]['link_id'].values:
if linkid not in connect_dic.keys():
connect_dic[linkid]=[end_link]
else:
if end_link not in connect_dic[linkid]:
connect_dic[linkid].append(end_link)
len(connect_dic.keys())
connect_dic1=deepcopy(connect_dic)
G = nx.DiGraph()
for key in connect_dic1.keys():
for value in connect_dic1[key]:
G.add_edge(key, value)
print(G.number_of_nodes())
nodes = list(G.nodes)
###Output
_____no_output_____
###Markdown
create adjacency_matrix
###Code
#M1 = nx.adjacency_matrix(G,nodelist=nodes)
M1=nx.to_pandas_adjacency(G, nodelist=nodes, dtype=int)
len(M1)
#np.savetxt('result/visualization/matrix.txt',M1.toarray(), fmt='%d',)
M1.to_csv('result/metrix/capital_01_relation.csv')
###Output
_____no_output_____
###Markdown
create distance adjacency_matrix
###Code
# distance M2
from haversine import haversine, Unit
G2 = nx.DiGraph()
for key in tqdm(connect_dic1.keys()):
for value in connect_dic1[key]:
key_latlon=(df[df['link_id']==key]['start_lat'].values[0]/128/3600,df[df['link_id']==key]['start_lon'].values[0]/128/3600)
value_latlon=(df[df['link_id']==value]['start_lat'].values[0]/128/3600,df[df['link_id']==value]['start_lon'].values[0]/128/3600)
G2.add_edge(key, value, weight=haversine(key_latlon, value_latlon)*1000)
#df[df['link_id']==83585420]['start_lat'].values[0]/128/3600
M2=nx.to_pandas_adjacency(G2, nodelist=nodes, dtype=float)
M2
for row in tqdm(M2.index):
for column in M2:
key_latlon=(df[df['link_id']==row]['start_lat'].values[0]/128/3600, df[df['link_id']==row]['start_lon'].values[0]/128/3600)
value_latlon=(df[df['link_id']==column]['start_lat'].values[0]/128/3600,df[df['link_id']==column]['start_lon'].values[0]/128/3600)
M2.at[row,column]=haversine(key_latlon, value_latlon)*1000
M2
M2.to_csv('result/metrix/capital_distance_relation.csv')
###Output
_____no_output_____
###Markdown
create capital_graph_link_info.csv
###Code
blinkmerged_capital_new_graph= blinkmerged_capital_new.loc[blinkmerged_capital_new["link_id"].isin(nodes)][["link_id","roadname","start_lon","start_lat"]]
blinkmerged_capital_new_graph=blinkmerged_capital_new_graph.reset_index(drop=True)
blinkmerged_capital_new_graph=blinkmerged_capital_new_graph.apply({"link_id": lambda x:x,"roadname": lambda x:x, "start_lon": lambda x:x/128/3600, "start_lat": lambda x:x/128/3600})
blinkmerged_capital_new_graph.to_csv('result/metrix/capital_graph_link_info.csv')
###Output
_____no_output_____ |
notebooks/MPS_T09_R_Hypotheses_and_Regression_solution.ipynb | ###Markdown
*Managerial Problem Solving*

Tutorial 9 - Hypothesis Testing and Regression Analysis

Toni Greif
Lehrstuhl für Wirtschaftsinformatik und Informationsmanagement
SS 2019

Hypothesis Testing

Drawing inferences about two contrasting propositions (each called a hypothesis) relating to the value of one or more population parameters.

- $H_0$ Null hypothesis: describes an existing theory (conservative, adversarial)
- $H_1$ Alternative hypothesis: the complement of $H_0$

Using sample data, we either:

- reject $H_0$ and conclude the sample data provides sufficient evidence to support $H_1$, or
- fail to reject $H_0$ and conclude the sample data does not support $H_1$.

Understanding Risk in Hypothesis Testing

We always risk drawing an incorrect conclusion:

- $H_0$ is true and the test correctly fails to reject $H_0$
- $H_0$ is false and the test correctly rejects $H_0$
- $H_0$ is true and the test incorrectly rejects $H_0$ (called a *Type I error*)
- $H_0$ is false and the test incorrectly fails to reject $H_0$ (called a *Type II error*)

We are typically most concerned about Type I errors:

- Innocent person convicted
- Ineffective treatment approved
- Sick person considered healthy

Steps of Hypothesis Testing procedures

1. Identify the population parameter and formulate the hypotheses to test.
2. Select a level of significance (related to the risk of drawing an incorrect conclusion).
3. Determine a decision rule on which to base a conclusion.
4. Collect data and calculate a test statistic.
5. Apply the decision rule and draw a conclusion.

The key competence in hypothesis testing is the correct choice of test statistic, and the interpretation of the results (critical value, p-value, confidence interval, ...).

Computing the Test Statistics

**One-sample test on a mean, σ unknown**

$$t=\frac{\bar{x}-\mu_0}{s/\sqrt{n}}$$

**One-sample test on a proportion**

$$z=\frac{\hat{p}-\pi_0}{\sqrt{\pi_0(1-\pi_0)/n}}$$

with $\hat{p}=\frac{\text{number in the sample}}{\text{size of the sample}}$.

However, we will rely on pre-installed test functions in most applications.

Exercise 1

Use the mtcars data set to test the following hypothesis:

- The average mpg of a car is below 20.
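As a quick numerical illustration of the proportion test statistic (the numbers below are invented for illustration and do not come from any dataset used in this tutorial): with $\pi_0 = 0.02$, $n = 1000$ and an observed $\hat{p} = 30/1000 = 0.03$,

$$z=\frac{0.03-0.02}{\sqrt{0.02\,(1-0.02)/1000}}\approx\frac{0.01}{0.00443}\approx 2.26,$$

which exceeds the one-sided 5% critical value $z_{0.95}\approx 1.645$, so $H_0$ would be rejected in that hypothetical case.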
###Code
library(tidyverse)
df <- mtcars
###Output
_____no_output_____
###Markdown
Manual calculation of the t-statistic

$$t=\frac{\bar{x}-\mu_0}{s/\sqrt{n}}$$
###Code
(mean(df$mpg) - 20)/((sd(df$mpg)/sqrt(nrow(df))))
pt(0.085, df = nrow(df) - 1)  # degrees of freedom for a one-sample t-test: n - 1
###Output
_____no_output_____
###Markdown
...and now using the pre-installed function:

```R
t.test()
```

- $H_0: mean(mpg) \leq 20$
- $H_1: mean(mpg) > 20$
###Code
t.test(df$mpg,
mu = 20,
alternative = "greater")
###Output
_____no_output_____
###Markdown
Use the pre-installed function to test the following hypothesis:

- Cars with more than 4 cylinders have a lower mpg than cars with 4 or fewer cylinders.
###Code
# H0: mean(mpg["bigCars"]) >= mean(mpg["smallCars"])
# H1: mean(mpg["bigCars"]) < mean(mpg["smallCars"])
t.test(df %>% filter(cyl <= 4) %>% select(mpg),
df %>% filter(cyl > 4) %>% select(mpg),
alternative = "greater")
###Output
_____no_output_____
###Markdown
Exercise 2

The file *roomInspections.csv* summarizes the room inspection results of a hotel chain. During the sampling, 1000 hotel rooms were inspected.
###Code
df <- read.csv("data/T09/roomInspections.csv")
df %>% head()
###Output
_____no_output_____
###Markdown
The management wants the share of rooms not matching the standard to be below 2%. Formulate a suitable hypothesis and test it.
###Code
# H0: p(FALSE) <= 0.02
# H1: p(FALSE) > 0.02   (alternative = "greater" tests whether the failure rate exceeds 2%)
prop.test(df %>% filter(roomOk == FALSE) %>% nrow(),
df %>% nrow(),
p = 0.02,
alternative = "greater")
###Output
_____no_output_____
###Markdown
Exercise 3

A retailer believes that a new marketing strategy can improve revenues. Until now, customer spending across 15 different categories averages 70.00€, both for customers between 18 and 34 and for customers 35+. After the new marketing strategy is launched, the spending of customers is analyzed.

1. Set up the hypotheses to test the success of the marketing strategy.
2. 300 of the surveyed customers are aged between 18 and 34. Their average spending is 75.86€ with a standard deviation of 50.90€. Has the average spending changed significantly?
3. 700 of the surveyed customers are aged above 35. Their average spending is 68.53€ with a standard deviation of 45.29€. Has the average spending of this group changed significantly?
###Code
tStat <- function(xbar, x0, s, n){
(xbar-x0)/(s/sqrt(n))
}
#H0: mean(spending) <= 70
#H1: mean(spending) > 70
tStat(75.86,70, 50.90, 300)
qt(0.95, 299)
#H0: mean(spending) = 70
#H1: mean(spending) != 70
tStat(68.53, 70, 45.29, 700)
qt(0.975, 699)  # two-sided test at the 5% level: compare |t| with the 97.5% quantile
###Output
_____no_output_____
###Markdown
Regression Analysis

Results of Regression Analysis

**Information on model quality:**

- Standard error (SE) - Information on the deviation of the model from the data
- Pearson correlation coefficient $(R)$ - Magnitude of linear correlation $(-1 \leq R \leq 1)$
- Coefficient of determination $(R^2)$ - Characterizes the 'predictive power' of the model

**Intercept and slope of regression function (Regression coefficients)**

**Confidence intervals**

- Interval in which the true regression coefficient value lies with a probability of 95%
- If 0 is covered by the interval, the coefficient is not statistically significant
- The same information is conveyed by the coefficients’ p-values (p-value < 0.05)

Load the dataset “income.csv”.
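For reference, the coefficient of determination reported as "Multiple R-squared" in the model summaries below is

$$R^2 = 1-\frac{\sum_{i}\left(y_i-\hat{y}_i\right)^2}{\sum_{i}\left(y_i-\bar{y}\right)^2},$$

where $\hat{y}_i$ are the fitted values and $\bar{y}$ is the mean of the dependent variable.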
###Code
income <- read.csv("data/T09/income.csv")
income %>% head()
###Output
_____no_output_____
###Markdown
Perform a multiple linear regression. To do so, use income as the dependent variable and all other variables as independent variables.
###Code
fit1a <- lm(Income ~ ., data=income)
summary(fit1a)
###Output
_____no_output_____
###Markdown
After fitting the initial model, keep removing the independent variables that are insignificant at the 5% level.

Which independent variables have a significant influence on the income of the state inhabitants?
###Code
# - Life.Exp
fit1a <- lm(Income ~ Population +
Illiteracy + Murder + HS.Grad +
Frost + Area, data=income)
summary(fit1a)
# - Frost
fit1a <- lm(Income ~ Population +
Illiteracy + Murder + HS.Grad +
Area, data=income)
summary(fit1a)
# Murder
fit1a <- lm(Income ~ Population +
Illiteracy + HS.Grad +
Area, data=income)
summary(fit1a)
# - Illiteracy
fit1a <- lm(Income ~ Population +
HS.Grad +
Area, data=income)
summary(fit1a)
# Area
fit1a <- lm(Income ~ Population +
HS.Grad, data=income)
summary(fit1a)
###Output
_____no_output_____ |
titanic/Titanic NN.ipynb | ###Markdown
Titanic NN

Overview:

- For training runs that take a long time, it is worth setting up TensorBoard to conveniently monitor the results, instead of manually dragging the page with the mouse every time to check the latest output.
- The models are a fully connected neural network and statistical learning methods.

Result:

Reference:

1. https://www.kaggle.com/c/titanic tutorials
2. https://www.kaggle.com/sinakhorami/titanic-best-working-classifier
3. https://www.kaggle.com/arthurtok/introduction-to-ensembling-stacking-in-python/notebook

1. Preprocess

Import pkgs
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.preprocessing import MinMaxScaler
from IPython.display import display
%matplotlib inline
import time
import os
project_name = 'Titanic'
step_name = 'StatML_NN'
date_str = time.strftime("%Y%m%d", time.localtime())
time_str = time.strftime("%Y%m%d_%H%M%S", time.localtime())
run_name = project_name + '_' + step_name + '_' + time_str
print('run_name: ' + run_name)
cwd = os.getcwd()
log_path = os.path.join(cwd, 'log')
model_path = os.path.join(cwd, 'model')
output_path = os.path.join(cwd, 'output')
print('model_path: ' + log_path)
print('model_path: ' + model_path)
print('model_path: ' + output_path)
###Output
run_name: Titanic_StatML_NN_20180620_094729
model_path: /data1/github/Kaggle/titanic/log
model_path: /data1/github/Kaggle/titanic/model
model_path: /data1/github/Kaggle/titanic/output
###Markdown
Import original data as DataFrame
###Code
data_train = pd.read_csv('./input/train.csv')
data_test = pd.read_csv('./input/test.csv')
display(data_train.head(2))
display(data_test.head(2))
data_train.loc[2, 'Ticket']
###Output
_____no_output_____
###Markdown
Show columns of dataframe
###Code
data_train_original_col = data_train.columns
data_test_original_col = data_test.columns
print(data_train_original_col)
print(data_test_original_col)
# data_train0 = data_train.drop(data_train_original_col, axis = 1)
# data_test0 = data_test.drop(data_test_original_col, axis = 1)
# display(data_train0.head(2))
# display(data_test0.head(2))
###Output
Index(['PassengerId', 'Survived', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp',
'Parch', 'Ticket', 'Fare', 'Cabin', 'Embarked'],
dtype='object')
Index(['PassengerId', 'Pclass', 'Name', 'Sex', 'Age', 'SibSp', 'Parch',
'Ticket', 'Fare', 'Cabin', 'Embarked'],
dtype='object')
###Markdown
Preprocess features
###Code
full_data = [data_train, data_test]
# Pclass
for dataset in full_data:
temp = dataset[dataset['Pclass'].isnull()]
if len(temp) == 0:
print('Do not have null value!')
else:
temp.head(2)
for dataset in full_data:
dataset['a_Pclass'] = dataset['Pclass']
# display(dataset.head())
# Name
for dataset in full_data:
dataset['a_Name_Length'] = dataset['Name'].apply(len)
# display(dataset.head(2))
# Sex
for dataset in full_data:
dataset['a_Sex'] = dataset['Sex'].map({'female': 0, 'male': 1}).astype(int)
# display(dataset.head(2))
# Age
def is_child(age):
if age >= 0 and age <=15:
return 1
return 0
for dataset in full_data:
dataset['a_Age'] = dataset['Age'].fillna(-1)
dataset['a_Have_Age'] = dataset['Age'].isnull().map({True: 0, False: 1}).astype(int)
dataset['a_Is_Child'] = dataset['a_Age'].apply(is_child)
# display(dataset[dataset['Age'].isnull()].head(2))
display(dataset[dataset['Age']<=15].head(2))
display(dataset.head(2))
# SibSp and Parch
for dataset in full_data:
dataset['a_FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
dataset['a_IsAlone'] = dataset['a_FamilySize'].apply(lambda x: 1 if x<=1 else 0)
# display(dataset.head(2))
# Ticket (every passenger has a ticket)
for dataset in full_data:
dataset['a_Have_Ticket'] = dataset['Ticket'].isnull().map({True: 0, False: 1}).astype(int)
# display(dataset[dataset['Ticket'].isnull()].head(2))
# display(dataset.head(2))
# Fare
for dataset in full_data:
dataset['a_Fare'] = dataset['Fare'].fillna(-1)
dataset['a_Have_Fare'] = dataset['Fare'].isnull().map({True: 0, False: 1}).astype(int)
# display(dataset[dataset['Fare'].isnull()].head(2))
# display(dataset.head(2))
# Cabin
for dataset in full_data:
dataset['a_Have_Cabin'] = dataset['Cabin'].isnull().map({True: 0, False: 1}).astype(int)
# display(dataset[dataset['Cabin'].isnull()].head(2))
# display(dataset.head(2))
# Embarked
for dataset in full_data:
# dataset['Embarked'] = dataset['Embarked'].fillna('N')
dataset['a_Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2, None: 3} ).astype(int)
dataset['a_Have_Embarked'] = dataset['Embarked'].isnull().map({True: 0, False: 1}).astype(int)
# display(dataset[dataset['Embarked'].isnull()].head(2))
# display(dataset.head(2))
###Output
_____no_output_____
###Markdown
Name words segmentation and one-hot encoding
###Code
# Name words segmentation
import re
name_words = []
# In order to align the columns of data_train and data_test, only data_train is used to fetch words
for name in data_train['Name']:
# print(name)
words = re.findall(r"[\w']+", name)
# print(len(words))
# print(words)
for w in words:
if w not in name_words:
name_words.append(w)
# print(len(name_words))
name_words.sort()
# print(name_words)
# Add columns
for dataset in full_data:
for w in name_words:
col_name = 'a_Name_' + w
dataset[col_name] = 0
dataset.head(1)
# Name words one-hot encoding
for dataset in full_data:
for i, row in dataset.iterrows():
# print(row['Name'])
words = re.findall(r"[\w']+", row['Name'])
for w in words:
if w in name_words:
col_name = 'a_Name_' + w
dataset.loc[i, col_name] = 1
# display(dataset[dataset['a_Name_Braund'] == 1])
###Output
_____no_output_____
###Markdown
Cabin segmentation and one-hot encoding
###Code
# Get cabin segmentation words
import re
cabin_words = []
# In order to align the columns of data_train and data_test, only data_train is used to fetch numbers
for c in data_train['Cabin']:
# print(c)
if c is not np.nan:
word = re.findall(r"[a-zA-Z]", c)
# print(words[0])
cabin_words.append(word[0])
print(len(cabin_words))
cabin_words.sort()
print(np.unique(cabin_words))
cabin_words_unique = list(np.unique(cabin_words))
def get_cabin_word(cabin):
if cabin is not np.nan:
word = re.findall(r"[a-zA-Z]", cabin)
if word:
return cabin_words_unique.index(word[0])
return -1
for dataset in full_data:
dataset['a_Cabin_Word'] = dataset['Cabin'].apply(get_cabin_word)
# dataset['a_Cabin_Word'].head(100)
def get_cabin_number(cabin):
if cabin is not np.nan:
word = re.findall(r"[0-9]+", cabin)
if word:
return int(word[0])
return -1
for dataset in full_data:
dataset['a_Cabin_Number'] = dataset['Cabin'].apply(get_cabin_number)
# dataset['a_Cabin_Number'].head(100)
# Clean data
# Reference:
# 1. https://www.kaggle.com/sinakhorami/titanic-best-working-classifier
# 2. https://www.kaggle.com/arthurtok/introduction-to-ensembling-stacking-in-python/notebook
# full_data = [data_train, data_test]
# for dataset in full_data:
# dataset['a_Name_length'] = dataset['Name'].apply(len)
# #dataset['Sex'] = (dataset['Sex']=='male').astype(int)
# dataset['a_Sex'] = dataset['Sex'].map( {'female': 0, 'male': 1} ).astype(int)
# dataset['a_Age'] = dataset['Age'].fillna(0)
# dataset['a_Age_IsNull'] = dataset['Age'].isnull()
# dataset['a_FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1
# dataset['a_IsAlone'] = dataset['a_FamilySize'].apply(lambda x: 1 if x<=1 else 0)
# dataset['a_Fare'] = dataset['Fare'].fillna(dataset['Fare'].median())
# #dataset['Has_Cabin'] = dataset['Cabin'].apply(lambda x: 1 if type(x) == str else 0) # same as below
# dataset['a_Has_Cabin'] = dataset['Cabin'].apply(lambda x: 0 if type(x) == float else 1)
# dataset['a_Has_Embarked'] = dataset['Embarked'].isnull()
# dataset['Embarked'] = dataset['Embarked'].fillna('N')
# dataset['a_Embarked'] = dataset['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2, 'N': 3} ).astype(int)
# dataset['Embarked'] = dataset['Embarked'].fillna('S')
# display(data_train.head(2))
# display(data_test.head(2))
survived = data_train['Survived']
data_train0 = data_train.drop(data_train_original_col, axis = 1)
data_test0 = data_test.drop(data_test_original_col, axis = 1)
display(data_train0.head(2))
display(data_test0.head(2))
features = data_train0
display(features.head(2))
###Output
_____no_output_____
###Markdown
Check and confirm all columns is proccessed
###Code
for col in features.columns:
if not col.startswith('a_'):
print(col)
# Shuffle and split the training data into train and validation subsets
x_train, x_val, y_train, y_val = train_test_split(features, survived, test_size=0.2, random_state=2017)
# Show distribute of abave data sets
print(x_train.shape)
print(x_val.shape)
print(y_train.shape)
print(y_val.shape)
display(x_train.head(2))
display(y_train.head(2))
###Output
(712, 1545)
(179, 1545)
(712,)
(179,)
###Markdown
Neural network
###Code
from sklearn.metrics import confusion_matrix
from keras.utils.np_utils import to_categorical # convert to one-hot-encoding
from keras.models import Sequential
from keras.layers import Dense, Dropout, Input, Flatten, Conv2D, MaxPooling2D, BatchNormalization
from keras.optimizers import Adam
from keras.preprocessing.image import ImageDataGenerator
from keras.callbacks import LearningRateScheduler, TensorBoard
def get_lr(x):
lr = 2e-4 * 0.9 ** x
if lr < 1e-5:
lr = 1e-5
print(lr)
return lr
annealer = LearningRateScheduler(get_lr)
log_dir = os.path.join(log_path, run_name)
print('log_dir:' + log_dir)
tensorBoard = TensorBoard(log_dir=log_dir)
# Initialising the ANN
model = Sequential()
# layers
model.add(Dense(units = 512, activation = 'sigmoid', input_dim = 1545))
model.add(Dropout(0.5))
model.add(Dense(units = 512, activation = 'sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(units = 128, activation = 'sigmoid'))
model.add(Dropout(0.5))
model.add(Dense(units = 1, activation = 'sigmoid'))
model.compile(optimizer = Adam(lr=1e-4), loss = 'binary_crossentropy', metrics = ['accuracy'])
x_val = x_val.as_matrix()
y_val = y_val.as_matrix()
hist = model.fit(x_train.as_matrix(), y_train.as_matrix(),
batch_size = 8,
verbose=1,
epochs = 100,
validation_data=(x_val, y_val),
callbacks=[annealer, tensorBoard])
final_loss, final_acc = model.evaluate(x_val, y_val, verbose=1)
print("Final loss: {0:.4f}, final accuracy: {1:.4f}".format(final_loss, final_acc))
plt.plot(hist.history['loss'], color='b')
plt.plot(hist.history['val_loss'], color='r')
plt.show()
plt.plot(hist.history['acc'], color='b')
plt.plot(hist.history['val_acc'], color='r')
plt.show()
###Output
_____no_output_____
###Markdown
Predict and Export pred.csv file
###Code
train_cols = data_train.columns
for col in data_test0.columns:
if col not in train_cols:
print(col)
final_acc_str = str(int(final_acc*10000))
run_name_acc = project_name + '_' + step_name + '_' + time_str + '_' + final_acc_str
print(run_name_acc)
cwd = os.getcwd()
pred_file = os.path.join(cwd, 'output', run_name_acc + '.csv')
print(pred_file)
display(data_test0.head(2))
y_data_pred = model.predict(data_test0.as_matrix())
print(y_data_pred.shape)
y_data_pred = np.squeeze(y_data_pred)
print(y_data_pred.shape)
y_data_pred = (y_data_pred > 0.5).astype(int)
print(y_data_pred)
print(data_test['PassengerId'].shape)
passenger_id = data_test['PassengerId']
output = pd.DataFrame( { 'PassengerId': passenger_id , 'Survived': y_data_pred })
output.to_csv(pred_file , index = False)
print(run_name_acc)
print('Done!')
###Output
Titanic_StatML_NN_20180620_094729_8100
Done!
|
CEA_EDF_INRIA/softmax_temperature.ipynb | ###Markdown
Softmax and temperature scaling

In this notebook we will take a look at the softmax function and its behavior for different values of the logits. We will also study the impact of temperature scaling on the final probabilities.
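For reference, with `scale = 1/temperature` (as it is called in the cells below), the function defined next computes

$$p_i=\frac{\exp(z_i/T)}{\sum_j \exp(z_j/T)},$$

where $z_i$ are the logits and $T$ is the temperature: as $T\to\infty$ the probabilities approach the uniform distribution, and as $T\to 0^{+}$ they concentrate on the largest logit.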
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
def softmax(X, scale = 1.0, axis = None):
"""
Compute the softmax of each element along an axis of X.
Parameters
----------
X: ND-Array. Probably should be floats.
scale (optional): float parameter, used as a multiplier
prior to exponentiation. Default = 1.0
axis (optional): axis to compute values along. Default is the
first non-singleton axis.
Returns an array the same size as X. The result will sum to 1
along the specified axis.
"""
# make X at least 2d
y = np.atleast_2d(X)
# find axis
if axis is None:
axis = next(j[0] for j in enumerate(y.shape) if j[1] > 1)
    # multiply y by the scale parameter
y = y * float(scale)
# exponentiate y
y = np.exp(y)
# take the sum along the specified axis
ax_sum = np.expand_dims(np.sum(y, axis = axis), axis)
# finally: divide elementwise
p = y / ax_sum
# flatten if X was 1D
if len(X.shape) == 1: p = p.flatten()
return p
###Output
_____no_output_____
###Markdown
Generate a range of logits
###Code
x = np.arange(0, 1, 0.2)
logits_1 = x
logits_2 = -2*x - 2
logits_3 = -3*x + 1
###Output
_____no_output_____
###Markdown
There are two parameters that you can adjust:

- `logits_multiplier`: controlling the magnitudes of the logits
- `temperature`: the coefficient used for adjusting the softmax values

Exercise

Try out different values for `logits_multiplier` in the range $[1,50]$ and for the `temperature` in the range $[0.1,200]$. What do you notice?
###Code
logits_multiplier = 1
temperature = 200
logits = np.array([logits_1, logits_2, logits_3])* logits_multiplier
probas = softmax(logits, scale = 1, axis = 0)
probas_scaled = softmax(logits, scale = 1/temperature, axis = 0)
plt.figure(figsize=(15,3))
plt.subplot(1,3,1)
plt.plot(x, logits.T)
plt.legend(['logits_1','logits_2','logits_3'])
plt.title('logits')
plt.subplot(1,3,2)
plt.plot(x, probas.T)
plt.title('probas')
plt.subplot(1,3,3)
plt.plot(x, probas_scaled.T)
plt.title('probas_scaled')
###Output
_____no_output_____ |
video-anomaly.ipynb | ###Markdown
Welcome to the lab! Before we get started here are a few pointers on Jupyter notebooks.

1. The notebook is composed of cells; cells can contain code which you can run, or they can hold text and/or images which are there for you to read.
2. You can execute code cells by clicking the ```Run``` icon in the menu, or via the following keyboard shortcuts: ```Shift-Enter``` (run and advance) or ```Ctrl-Enter``` (run and stay in the current cell).
3. To interrupt cell execution, click the ```Stop``` button on the toolbar or navigate to the ```Kernel``` menu and select ```Interrupt```.

Video Anomaly Detection

In this lab, we will build an anomaly detector that can help to detect unusual activities in video frames. We will make use of two sets of videos: one set contains only normal pedestrian traffic, and another set contains anomalous activities, such as someone riding a bicycle or a car moving through the scene. We will train our model to learn what normal pedestrian traffic looks like and then use it to detect unusual activities.

Import libraries

We begin by importing the libraries that we need, mainly tensorflow (the framework that we use to build the autoencoder neural network) and some utility libraries that help us plot the video images.
###Code
import tensorflow as tf
import os
from utils import *
from IPython.display import display
from IPython.display import Image as ipyImage
from dataset_util import prepare_dataset
fix_cudnn_bug()
# Uncomment this to fix the libtiff bugs on Windows
# !conda install libtiff=4.1.0=h885aae3_4 -c conda-forge -y
###Output
_____no_output_____
###Markdown
Dataset

**UCSD Anomaly Detection Dataset**

The UCSD Anomaly Detection Dataset was acquired with a stationary camera mounted at an elevation, overlooking pedestrian walkways. The crowd density in the walkways was variable, ranging from sparse to very crowded. In the normal setting, the video contains only pedestrians. Abnormal events are due to either:

- the circulation of non-pedestrian entities in the walkways
- anomalous pedestrian motion patterns

Commonly occurring anomalies include bikers, skaters, small carts, and people walking across a walkway or in the grass that surrounds it. A few instances of people in wheelchairs were also recorded. All abnormalities are naturally occurring, i.e. they were not staged for the purposes of assembling the dataset. The data was split into 2 subsets, each corresponding to a different scene. The video footage recorded from each scene was split into various clips of around 200 frames.

- Peds1: clips of groups of people walking towards and away from the camera, with some amount of perspective distortion. Contains 34 training video samples and 36 testing video samples.
- Peds2: scenes with pedestrian movement parallel to the camera plane. Contains 16 training video samples and 12 testing video samples.

Download the Dataset
###Code
base_dataset_dir = 'video_dataset'
datafile_url = 'https://sdaaidata.s3-ap-southeast-1.amazonaws.com/datasets/UCSD_Anomaly_Dataset.v1p2.zip'
download_data(base_dataset_dir, datafile_url, extract=True, force=False)
# For now we use the UCSDped1 dataset.
# Change this to UCSDped2 if you want to experiment with another dataset
dataset = 'UCSDped1'
# setup all the relative path
root_path = os.path.join(base_dataset_dir, dataset)
train_dir = os.path.join(root_path, 'Train')
test_dir = os.path.join(root_path, 'Test')
###Output
_____no_output_____
###Markdown
The UCSD dataset is split into two sets: one for training and one for testing. The training data consists of 34 video clips in the Train subfolder. Each clip is in a separate subfolder ('Train001', 'Train002', etc.), and each folder contains 200 image frames.

Visualize the Train dataset

Our training set contains only video scenes that are 'normal'. Let's look at a few examples. You can change the train_sample_folder to another folder, e.g. Train010, Train034, etc. image_range=(1,9) allows you to view frames 1 through 8.
###Code
img = Image.open('video_dataset/UCSDped1/Train/Train001/001.tif')
plt.imshow(img)
# You can change the following train_sample_folder to another folder to view other clips
train_sample_folder = 'Train034'
image_range = (1,9) # this display image from 1 to 8
image_folder = os.path.join(train_dir, train_sample_folder)
display_images(image_folder, image_range=image_range, max_per_row=4)
###Output
_____no_output_____
###Markdown
Visualize as animated gif

Here we will convert the image frames to a video (animated gif) for easier viewing. The video consists of 200 frames. From the navigation panel, you will see that a gif file named after the selected clip folder has been created, e.g. `Train034.gif`.
###Code
gif_filename = train_sample_folder + '.gif'
create_gif(image_folder, gif_filename, img_type='tif')
###Output
_____no_output_____
###Markdown
Now we play the video
###Code
with open(gif_filename,'rb') as file:
display(ipyImage(file.read(), format='png'))
###Output
_____no_output_____
###Markdown
Visualize the Test dataset

Now do the same for the Test dataset.
###Code
test_sample_folder = 'Test001'
image_range = (1,9) # this display image from 1 to 8
image_folder = os.path.join(test_dir, test_sample_folder)
display_images(image_folder, image_range=image_range, max_per_row=4)
gif_filename = test_sample_folder + '.gif'
create_gif(image_folder, gif_filename, img_type='tif')
with open(gif_filename,'rb') as file:
display(ipyImage(file.read(), format='png'))
###Output
_____no_output_____
###Markdown
Prepare Training Dataset

Here we create a TensorFlow dataset suitable for use in training the autoencoder network later.
###Code
IMG_HEIGHT=100
IMG_WIDTH=100
BATCH_SIZE=128
train_fileset = os.path.join(train_dir, '*/*.tif')
train_dataset = prepare_dataset(train_fileset,
img_height=IMG_HEIGHT,
img_width=IMG_WIDTH,
batch_size=BATCH_SIZE,
shuffle=True)
# We have a total of 34 x 200 = 6800 images.
# With a batch size of 128, that gives roughly 6800/128 ≈ 53 batches of images (54 if the final partial batch is kept)
print(len(list(train_dataset)))
###Output
_____no_output_____
###Markdown
Building the Auto-encoder Model

An autoencoder is a neural network that learns to reconstruct its own input: an encoder compresses each frame into a low-dimensional code, and a decoder tries to rebuild the frame from that code. Because we train it only on normal frames, it learns to reconstruct normal scenes well; frames containing unfamiliar (anomalous) content are reconstructed poorly, so a large reconstruction error can be used as an anomaly signal.

**Exercise**:

Try changing the Dense layer below to a higher or lower number of units.
###Code
# The encoder part of the Auto-encoder model
inputs = tf.keras.layers.Input(shape=(100,100,1))
x = tf.keras.layers.Conv2D(32, kernel_size=5, activation='relu')(inputs)
x = tf.keras.layers.MaxPool2D(pool_size=2)(x)
x = tf.keras.layers.Conv2D(32, kernel_size=5, activation='relu')(x)
x = tf.keras.layers.MaxPool2D(pool_size=2)(x)
x = tf.keras.layers.Flatten()(x)
encoded = tf.keras.layers.Dense(2000)(x)
encoder = tf.keras.Model(inputs=[inputs], outputs=[encoded])
# The decoder part of the Auto-encoder model
decoder_inputs = tf.keras.layers.Input(shape=(2000))
x = tf.keras.layers.Dense(22*22*32, activation='relu')(decoder_inputs)
x = tf.keras.layers.Reshape(target_shape=(22,22,32))(x)
x = tf.keras.layers.UpSampling2D(2, interpolation='nearest')(x)
x = tf.keras.layers.Conv2DTranspose(32, kernel_size=5, activation='relu')(x)
x = tf.keras.layers.UpSampling2D(2, interpolation='nearest')(x)
decoded = tf.keras.layers.Conv2DTranspose(1, kernel_size=5, activation='sigmoid')(x)
decoder = tf.keras.Model(inputs=[decoder_inputs], outputs=[decoded])
# stacked_ae = tf.keras.Model(inputs=[inputs], outputs=[decoded])
# stacked_ae.summary()
# decoder = tf.keras.Model(inputs=[decoder_inputs], outputs=[x])
# encoder = tf.keras.Model(inputs=[inputs], outputs=[x])
# encoder.summary()
encoder.summary()
tf.keras.utils.plot_model(encoder)
decoder.summary()
tf.keras.utils.plot_model(decoder)
# stacked_ae = tf.keras.Sequential([encoder, decoder])
encoding = encoder(inputs)
decoding = decoder(encoding)
conv_ae = tf.keras.Model(inputs=[inputs], outputs=[decoding])
conv_ae.summary()
tf.keras.utils.plot_model(conv_ae, to_file='model.png', expand_nested=True,show_shapes=True, dpi=282)
conv_ae.compile(loss=tf.keras.losses.MeanSquaredError(),
optimizer=tf.keras.optimizers.Adam(lr=1e-4, decay=1e-4),
metrics=['mae'])
train = True
num_epochs = 2
if train:
run_logdir = get_run_logdir() # e.g., './my_logs/run_2019_06_07-15_15_22'
tensorboard_cb = tf.keras.callbacks.TensorBoard(run_logdir)
history = conv_ae.fit(train_dataset, epochs=num_epochs, callbacks=[tensorboard_cb])
conv_ae.save('trained_model')
else:
conv_ae = tf.keras.models.load_model('trained_model')
if train:
plot_training_loss(history)
# def plot_image(image):
# print('plot image {}'.format(image.shape))
# plt.imshow(image[:,:,0], cmap=plt.cm.gray, interpolation='nearest')
# plt.axis("off")
sample_train_image = os.path.join(train_dir, 'Train001/001.tif')
sample_test_image = os.path.join(test_dir, 'Test024/140.tif')
print(sample_test_image)
###Output
_____no_output_____
###Markdown
Now we will take a 'normal' image from the train set and see how well the autoencoder reconstructs it. We will plot the original image on the left and the reconstructed image on the right. Here you can see that it can mostly reconstruct the original image (shown on the left).
###Code
show_reconstructions(conv_ae, sample_train_image)
###Output
_____no_output_____
###Markdown
Let's look at the 'abnormal' image from the test set where a cart can be seen driving through the walkway. Since the cart is something that the autoencoder has never seen before, it failed to reconstruct it properly.
###Code
show_reconstructions(conv_ae, sample_test_image)
###Output
_____no_output_____
###Markdown
Let us look at how the reconstruction loss varies across the video frames. Here you can clearly see that the loss starts to spike after the cart enters the scene and continues to rise until the cart disappears from the scene, at which point the loss drops suddenly.

Prepare Testing Dataset

Now we will create a test dataset that we can use to test the trained autoencoder.

**Exercise**:

Change the following `test_sample_folder` to the video folder you want to test.
###Code
BATCH_SIZE=1
test_sample_folder = 'Test014'
#test_sample_folder = 'Test024'
test_fileset = os.path.join(test_dir, test_sample_folder, "*.tif")
test_dataset = prepare_dataset(test_fileset,
img_height=IMG_HEIGHT,
img_width=IMG_WIDTH,
batch_size=BATCH_SIZE,
shuffle=False)
print(len(list(test_dataset)))
###Output
_____no_output_____
###Markdown
Reconstruction loss over different video frames

The following code takes all the video frames from the test folder, runs them through the autoencoder, computes the reconstruction loss, and shows the loss for each frame.
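`create_losses_animation()` lives in `utils.py` and is not shown here. As a hedged sketch of the idea (an assumption about its inner workings, not the actual implementation), the per-frame reconstruction error could be computed along these lines, assuming `test_dataset` yields batches shaped `(1, 100, 100, 1)`:

```python
# Illustrative sketch only -- the real logic is in utils.create_losses_animation()
import numpy as np
import matplotlib.pyplot as plt

frame_losses = []
for batch in test_dataset:
    x = batch[0] if isinstance(batch, (tuple, list)) else batch   # handle (input, target) pairs
    x = np.asarray(x)                                             # shape (1, 100, 100, 1)
    recon = conv_ae.predict(x, verbose=0)                         # reconstructed frame
    frame_losses.append(float(np.mean((x - recon) ** 2)))         # per-frame MSE

plt.plot(frame_losses)
plt.xlabel("frame index")
plt.ylabel("reconstruction MSE")
plt.show()
```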
###Code
create_losses_animation(conv_ae, test_dataset, "losses.gif")
with open('losses.gif','rb') as file:
display(ipyImage(file.read(), format='png'))
###Output
_____no_output_____
###Markdown
Identification of anomalous objects in the video frames

The following code computes the per-pixel differences between the original frames and the reconstructed frames (a total of 200 frames). It compares the differences over patches of 4x4 pixels, and if the difference within a patch is above a certain threshold, it marks that patch in red to signify that an anomalous object has been detected within that patch. The 200 frames are displayed as an animated gif to better visualize the changes over time.

**Exercise**

Try changing the threshold below to adjust the sensitivity of pixels being classified as anomalous.
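`identify_anomaly()` is also defined in `utils.py` and not shown; the patch-level check described above could look roughly like the sketch below (placeholder arrays are used here, and the real threshold scaling and red overlay may differ):

```python
# Illustrative sketch only -- the real logic is in utils.identify_anomaly()
import numpy as np

frame = np.random.rand(100, 100)   # placeholder: one original frame
recon = np.random.rand(100, 100)   # placeholder: its reconstruction
threshold = 4.0
patch = 4

squared_error = (frame - recon) ** 2
flagged_patches = []
for i in range(0, squared_error.shape[0], patch):
    for j in range(0, squared_error.shape[1], patch):
        if squared_error[i:i + patch, j:j + patch].sum() > threshold:
            flagged_patches.append((i, j))   # this 4x4 patch would be marked in red

print(len(flagged_patches), "patches flagged as anomalous")
```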
###Code
threshold = 4.0
identify_anomaly(conv_ae, test_dataset, "video.gif", threshold)
with open('video.gif','rb') as file:
display(ipyImage(file.read(), format='png'))
###Output
_____no_output_____ |
doc/example_notebooks/PCA_demo.ipynb | ###Markdown
Principal Components Analysis

This notebook provides a brief summary of Principal Components Analysis (PCA) and the `jive.PCA.PCA` object contained in this package. PCA notation is a bit of the wild west so this notebook serves to explain the notation used in this package. Finally, the output of AJIVE consists of a number of different PCAs and the `jive.AJIVE.AJIVE` object makes use of the `jive.PCA.PCA` object.

Let $X \in \mathbb{R}^{n \times d}$ be the provided data matrix with n observations (rows) and d features (columns). Rank $K$ ($1 \le K \le \min(n, d)$) PCA computes the rank $K$ Singular Value Decomposition (SVD) of $X_{cent}$ where $X_{cent}$ is the column centered version of $X$.

output

The output of rank K PCA is a set of three matrices: scores, singular values and loadings. We refer to these as $U, D, V$ and sort components by decreasing singular value.

- scores: $U \in \mathbb{R}^{n \times K}$ with orthonormal columns (also called "normalized scores")
- singular values: $D \in \mathbb{R}^{K \times K}$ diagonal matrix of singular values.
- loadings: $V \in \mathbb{R}^{d \times K}$ with orthonormal columns.

We also make use of the "unnormalized scores" $UD \in \mathbb{R}^{n \times K}$ where each scores component has been scaled by the respective singular value.

Note $X_{cent} = X - 1_n m^T$ where $1_n \in \mathbb{R}^n$ is the vector of ones and $m \in \mathbb{R}^d$ is the vector of column means of X. The following relationship holds

$$X = X_{cent} + 1_n m^T \approx U D V^T$$

where the $\approx$ is an equality iff $K = rank(X_{cent})$.

The kth loading component, $V[:, k] \in \mathbb{R}^d$, is a direction in variable space. The kth scores component $U[:, k] \in \mathbb{R}^n$ gives a coordinate for each observation in the affine PCA space.

Notes

Zero indexing in the code

*Warning*: To match Python's zero indexing, we will zero index components in the code below (e.g. `pca.scores_[:, 0]` is the first scores component).

Centering

In standard PCA the columns are centered with the mean. Depending on the context, other forms of centering may be used, e.g. a robust mean or no centering at all (i.e. SVD). PCA will mean center by default, but the user can provide other centering options.

Other data transformations may be used (e.g. whitening), however, these must be handled outside the PCA object.

Sparsity

The `PCA` object **is able to** handle sparse matrices. If $X$ is a sparse matrix then the naive centering operation will lead to a dense matrix (thus we lose the computational and memory benefits of specialized sparse data structures and algorithms). However, by exploiting lazy evaluation the `jive.lazymatpy` package is able to efficiently compute a low rank SVD of a mean centered sparse matrix **without** creating an intermediate dense matrix.

References

Some useful PCA/SVD references include

- Notes on SVD from V. Guruswami's course at CMU are a good overview of SVD [link](https://www.cs.cmu.edu/~venkatg/teaching/CStheory-infoage/book-chapter-4.pdf).
- Introduction to Statistical Learning with R (chapter 10.2, [pdf link](https://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf)) and other standard introduction to machine learning textbooks all have sections on PCA (Elements of Statistical Learning, Machine Learning a Probabilistic Perspective, and Pattern Recognition and Machine Learning).
- Principal Components Analysis by I.T. Jolliffe is the standard PCA textbook reference [pdf link](https://bit.ly/2jsD5OC).
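As a quick numerical sanity check of the notation above, independent of the `jive.PCA.PCA` object used in the rest of this notebook (random data, plain NumPy):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.normal(size=(20, 5))          # n = 20 observations, d = 5 features
m = X.mean(axis=0)                    # column means
X_cent = X - m                        # centered data matrix

U_full, d, Vt = np.linalg.svd(X_cent, full_matrices=False)
K = 2
U, D, V = U_full[:, :K], np.diag(d[:K]), Vt[:K].T   # scores, singular values, loadings

X_approx = U @ D @ V.T + m            # rank-K approximation of X
print(np.allclose(X_approx, X))       # False, since K < rank(X_cent)
```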
###Code
import numpy as np
from __future__ import print_function
import matplotlib.pyplot as plt
from sklearn import datasets
import pandas as pd
%matplotlib inline
from jive.PCA import PCA
###Output
_____no_output_____
###Markdown
load toy dataset
###Code
iris = datasets.load_iris()
X = pd.DataFrame(iris.data, columns=iris.feature_names) # measurement data matrix
classes = [iris.target_names[iris.target[i]] for i in range(len(iris.target))] # plant categories
###Output
_____no_output_____
###Markdown
fit PCA
###Code
pca = PCA()
pca.fit(X)
###Output
_____no_output_____
###Markdown
PCA output

Scores, singular values and loadings can be accessed via `pca.QUANTITY()`.
###Code
print('normalized scores', pca.scores().shape) # U
print('unnormalized scores',pca.scores(norm=False).shape) # UD (Note this is actually a property)
print('singular values scores',pca.svals().shape) # D (Note the singular values are stored as a list)
print('loadings',pca.loadings().shape) # V
###Output
normalized scores (150, 4)
unnormalized scores (150, 4)
singular values scores (4,)
loadings (4, 4)
###Markdown
The scores and loadings are stored as pd.DataFrames and the singular values are stored as a pd.Series. These quantities can be accessed via `pca.QUANTITY_`.
###Code
print('normalized scores', pca.scores_.shape) # U
print('singular values scores',pca.svals_.shape) # D (Note the singular values are stored as a list)
print('loadings', pca.loadings_.shape) # V
pca.loadings_
###Output
normalized scores (150, 4)
singular values scores (4,)
loadings (4, 4)
###Markdown
variable and observation names

For data analysis, it's useful for PCA to have names for each observation and feature. If X is a pandas dataframe, then PCA will use the indices/column names to name each observation/variable respectively. If X is a numpy object, then PCA will name observations/variables using a default scheme.
###Code
print('variable names')
print(pca.var_names())
print(pca.loadings().index)
print()
print('observation names')
print(pca.obs_names())
print(pca.scores().index)
###Output
variable names
['sepal length (cm)' 'sepal width (cm)' 'petal length (cm)'
'petal width (cm)']
Index(['sepal length (cm)', 'sepal width (cm)', 'petal length (cm)',
'petal width (cm)'],
dtype='object')
observation names
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17
18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35
36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53
54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89
90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107
108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125
126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143
144 145 146 147 148 149]
Int64Index([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9,
...
140, 141, 142, 143, 144, 145, 146, 147, 148, 149],
dtype='int64', length=150)
###Markdown
PCA quantities can also be returned as pd.DataFrames with the corresponding variable/observation names using `pca.QUANTITY()`
###Code
# The indicies of pca.scores()/ pca.unnorm_scores() are the obs_names
pca.scores()
# pca.unnorm_scores()
# The indicies of pca.loadings() are the var_names
pca.loadings()
###Output
_____no_output_____
###Markdown
output visualization

The PCA object comes with a number of visualization methods which are useful for exploratory analysis.

scores visualization

- histograms of the scores for each component (e.g. `plt.hist(pca.scores_[:, k])`)
- scatter plots of pairs of scores (e.g. `plt.scatter(pca.scores_[:, k], pca.scores_[:, j])`)
###Code
pca.plot_scores()
# observations can be colored by known categories
pca.plot_scores(classes=classes, classes_name='plant type')
# the normalized scores are shown by default
pca.plot_scores(norm=True)
# can also show the unnormalized scores
plt.figure()
pca.plot_scores(norm=False)
# scores plot can be customized with keyword arguments
pca.plot_scores(dist_kws={'bins': 30,
'rug_kws': {'color': 'red'},
'kde_kws': {'bw':.01}})
###Output
_____no_output_____
###Markdown
loadings visualization

Shows each feature's value for a given loading component.
###Code
pca.plot_loading(comp=0)
###Output
_____no_output_____
###Markdown
singular value visualizations
###Code
pca.plot_scree()
pca.plot_var_expl_prop()
pca.plot_var_expl_cum()
model, saved_selected = pca.plot_interactive_scores_slice(1, 2)
model.to_df()
###Output
_____no_output_____ |
notebook/pred_phrase_stats.ipynb | ###Markdown
Compute the unique predictions for cat models
###Code
import json
import tqdm
import numpy as np
pred_path = "/Users/memray/project/kp/OpenNMT-kpg/output/aaai20/catseq_pred/kp20k.pred"
keys = ['gold_num',
'unique_pred_num', 'dup_pred_num', 'pred_sents_local_count', 'topseq_pred_num',
'beam_num', 'beamstep_num']
num_doc = 0
stat_dict = {k:[] for k in keys}
for l in tqdm.tqdm(open(pred_path, 'r')):
pred_dict = json.loads(l)
# print(pred_dict.keys())
# print(pred_dict['topseq_pred_sents']) # top beam, a sequence of words
# print(pred_dict['topseq_preds']) # a sequence of indices
# print(pred_dict['pred_sents']) # unique phrases
# print(pred_dict['ori_pred_sents']) # beams, each is a list of words, seperated by <sep>
# print(pred_dict['ori_preds'])
# print(pred_dict['unique_pred_num'])
# if num_doc > 10:
# break
num_doc += 1
stat_dict['gold_num'].append(len(pred_dict['gold_sent']))
stat_dict['unique_pred_num'].append(pred_dict['unique_pred_num'])
stat_dict['dup_pred_num'].append(pred_dict['dup_pred_num'])
stat_dict['pred_sents_local_count'].append(len(pred_dict['pred_sents']))
stat_dict['beam_num'].append(pred_dict['beam_num'])
stat_dict['beamstep_num'].append(pred_dict['beamstep_num'])
stat_dict['topseq_pred_num'].append(len(pred_dict['topseq_pred_sents']))
print('#(doc)=%d' % num_doc)
for k, v in stat_dict.items():
print('avg(%s) = %d/%d = %f' % (k, np.sum(v), num_doc, np.mean(v)))
pred_path = "/Users/memray/project/kp/OpenNMT-kpg/output/aaai20/catseqd_pred/kp20k.pred"
keys = ['gold_num',
'unique_pred_num', 'dup_pred_num', 'pred_sents_local_count', 'topseq_pred_num',
'beam_num', 'beamstep_num']
num_doc = 0
stat_dict = {k:[] for k in keys}
for l in tqdm.tqdm(open(pred_path, 'r')):
pred_dict = json.loads(l)
# print(pred_dict.keys())
# print(pred_dict['topseq_pred_sents']) # top beam, a sequence of words
# print(pred_dict['topseq_preds']) # a sequence of indices
# print(pred_dict['pred_sents']) # unique phrases
# print(pred_dict['ori_pred_sents']) # beams, each is a list of words, seperated by <sep>
# print(pred_dict['ori_preds'])
# print(pred_dict['unique_pred_num'])
# if num_doc > 10:
# break
num_doc += 1
stat_dict['gold_num'].append(len(pred_dict['gold_sent']))
stat_dict['unique_pred_num'].append(pred_dict['unique_pred_num'])
stat_dict['dup_pred_num'].append(pred_dict['dup_pred_num'])
stat_dict['pred_sents_local_count'].append(len(pred_dict['pred_sents']))
stat_dict['beam_num'].append(pred_dict['beam_num'])
stat_dict['beamstep_num'].append(pred_dict['beamstep_num'])
stat_dict['topseq_pred_num'].append(len(pred_dict['topseq_pred_sents']))
print('#(doc)=%d' % num_doc)
for k, v in stat_dict.items():
print('avg(%s) = %d/%d = %f' % (k, np.sum(v), num_doc, np.mean(v)))
###Output
0it [00:00, ?it/s]
...
2847it [00:32, 87.17it/s][A
|
Zajecia 9/Zajecia9_Refresher_solved.ipynb | ###Markdown
A code snippet that is useful when working in Google Colab:
###Code
import os
os.getcwd()
###Output
_____no_output_____
###Markdown
Exercise 1 - Stopwords
This set of exercises practices working with files and, at the same time, shows a bit of text processing. You will need the nltk package - it is probably already installed on the system; if not, it is worth running the command below:
###Code
pip install nltk
###Output
Requirement already satisfied: nltk in /usr/local/lib/python3.7/dist-packages (3.2.5)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from nltk) (1.15.0)
###Markdown
The first step in text analysis is removing words that add nothing to the content - the so-called *stopwords*. For English these are, for example, *the*, *must*, *an*, etc. Lists of such words are collected in dedicated dictionaries - to download the English set we run the following command:
###Code
import nltk
nltk.download('stopwords')
###Output
[nltk_data] Downloading package stopwords to /root/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
At this point the actual exercise begins. In the code below, the list of stopwords has been downloaded and stored in the set *en_stops*. Write a loop that checks whether the individual elements of the list *all_words* appear in the set *en_stops*, and prints the ones that do not. **Hint:** the task can be done in two lines of code.
###Code
from nltk.corpus import stopwords
en_stops = set(stopwords.words('english'))
all_words = ['There', 'is', 'a', 'tree','near','the','river']
notStopWords = [element for element in all_words if element not in en_stops]
print(notStopWords)
###Output
['There', 'tree', 'near', 'river']
###Markdown
Exercise 2 - Stopwords in a file
In the next exercise we will remove *stopwords* from a file. Let's write a program that performs the following steps: * Opens the Python manifesto file (PEP 20) and reads its contents into a string variable. * Replaces all end-of-line characters ("\n") with spaces (" "). * Converts the string variable to a list using the *split* method. * Creates a new list holding the words left after removing *stopwords*, using a loop similar to the one in Exercise 1. * Prints the number of elements of both lists.
###Code
adres = "/content/"
plik = "pep-0020.txt"
# The with block closes the file when it is done
with open (adres + plik) as t:
tekst = t.read()
tekst = tekst.replace("\n", " ")
lista_slow = tekst.split()
lista_bezStopwords = [element for element in lista_slow if element not in en_stops]
print(len(lista_slow))
print(len(lista_bezStopwords))
print(len(lista_bezStopwords)/len(lista_slow))
###Output
241
171
0.7095435684647303
###Markdown
Exercise 3 - Stemming
The next step in text analysis is reducing words with the same meaning, occurring in different grammatical forms, to a single word. The function that performs this is called a *Stemmer*. An example of applying such a method: * CONNECTIONS -> CONNECT* CONNECTED -> CONNECT* CONNECTING -> CONNECT* CONNECTION -> CONNECT. Below is code that assigns such a function to the object *englishStemmer* and performs *stemming* on the word *having*. Please run it.
###Code
from nltk.stem.snowball import SnowballStemmer
englishStemmer = SnowballStemmer("english")
englishStemmer.stem("having")
###Output
_____no_output_____
###Markdown
Now write a program that performs *stemming* on all the words in the list *wordsToStem*.
###Code
wordsToStem = ["inflationary", "inflated", "inflating", "connections", "connected"]
stemmedWords = [englishStemmer.stem(element) for element in wordsToStem ]
print(stemmedWords)
###Output
['inflationari', 'inflat', 'inflat', 'connect', 'connect']
###Markdown
Exercise 4 - Stemming from a file
Count the occurrences of the individual words in the Python manifesto from Exercise 2. Then create a second list in which you perform *stemming* on the individual words, and count the words again. Are there any noticeable differences?
###Code
def stworzSlownik(lista_slow):
slownik_wyrazow = {}
# The loop iterates over all words in the list
for element in lista_slow:
# Each word is checked for presence in the dictionary
if element in slownik_wyrazow:
# If the element was already there - its occurrence count is increased by 1
slownik_wyrazow[element] += 1
else:
# If not - this line initializes a new entry with the value 1
slownik_wyrazow[element] = 1
return slownik_wyrazow
print("")
print("Pierwszy słownik ma rozmiar:")
slownik1 = stworzSlownik(lista_bezStopwords)
print(len(slownik1))
ListaStemmedWords = [englishStemmer.stem(element) for element in lista_bezStopwords ]
slownik2 = stworzSlownik(ListaStemmedWords)
print("Drugi słownik ma rozmiar:")
print(len(slownik2))
###Output
Pierwszy słownik ma rozmiar:
148
Drugi słownik ma rozmiar:
142
|
Código Quant - Finanças Quantitativas/Simulando Carteiras de Ações Aleatórias.ipynb | ###Markdown
Ricos pelo Acaso (Rich by Chance)
1. Importing Libraries
###Code
# Setting up Yahoo Finance
!pip install yfinance --upgrade --no-cache-dir
import yfinance as yf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
import random
###Output
_____no_output_____
###Markdown
2. Getting and preparing the data
###Code
tickers_ibov = "ABEV3.SA AZUL4.SA B3SA3.SA BBAS3.SA BBDC3.SA BBDC4.SA BBSE3.SA BPAC11.SA BRAP4.SA BRDT3.SA BRFS3.SA BRKM5.SA BRML3.SA BTOW3.SA CCRO3.SA CIEL3.SA CMIG4.SA COGN3.SA CRFB3.SA CSAN3.SA CSNA3.SA CVCB3.SA CYRE3.SA ECOR3.SA EGIE3.SA ELET3.SA ELET6.SA EMBR3.SA ENBR3.SA EQTL3.SA FLRY3.SA GGBR4.SA GNDI3.SA GOAU4.SA GOLL4.SA HAPV3.SA HGTX3.SA HYPE3.SA IGTA3.SA IRBR3.SA ITSA4.SA ITUB4.SA JBSS3.SA KLBN11.SA LAME4.SA LREN3.SA MGLU3.SA MRFG3.SA MRVE3.SA MULT3.SA NTCO3.SA PCAR4.SA PETR3.SA PETR4.SA QUAL3.SA RADL3.SA RAIL3.SA RENT3.SA SANB11.SA SBSP3.SA SMLS3.SA SULA11.SA SUZB3.SA TAEE11.SA TIMP3.SA TOTS3.SA UGPA3.SA USIM5.SA VALE3.SA VIVT4.SA VVAR3.SA WEGE3.SA YDUQ3.SA"
dados_yahoo = yf.download(tickers=tickers_ibov, period='1y')["Adj Close"]
ibov = yf.download('BOVA11.SA', period='1y')["Adj Close"]
ibov = ibov / ibov.iloc[0]
dados_yahoo.dropna(how='all', inplace=True)
dados_yahoo.dropna(axis=1, inplace=True, thresh=246)
dados_yahoo
retorno = dados_yahoo.pct_change()
retorno
retorno_acumulado = (1 + retorno).cumprod()
retorno_acumulado.iloc[0] = 1
retorno_acumulado
###Output
_____no_output_____
###Markdown
3. Results
###Code
carteira = random.sample(list(dados_yahoo.columns) , k=5)
carteira = 100 * retorno_acumulado.loc[: , carteira]
carteira['saldo'] = carteira.sum(axis=1)
carteira["retorno"] = carteira['saldo'].pct_change()
carteira
for i in range(352):
carteira = random.sample(list(dados_yahoo.columns) , k=5)
carteira = 20 * retorno_acumulado.loc[: , carteira]
carteira['saldo'] = carteira.sum(axis=1)
carteira['saldo'].plot(figsize=(18,8))
(ibov*100).plot(linewidth=4, color='black')
###Output
_____no_output_____ |
kotlin-notes/12 Binary Tree exercises.ipynb | ###Markdown
Binary Tree exercises
- We already covered trees while dealing with graphs in discrete mathematics, so even if you have never programmed them you should at least know the definition
- Those who took a data structures course, or have studied the topic on their own, will also know how to handle them in code

Let's review the inductive (recursive) definition of a binary tree (Tree):
- If a node (Node) has
  - one data value (v),
  - a left subtree (left) that is itself a binary tree, and
  - a right subtree (right) that is itself a binary tree,
  then the tree containing this node is a binary tree
- The empty state (null or Null) is also a binary tree

```
     2
    / \
   3   1
  / \ / \
 .  . .  .
```
```
     2
    / \
   3   .
  / \
 .   .
```

--- (Version 0): defining a binary tree with `null`

`Node` is essentially the definition of a binary tree by itself.
Exceptionally, it may also be `null`, so "a binary tree" is simply the type `Node?`.
Since there is no need to define a separate type for binary trees, getting started is simple (using a typealias improves code readability a bit).
Let's define a size method that computes the number of data items contained in the tree.
###Code
data class Node0(val v: Int, val left: Node0?, val right: Node0?) {
fun size() : Int {
// 1 + the number of data items in left + the number of data items in right
// since either side can be null, we always have to check for null
var lSize = 0
var rSize = 0
if (left != null) lSize = left.size() // why is plain . (rather than ?.) fine here?
if (right != null) rSize = right.size() // look up smart casts in the textbook for the answer
return 1 + lSize + rSize
}
}
typealias Tree0 = Node0? // similar to typedef in C (a typealias definition must not be recursive!)
// data class Node0(val v: Int, val left: Tree0, val right: Tree0)
/*
2
/ \
3 .
/ \
. .
*/
val t0 : Tree0 = Node0(2, Node0(3, null, null), null)
t0
t0.size()
###Output
_____no_output_____
###Markdown
---- (Version 1): defining a binary tree with the null object pattern
As an aside, separately from the null object pattern, a tree structure more general than a binary tree (a typical example being the directory/file structure of a file system) is called the composite pattern in object-oriented terminology. The null object pattern is so named because it uses an object that can stand in for `null`.
###Code
abstract class Tree1() // abstract class: a Tree1 cannot be created directly, it has to be constructed as a Node1 or a Null1
data class Node1(val v: Int, val left: Tree1, val right: Tree1) : Tree1()
class Null1() : Tree1() // contains no data, so it cannot be defined as a data class even if we wanted to
val null1 = Null1()
val t1 : Tree1 = Node1(2, Node1(3, null1, null1), null1)
t1 // overriding Null1's toString would make the printout a bit simpler, but we won't bother here
val nu1 = Null1()
val nu2 = Null1()
nu1 === nu2 // a different object is created every time
###Output
_____no_output_____
###Markdown
---- (Version 2): make the null object a singleton and also use a `sealed` class

What corresponds to a data class containing no data should really be a singleton object, so here we use Kotlin's `object` keyword.
In an ordinary Kotlin source file you could define a sealed class and its subclasses at the same level, but in the notebook environment the subclasses of the sealed class have to be defined inside it for the meaningful difference to show up.

```kotlin
sealed abstract class Tree2() { // see the textbook for how this differs from the non-sealed case
    abstract fun size() : Int // abstract method
}
data class Node2(val v: Int, val left: Tree2, val right: Tree2) : Tree2() {
    override fun size() = 1 + left.size() + right.size()
}
object Null2 : Tree2() {
    override fun toString() = "Null2"
    override fun size() = 0
}
```

A sealed class restricts its subclasses to those defined in the same file or as nested classes.
In other words, it prevents new subclasses from being defined anywhere else.
###Code
sealed abstract class Tree2() { // see the textbook for how this differs from the non-sealed case
abstract fun size() : Int // abstract method
data class Node2(val v: Int, val left: Tree2, val right: Tree2) : Tree2() {
override fun size() = 1 + left.size() + right.size()
}
object Null2 : Tree2() {
override fun toString() = "Null2"
override fun size() = 0
}
}
val t2 : Tree2 = Tree2.Node2(2, Tree2.Node2(3, Tree2.Null2, Tree2.Null2), Tree2.Null2)
t2
t2.size()
// similar in purpose to the switch statement in Java
val n: Int = 5
when(n) {
1 -> "one"
2 -> "two"
3 -> "three"
else -> "many" // all remaining cases
}
val t3: Tree2 = Tree2.Null2
when(t3) {
is Tree2.Null2 -> "Null2" // because Tree2 is defined as a sealed class,
is Tree2.Node2 -> "Node2" // these two branches are judged to cover every possible case
// else -> "something else" // would be needed if Tree2 were not sealed, since some other file might define another subclass of Tree2
}
###Output
_____no_output_____ |
tooling/DataCollectionRQ2.ipynb | ###Markdown
JSS - Reuse Special Issue - Gkortzis et al. Data Collection for RQ2
This notebook collects the data needed for the RQ2 analysis introduced for JSS revision 1.
###Code
import os
import logging
import datetime
import maven as mvn
import spotbugs as sb
logging.basicConfig(level=logging.INFO)
currentDT = datetime.datetime.now()
print ("Started at :: {}".format(str(currentDT)))
def get_reused_tops(spotbugs_xml, module_lst):
class_dict = {}
for m in module_lst:
class_dict[m.get_m2_path()] = m.get_class_list()
all_vs = sb.collect_vulnerabilities(spotbugs_xml, class_dict)
count_vs = {}
for k, vs in all_vs.items():
flat_vs = [b for c in vs for r in c for b in r]
for v in flat_vs:
count_vs[k] = count_vs.get(k,{})
count_vs[k][v['@type']] = count_vs[k].get(v['@type'], 0) + 1
return count_vs
def update_reused_tops(d1, d2):
for d,bc in d2.items():
for b,c in bc.items():
d1[d] = d1.get(d,{})
d1[d][b] = d1[d].get(b,0) + c
return d1
def get_native_tops(spotbugs_xml, module_lst):
project_classes = [c for m in module_lst for c in m.get_class_list()]
vs = sb.collect_vulnerabilities(spotbugs_xml, {'n': project_classes})
count_vs = {}
flat_vs = [b for c in vs['n'] for r in c for b in r]
for v in flat_vs:
count_vs[v['@type']] = count_vs.get(v['@type'], 0) + 1
return sorted(count_vs.items(), key=lambda x: x[1], reverse=True)
def get_artifacts(file_project_trees):
trees = mvn.get_compiled_modules(file_project_trees)
proj_name = os.path.basename(os.path.splitext(file_project_trees)[0])
if not trees:
logging.warning(f'No modules to analyze: {file_project_trees}.')
return None
modules = [m.artifact for m in trees]
dep_modules = [m.artifact for t in trees for m in t.deps if m.artifact not in modules]
dep_modules = list(set(dep_modules)) # remove duplicates
return (proj_name, modules, dep_modules)
path_to_data = os.path.abspath('../repositories_data')
projects_tress = [f for f in os.listdir(path_to_data) if f.endswith('.trees')]
ntops_per_project = []
rtops_per_project = {}
for f in projects_tress:
trees_filepath = path_to_data + os.path.sep + f
spotbugs_xml = f'{os.path.splitext(trees_filepath)[0]}.xml'
if not os.path.exists(spotbugs_xml):
logging.warning("Spotbugs file does not exist :: {}".format(spotbugs_xml))
continue
if os.stat(spotbugs_xml).st_size == 0:
logging.warning("Invalid Spotbugs xml :: {}".format(spotbugs_xml))
continue
proj_arts = get_artifacts(trees_filepath)
ntops = get_native_tops(spotbugs_xml, proj_arts[1])
ntops_per_project.append(ntops)
rtops = get_reused_tops(spotbugs_xml, proj_arts[2])
rtops_per_project = update_reused_tops(rtops_per_project, rtops)
ntop_dict = {}
for tp in ntops_per_project:
for idx_t in range(5):
if idx_t < len(tp):
ntop_dict[tp[idx_t][0]] = ntop_dict.get(tp[idx_t][0], 0) + 1
rtop_dict = {}
for d,tp in rtops_per_project.items():
tp = list(tp.items())
for idx_t in range(5):
if idx_t < len(tp):
rtop_dict[tp[idx_t][0]] = rtop_dict.get(tp[idx_t][0], 0) + 1
print('Top 5 vulnerabilities in native code')
overall_top = sorted(ntop_dict.items(), key=lambda x: x[1], reverse=True)
print(overall_top[:6])
print('Top 5 vulnerabilities in reused code')
overall_top = sorted(rtop_dict.items(), key=lambda x: x[1], reverse=True)
print(overall_top[:6])
currentDT = datetime.datetime.now()
print ("Finished at :: {}".format(str(currentDT)))
###Output
_____no_output_____ |
static_files/presentations/NumPy_tutorial.ipynb | ###Markdown
Numpy - multidimensional data arrays for scientific computing in python

Introduction

**Python** objects:
- high-level objects: integers, floating-point numbers
- containers: lists (cheap insertion and append), dictionaries (fast lookup)
- Python lists are very general. They can contain any kind of object and are dynamically typed.
- More importantly, they do not support mathematical operations such as matrix and dot products. Implementing such functions for Python lists would not be very efficient because of the dynamic typing.

**NumPy** provides:
- an extension package to Python for multi-dimensional arrays
- Numpy arrays are **statically typed** and **homogeneous**. The type of the elements is determined when the array is created.
- Because of the static typing, mathematical functions such as multiplication and addition of `numpy` arrays can be implemented efficiently in a compiled language (C and Fortran are used). Moreover, Numpy arrays are memory efficient.

The `numpy` package (module) is used in almost all numerical computation using Python. It provides high-performance vector, matrix and higher-dimensional data structures for Python. It is implemented in C and Fortran, so when calculations are vectorized (formulated with vectors and matrices), performance is very good. To use `numpy` you need to import the module, for example:
###Code
import numpy as np
a = range(1000)
%%timeit
a1 = []
for i in range(1000):
a1.append(a[i]**2)
%%timeit
global a2
a2 = [i**2 for i in a]
b = np.arange(1000)
%%timeit
b1 = b**2
###Output
_____no_output_____
###Markdown
In the `numpy` package the terminology used for vectors, matrices and higher-dimensional data sets is *array*. Documentation- https://scipy-lectures.org/intro/numpy/index.html- https://numpy.org/doc/
###Code
np.array?
###Output
_____no_output_____
###Markdown
Creating `numpy` arrays There are a number of ways to initialize new `numpy` arrays, for example from* A Python list or tuples* Using functions that are dedicated to generating `numpy` arrays, such as `arange`, `linspace`, etc.* Reading data from files (`npy`) From Python list For example, to create new vector and matrix arrays from Python lists we can use the `numpy.array` function.
###Code
# a vector: the argument to the array function is a Python list
v = np.array([1,2,3,4])
v, type(v), v.dtype, v.shape
# a matrix: the argument to the array function is a nested Python list
M = np.array([[1, 2], [3, 4]])
M, type(M), M.dtype, M.shape
###Output
_____no_output_____
###Markdown
Note that the `v` and `M` objects are both of the type `ndarray` that the `numpy` module provides. The difference between the `v` and `M` arrays is only their shapes. We can get information about the shape of an array by using the `ndarray.shape` property. Since arrays are statically typed, we can explicitly define the type of the array data when we create it, using the `dtype` keyword argument:
###Code
M = np.array([[1, 2], [3, 4]], dtype=complex)
M
###Output
_____no_output_____
###Markdown
Common data types that can be used with `dtype` are: `int`, `float`, `complex`, `bool`, etc.We can also explicitly define the bit size of the data types, for example: `int64`, `int16`, `float128`, `complex128`. Using array-generating functions For larger arrays, it is impractical to initialize the data manually, using explicit python lists. Instead, we can use one of the many functions in `numpy` that generate arrays of different forms. Some of the more common are:
###Code
np.arange(-1, 1, 0.1) # arguments: start, stop, step
# using linspace, both end points ARE included
np.linspace(0, 10, 25) # arguments: start, end, number of samples
from numpy import random
# uniform random numbers in [0,1]
np.random.rand(5,5) #argument: shape
# standard normal distributed random numbers
np.random.randn(5,5)
###Output
_____no_output_____
###Markdown
diag
###Code
# a diagonal matrix
np.diag([1,2,3])
###Output
_____no_output_____
###Markdown
zeros and ones
###Code
np.zeros((3,3))
np.ones((3,3))
np.random.seed(89)
np.random.rand(5)
np.random.rand(5)
###Output
_____no_output_____
###Markdown
Manipulating arrays Indexing and slicing- Note that the indices begin at 0, like other Python sequences (and C/C++). In contrast, in Fortran or Matlab, indices start at 1.- In 2D, the first dimension corresponds to rows, the second to columns. We can index elements in an array using square brackets and indices:
###Code
# v is a vector, and has only one dimension, taking one index
v = np.random.rand(5) #Note it starts with zero
v, v[3]
# M is a matrix, or a 2 dimensional array, taking two indices
M = np.random.rand(5,5)
M, M[2,3]
###Output
_____no_output_____
###Markdown
We can get rows and columns as follows
###Code
M[1,:] # row 1
M[:,1] # column 1
###Output
_____no_output_____
###Markdown
Index slicing is the technical name for the syntax `M[lower:upper:step]` to extract part of an array:
###Code
A = np.array([1,2,3,4,5])
A
A[1:3]
###Output
_____no_output_____
###Markdown
We can omit any of the three parameters in `M[lower:upper:step]`:
###Code
A[::2] # step is 2, lower and upper defaults to the beginning and end of the array
A[:3] # first three elements
A[3:] # elements from index 3
###Output
_____no_output_____
###Markdown
Negative indices count from the end of the array (positive indices from the beginning):
###Code
A = np.array([1,2,3,4,5])
A[-1] # the last element in the array
A[-3:] # the last three elements
###Output
_____no_output_____
###Markdown
Index slicing works exactly the same way for multidimensional arrays:
###Code
A = np.array([[n+m*10 for n in range(5)] for m in range(5)])
A
# a block from the original array
A[1:4, 1:4]
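# "Fancy" indexing, which is only linked in the next markdown cell, deserves a small
# supplementary sketch: index lists and boolean masks select arbitrary elements
# instead of regular slices.
row_indices = [1, 3]
A[row_indices]        # rows 1 and 3 of A
A[A > 20]             # boolean mask: all elements larger than 20, returned as a 1D array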
###Output
_____no_output_____
###Markdown
- Check "Fancy indexing" at https://scipy-lectures.org/intro/numpy/array_object.htmlfancy-indexing

Linear algebra on arrays

Vectorizing code is the key to writing efficient numerical calculations with `Python/Numpy`. That means that as much as possible of a program should be formulated in terms of matrix and vector operations, like matrix-matrix multiplication.

Scalar and array operations

We can use the usual arithmetic operators to multiply, add, subtract, and divide arrays with scalar numbers.
###Code
v = np.arange(0, 5)
v * 2, v + 2
A * 2, A + 2
a = np.arange(10000)
%timeit a + 1
l = range(10000)
%timeit [i+1 for i in l]
###Output
_____no_output_____
###Markdown
Element-wise array-array operations When we add, subtract, multiply and divide arrays with each other, the default behavior is **element-wise** operations:
###Code
A * A # element-wise multiplication
v * v # element-wise multiplication of a vector with itself
###Output
_____no_output_____
###Markdown
If we multiply arrays with compatible shapes, we get an element-wise multiplication of each row:
###Code
A, v
A.shape, v.shape
A * v
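# Broadcasting (see the links in the next cell) is what makes the line above work:
# v has shape (5,) and is "stretched" across every row of A. A supplementary sketch:
col = v[:, np.newaxis]     # shape (5, 1)
A + col                    # col is broadcast across the columns of A
A * v[np.newaxis, :]       # equivalent to A * v above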
###Output
_____no_output_____
###Markdown
- See Broadcasting at https://scipy-lectures.org/intro/numpy/operations.htmlbroadcasting and https://cs231n.github.io/python-numpy-tutorial/broadcasting Matrix algebra What about matrix multiplication? There are two ways. We can either use the `dot` function, which applies a matrix-matrix, matrix-vector, or inner vector multiplication to its two arguments:
###Code
np.dot(A, A)
np.dot(A, v)
np.dot(v, v)
A.T #transpose
###Output
_____no_output_____
###Markdown
Alternatively, we can cast the array objects to the type `matrix`. This changes the behavior of the standard arithmetic operators `+`, `-`, `*` to use matrix algebra (they then act as matrix operations).
###Code
help(np.matrix)
###Output
_____no_output_____
###Markdown
Matrix computations- The sub-module `numpy.linalg` implements basic linear algebra, such as solving linear systems, singular value decomposition, etc. Inverse
###Code
np.linalg.det(A)
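# The determinant above is (numerically) zero because the rows of A are linearly
# dependent, so A is singular and cannot be inverted. As a supplementary sketch,
# inverting and solving with a non-singular matrix looks like this:
B = np.array([[1., 2.], [3., 4.]])
np.linalg.inv(B)                        # matrix inverse
np.linalg.solve(B, np.array([1., 0.]))  # solve B x = b without forming the inverse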
###Output
_____no_output_____
###Markdown
Data processing and reshaping Often it is useful to store datasets in Numpy arrays. Numpy provides a number of functions to calculate the statistics of datasets in arrays.
###Code
# column 3
np.mean(A[:,3]), np.std(A[:,3])
a = range(10000)
b = np.arange(10000)
%%timeit
sum(a)
%%timeit
np.sum(b)
###Output
_____no_output_____
###Markdown
When functions such as `min`, `max`, etc. are applied to a multidimensional arrays, it is sometimes useful to apply the calculation to the entire array, and sometimes only on a row or column basis. Using the `axis` argument we can specify how these functions should behave:
###Code
m = np.random.rand(3,3)
m
# global max
m.max()
###Output
_____no_output_____
###Markdown
source: https://scipy-lectures.org/intro/numpy/operations.html
###Code
# max in each column
m.max(axis=0)
# max in each row
m.max(axis=1)
###Output
_____no_output_____
###Markdown
Many other functions and methods in the `array` and `matrix` classes accept the same (optional) `axis` keyword argument. The shape of a Numpy array can be modified without copying the underlying data, which makes it a fast operation even for large arrays.
###Code
A
n, m = A.shape
B = A.reshape((1,n*m))
B
###Output
_____no_output_____
###Markdown
With `newaxis`, we can insert new dimensions in an array, for example converting a vector to a column or row matrix:
###Code
v = np.array([1,2,3])
np.shape(v)
# make a column matrix of the vector v
v[:, np.newaxis]
# column matrix
v[:,np.newaxis].shape
###Output
_____no_output_____
###Markdown
Stacking and repeating arrays Using function `repeat`, `tile`, `vstack`, `hstack`, and `concatenate` we can create larger vectors and matrices from smaller ones: Copy and "deep copy" - To achieve high performance, assignments in Python usually do not copy the underlying objects. This is important for example when objects are passed between functions, to avoid an excessive amount of memory copying when it is not necessary (technical term: pass by reference). - A slicing operation creates a view on the original array, which is just a way of accessing array data. Thus the original array is not copied in memory.
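No code cell for these stacking functions appears in the notebook, so here is a small supplementary sketch of how they are typically used:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])

np.repeat(a, 3)                  # each element repeated: [1 1 1 2 2 2 3 3 3 4 4 4]
np.tile(a, 2)                    # the whole array tiled along the last axis
np.concatenate((a, a), axis=0)   # join along rows
np.vstack((a, a))                # stack vertically (rows)
np.hstack((a, a))                # stack horizontally (columns)
```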
###Code
A = np.array([[1, 2], [3, 4]])
A
# now B is referring to the same array data as A
B = A
# changing B affects A
B[0,0] = 10
B, A
np.may_share_memory(A, B)
###Output
_____no_output_____
###Markdown
If we want to avoid this behavior, and instead get a new, completely independent object `B` copied from `A`, then we need to do a so-called "deep copy" using the function `copy`:
###Code
B = np.copy(A)
# now, if we modify B, A is not affected
B[0,0] = -5
B, A
###Output
_____no_output_____
###Markdown
Memory layout matters! source: https://scipy-lectures.org/advanced/advanced_numpy/index.html
###Code
x = np.zeros((20000,))
y = np.zeros((20000*67,))[::67]
%timeit x.sum()
%timeit y.sum()
x.strides, y.strides
###Output
_____no_output_____
###Markdown
Iterating over array elements Generally, we want to avoid iterating over the elements of arrays whenever we can (at all costs). The reason is that in an interpreted language like Python (or MATLAB), iterations are really slow compared to vectorized operations.
###Code
%%timeit
# iterative sum
total = 0
for item in range(0, 10000):
total += item
%%timeit
# vectorized sum
np.sum(np.arange(10000))
%%timeit
# iterative operation
[math.exp(item) for item in range(100)]
%%timeit
# vectorized operation
np.exp(np.arange(100))
###Output
_____no_output_____
###Markdown
Create Your Own Vectorizing functions To get good performance we should try to avoid looping over elements in our vectors and matrices, and instead use vectorized algorithms. The first step in converting a scalar algorithm to a vectorized algorithm is to make sure that the functions we write work with vector inputs.
###Code
def Theta(x):
"""
Scalar implementation of the Heaviside step function.
"""
if x >= 0:
return 1
else:
return 0
###Output
_____no_output_____
###Markdown
To get a vectorized version of Theta we can use the Numpy function `vectorize`. In many cases it can automatically vectorize a function:
###Code
Theta_vec = np.vectorize(Theta)
Theta_vec(np.array([-3,-2,-1,0,1,2,3]))
###Output
_____no_output_____
###Markdown
Type casting Since Numpy arrays are *statically typed*, the type of an array does not change once created. But we can explicitly cast an array of some type to another using the `astype` functions (see also the similar `asarray` function). This always creates a new array of a new type:
###Code
M.dtype
M2 = M.astype(float)
M2
M2.dtype
M3 = M.astype(bool)
M3
###Output
_____no_output_____
###Markdown
- See casting at https://scipy-lectures.org/intro/numpy/elaborate_arrays.html File I/O- NumPy has its own binary format, not portable but with efficient I/O- Useful when storing and reading back numpy array data. Use the functions `numpy.save` and `numpy.load`- Matlab: scipy.io.loadmat, scipy.io.savemat
###Code
np.save("random-matrix.npy", M)
!file random-matrix.npy
np.load("random-matrix.npy")
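# For the MATLAB .mat format mentioned above, scipy.io can be used.
# A small sketch; the file name here is just a placeholder:
import scipy.io
scipy.io.savemat("random-matrix.mat", {"M": M})
scipy.io.loadmat("random-matrix.mat")["M"]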
###Output
_____no_output_____
###Markdown
Laboratories
###Code
import math
import scipy.spatial
samples = np.random.random((10, 5)) # input: 10 x 5 matrix
A = np.zeros([10, 10]) # output: 10 x 10 distance matrix
%%timeit
# Baseline
for i in range(10):
for j in range(10):
A[i, j] = np.sqrt(np.sum((samples[i] - samples[j])**2))
for i in range(10):
for j in range(10):
A[i, j] = np.sqrt(np.sum((samples[i] - samples[j])**2))
distances = scipy.spatial.distance.cdist(samples, samples)
np.allclose(A, distances)
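# A fully vectorized alternative (a sketch using broadcasting, not part of the original lab):
# samples[:, None, :] has shape (10, 1, 5) and samples[None, :, :] has shape (1, 10, 5),
# so their difference broadcasts to (10, 10, 5) and we reduce over the last axis.
diff = samples[:, None, :] - samples[None, :, :]
A_vec = np.sqrt((diff ** 2).sum(axis=-1))
np.allclose(A_vec, distances)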
###Output
_____no_output_____
###Markdown
ConclusionTo make the code faster using NumPy and- Vectorizing for loops:Find tricks to avoid for loops using numpy arrays.- In place operations: `a *= 3` instead of `a = 3*a` - Use views instead of copies whenever possible- Memory arrangement is important. Keep strides small as possible for coalescing memory access- Broadcasting:Use broadcasting to do operations on arrays as small as possible before combining them.- Use compiled code (The following session) References- https://scipy-lectures.org/intro/numpy/index.html - A good introduction to pydata stack- https://github.com/jrjohansson/scientific-python-lectures/blob/master/Lecture-2-Numpy.ipynb - A good introduction for NumPy though a bit of outdated- http://cs229.stanford.edu/section/cs229_python_tutorial/Spring_2020_Notebook.html - Another good introduction to NumPy- https://www.pythonlikeyoumeanit.com/Module3_IntroducingNumpy/Broadcasting.html - A great reference for broadcasting and distance calculation- https://eli.thegreenplace.net/2015/memory-layout-of-multi-dimensional-arrays - A great article for memory layout behind NumPy- https://numpy.org/doc/stable/user/numpy-for-matlab-users.html - A Numpy guide for MATLAB users.- http://mathesaurus.sourceforge.net/r-numpy.html - A Numpy guide for R users.
###Code
###Output
_____no_output_____ |
notebooks/0-Programming/2-pytorch/11-cnn.ipynb | ###Markdown
If our sentence is "I hate this film" and embed_dim is 5, then we first get a [4, 5] tensor.
If a filter covers two words at a time, its size is 2*5.
In the vast majority of cases the width of the filter equals the width of the "image", so the output is a column vector whose length equals (input sentence length - filter length + 1).
Filter size: [x, embed_dim], where x is a user-chosen value saying how many words we look at simultaneously.
Finally, we apply a pooling / max-pooling strategy. If we take the max, the idea is that the maximum value is the most important feature for determining the sentiment of the review.

conv2d:
in_channels: 1 for text
out_channels: the number of filters
kernel_size: the size of the filters -> [n, emb_dim]

For 1D conv1d: embed_dim is the depth of the filter, and the number of tokens in the sentence is the width.
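A compact sketch tying the numbers above together is shown below; it assumes nothing beyond `torch`, and only the shapes matter (a 4-word sentence, embed_dim = 5, one filter covering 2 words, then max-over-time pooling):

```python
import torch
import torch.nn as nn

sentence = torch.randn(1, 1, 4, 5)                  # [batch, channel, words, embed_dim]
conv = nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(2, 5))
feature_map = conv(sentence)                        # [1, 1, 4 - 2 + 1, 1] = [1, 1, 3, 1]
pooled = feature_map.squeeze(3).max(dim=2).values   # max over time -> [1, 1]
print(feature_map.shape, pooled.shape)
```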
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
###Output
_____no_output_____
###Markdown
In LEAM, the number of filters is num_label, kernel_size = 55, padding = same, and the activation is relu.
###Code
x = torch.randn(4, 5, 100) # [B, S, E]
cnn_t = nn.Conv2d(1, 100, (3, 100))
cnn_tx = cnn_t(x.unsqueeze(1))
cnn_tx.shape
cnn_tx.max(2)[0]
embedding_dim = 60
filter_size = 3
n_filters = 100
embedding = nn.Embedding(10, embedding_dim)
x = torch.LongTensor([[1,2,6,4,5],[4,7,3,2,9]])
embedded = embedding(x) # [B, src_len, emb_dim]
embedded = embedded.unsqueeze(1) # [B, 1, S, D]
B = 3
S = 4
num_class = 5
G = torch.rand(B, S, 10)
#conv_1 = nn.Conv2d(in_channels= 1, out_channels= n_filters, kernel_size=(filter_size, embedding_dim))
# emb_x = conv_1(embedded)
# emb_x.shape
conv_same = nn.Conv2d(in_channels=1, out_channels=num_class, kernel_size=(55, 55), padding="same")
same_x = conv_same(G.unsqueeze(1))
same_x.shape
poolo = F.max_pool1d(xm,3)
poolo
import torch
import torch.nn as nn
import torch.nn.functional as F
x = torch.rand(3, 1,10, 4)
#x = F.pad(x, (0, 0, 2, 1))
x.shape
nn.Conv2d(in_channels=1, out_channels=1, kernel_size=(1, 1), padding='same')(x).shape
print(torch.__version__)
x = torch.rand(5, 3, 6, 3)
x.shape
x.max(1).values.shape # take the maximum over the filter dimension, i.e. the max along this axis
n_class = 6
input = torch.randn(3,5,4,n_class)
input
m = nn.MaxPool2d(kernel_size=(1,n_class))
output = m(input)
output.shape
b = torch.Tensor([[[3,2,4,5,1],[10,2,12,11,9],[1,12,9,10,6]],[[1,2,6,4,5],[4,7,3,2,9], [0, 32, 19, 3, 2]]])
b.shape
b
9.1094e-04 + 9.9897e-01 + 1.2328e-04
b1 = F.softmax(b, dim=1)
b1.shape
x = torch.rand(3, 1, 4, 100)
cnn1 = nn.Conv2d(1, 100, (3, 100))
cnnx = cnn1(x)
cnnx.shape
x=torch.rand(4, 2, 3)
x
xa = torch.sum(x, dim=1)
xa.shape
xa
xa + 100
pooled
att = torch.rand(3,4)
att
F.softmax(att, dim=1)
0.3105+ 0.1949+ 0.1666+ 0.3280
batch_size = 3
seq_len = 4
num_class = 5
G = torch.randn(batch_size, seq_len, num_class)
cnn1 = nn.Conv1d(in_channels= num_class, out_channels= num_class, kernel_size= 3, padding='same')
G.shape
cnn1
G = G.permute(0, 2, 1) # the input has to be rearranged to (batch_size, embedding_size, text_len)
out = cnn1(G)
out.shape
###Output
_____no_output_____ |
python_pymc3_snippets/Lecture_3.ipynb | ###Markdown
Model

h_i ~ Normal(mu, sigma)

mu ~ Normal(178, 20)

sigma ~ Uniform(0, 50)
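The cells below only simulate these priors with `scipy`; as a hedged sketch of how the same model could be written in PyMC3 (the `heights` array here is made-up placeholder data, not part of the original notebook):

```python
import numpy as np
import pymc3 as pm

heights = np.random.normal(178, 10, size=50)   # placeholder data, just to make the sketch runnable

with pm.Model():
    mu = pm.Normal("mu", mu=178, sd=20)
    sigma = pm.Uniform("sigma", lower=0, upper=50)
    h = pm.Normal("h", mu=mu, sd=sigma, observed=heights)
    prior = pm.sample_prior_predictive(samples=1000)
```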
###Code
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt
import seaborn as sns

x = np.linspace(start=50, stop=250, num=1000)
prior_probs = stats.norm(178, 20).pdf(x)
plt.plot(x, prior_probs, color='#dd1c77', linewidth=3)
plt.show()
x = np.linspace(start=0, stop=50, num=1000)
sigma = stats.uniform(loc=0, scale=50).pdf(x)
plt.plot(x, sigma, color='#dd1c77', linewidth=3)
plt.show()
# lets simulating from mu
sample_mu = stats.norm(loc=178, scale=20).rvs(10000)
sample_sigma = stats.uniform(loc=0, scale=50).rvs(10000)
prior_h = stats.norm(loc=sample_mu, scale=sample_sigma).rvs()
sns.distplot(prior_h, color='#dd1c77')
plt.show()
# n_samples = 1000
# sample_mu = stats.norm.rvs(loc=178, scale=20, size=n_samples)
# sample_sigma = stats.uniform.rvs(loc=0, scale=50, size=n_samples)
# prior_h = stats.norm.rvs(loc=sample_mu, scale=sample_sigma)
# pm.kdeplot(prior_h)
# plt.xlabel('heights', fontsize=14)
# plt.yticks([]);
r = stats.binned_statistic(prior_h, prior_h, 'count', bins=1000)
r.bin_edges[r.statistic.argmax()]
###Output
_____no_output_____ |
coursera_nlp/project/week5-project-4.ipynb | ###Markdown
Final project: StackOverflow assistant botCongratulations on coming this far and solving the programming assignments! In this final project, we will combine everything we have learned about Natural Language Processing to construct a *dialogue chat bot*, which will be able to:* answer programming-related questions (using StackOverflow dataset);* chit-chat and simulate dialogue on all non programming-related questions.For a chit-chat mode we will use a pre-trained neural network engine available from [ChatterBot](https://github.com/gunthercox/ChatterBot).Those who aim at honor certificates for our course or are just curious, will train their own models for chit-chat.©[xkcd](https://xkcd.com) Data descriptionTo detect *intent* of users questions we will need two text collections:- `tagged_posts.tsv` — StackOverflow posts, tagged with one programming language (*positive samples*).- `dialogues.tsv` — dialogue phrases from movie subtitles (*negative samples*).
###Code
import sys
sys.path.append("..")
from common.download_utils import download_project_resources
download_project_resources()
###Output
File data\dialogues.tsv is already downloaded.
File data\tagged_posts.tsv is already downloaded.
###Markdown
For those questions that have programming-related intent, we will proceed as follows: predict the programming language (only one tag per question is allowed here) and rank candidates within the tag using embeddings. For the ranking part, you will need:
- `word_embeddings.tsv` — word embeddings that you trained with StarSpace in the 3rd assignment. It's not a problem if you didn't do it, because we can offer an alternative solution for you.

As a result of this notebook, you should obtain the following new objects that you will then use in the running bot:
- `intent_recognizer.pkl` — intent recognition model;
- `tag_classifier.pkl` — programming language classification model;
- `tfidf_vectorizer.pkl` — vectorizer used during training;
- `thread_embeddings_by_tags` — folder with thread embeddings, arranged by tags.

Some functions will be reused by this notebook and the scripts, so we put them into the *utils.py* file. Don't forget to open it and fill in the gaps!
###Code
from utils import *
###Output
[nltk_data] Downloading package stopwords to
[nltk_data] C:\Users\Administrator\AppData\Roaming\nltk_data...
[nltk_data] Package stopwords is already up-to-date!
###Markdown
Part I. Intent and language recognition We want to write a bot, which will not only **answer programming-related questions**, but also will be able to **maintain a dialogue**. We would also like to detect the *intent* of the user from the question (we could have had a 'Question answering mode' check-box in the bot, but it wouldn't fun at all, would it?). So the first thing we need to do is to **distinguish programming-related questions from general ones**.It would also be good to predict which programming language a particular question referees to. By doing so, we will speed up question search by a factor of the number of languages (10 here), and exercise our *text classification* skill a bit. :)
###Code
import numpy as np
import pandas as pd
import pickle
import re
from sklearn.feature_extraction.text import TfidfVectorizer
###Output
_____no_output_____
###Markdown
Data preparation In the first assignment (Predict tags on StackOverflow with linear models), you have already learnt how to preprocess texts and do TF-IDF tranformations. Reuse your code here. In addition, you will also need to [dump](https://docs.python.org/3/library/pickle.htmlpickle.dump) the TF-IDF vectorizer with pickle to use it later in the running bot.
###Code
def tfidf_features(X_train, X_test, vectorizer_path):
"""Performs TF-IDF transformation and dumps the model."""
# Train a vectorizer on X_train data.
# Transform X_train and X_test data.
# Pickle the trained vectorizer to 'vectorizer_path'
# Don't forget to open the file in writing bytes mode.
######################################
######### YOUR CODE HERE #############
######################################
tf_idf = TfidfVectorizer()
tf_idf.fit(X_train)
X_train = tf_idf.transform(X_train)
X_test = tf_idf.transform(X_test)
pickle.dump(tf_idf,open(vectorizer_path, 'wb'))
return X_train, X_test
###Output
_____no_output_____
###Markdown
Now, load examples of two classes. Use a subsample of stackoverflow data to balance the classes. You will need the full data later.
###Code
sample_size = 200000
dialogue_df = pd.read_csv('data/dialogues.tsv', sep='\t').sample(sample_size, random_state=0)
stackoverflow_df = pd.read_csv('data/tagged_posts.tsv', sep='\t').sample(sample_size, random_state=0)
###Output
_____no_output_____
###Markdown
Check what the data look like:
###Code
dialogue_df.head()
stackoverflow_df.head()
###Output
_____no_output_____
###Markdown
Apply *text_prepare* function to preprocess the data:
###Code
from utils import text_prepare
dialogue_df['text'] = [text_prepare(x) for x in dialogue_df['text']]
######### YOUR CODE HERE #############
stackoverflow_df['title'] = [text_prepare(x) for x in stackoverflow_df['title']]
######### YOUR CODE HERE #############
###Output
_____no_output_____
###Markdown
Intent recognition We will do a binary classification on TF-IDF representations of texts. Labels will be either `dialogue` for general questions or `stackoverflow` for programming-related questions. First, prepare the data for this task:- concatenate `dialogue` and `stackoverflow` examples into one sample- split it into train and test in proportion 9:1, use *random_state=0* for reproducibility- transform it into TF-IDF features
###Code
from sklearn.model_selection import train_test_split
X = np.concatenate([dialogue_df['text'].values, stackoverflow_df['title'].values])
y = ['dialogue'] * dialogue_df.shape[0] + ['stackoverflow'] * stackoverflow_df.shape[0]
X_train, X_test, y_train, y_test = train_test_split(X, y,test_size=0.1, random_state=0 )
######### YOUR CODE HERE ##########
print('Train size = {}, test size = {}'.format(len(X_train), len(X_test)))
X_train_tfidf, X_test_tfidf = tfidf_features(X_train, X_test, RESOURCE_PATH['TFIDF_VECTORIZER'])
######### YOUR CODE HERE ###########
###Output
Train size = 360000, test size = 40000
###Markdown
Train the **intent recognizer** using LogisticRegression on the train set with the following parameters: *penalty='l2'*, *C=10*, *random_state=0*. Print out the accuracy on the test set to check whether everything looks good.
###Code
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
######################################
######### YOUR CODE HERE #############
######################################
intent_recognizer = LogisticRegression(penalty='l2',C=10, random_state=0)
intent_recognizer.fit(X_train_tfidf, y_train)
# Check test accuracy.
y_test_pred = intent_recognizer.predict(X_test_tfidf)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('Test accuracy = {}'.format(test_accuracy))
###Output
Test accuracy = 0.99055
###Markdown
Dump the classifier to use it in the running bot.
###Code
pickle.dump(intent_recognizer, open(RESOURCE_PATH['INTENT_RECOGNIZER'], 'wb'))
###Output
_____no_output_____
###Markdown
Programming language classification We will train one more classifier for the programming-related questions. It will predict exactly one tag (=programming language) and will be also based on Logistic Regression with TF-IDF features. First, let us prepare the data for this task.
###Code
X = stackoverflow_df['title'].values
y = stackoverflow_df['tag'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
print('Train size = {}, test size = {}'.format(len(X_train), len(X_test)))
###Output
Train size = 160000, test size = 40000
###Markdown
Let us reuse the TF-IDF vectorizer that we have already created above. It should not make a huge difference which data was used to train it.
###Code
vectorizer = pickle.load(open(RESOURCE_PATH['TFIDF_VECTORIZER'], 'rb'))
X_train_tfidf, X_test_tfidf = vectorizer.transform(X_train), vectorizer.transform(X_test)
###Output
_____no_output_____
###Markdown
Train the **tag classifier** using OneVsRestClassifier wrapper over LogisticRegression. Use the following parameters: *penalty='l2'*, *C=5*, *random_state=0*.
###Code
from sklearn.multiclass import OneVsRestClassifier
######################################
######### YOUR CODE HERE #############
######################################
tag_classifier = OneVsRestClassifier(LogisticRegression(penalty='l2',C=5, random_state=0))
tag_classifier.fit(X_train_tfidf, y_train)
# Check test accuracy.
y_test_pred = tag_classifier.predict(X_test_tfidf)
test_accuracy = accuracy_score(y_test, y_test_pred)
print('Test accuracy = {}'.format(test_accuracy))
###Output
Test accuracy = 0.77885
###Markdown
Dump the classifier to use it in the running bot.
###Code
pickle.dump(tag_classifier, open(RESOURCE_PATH['TAG_CLASSIFIER'], 'wb'))
###Output
_____no_output_____
###Markdown
Part II. Ranking questions with embeddings To find a relevant answer (a thread from StackOverflow) on a question you will use vector representations to calculate similarity between the question and existing threads. We already had `question_to_vec` function from the assignment 3, which can create such a representation based on word vectors. However, it would be costly to compute such a representation for all possible answers in *online mode* of the bot (e.g. when bot is running and answering questions from many users). This is the reason why you will create a *database* with pre-computed representations. These representations will be arranged by non-overlaping tags (programming languages), so that the search of the answer can be performed only within one tag each time. This will make our bot even more efficient and allow not to store all the database in RAM. Load StarSpace embeddings which were trained on Stack Overflow posts. These embeddings were trained in *supervised mode* for duplicates detection on the same corpus that is used in search. We can account on that these representations will allow us to find closely related answers for a question. If for some reasons you didn't train StarSpace embeddings in the assignment 3, you can use [pre-trained word vectors](https://code.google.com/archive/p/word2vec/) from Google. All instructions about how to work with these vectors were provided in the same assignment. However, we highly recommend to use StartSpace's embeddings, because it contains more appropriate embeddings. If you chose to use Google's embeddings, delete the words, which is not in Stackoverflow data.
###Code
starspace_embeddings, embeddings_dim = load_embeddings('data/word_embeddings.tsv')
###Output
_____no_output_____
###Markdown
Since we want to precompute representations for all possible answers, we need to load the whole posts dataset, unlike we did for the intent classifier:
###Code
posts_df = pd.read_csv('data/tagged_posts.tsv', sep='\t')
###Output
_____no_output_____
###Markdown
Look at the distribution of posts for programming languages (tags) and find the most common ones. You might want to use pandas [groupby](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html) and [count](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.count.html) methods:
###Code
count_df = posts_df.groupby('tag',as_index=False).count()
count_df
counts_by_tag = {tag:counts for tag , counts in count_df[['tag','post_id']].values }
######### YOUR CODE HERE #############
counts_by_tag
###Output
_____no_output_____
###Markdown
Now for each `tag` you need to create two data structures, which will serve as online search index:* `tag_post_ids` — a list of post_ids with shape `(counts_by_tag[tag],)`. It will be needed to show the title and link to the thread;* `tag_vectors` — a matrix with shape `(counts_by_tag[tag], embeddings_dim)` where embeddings for each answer are stored.Implement the code which will calculate the mentioned structures and dump it to files. It should take several minutes to compute it.
###Code
import os
os.makedirs(RESOURCE_PATH['THREAD_EMBEDDINGS_FOLDER'], exist_ok=True)
for tag, count in counts_by_tag.items():
tag_posts = posts_df[posts_df['tag'] == tag]
tag_post_ids = tag_posts.post_id.tolist()
######### YOUR CODE HERE #############
tag_vectors = np.zeros((count, embeddings_dim), dtype=np.float32)
for i, title in enumerate(tag_posts['title']):
tag_vectors[i, :] = question_to_vec(text_prepare(title), starspace_embeddings, embeddings_dim)
######### YOUR CODE HERE #############
# Dump post ids and vectors to a file.
filename = os.path.join(RESOURCE_PATH['THREAD_EMBEDDINGS_FOLDER'], os.path.normpath('%s.pkl' % tag))
pickle.dump((tag_post_ids, tag_vectors), open(filename, 'wb'))
###Output
_____no_output_____ |
src/datasets/setup_datasets/SAM/parser_for_PlantSeg.ipynb | ###Markdown
Parser for PlantSegParse the data in PNAS to fit with PlantSeg input specifications.The data will be copied into the folder 'PNAS-PlantSeg'. There, each 3D volume will be stored in a file 'sample_.h5'This file contains bothe the raw data stored in 'raw' and the labels in 'label'. I should perhaps split them into training and evaluation set.
###Code
import os
import h5py
import imageio
from os.path import join
source_path = '/scratch/ottosson/datasets/SAM/data/PNAS'
destination_path = '/scratch/ottosson/datasets/SAM/data/PNAS-PlantSeg'
# Get all directories in source path
plants = [dir for dir in os.listdir(source_path) if os.path.isdir(os.path.join(source_path, dir))]
# Set intermediat paths
intermediate_input_path = 'processed_tiffs'
intermediate_target_path = 'segmentation_tiffs'
# Set strings which are unique for input and output to be able to tell them apart
determiner_input_string = 'acylYFP'
determiner_output_string = ''
def find_label_file(file, directory):
"""
Finds the files in the directory which share the timestamp and plant name of 'file'
The file is assumed to be named "time_plant_XXXXXX.XX"
"""
parts = file.split("_")
time = parts[0]
plant = parts[1]
matched_files = []
for f in os.listdir(directory):
f_parts = f.split("_")
f_time = f_parts[0]
f_plant = f_parts[1]
if time == f_time and plant == f_plant:
matched_files.append(f)
if len(matched_files) != 1:
print(f"Wrong number of file matched: {matched_files}\nfile: {file}\nDir: {directory}\n")
if len(matched_files) == 0:
return None
return matched_files[0]
sample_i = 0
# Go trhough all plant 'movies'
for plant in plants:
# Get all frames in 'movie'
plant_path = join(source_path,plant)
files = os.listdir(join(plant_path,intermediate_input_path))
# Go through all frames
for file in files:
# Skip files which are not training
if determiner_input_string not in file: continue
# Find target file corresponding to input file
input_path = join(plant_path,intermediate_input_path,file)
target_name = find_label_file(file,join(plant_path,intermediate_target_path))
# If there is no target file, skip this frame
if not target_name: continue
# Create paths to targets and inputs
target_path = join(plant_path,intermediate_target_path,target_name)
sample_path = join(destination_path,f"sample_{sample_i}.h5")
# Read the raw and label volumes and store them together in a new h5 sample file.
raw = imageio.volread(input_path)
label = imageio.volread(target_path)
with h5py.File(sample_path, 'w') as hf:
hf.create_dataset('raw', data=raw)
hf.create_dataset('label', data=label)
sample_i = sample_i + 1
###Output
Wrong number of file matched: []
file: 40hrs_plant18_trim-acylYFP.tif
Dir: /scratch/ottosson/datasets/SAM/data/PNAS/plant18/segmentation_tiffs
|
data structure/stack and queues/queues/Reverse a queue.ipynb | ###Markdown
Reversed Queue
Write a function that takes a queue as an input and returns a reversed version of it.
###Code
class LinkedListNode:
def __init__(self, data):
self.data = data
self.next = None
class Stack:
def __init__(self):
self.num_elements = 0
self.head = None
def push(self, data):
new_node = LinkedListNode(data)
if self.head is None:
self.head = new_node
else:
new_node.next = self.head
self.head = new_node
self.num_elements += 1
def pop(self):
if self.is_empty():
return None
temp = self.head.data
self.head = self.head.next
self.num_elements -= 1
return temp
def top(self):
if self.head is None:
return None
return self.head.data
def size(self):
return self.num_elements
def is_empty(self):
return self.num_elements == 0
class Queue:
def __init__(self):
self.head = None
self.tail = None
self.num_elements = 0
def enqueue(self, data):
new_node = LinkedListNode(data)
if self.head is None:
self.head = new_node
self.tail = new_node
else:
self.tail.next = new_node
self.tail = new_node
self.num_elements += 1
def dequeue(self):
if self.is_empty():
return None
temp = self.head.data
self.head = self.head.next
self.num_elements -= 1
return temp
def front(self):
if self.head is None:
return None
return self.head.data
def size(self):
return self.num_elements
def is_empty(self):
return self.num_elements == 0
def reverse_queue(queue):
"""
Reverese the input queue
Args:
queue(queue),str2(string): Queue to be reversed
Returns:
queue: Reveresed queue
"""
# TODO: Write reversed queue function
pass
def test_function(test_case):
queue = Queue()
for num in test_case:
queue.enqueue(num)
reverse_queue(queue)
index = len(test_case) - 1
while not queue.is_empty():
removed = queue.dequeue()
if removed != test_case[index]:
print("Fail")
return
else:
index -= 1
print("Pass")
test_case_1 = [1, 2, 3, 4]
test_function(test_case_1)
test_case_2 = [1]
test_function(test_case_2)
def reverse_queue(queue):
stack = Stack()
while not queue.is_empty():
stack.push(queue.dequeue())
while not stack.is_empty():
queue.enqueue(stack.pop())
###Output
_____no_output_____ |
examples/ionq.ipynb | ###Markdown
IonQ ProjectQ Backend Example This notebook will walk you through a basic example of using IonQ hardware to run ProjectQ circuits. SetupThe only requirement to run ProjectQ circuits on IonQ hardware is an IonQ API token.Once you have acquired a token, please try out the examples in this notebook! Usage & Examples**NOTE**: The `IonQBackend` expects an API key to be supplied via the `token` keyword argument to its constructor. If no token is directly provided, the backend will prompt you for one.The `IonQBackend` currently supports two device types:* `ionq_simulator`: IonQ's simulator backend.* `ionq_qpu`: IonQ's QPU backend.To view the latest list of available devices, you can run the `show_devices` function in the `projectq.backends._ionq._ionq_http_client` module.
###Code
# NOTE: Optional! This ignores warnings emitted from ProjectQ imports.
import warnings
warnings.filterwarnings('ignore')
# Import ProjectQ and IonQBackend objects, the setup an engine
import projectq.setups.ionq
from projectq import MainEngine
from projectq.backends import IonQBackend
# REPLACE WITH YOUR API TOKEN
token = 'your api token'
device = 'ionq_simulator'
# Create an IonQBackend
backend = IonQBackend(
use_hardware=True,
token=token,
num_runs=200,
device=device,
)
# Make sure to get an engine_list from the ionq setup module
engine_list = projectq.setups.ionq.get_engine_list(
token=token,
device=device,
)
# Create a ProjectQ engine
engine = MainEngine(backend, engine_list)
###Output
_____no_output_____
###Markdown
Example — Bell Pair Notes about running circuits on IonQ backendsCircuit building and visualization should feel identical to building a circuit using any other backend with ProjectQ. That said, there are a couple of things to note when running on IonQ backends: - IonQ backends do not allow arbitrary unitaries, mid-circuit resets or measurements, or multi-experiment jobs. In practice, this means using `reset`, `initialize`, `u` `u1`, `u2`, `u3`, `cu`, `cu1`, `cu2`, or `cu3` gates will throw an exception on submission, as will measuring mid-circuit, and submmitting jobs with multiple experiments.- While `barrier` is allowed for organizational and visualization purposes, the IonQ compiler does not see it as a compiler directive. Now, let's make a simple Bell pair circuit:
###Code
# Import gates to apply:
from projectq.ops import All, H, CNOT, Measure
# Allocate two qubits
circuit = engine.allocate_qureg(2)
qubit0, qubit1 = circuit
# add gates — here we're creating a simple bell pair
H | qubit0
CNOT | (qubit0, qubit1)
All(Measure) | circuit
###Output
_____no_output_____
###Markdown
Run the bell pair circuitNow, let's run our bell pair circuit on the simulator. All that is left is to call the main engine's `flush` method:
###Code
# Flush the circuit, which will submit the circuit to IonQ's API for processing
engine.flush()
# If all went well, we can view results from the circuit execution
probabilities = engine.backend.get_probabilities(circuit)
print(probabilities)
###Output
_____no_output_____
###Markdown
You can also use the built-in matplotlib support to plot the histogram of results:
###Code
# show a plot of result probabilities
import matplotlib.pyplot as plt
from projectq.libs.hist import histogram
# Show the histogram
histogram(engine.backend, circuit)
plt.show()
###Output
_____no_output_____
###Markdown
Example - Bernstein-VaziraniFor our second example, let's build a Bernstein-Vazirani circuit and run it on a real IonQ quantum computer.Rather than manually building the BV circuit every time, we'll create a method that can build one for any oracle $s$, and any register size.
###Code
from projectq.ops import All, H, Z, CX, Measure
def oracle(qureg, input_size, s_int):
"""Apply the 'oracle'."""
s = ('{0:0' + str(input_size) + 'b}').format(s_int)
for bit in range(input_size):
if s[input_size - 1 - bit] == '1':
CX | (qureg[bit], qureg[input_size])
def run_bv_circuit(eng, s_int, input_size):
"""build the Bernstein-Vazirani circuit
Args:
eng (MainEngine): A ProjectQ engine instance with an IonQBackend.
s_int (int): value of s, the secret bitstring, as an integer
input_size (int): size of the input register,
i.e. the number of (qu)bits to use for the binary
representation of s
"""
# confirm the bitstring of S is what we think it should be
s = ('{0:0' + str(input_size) + 'b}').format(s_int)
print('s: ', s)
# We need a circuit with `input_size` qubits, plus one ancilla qubit
# Also need `input_size` classical bits to write the output to
circuit = eng.allocate_qureg(input_size + 1)
qubits = circuit[:-1]
output = circuit[input_size]
# put ancilla in state |-⟩
H | output
Z | output
# Apply Hadamard gates before querying the oracle
All(H) | qubits
# Apply the inner-product oracle
oracle(circuit, input_size, s_int)
# Apply Hadamard gates after querying the oracle
All(H) | qubits
# Measurement
All(Measure) | qubits
return qubits
###Output
_____no_output_____
###Markdown
Now let's use that method to create a BV circuit to submit:
###Code
# Run a BV circuit:
s_int = 3
input_size = 3
circuit = run_bv_circuit(engine, s_int, input_size)
engine.flush()
###Output
_____no_output_____
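###Markdown
Before moving to the QPU, it is worth checking the simulator result. Reusing `get_probabilities` (shown earlier for the Bell pair), the most probable bitstring should encode the secret string `s`; note that ProjectQ reports the bits in qubit-allocation order, which here is the reverse of the printed `s` string.
###Code
# Inspect the simulator results for the BV circuit: with s_int = 3 and
# input_size = 3, s = '011', and the dominant outcome should be its reverse,
# '110', read in qubit-allocation order.
probs = engine.backend.get_probabilities(circuit)
print(probs)
print('most probable outcome:', max(probs, key=probs.get))
###Output
_____no_output_____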
###Markdown
Time to run it on an IonQ QPU!
###Code
# Create an IonQBackend set to use the 'ionq_qpu' device
device = 'ionq_qpu'
backend = IonQBackend(
use_hardware=True,
token=token,
num_runs=100,
device=device,
)
# Make sure to get an engine_list from the ionq setup module
engine_list = projectq.setups.ionq.get_engine_list(
token=token,
device=device,
)
# Create a ProjectQ engine
engine = MainEngine(backend, engine_list)
# Setup another BV circuit
circuit = run_bv_circuit(engine, s_int, input_size)
# Run the circuit!
engine.flush()
# Show the histogram
histogram(engine.backend, circuit)
plt.show()
###Output
_____no_output_____
###Markdown
Because QPU time is a limited resource, QPU jobs are handled in a queue and may take a while to complete. The IonQ backend accounts for this delay by providing basic attributes which may be used to tweak the behavior of the backend while it waits on job results:
###Code
# Create an IonQ backend with custom job fetch/wait settings
backend = IonQBackend(
token=token,
device=device,
num_runs=100,
use_hardware=True,
# Number of times to check for results before giving up
num_retries=3000,
# The number of seconds to wait between attempts
interval=1,
)
###Output
_____no_output_____ |
exps/prc fre.ipynb | ###Markdown
FactRuEval-2016 preprocess More info about the dataset: https://github.com/dialogue-evaluation/factRuEval-2016
###Code
import sys
import warnings
warnings.filterwarnings("ignore")
sys.path.append("../")
from modules.data.fre import fact_ru_eval_preprocess
dev_dir = "/home/eartemov/ae/work/factRuEval-2016/devset/"
test_dir = "/home/eartemov/ae/work/factRuEval-2016/testset/"
dev_df_path = "/home/eartemov/ae/work/factRuEval-2016/dev.csv"
test_df_path = "/home/eartemov/ae/work/factRuEval-2016/test.csv"
fact_ru_eval_preprocess(dev_dir, test_dir, dev_df_path, test_df_path)
import pandas as pd
pd.read_csv(dev_df_path, sep="\t").head()
###Output
_____no_output_____ |
notebooks/exp70_analysis.ipynb | ###Markdown
Exp 70 analysis See `./informercial/Makefile` for experimental details.
###Code
import os
import numpy as np
from IPython.display import Image
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_style('ticks')
matplotlib.rcParams.update({'font.size': 16})
matplotlib.rc('axes', titlesize=16)
from infomercial.exp import meta_bandit
from infomercial.local_gym import bandit
from infomercial.exp.meta_bandit import load_checkpoint
import gym
# ls ../data/exp2*
###Output
_____no_output_____
###Markdown
Load and process data
###Code
data_path ="/Users/qualia/Code/infomercial/data/"
exp_name = "exp70"
sorted_params = load_checkpoint(os.path.join(data_path, f"{exp_name}_sorted.pkl"))
# print(sorted_params.keys())
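# sorted_params is ordered by rank, so entry 1 should hold the top-ranked
# (best) hyper-parameter set found by the search.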
best_params = sorted_params[1]
sorted_params
###Output
_____no_output_____
###Markdown
Performance of best parameters
###Code
env_name = 'BanditHardAndSparse121-v0'
num_episodes = 1210
# Run w/ best params
result = meta_bandit(
env_name=env_name,
num_episodes=num_episodes,
lr=best_params["lr"],
tie_threshold=best_params["tie_threshold"],
seed_value=129,
save="exp45_best_model.pkl"
)
# Plot run
episodes = result["episodes"]
actions =result["actions"]
scores_R = result["scores_R"]
values_R = result["values_R"]
scores_E = result["scores_E"]
values_E = result["values_E"]
p_bests = result["p_bests"]
# Get some data from the gym...
env = gym.make(env_name)
best = env.best
print(f"Best arm: {best}, last arm: {actions[-1]}")
# Init plot
fig = plt.figure(figsize=(6, 14))
grid = plt.GridSpec(5, 1, wspace=0.3, hspace=0.8)
# Do plots:
# Arm
plt.subplot(grid[0, 0])
plt.scatter(episodes, actions, color="black", alpha=.5, s=2, label="Bandit")
plt.plot(episodes, np.repeat(best, np.max(episodes)+1),
color="red", alpha=0.8, ls='--', linewidth=2)
plt.ylim(-.1, np.max(actions)+1.1)
plt.ylabel("Arm choice")
plt.xlabel("Episode")
# p(best)
plt.subplot(grid[1, 0])
plt.scatter(episodes, p_bests, color="red", alpha=.5, s=2, label="Bandit")
plt.ylabel("p(best)")
plt.xlabel("Episode")
# score
plt.subplot(grid[2, 0])
plt.scatter(episodes, scores_R, color="grey", alpha=0.4, s=2, label="R")
plt.scatter(episodes, scores_E, color="purple", alpha=0.9, s=2, label="E")
plt.ylabel("log score")
plt.xlabel("Episode")
plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# Q
plt.subplot(grid[3, 0])
plt.scatter(episodes, values_R, color="grey", alpha=0.4, s=2, label="R")
plt.scatter(episodes, values_E, color="purple", alpha=0.4, s=2, label="E")
plt.ylabel("log Q(s,a)")
plt.xlabel("Episode")
plt.semilogy()
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
_ = sns.despine()
# -
plt.savefig("figures/epsilon_bandit.pdf", bbox_inches='tight')
plt.savefig("figures/epsilon_bandit.eps", bbox_inches='tight')
###Output
Best arm: 54, last arm: 68
###Markdown
Sensitivity to parameter choices
###Code
total_Rs = []
ties = []
lrs = []
trials = list(sorted_params.keys())
for t in trials:
total_Rs.append(sorted_params[t]['total_R'])
ties.append(sorted_params[t]['tie_threshold'])
lrs.append(sorted_params[t]['lr'])
# Init plot
fig = plt.figure(figsize=(10, 18))
grid = plt.GridSpec(4, 1, wspace=0.3, hspace=0.8)
# Do plots:
# Arm
plt.subplot(grid[0, 0])
plt.scatter(trials, total_Rs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("total R")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.scatter(trials, ties, color="black", alpha=1, s=6, label="total R")
# plt.yscale('log')
plt.xlabel("Sorted params")
plt.ylabel("Tie threshold")
_ = sns.despine()
plt.subplot(grid[2, 0])
plt.scatter(trials, lrs, color="black", alpha=.5, s=6, label="total R")
plt.xlabel("Sorted params")
plt.ylabel("lr")
_ = sns.despine()
###Output
_____no_output_____
###Markdown
Distributions of parameters
###Code
# Init plot
fig = plt.figure(figsize=(5, 6))
grid = plt.GridSpec(2, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(ties, color="black")
plt.xlabel("tie threshold")
plt.ylabel("Count")
_ = sns.despine()
plt.subplot(grid[1, 0])
plt.hist(lrs, color="black")
plt.xlabel("lr")
plt.ylabel("Count")
_ = sns.despine()
###Output
_____no_output_____
###Markdown
of total reward
###Code
# Init plot
fig = plt.figure(figsize=(5, 2))
grid = plt.GridSpec(1, 1, wspace=0.3, hspace=0.8)
plt.subplot(grid[0, 0])
plt.hist(total_Rs, color="black", bins=50)
plt.xlabel("Total reward")
plt.ylabel("Count")
plt.xlim(0, 10)
_ = sns.despine()
###Output
_____no_output_____ |
Chapter 4/Section 4.2.ipynb | ###Markdown
[4.2 Distributed Systems](http://www-inst.eecs.berkeley.edu/~cs61a/sp12/book/communication.html#distributed-computing)

A distributed system is a network of autonomous computers that communicate with each other in order to achieve a goal. The computers in a distributed system are independent and do not physically share memory or processors. They communicate with each other using messages, pieces of information transferred from one computer to another over a network. Messages can communicate many things: computers can ask other computers to execute a procedure with particular arguments, they can send and receive packets of data, or they can send signals that tell other computers to perform a particular behavior.

Computers in a distributed system can have different roles. A computer's role depends on the goal of the system and on the computer's own hardware and software properties. In a distributed system, there are two predominant ways of organizing computers: one is called the client-server (C/S) architecture, and the other is called the peer-to-peer (P2P) architecture.

4.2.1 Client/Server (C/S) Systems

The C/S architecture is a way to dispense a service from a central source. A single server provides the service, and multiple clients communicate with the server to consume its products. In this architecture, clients and servers have different jobs. The server's job is to respond to service requests from clients, while a client's job is to use the data provided in response in order to perform some task.

The C/S communication model can be traced back to the introduction of Unix in the 1970s, but the model became influential through its use in the modern World Wide Web (WWW). An example of a C/S interaction is reading the New York Times online. When the server at `www.nytimes.com` communicates with a browser client (such as Firefox), its job is to send back the HTML of the New York Times front page. This may involve computing personalized content based on the user account information sent to the server. It also means displaying images, arranging visual content, showing different colors, fonts and graphics, and allowing users to interact with the rendered page.

The concepts of client and server are powerful functional abstractions. A server is simply a unit that provides a service, possibly to multiple clients at the same time. A client is a unit that consumes the service. The client does not need to know the details of how the service is provided, or how the data it receives is stored or computed, and the server does not need to know how the data will be used.

On the web, we think of clients and servers as different machines, but systems on a single machine can also have a C/S architecture. For example, signals from a computer's input devices must be made available to the programs running on that computer. The programs are the clients, consuming the mouse and keyboard input data. The operating system's device drivers are the servers, receiving the physical signals and serving them up as usable input.

One drawback of C/S systems is that the server is a single point of failure. It is the only component able to dispense the service. There can be any number of clients; they are interchangeable and can come and go as needed. But if the server goes down, the whole system stops working. Thus, the functional abstraction created by the C/S architecture also makes it vulnerable to failure.

Another drawback of C/S systems is that resources become scarce when there are too many clients. Clients increase the demand on the system without contributing any computing resources. C/S systems cannot shrink or grow with changing demand.

4.2.2 Peer-to-peer (P2P) Systems

The C/S model is appropriate for service-oriented situations. However, there are other computational goals for which a more equal division of labor is appropriate. The term P2P describes a distributed system in which labor is divided among all of the components of the system. All of the computers send and receive data, and all of them contribute some processing power and memory. As a distributed system grows in size, its computational resources grow as well.

In a P2P system, all components of the system contribute some processing power and memory to a distributed computation. The division of labor among all participants is the identifying characteristic of P2P systems. This means that peers need to be able to communicate with each other reliably. To make sure that messages reach their intended destinations, P2P systems need an organized network structure. The components of these systems cooperate to maintain enough information about the locations of other components to send messages to their intended destinations.

In some P2P systems, the job of maintaining the health of the network is taken on by a set of specialized components. Such systems are not pure P2P systems, because they have different types of components providing different functions. The components that support a P2P network act like scaffolding: they help the network stay connected, they maintain information about the locations of the different computers, and they help newcomers find their place within their neighborhood.

The most common applications of P2P systems are data transfer and data storage. For data transfer, each computer in the system contributes to the transfer of data over the network. If the destination computer is a neighbor of a particular computer, that computer helps pass the data along. For data storage, the data set may be too large to fit on any single computer, or storing it on a single computer would be too risky. Each computer stores a small portion of the data, and multiple copies of the same data may be stored on different computers. When a computer fails, the data that was on it can be restored from the other copies and put back when a replacement arrives.

Skype, an audio and video chat service, is an example of a data transfer application with a P2P architecture. When two people on different computers are having a Skype conversation, their communication is broken up into packets of 1s and 0s and transmitted through a P2P network. That network is composed of the computers of other people signed into Skype. Each computer knows the location of a few others in its neighborhood. A computer helps a packet reach its destination by passing it on to a neighbor, which passes it on to another neighbor, and so on, until the packet reaches its intended destination. Skype is not a pure P2P system: a scaffolding network of supernodes handles users logging in and out, maintains information about the locations of their computers, and modifies the network structure to deal with users entering and leaving.

4.2.3 Modularity

The two architectures we have just considered -- P2P and C/S -- are both designed to enforce modularity. Modularity is the idea that the components of a system should be black boxes with respect to each other. It should not matter how a component implements its behavior, as long as it provides an interface: a specification of which outputs should be produced for which inputs.

In Chapter 2, we encountered interfaces in the context of dispatch functions and object-oriented programming. There, interfaces took the form of specifying which messages an object should receive and how the object should respond to them. For example, to provide the "representable as a string" interface, an object must respond to the __repr__ and __str__ messages and output appropriate strings in response. How those strings are generated is not part of the interface.

In distributed systems, we must consider program design involving multiple computers, so we extend the notion of an interface from objects and messages to entire programs. An interface specifies which inputs should be accepted and which outputs should be returned in response to those inputs.

Interfaces exist everywhere in the real world, and we often take them for granted. A familiar example is the TV remote control. You can buy many brands of remotes or TVs, and they all work. The only thing they have in common is the "TV remote" interface. As long as a piece of electronics sends the correct signals to your TV (the output) when you press the power, volume, channel or any other button (the input), it conforms to the "TV remote" interface.

Modularity gives a system many advantages and is a property of thoughtful system design. First, a modular system is easy to understand, which makes it easier to modify and extend. Second, if something goes wrong somewhere in the system, only the faulty component needs to be replaced. Third, bugs or failures can be easily localized: if the output of a component does not conform to its interface specification while its inputs are correct, then that component is the source of the failure.

4.2.4 Message Passing

In distributed systems, components communicate with each other using message passing. A message has three essential parts: the sender, the recipient and the content. The sender needs to be specified so that the recipient knows which component sent the message and where to send a reply. The recipient needs to be specified so that any computers helping to deliver the message know where to direct it. The content of the message is the most valuable part. Depending on the function of the overall system, the content can be a piece of data, a signal, or an instruction for a remote computer to evaluate some function with given arguments.

This notion of message passing is closely related to the message-passing technique of Chapter 2, in which dispatch functions or dictionaries responded to string-valued messages. Within a program, the sender and recipient are identified by the rules of evaluation. In a distributed system, however, the sender and recipient must be explicitly encoded in the message. Within a program, it is convenient to use strings to control the behavior of a dispatch function. In a distributed system, messages have to be sent over a network and may need to hold many different kinds of signals as "data", so they are not always encoded as strings. In both cases, though, messages serve the same function: different components (dispatch functions or computers) exchange messages to achieve a goal that requires the cooperation of multiple modular components.

At a high level, message contents can be complex data structures, but at a low level, messages are simply streams of 1s and 0s sent over a network. To be usable, all messages sent over a network must be formatted according to a consistent message protocol. A **message protocol** is a set of rules for encoding and decoding messages. Many message protocols specify that a message must conform to a particular format in which certain bits have a fixed meaning. A fixed format implies fixed encoding and decoding rules for generating and reading that format. All components of a distributed system must understand the protocol in order to communicate with each other. That way, they know which part of the message corresponds to which piece of information.

Message protocols are not specific programs or software libraries. Instead, they are rules that can be applied by a wide variety of programs, even ones written in different programming languages. As a result, computers with very different software systems can join the same distributed system simply by conforming to the message protocols that govern it.

4.2.5 Messages on the World Wide Web

**HTTP** (short for Hypertext Transfer Protocol) is the message protocol supported by the World Wide Web. It specifies the format of the messages exchanged between web browsers and servers. All web browsers use the HTTP protocol to request pages from servers, and all web servers use the HTTP format to send back their responses. When you type a URL such as http://en.wikipedia.org/wiki/UC_Berkeley into your web browser, you are in fact telling your computer to request the page "wiki/UC_Berkeley" from the server at en.wikipedia.org using the "HTTP" protocol. The sender of the message is your computer, the recipient is en.wikipedia.org, and the format of the message content is:
###Code
GET /wiki/UC_Berkeley HTTP/1.1
###Output
_____no_output_____
###Markdown
The first word is the type of the request, the next word is the resource being requested, and after that come the protocol name (HTTP) and version (1.1). (There are other request types, such as PUT, POST and HEAD, which web browsers also use.) The server sends back a reply. This time, the sender is en.wikipedia.org, the recipient is your computer, and the format of the message content is a header followed by data:
###Code
HTTP/1.1 200 OK
Date: Mon, 23 May 2011 22:38:34 GMT
Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux)
Last-Modified: Wed, 08 Jan 2011 23:11:55 GMT
Content-Type: text/html; charset=UTF-8
... web page content ...
###Output
_____no_output_____
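###Markdown
As an aside, such an exchange can be reproduced directly from Python with the standard-library `http.client` module. The short sketch below sends a GET request and prints the status line and the first few response headers; the exact values depend on the server, and many servers today will answer with a redirect to their HTTPS version.
###Code
# Illustrative sketch: issue an HTTP GET request and inspect the reply
# (standard library only; requires network access).
import http.client

conn = http.client.HTTPConnection('en.wikipedia.org')
conn.request('GET', '/wiki/UC_Berkeley')
response = conn.getresponse()
print(response.status, response.reason)      # e.g. 200 OK, or a 301 redirect
for name, value in response.getheaders()[:5]:
    print(name + ':', value)
conn.close()
###Output
_____no_output_____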
###Markdown
On the first line, the words "200 OK" mean that no error occurred. The lines below the header's first line give information about the server, the date, and the type of content being sent back. The header is separated from the actual page content by a blank line. If you typed in a wrong web address, or clicked on a dead link, you might instead see a message with an error like this one:
###Code
404 Error File Not Found
###Output
_____no_output_____
###Markdown
It means that the server sent back an HTTP header that starts like this:
###Code
HTTP/1.1 404 Not Found
###Output
_____no_output_____ |
praesepe.ipynb | ###Markdown
Download table: http://vizier.cfa.harvard.edu/viz-bin/VizieR-3?-source=J/ApJ/842/83/table3
###Code
# Imports needed by this notebook (astropy for the VOTable, numpy and
# matplotlib for the analysis and plots)
import numpy as np
import matplotlib.pyplot as plt
from astropy.table import Table

douglas = Table.read('data/douglas2017.vot')
possible_singles = (np.array([len(i)==0 for i in douglas['Bin_']]) & # Not binaries
np.array([i == "Y" for i in douglas['Clean_']]) & # Clean periodograms
np.array([i == "N" for i in douglas['Bl_']]) ) # Not blends
douglas_singles = douglas[possible_singles]
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
ax[0].scatter(douglas_singles['Mass'], douglas_singles['Prot1'], marker='.')
ax[0].set(xlabel='Mass [M$_\star$]', ylabel='Period [d]')
ax[1].scatter(np.log(douglas_singles['Prot1']), np.log(douglas_singles['SmAmp']), marker='.')
ax[1].set(xlabel='log Period', ylabel='log Amplitude')
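# Split the sample into rotation regimes: "fast" low-mass stars have
# M < 0.6 Msun and P < 3 d, "slow" rotators lie above the line
# P = -25*M + 25 in the mass-period plane, and the rest form the
# intermediate ("braking") group.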
fast_low_mass = (douglas_singles['Mass'] < 0.6) & (douglas_singles['Prot1'] < 3)
slow = -25 * douglas_singles['Mass'] + 25 < douglas_singles['Prot1']
intermediate = np.logical_not(fast_low_mass | slow)
low_mass = douglas_singles['Mass'] < 0.4
plt.scatter(douglas_singles['Mass'][fast_low_mass], douglas_singles['Prot1'][fast_low_mass],
marker='.', label='Fast ({})'.format(np.count_nonzero(douglas_singles['Mass'][fast_low_mass] < 0.4)))
plt.scatter(douglas_singles['Mass'][intermediate], douglas_singles['Prot1'][intermediate],
marker='.', label='Braking ({})'.format(np.count_nonzero(douglas_singles['Mass'][intermediate] < 0.4)))
plt.scatter(douglas_singles['Mass'][slow], douglas_singles['Prot1'][slow], marker='.',
label='Slow ({})'.format(np.count_nonzero(douglas_singles['Mass'][slow] < 0.4)), color='C3')
plt.legend()
plt.axvline(0.4, ls='--', color='gray')
plt.ylim([0, 30])
for s in ['right', 'top']:
plt.gca().spines[s].set_visible(False)
plt.xlabel('Mass [$\\rm M_\odot$]')
plt.ylabel('Period [d]')
plt.savefig('plots/mass-period.pdf', bbox_inches='tight')
bins = 20
bin_range = [0, 6]
np.savetxt('data/lowmass_fast.txt', douglas_singles['SmAmp'][low_mass & fast_low_mass])
np.savetxt('data/lowmass_intermediate.txt', douglas_singles['SmAmp'][low_mass & intermediate])
np.savetxt('data/lowmass_slow.txt', douglas_singles['SmAmp'][low_mass & slow])
plt.hist(douglas_singles['SmAmp'][low_mass & fast_low_mass], bins=bins,
range=bin_range, density=True, alpha=1, histtype='stepfilled', lw=2, label='Fast')
plt.hist(douglas_singles['SmAmp'][low_mass & intermediate], bins=bins,
range=bin_range,density=True, alpha=1, histtype='step', lw=2, label='Braking')
plt.hist(douglas_singles['SmAmp'][low_mass & slow], bins=bins,
range=bin_range, density=True, alpha=1, histtype='step', lw=2, label='Slow', color='C3')
for s in ['right', 'top']:
plt.gca().spines[s].set_visible(False)
plt.legend()
plt.gca().set(xlabel='Smoothed Amp [%]', ylabel='Probability density',
title='Praesepe: $\\rm M < 0.4 M_\odot$')
plt.savefig('plots/mass-period_smamp.pdf')
from sklearn.neighbors.kde import KernelDensity
douglas_small_fast = douglas_singles[(douglas_singles['Mass'] > 0.2) & (douglas_singles['Mass'] < 0.4) &
(douglas_singles['Prot1'] < 5)]
X = np.sort(douglas_small_fast['SmAmp'].data.data)[:, np.newaxis]
kde = KernelDensity(kernel='gaussian', bandwidth=0.5).fit(X)
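# Note: score_samples() returns log-density values, hence the np.exp() on the
# next line to convert them back to densities.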
scores = np.exp(kde.score_samples(X))
# plt.plot(X, scores)
fig, ax = plt.subplots()
bin_range = [0, 5]
ax.hist(douglas_small_fast['SmAmp'], bins=20, range=bin_range)
ax.plot(X, scores*np.max(X)/scores.max())
np.savetxt('data/kde_fast.txt', np.vstack([X.T, scores]).T)
np.savetxt('data/amps_fast.txt', douglas_small_fast['SmAmp'])
ax.set(xlabel='Smoothed Amp [%]', ylabel='N',
       title='Praesepe: $0.2 < M < 0.4$ and P$_{{rot}} < 5$ d (N={})'.format(len(douglas_small_fast['SmAmp'])))
fig.savefig('plots/amplitudes_fast.pdf')
plt.show()
from sklearn.neighbors.kde import KernelDensity
douglas_small_slow = douglas_singles[(douglas_singles['Mass'] > 0.35) & (douglas_singles['Mass'] < 0.55) &
(douglas_singles['Prot1'] > 5)]
X = np.sort(douglas_small_slow['SmAmp'].data.data)[:, np.newaxis]
kde = KernelDensity(kernel='gaussian', bandwidth=0.5).fit(X)
scores = np.exp(kde.score_samples(X))
fig, ax = plt.subplots()
ax.hist(douglas_small_slow['SmAmp'], bins=20, range=bin_range)
ax.plot(X, scores*np.max(X)/scores.max())
np.savetxt('data/kde_slow.txt', np.vstack([X.T, scores]).T)
np.savetxt('data/amps_slow.txt', douglas_small_slow['SmAmp'])
ax.set(xlabel='Smoothed Amp [%]', ylabel='N',
       title='Praesepe: $0.35 < M < 0.55$ and P$_{{rot}} > 5$ d (N={})'.format(len(douglas_small_slow['SmAmp'])))
fig.savefig('plots/amplitudes_slow.pdf')
plt.show()
np.savetxt('data/epics_slow.txt', douglas_small_slow['EPIC'].data)
np.savetxt('data/epics_fast.txt', douglas_small_fast['EPIC'].data)
###Output
_____no_output_____ |
Sklearn/PCA/PCA_MNIST_PCA_MachineLearningPipeline.ipynb | ###Markdown
PCA + Logistic Regression (MNIST) Downloading MNIST Dataset
###Code
from sklearn.datasets import fetch_mldata
# Change data_home to wherever you want to download your data
mnist = fetch_mldata('MNIST original', data_home='~/Desktop/alternativeData')
mnist
# These are the images
mnist.data.shape
# These are the labels
mnist.target.shape
###Output
_____no_output_____
###Markdown
Splitting Data into Training and Test Sets
###Code
from sklearn.model_selection import train_test_split
# test_size: what proportion of original data is used for test set
train_img, test_img, train_lbl, test_lbl = train_test_split(
mnist.data, mnist.target, test_size=1/7.0, random_state=0)
###Output
_____no_output_____
###Markdown
Chain the StandardScaler, PCA, and LogisticRegression Objects in a Pipeline
###Code
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
scaler = StandardScaler()
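# PCA(.95): keep as many principal components as needed to explain 95% of the variance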
pca = PCA(.95)
logisticRegr = LogisticRegression(solver = 'lbfgs')
pipe = Pipeline([('Standard', scaler), ('pca', pca), ('clf', logisticRegr)])
pipe.fit(train_img, train_lbl)
pipe.score(test_img, test_lbl)
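# A small addition for inspection: n_components_ is set by fit() and tells us
# how many principal components were kept to reach the 95% variance target.
print('PCA components kept:', pipe.named_steps['pca'].n_components_)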
###Output
_____no_output_____ |
training/transfer-learning-Training.ipynb | ###Markdown
Transfer Learning use case In this notebook we will cover the fine-tuning of a simple model on a custom dataset, starting from a previously trained model. We will cover the following topics: - dogs/cats/horses dataset - Model architecture: - VGG16 - Dense layers - Image generator from a directory - Test on random images The dataset The dataset is composed of 197 images of dogs, cats and horses. It is structured into label-related folders:
###Code
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
tf.get_logger().setLevel('ERROR')
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
tf.keras.backend.clear_session() # For easy reset of notebook state.
from tensorflow.keras.preprocessing import image
image.load_img('/home/fer/data/formaciones/master/datasets/cats_dogs_horses/dogs/images (8).jpg')
image.load_img('/home/fer/data/formaciones/master/datasets/cats_dogs_horses/horses/images (11).jpg')
image.load_img('/home/fer/data/formaciones/master/datasets/cats_dogs_horses/cats/images (15).jpg')
###Output
_____no_output_____
###Markdown
Please note the class imbalance, which could be an issue if we were to train more seriously...
###Code
# tf version session setting lines
from tensorflow.python.keras.backend import set_session
from tensorflow.python.keras.models import load_model
import tensorflow as tf
from tensorflow.compat.v1 import ConfigProto
config = ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)  # use the compat.v1 API so this also runs under TF 2.x
graph = tf.compat.v1.get_default_graph()
# IMPORTANT: models have to be loaded AFTER SETTING THE SESSION for keras!
# Otherwise, their weights will be unavailable in the threads after the session has been set
set_session(sess)
###Output
_____no_output_____
###Markdown
Model Architecture Build the model to use a pretrained VGG16 network (trained on ImageNet) without its last layer. Then: - Add a global average pooling layer after the pre-trained structure - Add a dense layer with 512 units and relu activation - The final layer will be another fully connected layer with the number of classes and softmax activation. Hint: use the imported libraries
###Code
import pandas as pd
import numpy as np
import keras
from keras.layers import Dense,GlobalAveragePooling2D
from keras.applications import MobileNet, VGG16
base_model = VGG16(weights='imagenet', include_top=False)
x=base_model.output
x=...
preds=...
###Output
_____no_output_____
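###Markdown
One possible way to fill in the blanks above (an illustrative sketch only; the layer sizes follow the instructions given in this notebook, and the 3 output units assume the cats/dogs/horses classes):
###Code
# Possible solution sketch (not the only valid one):
from keras.layers import Dense, GlobalAveragePooling2D
x = base_model.output
x = GlobalAveragePooling2D()(x)            # collapse the spatial dimensions
x = Dense(512, activation='relu')(x)       # new fully connected layer
preds = Dense(3, activation='softmax')(x)  # 3 classes: cats, dogs, horses
###Output
_____no_output_____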
###Markdown
Create the model and print the summary. What happens with the weights?
###Code
from keras.models import Model
...
len (model.layers)
###Output
_____no_output_____
###Markdown
Set the first 19 layers to non_trainable, and the rest to trainable.
###Code
for layer in model.layers[:19]:
...
for layer in model.layers[19:]:
...
###Output
_____no_output_____
###Markdown
Create the generator and specify that it will use the VGG preprocessing input function. No other data augmentation for now. Add the flow_from_directory call to specify where the data will be taken from. Use a target size of 224x224.
###Code
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from tensorflow.keras.applications.vgg16 import preprocess_input as preprocess_vgg16
train_datagen=...
# default parameters
color_mode='rgb'
batch_size=8
class_mode='categorical'
shuffle=True
train_generator=...
###Output
_____no_output_____
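###Markdown
One possible completion of the generator cell above (illustrative only; the directory path is the same one used earlier in this notebook and `class_mode='categorical'` matches the softmax head):
###Code
# Possible solution sketch for the data generator:
train_datagen = ImageDataGenerator(preprocessing_function=preprocess_vgg16)
train_generator = train_datagen.flow_from_directory(
    '/home/fer/data/formaciones/master/datasets/cats_dogs_horses/',
    target_size=(224, 224),
    color_mode='rgb',
    batch_size=8,
    class_mode='categorical',
    shuffle=True,
)
###Output
_____no_output_____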
###Markdown
Compile the model, using adam and categorical Xentropy. Include accuracy as a metric. Train the model for 5 epochs. Remember to include the right step_size. Hint: use the generator .n and .batch_size properties
###Code
from tensorflow.keras.optimizers import Adam
...
###Output
_____no_output_____
###Markdown
Build a function to test the model on random images from the internet. Predict the ones from the given directory. Hints: - Use the previously loaded functions - Repeat the processing we did before. No, you are not repeating it. See why?
###Code
from numpy import expand_dims
import matplotlib.pyplot as plt
class_dict = {v:k for k, v in train_generator.class_indices.items()}
def predict_image(path):
img = ...
data = ...
pred = ...
print(pred)
return img
predict_image('/home/fer/data/master/datasets/chorra_tests/gato.jpg')
###Output
_____no_output_____ |
GRIPPA_al_2017_AnOpenSourceSemiAutomatedProcessingChain_OBIA.ipynb | ###Markdown
This processing chain is available under the [Creative Commons Licence (CC-BY)](https://creativecommons.org/licenses/by/4.0/), so feel free to re-use, adapt or enhance it to match your own needs. This processing chain is available on [GitHub.com](https://github.com/tgrippa/Opensource_OBIA_processing_chain). Don't hesitate to create "Pull requests" to propose corrections, modifications or enhancements. This processing chain is linked to a publication in [MDPI - Remote Sensing](http://www.mdpi.com/2072-4292/9/4/358/htm). If you need to refer to this processing chain, you can refer directly to this publication. **Table of Contents** The following cell is a Javascript section of code for building the Jupyter notebook's table of contents.
###Code
%%javascript
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Instructions In order to use the semi-automated processing chain, by calling up GRASS GIS functions [without starting grass explicitly](https://grasswiki.osgeo.org/wiki/Working_with_GRASS_without_starting_it_explicitly) in this [Jupyter notebook](http://jupyter.org/), please: 1. Make sure that [GRASS GIS](https://grasswiki.osgeo.org/wiki/Installation_Guide) is installed and fully functional on your computer.2. Make sure that [Anaconda with Python 2.7](https://www.continuum.io/downloads) is installed and fully functional on your computer.3. Make sure that [R software](https://www.r-project.org/) is installed and fully functional on your computer.4. Adjust the **"Define working environment"** part to match your system configuration, for both [GRASS GIS' environment variables](https://grass.osgeo.org/grass64/manuals/variables.html) and [R' environment variables](https://stat.ethz.ch/R-manual/R-devel/library/base/html/EnvVar.html).5. Run this notebook (cell-by-cell running is higly recommanded on first time to control the process and adapt steps, variables and parameters to your own needs). *Note: This script was developed using Windows7 x64, Anaconda 2 with Python 2.7, GRASS 7.3, R 3.3.0 (Caret package v. 6.0-70).* **-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Some useful resources - [Book: Learning IPython for interactive computing and data visualisation](http://it-ebooks.info/book/7021/) (previous name of Jupyter notebook)- [Wiki: GRASS and Python](https://grasswiki.osgeo.org/wiki/GRASS_and_Python)- [Wiki: GRASS Python scripting library](https://grasswiki.osgeo.org/wiki/GRASS_Python_Scripting_Library)- For a nice and easy first view of possibilities of GRASS scripting usin Python, see this [workshop video](http://www.youtube.com/watch?feature=player_embedded&v=PX2UpMhp2hc) on Youtube <a href="http://www.youtube.com/watch?feature=player_embedded&v=PX2UpMhp2hc" target="_blank"><img src="http://img.youtube.com/vi/PX2UpMhp2hc/0.jpg" alt="IMAGE ALT TEXT HERE" width="240" height="180" border="10" />- For more information about the GRASS GIS, please refer to [Neteler and Mitasova, 2008](http://link.springer.com/book/10.1007%2F978-0-387-68574-8) **-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Define the working environment The following cells are used to: - Import required libraries- Set the environment variables for Python, Anaconda, GRASS GIS and R - Define the ["GRASSDATA" folder](https://grass.osgeo.org/grass73/manuals/helptext.html), along with the name of the "location" and the "mapset" in which you will work. **Import libraries**
###Code
## Import libraries needed for setting parameters of operating system
import os
import sys
## Import library for temporary files creation
import tempfile
## Import Pandas library
import pandas as pd
## Import Numpy library
import numpy as np
## Import subprocess
import subprocess
## Import multiprocessing
import multiprocessing
###Output
_____no_output_____
###Markdown
**Set 'Python' and 'GRASS GIS' environment variables** Here, we set [the environment variables allowing to use of GRASS GIS](https://grass.osgeo.org/grass64/manuals/variables.html) inside this Jupyter notebook. Please modify the directory paths, so that they match your own system configuration. If you are working on Windows, with the GRASS GIS [stand-alone installation](https://grass.osgeo.org/download/software/ms-windows/), the paths displayed below should be similar. The setting of environmental variables could be improved like proposed on [this GRASS wiki page](https://grasswiki.osgeo.org/wiki/Working_with_GRASS_without_starting_it_explicitlyPython:_GRASS_GIS_7_without_existing_location_using_metadata_only).
###Code
## Define path to the 'grass7x.bat' file
grass7bin_win = 'C:\\Program Files\\GRASS GIS 7.3.svn\\grass73svn.bat'
## Define GRASS GIS environment variables
os.environ['GISBASE'] = 'C:\\Program Files\\GRASS GIS 7.3.svn'
os.environ['PATH'] = 'C:\\Program Files\\GRASS GIS 7.3.svn\\lib;C:\\Program Files\\GRASS GIS 7.3.svn\\bin;C:\\Program Files\\GRASS GIS 7.3.svn\\extrabin' + os.pathsep + os.environ['PATH']
os.environ['PATH'] = 'C:\\Program Files\\GRASS GIS 7.3.svn\\etc;C:\\Program Files\\GRASS GIS 7.3.svn\\etc\\python;C:\\Python27' + os.pathsep + os.environ['PATH']
os.environ['PATH'] = 'C:\\Program Files\\GRASS GIS 7.3.svn\\Python27;C:\\Users\\Admin_ULB\\AppData\\Roaming\\GRASS7\\addons\\scripts' + os.pathsep + os.environ['PATH']
os.environ['PATH'] = 'C:\\Program Files\\Anaconda2\\lib\\site-packages' + os.pathsep + os.environ['PATH']
os.environ['PYTHONLIB'] = 'C:\\Python27'
os.environ['PYTHONPATH'] = 'C:\\Program Files\\GRASS GIS 7.3.svn\\etc\\python'
os.environ['GIS_LOCK'] = '$$'
os.environ['GISRC'] = 'C:\\Users\\Admin_ULB\\AppData\\Roaming\\GRASS7\\rc'
os.environ['GDAL_DATA'] = 'C:\\Program Files\\GRASS GIS 7.3.svn\\share\\gdal'
## Define GRASS-Python environment
sys.path.append(os.path.join(os.environ['GISBASE'],'etc','python'))
###Output
_____no_output_____
###Markdown
Please notice that paths will differ if you installed GRASS through the 'OSGeo4W package'. Here are some identified environment variables to use with a OSGeo4W installation: - grass7bin_win = 'C:\\OSGeo4W64\\bin\\grass73svn.bat'- os.environ['GISBASE'] = 'C:\\OSGeo4W64\\apps\\grass\\grass-7.3.svn'- os.environ['PATH'] = 'C:\\OSGeo4W64\\bin' + os.pathsep + os.environ['PATH']- os.environ['PYTHONLIB'] = 'C:\\OSGeo4W64\\apps\\Python27'- os.environ['GDAL_DATA'] = 'C:\\OSGeo4W64\\share\\gdal' **Set 'R statistical computing software' environment variables** Here, we set [the environment variables allowing to use the R statistical computing software](https://stat.ethz.ch/R-manual/R-devel/library/base/html/EnvVar.html) inside this Jupyter notebook. Please change the directory path to match your system configuration. If you are working on Windows, the paths below should be similar. Please notice that you will probably have to set the path of R_LIBS_USER also directly in R interface. For that, open R software (or [Rstudio software](https://www.rstudio.com/)) and enter the following command in the command prompt (you should adapt this path to match your own configuration: **.libPaths('C:\\R_LIBS_USER\\win-library\\3.3')**
###Code
## Add the R software directory to the general PATH
os.environ['PATH'] = 'C:\\Program Files\\R\\R-3.3.0\\bin' + os.pathsep + os.environ['PATH']
## Set R software specific environment variables
os.environ['R_HOME'] = 'C:\Program Files\R\R-3.3.0'
os.environ['R_ENVIRON'] = 'C:\Program Files\R\R-3.3.0\etc\x64'
os.environ['R_DOC_DIR'] = 'C:\Program Files\R\R-3.3.0\doc'
os.environ['R_LIBS'] = 'C:\Program Files\R\R-3.3.0\library'
os.environ['R_LIBS_USER'] = 'C:\R_LIBS_USER\win-library\\3.3'
###Output
_____no_output_____
###Markdown
**Display current environment variables of your computer**
###Code
## Display the current defined environment variables
for key in os.environ.keys():
print "%s = %s \t" % (key,os.environ[key])
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** User inputs
###Code
## Define a empty dictionnary for saving user inputs
user={}
###Output
_____no_output_____
###Markdown
Here after:- Enter the path to the directory you want to use as "[GRASSDATA](https://grass.osgeo.org/programming7/loc_struct.png)". - Enter the name of the location in which you want to work and its projection information in [EPSG code](http://spatialreference.org/ref/epsg/) format. Please note that the GRASSDATA folder and locations will be automatically created if they do not yet exist. If the location name already exists, the projection information will not be used. - Enter the name you want for the mapsets which will be used later for Unsupervised Segmentation Parameter Optimization (USPO), Segmentation and Classification steps.
###Code
## Enter the path to GRASSDATA folder
user["gisdb"] = "F:\\myusername\\GRASSDATA"
## Enter the name of the location (existing or for a new one)
user["location"] = "mycityname_32630"
## Enter the EPSG code for this location
user["locationepsg"] = "32630"
## Enter the name of the mapset to use for Unsupervised Segmentation Parameter Optimization (USPO) step
user["uspo_mapsetname"] = "TEST_USPO"
## Enter the name of the mapset to use for segmentation step
user["segmentation_mapsetname"] = "TEST_SEGMENT"
## Enter the name of the mapset to use for classification step
user["classification_mapsetname"] = "TEST_CLASSIF"
## Enter the maximum number of processes to run in parallel
user["nb_proc"] = 6
if user["nb_proc"] > multiprocessing.cpu_count():
print "The requiered number of cores is higher than the amount available. Please fix it"
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Define the GRASSDATA folder and create GRASS location and mapsets Here after, the python script will check if the GRASSDATA folder, locations and mapsets already exist. If not, they will be automatically created. **Import GRASS Python packages**
###Code
## Import libraries needed to launch GRASS GIS in the jupyter notebook
import grass.script.setup as gsetup
## Import libraries needed to call GRASS using Python
import grass.script as grass
###Output
_____no_output_____
###Markdown
**Define GRASSDATA folder and create location and mapsets**
###Code
## Automatic creation of GRASSDATA folder
if os.path.exists(user["gisdb"]):
print "GRASSDATA folder already exists"
else:
os.makedirs(user["gisdb"])
print "GRASSDATA folder created in "+user["gisdb"]
## Automatic creation of GRASS location if it doesn't exist
if os.path.exists(os.path.join(user["gisdb"],user["location"])):
print "Location "+user["location"]+" already exists"
else :
if sys.platform.startswith('win'):
grass7bin = grass7bin_win
startcmd = grass7bin + ' -c epsg:' + user["locationepsg"] + ' -e ' + os.path.join(user["gisdb"],user["location"])
p = subprocess.Popen(startcmd, shell=True,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
out, err = p.communicate()
if p.returncode != 0:
print >>sys.stderr, 'ERROR: %s' % err
print >>sys.stderr, 'ERROR: Cannot generate location (%s)' % startcmd
sys.exit(-1)
else:
print 'Created location %s' % os.path.join(user["gisdb"],user["location"])
else:
print 'This notebook was developed for use with Windows. It seems you are using another OS.'
### Automatic creation of GRASS GIS mapsets
## Import library for file copying
import shutil
## USPO mapset
mapsetname=user["uspo_mapsetname"]
if os.path.exists(os.path.join(user["gisdb"],user["location"],mapsetname)):
print "'"+mapsetname+"' mapset already exists"
else:
os.makedirs(os.path.join(user["gisdb"],user["location"],mapsetname))
shutil.copy(os.path.join(user["gisdb"],user["location"],'PERMANENT','WIND'),
os.path.join(user["gisdb"],user["location"],mapsetname,'WIND'))
print "'"+mapsetname+"' mapset created in location "+user["gisdb"]
## SEGMENTATION mapset
mapsetname=user["segmentation_mapsetname"]
if os.path.exists(os.path.join(user["gisdb"],user["location"],mapsetname)):
print "'"+mapsetname+"' mapset already exists"
else:
os.makedirs(os.path.join(user["gisdb"],user["location"],mapsetname))
shutil.copy(os.path.join(user["gisdb"],user["location"],'PERMANENT','WIND'),
os.path.join(user["gisdb"],user["location"],mapsetname,'WIND'))
print "'"+mapsetname+"' mapset created in location "+user["gisdb"]
## CLASSIFICATION mapset
mapsetname=user["classification_mapsetname"]
if os.path.exists(os.path.join(user["gisdb"],user["location"],mapsetname)):
print "'"+mapsetname+"' mapset already exists"
else:
os.makedirs(os.path.join(user["gisdb"],user["location"],mapsetname))
shutil.copy(os.path.join(user["gisdb"],user["location"],'PERMANENT','WIND'),
os.path.join(user["gisdb"],user["location"],mapsetname,'WIND'))
print "'"+mapsetname+"' mapset created in location "+user["gisdb"]
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Define functions This section of the notebook is dedicated to defining functions which will then be called later in the script. If you want to create your own functions, define them here. Function for computing processing time The "print_processing_time" function is used to calculate and display the processing time for various stages of the processing chain. At the beginning of each major step, the current time is stored in a new variable, using [time.time() function](https://docs.python.org/2/library/time.html). At the end of the stage in question, the "print_processing_time" function is called and takes as an argument, the name of this new variable containing the recorded time at the beginning of the stage, and an output message.
###Code
## Import library for managing time in python
import time
## Function "print_processing_time()" compute processing time and print it.
# The argument "begintime" wait for a variable containing the begintime (result of time.time()) of the process for which to compute processing time.
# The argument "printmessage" wait for a string format with information about the process.
def print_processing_time(begintime, printmessage):
endtime=time.time()
processtime=endtime-begintime
remainingtime=processtime
days=int((remainingtime)/86400)
remainingtime-=(days*86400)
hours=int((remainingtime)/3600)
remainingtime-=(hours*3600)
minutes=int((remainingtime)/60)
remainingtime-=(minutes*60)
seconds=round((remainingtime)%60,1)
if processtime<60:
finalprintmessage=str(printmessage)+str(seconds)+" seconds"
elif processtime<3600:
finalprintmessage=str(printmessage)+str(minutes)+" minutes and "+str(seconds)+" seconds"
elif processtime<86400:
finalprintmessage=str(printmessage)+str(hours)+" hours and "+str(minutes)+" minutes and "+str(seconds)+" seconds"
elif processtime>=86400:
finalprintmessage=str(printmessage)+str(days)+" days, "+str(hours)+" hours and "+str(minutes)+" minutes and "+str(seconds)+" seconds"
return finalprintmessage
## Saving current time for processing time management
begintime_full=time.time()
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** 1 - Importing data Usually, original data are imported and stored in the "PERMANENT" mapset (automatically created when creating a new location). **Launch GRASS GIS working session**
###Code
## Save the name of the mapset in which to import the data
mapsetname='PERMANENT'
## Launch GRASS GIS working session in the mapset
if os.path.exists(os.path.join(user["gisdb"],user["location"],mapsetname)):
gsetup.init(os.environ['GISBASE'], user["gisdb"], user["location"], mapsetname)
print "You are now working in mapset '"+mapsetname+"'"
else:
print "'"+mapsetname+"' mapset doesn't exists in "+user["gisdb"]
## Saving current time for processing time management
begintime_importingdata=time.time()
###Output
_____no_output_____
###Markdown
Import raw data in PERMANENT mapset For optical and nDSM import, please adapt the input of the ['r.in.gdal' commands](https://grass.osgeo.org/grass73/manuals/r.in.gdal.html) to match your own data location. Import optical raster imagery Please adapt the number of lines in the loop to match the number of layers stacked in your imagery file (ensuring the number of ['g.rename' commands](https://grass.osgeo.org/grass73/manuals/g.rename.html) equal the number of layers stacked). Ensure the names of the layers match their position in the stack (e.g. "opt_blue" for the first layer).Please note that it is assumed that your data has at least a red band layer, called "opt_red". If not, you will have to change several parameters through this notebook, notably when defining [computation region](https://grasswiki.osgeo.org/wiki/Computational_region).
###Code
## Saving current time for processing time management
begintime_optical=time.time()
## Import optical imagery and rename band with color name
print ("Importing optical raster imagery at " + time.ctime())
grass.run_command('r.in.gdal', input="F:\\....\\.....\\mosaique_georef_ordre2.tif", output="optical", overwrite=True)
for rast in grass.list_strings("rast"):
if rast.find("1")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"opt_blue"))
elif rast.find("2")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"opt_green"))
elif rast.find("3")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"opt_red"))
elif rast.find("4")!=-1: grass.run_command("g.rename", overwrite=True, rast=(rast,"opt_nir"))
print_processing_time(begintime_optical ,"Optical imagery has been imported in ")
###Output
_____no_output_____
###Markdown
Import nDSM raster imagery If you have null values in your nDSM raster, please be careful to define the "setnull" parameter of the ['r.null' command](https://grass.osgeo.org/grass73/manuals/r.null.html) correctly, according to your own data. If you do not have any null values in your nDSM raster, simply comment out the r.null command line by putting a '#' as its first character (to turn it into a [comment](http://www.pythonforbeginners.com/comments/comments-in-python)). Notice that you can display line numbers by pressing the L key when the cell border is blue.
###Code
## Saving current time for processing time management
begintime_ndsm=time.time()
## Import nDSM imagery
print ("Importing nDSM raster imagery at " + time.ctime())
grass.run_command('r.in.gdal', input="F:\\MAUPP\\.....\\Orthorectified\\mosaique_georef\\nDSM\\nDSM_mosaik_georef_ordre2.tif", output="ndsm", overwrite=True)
## Define null value for specific value in nDSM raster. Adapt the value to your own data.
# If there is no null value in your data, comment the next line
grass.run_command('r.null', map="ndsm", setnull="-999")
print_processing_time(begintime_ndsm, "nDSM imagery has been imported in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Update imagery group "optical" with optical rasters In GRASS GIS several operations, mainly segmentation when dealing with OBIA, require an 'imagery group'.In the next cell, a new imagery group called 'optical' containing the optical raster layers is created with ['i.group command'](https://grass.osgeo.org/grass73/manuals/i.group.html). It is impossible to create a new imagery group, if the name already exists, therefore existing imagery groups with the same name are removed (with ['g.remove command'](https://grass.osgeo.org/grass73/manuals/g.remove.html)).
###Code
print ("Updating imagery group 'optical' with optical rasters at " + time.ctime())
## Remove existing imagery group nammed "optical". This group was created when importing multilayer raster data
grass.run_command("g.remove", type="group", name="optical", flags="f")
## Add each raster which begin with the prefix "opt" into a new imagery group "optical"
for rast in grass.list_strings("rast", pattern="opt", flag="r"):
grass.run_command("i.group", group="optical", input=rast)
###Output
_____no_output_____
###Markdown
Save default GRASS GIS' computational region for the whole extent of optical imagery In GRASS GIS, the concept of computational region is fundamental. We highly recommend reading [information about the computation region in GRASS GIS](https://grasswiki.osgeo.org/wiki/Computational_region) to be sure to understand the concept. Here after, the 'default' computational region is defined as corresponding to the red band image.
###Code
## Save default computational region to match the full extend of optical imagery
grass.run_command('g.region', flags="s", raster="opt_red@PERMANENT")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Compute pseudo-band raster Set computational region and mask layer The ['r.mask' command](https://grass.osgeo.org/grass73/manuals/r.mask.html) is used not to perform further processing on 'nodata' pixels.
###Code
## Set computational region to default
grass.run_command('g.region', flags="d")
## Apply mask to not compute out-of-AOI pixels
grass.run_command('r.mask', overwrite=True, raster="opt_red")
###Output
_____no_output_____
###Markdown
Compute NDVI (Normalized Difference Vegetation Index) Hereafter, we use the [r.mapcalc command](https://grass.osgeo.org/grass73/manuals/r.mapcalc.html) to compute the NDVI (Normalized Difference Vegetation Index).
###Code
## Saving current time for processing time management
print ("Begin compute NDVI on "+time.ctime())
begintime_ndvi=time.time()
## Compute NDVI
formula="NDVI=(float(opt_nir@PERMANENT)-float(opt_red@PERMANENT))/(float(opt_nir@PERMANENT)+float(opt_red@PERMANENT))"
grass.mapcalc(formula, overwrite=True)
print_processing_time(begintime_ndvi, "NDVI has been computed in ")
###Output
_____no_output_____
###Markdown
Compute NDWI (Normalized Difference Water Index) Hereafter, we use the [r.mapcalc command](https://grass.osgeo.org/grass73/manuals/r.mapcalc.html) to compute the Normalized Difference Water Index (NDWI). The formula used was proposed by [McFeeters in 1996](http://www.tandfonline.com/doi/abs/10.1080/01431169608948714).
###Code
# Saving current time for processing time management
print ("Begin compute NDWI on "+time.ctime())
begintime_ndvi=time.time()
## Compute NDVI
formula="NDWI=(float(opt_green@PERMANENT)-float(opt_nir@PERMANENT))/(float(opt_green@PERMANENT)+float(opt_nir@PERMANENT))"
grass.mapcalc(formula, overwrite=True)
print_processing_time(begintime_ndvi, "NDWI has been computed in ")
###Output
_____no_output_____
###Markdown
Compute brightness Hereafter, we use the [r.mapcalc command](https://grass.osgeo.org/grass73/manuals/r.mapcalc.html) to compute the brightness index, defined as the sum of the visible bands.
###Code
# Saving current time for processing time management
print ("Begin compute brightness on "+time.ctime())
begintime_brightness=time.time()
## Compute Brightness
formula="Brightness=opt_blue@PERMANENT+opt_green@PERMANENT+opt_red@PERMANENT"
grass.mapcalc(formula, overwrite=True)
print_processing_time(begintime_brightness, "Brightness has been computed in ")
###Output
_____no_output_____
###Markdown
Compute texture Here, we use the ['r.texture' command](https://grass.osgeo.org/grass73/manuals/r.texture.html) to compute the angular second moment texture with a 5x5 moving window. This texture layer is not currently used in the process, but you can adapt the script if you want to use it. Other textures can also be computed using the 'r.texture' command.
###Code
# Saving current time for processing time management
print ("Computing a 5x5 window angular second moment texture on "+time.ctime())
begintime_texture=time.time()
## Compute Angular second moment texture
grass.run_command('r.texture', overwrite=True, input="Brightness", output="texture", method="asm", size="5")
print_processing_time(begintime_texture, "Angular second moment texture has been computed in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Remove current mask
###Code
## Check if there is a raster layer named "MASK"
if not grass.list_strings("rast", pattern="MASK", flag='r'):
print 'There is currently no MASK'
else:
## Remove the current MASK layer
grass.run_command('r.mask',flags='r')
print 'The current MASK has been removed'
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** End of part 1
###Code
print("Importation of data ends at "+ time.ctime())
print_processing_time(begintime_importingdata, "Importation of data has been achieved in :")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** *-*-*-*-*-*-*-*-*-*-*-* *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-* **-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** 2 - Unsupervised segmentation parameter optimization (USPO) **Launch GRASS GIS working session**
###Code
## Set the name of the mapset in which to work
mapsetname=user["uspo_mapsetname"]
## Launch GRASS GIS working session in the mapset
if os.path.exists(os.path.join(user["gisdb"],user["location"],mapsetname)):
gsetup.init(os.environ['GISBASE'], user["gisdb"], user["location"], mapsetname)
print "You are now working in mapset '"+mapsetname+"'"
else:
print "'"+mapsetname+"' mapset doesn't exists in "+user["gisdb"]
## Saving current time for processing time management
begintime_USPO_full=time.time()
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Create USPO's working regions based on several image subsets - The ['i.segment.uspo' add-on](https://grass.osgeo.org/grass70/manuals/addons/i.segment.uspo.html) could use the whole area of your data for segmentation parameters optimization, but it could take days depending on the size of your data and the number of parameter combinations you ask for testing. For this reason, we use small rectangular polygons (recommended size: fewer than 4 million pixels), representing the diversity of landscapes in your scene, for which i.segment.uspo will find optimized segmentation parameters. Please refer to the "region" parameter of the [i.segment.uspo help](https://grass.osgeo.org/grass70/manuals/addons/i.segment.uspo.html) for more explanations. Please create this polygon layer (shapefile) focusing on subsets representing the diversity of landscapes in your scene. You could use [QuantumGIS](http://www.qgis.org/en/site/) to perform this step.- Please adapt the "input" parameter of the 'v.import' command to match the path to your own data. - We call here "USPO's regions" the GRASS's computational regions where i.segment.uspo will perform segmentation and compute optimization function in order to find optimized segmentation parameter(s). - The ['v.import' command](https://grass.osgeo.org/grass72/manuals/v.import.html) is used to import vector data.
###Code
## Import shapefile with polygons corresponding to computational region's extension for USPO
print ("Importing vector data with USPO's regions at " + time.ctime())
grass.run_command('g.region', flags="d")
grass.run_command('v.import', overwrite=True,
input="F:/...../region_USPO/Ouaga_region_USPO.shp",
output="region_uspo")
###Output
_____no_output_____
###Markdown
As the "region_uspo" layer contains polygons corresponding to the computational regions to be used in i.segment.uspo, this layer is used here to define (and save) a GRASS computational region for each polygon. We use here [v.extract command](https://grass.osgeo.org/grass72/manuals/v.extract.html) to extract each polygon temporarily from the "region uspo" layer, [g.region command](https://grass.osgeo.org/grass72/manuals/g.region.html) to create computational region corresponding to the polygon and save it with a specific name and [g.remove command](https://grass.osgeo.org/grass72/manuals/g.remove.html) to remove the temporarily created polygon.
###Code
## Create a computional region for each polygon in the 'region_uspo' layer
print ("Defining a GRASS region for each polygon at " + time.ctime())
for cat in grass.parse_command('v.db.select', map='region_uspo', columns='cat', flags='c'):
condition="cat="+cat
outputname="region_uspo_"+cat
regionname="subset_uspo_"+cat
grass.run_command('v.extract', overwrite=True, quiet=True,
input="region_uspo", type="area", where=condition, output=outputname)
grass.run_command('g.region', overwrite=True, vector=outputname, save=regionname, align="opt_red@PERMANENT", flags="u")
grass.run_command('g.remove', type="vector", name=outputname, flags="f")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Unsupervised Segmentation Parameter Optimization with i.segment.uspo The [i.segment.uspo add-on](https://grass.osgeo.org/grass70/manuals/addons/i.segment.uspo.html) is used to automatically find the optimized segmentation parameter(s) for specific computational regions. We highly recommend reading of [the add-on help](https://grass.osgeo.org/grass70/manuals/addons/i.segment.uspo.html) to be sure to understand the different fonctionalities well. Define a imagery group for i.segment.uspo We need to create an imagery group for i.segment.uspo. In GRASS GIS, these imagery groups contain raster layers to use to a speficic task. Here, for i.segment.uspo, we define and use the same imagery group of layers for both segmentation and parameter optimization. You could use different imagery groups for segmentation and parameter optimization (e.g. you could segment using only optical data and optimize the segmentation parameter based on a specific indice like NDVI i.e.). Here, we use the 4 optical layers and the NDVI layer for the segmentation and the USPO step. If you want to use other layers, please adapt the script accordingly. If you make changes, please be careful to use the same layer for the segmentation in i.segment.uspo and for further segmentation step (with i.segment).
###Code
## Saving current time for processing time management
begintime_USPO_FULL=time.time()
print ("Defining a imagery group with raster used for i.segment.uspo at " + time.ctime())
## Remove existing imagery group named "group"
grass.run_command('g.remove', flags="rf", type="group", name="group")
## Add all optical imagery in the imagery group
for rast in grass.list_strings("rast", pattern="opt", flag='r'):
grass.run_command('i.group', group="group", input=rast)
## Add NDVI imagery in the imagery group
grass.run_command('i.group', group="group", input="NDVI@PERMANENT")
## list files in the group
print grass.read_command('i.group', group="group", flags="l")
###Output
_____no_output_____
###Markdown
Install GRASS extensions GRASS GIS has both a core part (the one installed by default on your computer) and add-ons (which have to be installed using the extension manager ['g.extension'](https://grass.osgeo.org/grass72/manuals/g.extension.html)). In the next cell, 'i.segment.uspo' will be installed (if not yet installed), together with the other add-ons ['r.neighborhoodmatrix'](https://grass.osgeo.org/grass70/manuals/addons/r.neighborhoodmatrix.html) and ['i.segment.hierarchical'](https://grass.osgeo.org/grass70/manuals/addons/i.segment.hierarchical.html) required for running i.segment.uspo.
###Code
## Instal r.neighborhoodmatrix if not yet installed
if "r.neighborhoodmatrix" not in grass.parse_command('g.extension', flags="a"):
grass.run_command('g.extension', extension="r.neighborhoodmatrix")
print "r.neighborhoodmatrix have been installed on your computer"
else: print "r.neighborhoodmatrix is already installed on your computer"
## Instal i.segment.hierarchical if not yet installed
if "i.segment.hierarchical" not in grass.parse_command('g.extension', flags="a"):
grass.run_command('g.extension', extension="i.segment.hierarchical")
print "i.segment.hierarchical have been installed on your computer"
else: print "i.segment.hierarchical is already installed on your computer"
## Instal i.segment.uspo if not yet installed
if "i.segment.uspo" not in grass.parse_command('g.extension', flags="a"):
grass.run_command('g.extension', extension="i.segment.uspo")
print "i.segment.uspo have been installed on your computer"
else: print "i.segment.uspo is already installed on your computer"
###Output
_____no_output_____
###Markdown
Unsupervised segmentation parameter optimisation with i.segment.uspo - Please read the [i.segment.uspo](https://grass.osgeo.org/grass70/manuals/addons/i.segment.uspo.html) help to ensure you understand the extension well and choose the different parameters correctly for your case. - It is recommended to quickly identify (by visual check) thresholds resulting in clearly oversegmented and undersegmented segments, and to use those thresholds as 'threshold_start' and 'threshold_stop', respectively. - If the number of threshold/minsize combinations to test is too big, running i.segment.uspo could be very time-consuming or even fail. In this case, please reduce the range and/or steps of the thresholds and/or minsizes to be tested. - Choose between the following optimization functions: the "sum" of [Espindola (2006)](http://www.tandfonline.com/doi/abs/10.1080/01431160600617194) or the "F function" of [Johnson (2015)](http://www.mdpi.com/2220-9964/4/4/2292). - If you choose the "F function", please use an adapted alpha parameter. A value of alpha>1 is *"(...) appropriate for selecting the parameters for finer segmentation levels (to ensure that smaller objects of interest or objects spectrally-similar to their surroundings are not undersegmented at these levels), while values of a[alpha] < 1 may be more appropriate for selecting parameters for coarser segmentation levels (to ensure that larger/more heterogeneous objects of interest are not oversegmented at these levels)"* (Johnson et al., 2015, pp. 2295). - Please notice that, for our requirements, the minsize parameter was set according to our MMU (Minimum Mapping Unit) and was not optimized with i.segment.uspo. You can change the script if you want, but notice that you should then add a further step to select the optimized minsize, as is currently done for the threshold. - Please notice that the RAM allocated to the segmentation has been set to 2Gb but can be modified according to the capacity of your own system. The same is true for the number of processors to use.
###Code
## Define the optimization function name ("sum" or "f")
opti_f="f"
## Define the alpha, only if selected optimization function is "f"
if opti_f=="f":
alpha=1.25
else : alpha=""
###Output
_____no_output_____
###Markdown
In the next cell, please adapt the path to the directory where you want to save the .csv output of i.segment.uspo.
###Code
## Define the csv output file name, according to the optimization function selected
outputcsv="F:\\.....\\Segmentation_param\\ouaga_uspo_"+str(opti_f)+str(alpha)+".csv"
## Defining a list of GRASS GIS' computational regions where i.segment.uspo will optimize the segmentation parameters
regions_uspo=grass.list_strings("region", pattern="subset_uspo_", flag='r')[0]
for region in grass.list_strings("region", pattern="subset_uspo_", flag='r')[1:]:
regions_uspo+=","+region
## Running i.segment.uspo
print ("Runing i.segment.uspo at " + time.ctime())
begintime_USPO=time.time()
grass.run_command('i.segment.uspo', overwrite=True, group='group',
output=outputcsv, segment_map="best",
regions=regions_uspo, threshold_start="0.001", threshold_stop="0.03", threshold_step="0.001", minsizes="8",
optimization_function=opti_f, f_function_alpha=alpha, memory="2000", processes=str(user["nb_proc"]))
## Create a .csvt file containing the column types of the i.segment.uspo csv output. Required for further import of the .csv file
model_output_desc = outputcsv + "t"
f = open(model_output_desc, 'w')
header_string = '"String","Real","Integer","Real","Real","Real"'
f.write(header_string)
f.close()
## Print
print_processing_time(begintime_USPO, "USPO process achieved in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Export of i.segment.uspo results This section is optional and can be used to export the segmentation results of i.segment.uspo in shapefile format. The ['r.to.vect' command](https://grass.osgeo.org/grass72/manuals/r.to.vect.html) is used to vectorize the "best" segmentation results coming from i.segment.uspo. If you only want to visualize those results in GRASS GIS without exporting them, the r.to.vect step alone is sufficient. The ['v.out.ogr' command](https://grass.osgeo.org/grass72/manuals/v.out.ogr.html) is used to export the vector layers in conventional vector formats like shapefile. Convert i.segment.uspo raster outputs into vectors Please adapt the path to the directory where you want to save the segmentation results of i.segment.uspo, in shapefile format.
###Code
## Define the output folder name, according to the optimization function selected
outputfolder="F:\\.....\\Segmentation_param\\USPO_bestsegment_"+str(opti_f)+str(alpha)
## Saving current time for processing time management
print ("Begin to export i.segment.uspo results at " + time.ctime())
begintime_exportUSPO=time.time()
count=0
for rast in grass.list_strings("rast", pattern="best_", flag='r'):
count+=1
print "Working on raster '"+str(rast)+"' - "+str(count)+"/"+str(len(grass.list_strings("rast", pattern="best_", flag='r')))
strindex=rast.find("subset_uspo_")
subregion=rast[strindex: strindex+14]
vectname="temp_bestsegment_"+subregion
print ("Converting raster layer into vector")
grass.run_command('r.to.vect', overwrite=True, input=rast, output=vectname, type='area')
print ("Exporting shapefile")
grass.run_command('v.out.ogr', overwrite=True, input=vectname, type='area',
output=outputfolder, format='ESRI_Shapefile')
print ("Remove newly created vector layer")
grass.run_command("g.remove", type="vector", pattern="temp_bestsegment_", flags="rf")
## Print
print_processing_time(begintime_exportUSPO, "Export of i.segment.uspo results done in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** End of part 2
###Code
print("The script ends at "+ time.ctime())
print_processing_time(begintime_USPO_FULL, "Entire process has been achieved in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** *-*-*-*-*-*-*-*-*-*-*-* *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-* **-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** 3 - Segmentation **Launch GRASS GIS working session**
###Code
## Set the name of the mapset in which to work
mapsetname=user["segmentation_mapsetname"]
## Launch GRASS GIS working session in the mapset
if os.path.exists(os.path.join(user["gisdb"],user["location"],mapsetname)):
gsetup.init(os.environ['GISBASE'], user["gisdb"], user["location"], mapsetname)
print "You are now working in mapset '"+mapsetname+"'"
else:
print "'"+mapsetname+"' mapset doesn't exists in "+user["gisdb"]
## Saving current time for processing time management
begintime_segmentation_full=time.time()
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Import the csv output file from i.segment.uspo - Here, the results of i.segment.uspo are imported and manipulated to select, for each 'USPO region', the segmentation parameter achieving the highest optimization score. Then, the threshold to be used to segment the whole scene is selected as the lowest threshold among the different "best" thresholds (notice that you can easily change this by using the ['median'](https://docs.scipy.org/doc/numpy/reference/generated/numpy.median.html) function instead of the ['amin'](https://docs.scipy.org/doc/numpy/reference/generated/numpy.amin.html) function of the numpy library, as sketched below). - Be careful to control this step well, as selecting the lowest threshold could result in over-segmented results if the optimized thresholds are very different among the 'USPO regions'. In that case, the median threshold could be preferred. - Please notice that the minsize has been fixed and not optimized with i.segment.uspo. - If you made several tests, please be sure to import the .csv file corresponding to the wanted optimization function and alpha parameter! - Python's [Pandas](http://pandas.pydata.org/) library is used for managing the .csv table in a dataframe. We specifically used the [.read_csv](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html), [.to_csv](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html), [.merge](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html), [.loc](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.loc.html), [.head](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.head.html), [.sort_values](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html), [.groupby](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.groupby.html), [.max](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.max.html) Pandas dataframe functions.
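If you prefer the median of the per-region optimized thresholds over the minimum, a minimal sketch of the change is shown below; it assumes the list `uspo_parameters_list` built in the following cell.

```python
## Sketch of the alternative mentioned above: use the median of the optimized thresholds
## instead of the minimum (assumes 'uspo_parameters_list' has been built as in the cell below).
import numpy as np  # numpy is already imported earlier in this notebook

optimized_threshold=round(np.median(uspo_parameters_list),3)
print "The median of the 'USPO regions' optimized thresholds is "+str(optimized_threshold)
```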
###Code
## Import Pandas library
import pandas as pd
## Import the optimization results of i.segment.uspo in a dataframe
print ("Import .csv file with results of i.segment.uspo on " + time.ctime())
ouaga_uspo=pd.read_csv(outputcsv, sep=',',header=0)
ouaga_uspo.head(3)
## Create temporary dataframe with maximum value of optimization criteria for "USPO's region"
temp=ouaga_uspo.loc[:,['region','optimization_criteria']].groupby('region').max()
temp.head()
## Merge between dataframes to identify the threshold corresponding to the maximum optimization criteria of each "USPO's region"
uspo_parameters = pd.merge(ouaga_uspo, temp, on='optimization_criteria').loc[:,['region','threshold','optimization_criteria']].sort_values(by='region')
uspo_parameters.head()
## Save the optimized threshold of each "USPO's region" in a list
uspo_parameters_list=uspo_parameters['threshold'].tolist()
## Save the minimum of optimized threshold in a new variable called "optimized_threshold"
optimized_threshold=round(np.amin(uspo_parameters_list),3)
print "The lowest of the 'USPOs region' optimized threshold is "+str(optimized_threshold)
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Copy imagery group created for i.segment.uspo to the current mapset As the imagery group to use for segmentation (which is going to be performed with ['i.segment' module](https://grass.osgeo.org/grass72/manuals/i.segment.html)) should be in the working mapset, we copy the one created for i.segment.uspo in the previous step. We use ['g.copy' command](https://grass.osgeo.org/grass72/manuals/g.copy.html) for this purpose. ['i.group' command](https://grass.osgeo.org/grass72/manuals/i.group.html) is used to print, as a reminder, the list of raster in the imagery group.
###Code
## Copy of imagery group to the current mapset
grass.run_command('g.copy', overwrite=True, group='group@USPO,group')
print grass.read_command('i.group', group="group", flags="l")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Import processing tiles In order to allow processing large areas, this processing chain is designed to work with polygon tiles. Please create a polygon layer (shapefile) containing at least 2 tiles of your area of interest (it could contain dozens of tiles if you're dealing with a very large dataset). Please add a column in the attribute table, called "area_km2" and containing the area of the tiles (in km2), which will be used to report the progress of the segmentation process. You can use [QuantumGIS](http://www.qgis.org/en/site/) to perform this step, or build the tiles directly in GRASS GIS as sketched below. Please adapt the "input" parameter of the 'v.import' command to match the path to your own data.
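As an alternative to preparing the tiles in QGIS, here is a minimal sketch that builds a regular tile grid directly in GRASS GIS and fills the "area_km2" column expected by the rest of the chain; the 4x4 grid size is arbitrary and should be adapted to your study area.

```python
## Sketch of an alternative to the QGIS step: build a regular grid of processing tiles in GRASS GIS.
## The 4x4 grid size is arbitrary; adapt it to the size of your study area.
grass.run_command('g.region', flags="d")  # tile the default region
grass.run_command('v.mkgrid', overwrite=True, map="processing_tiles", grid="4,4")
## Add and fill the "area_km2" column expected by the rest of the chain
grass.run_command('v.db.addcolumn', map="processing_tiles", columns="area_km2 double precision")
grass.run_command('v.to.db', map="processing_tiles", option="area", columns="area_km2", units="kilometers")
```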
###Code
print ('Importing processing tiles at '+ time.ctime())
## Import vectorial tiles zones layers
grass.run_command('v.import', overwrite=True, input="F:\\.....\\processing_tiles.shp", output="processing_tiles")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Segmentation with optimized segmentation parameter - For each processing tile, a segmentation is made (using ['i.segment'](https://grass.osgeo.org/grass72/manuals/i.segment.html)). - In the sequence of processes, each processing tile is extracted (with ['v.extract'](https://grass.osgeo.org/grass72/manuals/v.extract.html)), then used to define the computational region (with ['g.region'](https://grass.osgeo.org/grass72/manuals/g.region.html)) and finally used as a mask (with ['r.mask'](https://grass.osgeo.org/grass72/manuals/r.mask.html)). The processing tile is then segmented according to the optimized parameter previously defined by i.segment.uspo. - Please notice that the "minsize" parameter of the 'i.segment' command is fixed here, but you can adapt the script to your own needs. - Please notice that the RAM allocated to the segmentation has been set to 2 GB but can be modified according to the capacity of your own system. - Note that it is necessary to use the *align* parameter when setting the "computational region" to match the polygon extent, as otherwise the segmentation results could be misaligned with respect to the raster imagery. - We also use the ['v.db.select'](https://grass.osgeo.org/grass72/manuals/v.db.select.html) and ['g.remove'](https://grass.osgeo.org/grass72/manuals/g.remove.html) modules.
###Code
## Compute total area to be segmented for process progression information
total_area=0
processed_area=0
for id in grass.parse_command('v.db.select', map='processing_tiles@SEGMENTATION', columns='cat', flags='c'):
condition='cat='+id
size=float(grass.read_command('v.db.select', map="processing_tiles@SEGMENTATION", columns="area_km2", where=condition,flags="c"))
total_area+=size
print total_area
print ("Begin segmentation process on " + time.ctime())
## Saving current time for processing time management
begintime_segmentation=time.time()
## Initialize a variable for process progression purpose
processed_area=0.0
## Segmentation step. We loop here through the different polygons
for id in grass.parse_command('v.db.select', map='processing_tiles@SEGMENTATION', columns='cat', flags='c'):
## Save current time at loop' start.
begintime_current_id=time.time()
## Build condition for selection in attributes of vector layer
condition='cat='+id
## Save size of the current polygon
size=float(grass.read_command('v.db.select', map="processing_tiles@SEGMENTATION", columns="area_km2", where=condition,flags="c"))
## Build name for temporary vector layer used for computational region and mask definition
tempvector="temp_polygon_"+id
## Extract the current polygon in a new vector layer
grass.run_command('v.extract', overwrite=True, input="processing_tiles@SEGMENTATION", type="area", where=condition, output=tempvector)
    ## Define computational region to match the current polygon vector layer and align the computational region with the optical imagery
grass.run_command('g.region', overwrite=True, vector=tempvector, align="opt_red@PERMANENT")
## Apply mask using the current polygon
grass.run_command('r.mask', overwrite=True, vector=tempvector)
## Segmentation of current polygon with i.segment
print ("Segmenting tile number "+str(id)+" corresponding to "+str(size)+" km2" )
outputsegment="segmentation_tile_"+id
grass.run_command('i.segment', overwrite=True, group="group", output=outputsegment, threshold=optimized_threshold, minsize="8", memory="2000")
## Delete the current polygon vector layer
grass.run_command('g.remove', type="vector", name=tempvector, flags="f")
## Remove current mask
grass.run_command('r.mask', flags="r")
## Add size of the current polygon to the already processed area
processed_area+=size
## Print of what happened
print("Tile "+str(id)+" processed.")
print_processing_time(begintime_current_id, " Process achieved in ")
print ("Progress = "+str((processed_area/total_area)*100)+" percent of the total area segmented")
## Compute processing time and print it
print_processing_time(begintime_segmentation, "Segmentation process on all tiles achieved in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Merging each individual segmentation raster into one "patched" raster The different segmentation results of 'i.segment' (raster layers) for each processing tile will be "patched" (merged) into one resulting raster. The ['r.mapcalc' command](https://grass.osgeo.org/grass71/manuals/r.mapcalc.html) is used to combine all the segmentation rasters together. The 'nmax' expression is used to keep the maximum value of the input rasters, excluding the NULL values.
###Code
## Setting the computational region extend to all rasters to be merged
groupraster=grass.list_strings("rast", pattern="segmentation_tile_", mapset="SEGMENTATION", flag='r')[0]
count=1
for rast in grass.list_strings("rast", pattern="segmentation_tile_", mapset="SEGMENTATION", flag='r')[1:]:
groupraster+=","+rast
count+=1
## Define computational region
grass.run_command('g.region', overwrite=True, raster=groupraster)
## Print and saving current time for processing time management
print ("Begin to merge "+str(count)+" individual segmentation maps on " + time.ctime())
begintime_merge=time.time()
## Defining the formula for r.mapcalc
formula="unclumped_raster= nmax("+grass.list_strings("rast", pattern="segmentation_tile_", mapset="SEGMENTATION", flag='r')[0]
for rast in grass.list_strings("rast", pattern="segmentation_tile_", mapset="SEGMENTATION", flag='r')[1:]:
formula+=","+rast
formula+=")"
## Running r.mapcalc to merge all raster together
grass.mapcalc(formula, overwrite=True)
## Compute processing time and print it
print(str(count)+" individual segment maps have been merge with 'r.mapcalc'")
print_processing_time(begintime_merge, " Merging process achieved in ")
###Output
_____no_output_____
###Markdown
Clump patched raster We use here the ['r.clump' command](https://grass.osgeo.org/grass71/manuals/r.clump.html) to assign a new (unique) ID to each contiguous group of pixels (clump), because segments resulting from 'i.segment' on different processing tiles could have the same ID after being patched in the previous step.
###Code
## Print and saving current time for processing time management
print ("Begin clump of raster on " + time.ctime())
begintime_clump=time.time()
## Generate new individual values for group of pixels
grass.run_command('r.clump', overwrite=True, input="unclumped_raster@SEGMENTATION", output="segmentation_raster@SEGMENTATION")
## Compute processing time and print it
print_processing_time(begintime_clump, "Segmentation raster has been clumped in ")
## Compute basic statistics on the clumped raster. The maximum value corresponds to the number of objects (patches)
nbrobject=grass.raster_info("segmentation_raster")
print "The segmentation raster contains "+str(int(nbrobject.max))+" objects"
###Output
_____no_output_____
###Markdown
Erase intermediate maps The next cell will remove all the temporary layers created in the previous steps. Make sure that your 'segmentation_raster' has been correctly processed before running this part, otherwise you will have to run the whole segmentation part of this script again.
###Code
## Print
print ("Begin deleting temporary maps on " + time.ctime())
## Delete individual segmentation rasters
grass.run_command('g.remove', flags="rf", type="raster", pattern="segmentation_tile_")
## Delete unclumped segmentation rasters
grass.run_command('g.remove', flags="f", type="raster", name="unclumped_raster")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** End of part 3
###Code
print("The script ends at "+ time.ctime())
print_processing_time(begintime_segmentation_full, "Entire process has been achieved in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** *-*-*-*-*-*-*-*-*-*-*-* *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-* **-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** 4 - Classification **Launch GRASS GIS working session**
###Code
## Set the name of the mapset in which to work
mapsetname=user["classification_mapsetname"]
## Launch GRASS GIS working session in the mapset
if os.path.exists(os.path.join(user["gisdb"],user["location"],mapsetname)):
gsetup.init(os.environ['GISBASE'], user["gisdb"], user["location"], mapsetname)
print "You are now working in mapset '"+mapsetname+"'"
else:
print "'"+mapsetname+"' mapset doesn't exists in "+user["gisdb"]
## Saving current time for processing time management
begintime_classif_full=time.time()
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Copy data from other mapset Some data need to be copied from other mapsets into the current mapset. Copy segmentation raster in the current mapset
###Code
## Copy segmentation raster layer from SEGMENTATION mapset to current mapset
grass.run_command('g.copy', overwrite=True, raster="segmentation_raster@SEGMENTATION,segments")
###Output
_____no_output_____
###Markdown
Copy nDSM raster in the current mapset - Specific to our data, our nDSM has *null values* which were defined during the import of the raw data. Those *null values* (which correspond, in our case, mostly to missed pixels in the stereo-photogrammetry process) have to be set to zero elevation values (with the ['r.null' command](https://grass.osgeo.org/grass72/manuals/r.null.html)). Those missed pixels are mostly water surfaces (0 elevation on the nDSM) or hidden sides of buildings. - If you are working with nDSM data which doesn't have any null values, please skip the following cell.
###Code
## Copy nDSM raster layer from PERMANENT mapset to current mapset
grass.run_command('g.copy', overwrite=True, raster="nDSM@PERMANENT,nDSM")
## Define computational region to match the extent of the nDSM raster
grass.run_command('g.region', overwrite=True, raster="nDSM@CLASSIFICATION")
## Replace null values of nDSM with zero values
grass.run_command('r.null', map="nDSM@CLASSIFICATION", null="0")
###Output
_____no_output_____
###Markdown
Display list of raster available in the current mapset
###Code
## List of raster available in CLASSIFICATION mapset
print grass.list_strings("raster", mapset="CLASSIFICATION", flag='r')
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Creation of training/validation and test sets Here, you can choose between different procedures for building your training and test sets: 1. Method A: Pre-defined training and test sets. 2. Method B: Stratified random split of training and test sets. 3. Method C: Spatial split of training and test sets. **Please run only the cells corresponding to the procedure you choose!** The creation of training/test sets is the most labour-intensive part of the processing. Be careful, as the quality of the training and test sets is important to achieve satisfying results. Create a shapefile of points with an attribute column called **'Class_num'** (INT type) containing the class of the object (a small sketch to check this column after import is given below). Use numbers as class categories (1,10,2,20,25...) instead of text. **-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Please notice that we use here [the following definitions](https://en.wikipedia.org/wiki/Test_set) when speaking about "training set", "validation set" and "test set": - Training set: A set of examples used for learning, that is to fit the parameters [i.e., weights] of the classifier. - Validation set: A set of examples used to tune the hyperparameters [i.e., architecture, not weights] of a classifier, for example to choose the number of hidden units in a neural network. - Test set: A set of examples used only to assess the performance [generalization] of a fully-specified classifier. **-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Here, we build only a training set and an independent test set which will be used later for evaluating the model's performance. The validation set used for tuning the machine learning models' parameters will be automatically generated from the training set provided to the GRASS GIS classification add-on ['v.class.mlR'](https://grass.osgeo.org/grass70/manuals/addons/v.class.mlR.html), which implements cross-validation. **-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Method A: Pre-defined training and test sets This procedure is the simplest. Training and test sets should be created in distinct shapefiles and are imported separately.
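Once your point layer has been imported (for instance the 'samples' layer used in Methods B and C, or your own training/test layers), a small optional sketch to check that the required 'Class_num' column exists and has an integer type:

```python
## Optional sketch: check that the imported point layer has the required 'Class_num' column.
## 'samples' is the layer name used in Methods B and C; adapt it to your own layer name.
columns = grass.vector_columns('samples')
if 'Class_num' in columns:
    print "'Class_num' column found, type: "+columns['Class_num']['type']
else:
    print "WARNING: no 'Class_num' column in the 'samples' layer"
```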
###Code
## Import vector shapefile with the training set
grass.run_command('v.in.ogr', overwrite=True, input='F:\\.....\\Training_test\\training.shp', output='training_set')
## Import vector shapefile with the test set
grass.run_command('v.in.ogr', overwrite=True, input='F:\\.....\\Training_test\\test_set.shp', output='test_set')
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Method B: Stratified random split of training and test sets In this procedure the training set is the result of a (nonspatial) random selection from all available sample points, with stratification based on LULC classes. The user has to choose the ratio of available points to be used for the training set (e.g. 0.5 to split in two equal parts ; 0.75 to select 3/4 of available points for training). Then, the test set is defined as the opposite of the training set (the points still available after selection of training points). For this part, the following commands are used: ['g.remove'](https://grass.osgeo.org/grass72/manuals/g.remove.html), ['v.db.select'](https://grass.osgeo.org/grass72/manuals/v.db.select.html), ['v.extract'](https://grass.osgeo.org/grass72/manuals/v.extract.html), ['v.patch'](https://grass.osgeo.org/grass72/manuals/v.patch.html). **Import sample of points to be divided into training and test set**
###Code
## Set computational region to match the default region
grass.run_command('g.region', flags="d")
## Import sample data (points)
grass.run_command('v.in.ogr', overwrite=True, input='F:\\.....\\Training_test\\sample_point.shp', output='samples')
## Print
print "Point sample imported on "+time.ctime()
###Output
_____no_output_____
###Markdown
**Creation of training set as a random stratified selection from available samples** Here, you can modify the 'ratio' variable to change the percentage of available points which are going to be randomly selected for the training set.
###Code
## Saving current time for processing time management
print ("Start building training set on " + time.ctime())
begintime_trainingset=time.time()
## Set the ratio of available sample to select for training (between 0 and 1)
ratio=0.5
## Erase potential existing vector
grass.run_command('g.remove', flags="rf", type="vector", pattern="temp_sample_")
grass.run_command('g.remove', flags="rf", type="vector", pattern="training_")
grass.run_command('g.remove', flags="rf", type="vector", pattern="training_set")
## Loop through all class labels
for classnum in grass.parse_command('v.db.select', map='samples', columns='Class_num', flags='c'):
## Extract one vector layer per class
condition="Class_num='"+str(classnum)+"'"
tempvectorname="temp_sample_"+str(classnum)
grass.run_command('v.extract', overwrite=True, input="samples", type="point", where=condition, output=tempvectorname)
## Extract class-based training sample (one for each class layer)
nbravailable=grass.vector_info(tempvectorname).points
nbrextract=int(nbravailable*ratio)
outputname="training_"+classnum
grass.run_command('v.extract', overwrite=True, input=tempvectorname, output=outputname, type="point", random=nbrextract)
print str(nbrextract)+" training samples extracted from the "+str(nbravailable)+" available for class '"+str(classnum)+"'"
grass.run_command('g.remove', flags="f", type="vector", name=tempvectorname)
## Setting the list of vector to be patched
inputlayers=grass.list_strings("vector", pattern="training_", mapset="CLASSIFICATION", flag='r')[0]
count=1
for vect in grass.list_strings("vector", pattern="training_", mapset="CLASSIFICATION", flag='r')[1:]:
inputlayers+=","+vect
count+=1
## Patch the class-based training samples into one unique training set
grass.run_command('g.remove', flags="f", type="vector", name="training_set")
grass.run_command('v.patch', flags="ne", overwrite=True, input=inputlayers, output="training_set")
print str(count)+" vector layers patched in one unique training set"
## Erase individual class-based training sample
for vect in grass.list_strings("vector", pattern="training_", mapset="CLASSIFICATION", flag='r'):
grass.run_command('g.remove', flags="f", type="vector", name=vect)
## Save number of records in the training set
nbtraining=len(grass.parse_command('v.db.select', map='training_set', columns='Id', flags='c'))
## Print number of records in the training set and processing time
print(str(nbtraining)+" points in the training set")
print_processing_time(begintime_trainingset, "Training set built in ")
###Output
_____no_output_____
###Markdown
**Creation of test set as the opposite of training set**
###Code
## Saving current time for processing time management
print ("Start building test set on " + time.ctime())
begintime_testset=time.time()
## Erase existing vector
grass.run_command('g.remove', flags="rf", type="vector", pattern="test_set")
## Save the id of training point
list_id=[]
for point_id in grass.parse_command('v.db.select', map='training_set', columns='Id', flags='c'):
list_id.append(str(point_id))
## Build SQL statement for 'v.extract' command
condition="Id not in ("+str(list_id[0])
for point_id in list_id[1:]:
condition+=","+str(point_id)
condition+=")"
## From sample point, extract point not yet selected in training set
grass.run_command('g.remove', flags="f", type="vector", name="test_set")
grass.run_command('v.extract', overwrite=True, input="samples", type="point", where=condition, output="test_set")
## Save number of records in the test set
nbvalidation=len(grass.parse_command('v.db.select', map='test_set', columns='Id', flags='c'))
## Print number of records in the test set and processing time
print(str(nbvalidation)+" points in the test set")
print_processing_time(begintime_testset, "Test set built in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Method C: Spatial split of training and test sets In this procedure, the training and test sets are split so as not to have training points inside a specified image subset (called hereafter 'GEOBIA_subset'). This subset is determined by a polygon vector layer to be imported (shapefile). All training points will be outside the subset, while the available points inside the image subset will be used for the test set. This approach can help avoid spatial autocorrelation between training and test points. **Import polygon to be used as image subset**
###Code
## Import vector shapefile with the extent of the image subset
grass.run_command('v.in.ogr', overwrite=True, input='F:\\.....\\Training_test\\image_subset_polygon.shp', output='GEOBIA_subset')
###Output
_____no_output_____
###Markdown
**Import sample of points to be divided into training/validation and test set**
###Code
## Set computational region to match the default region
grass.run_command('g.region', flags="d")
## Import sample data (points)
grass.run_command('v.in.ogr', overwrite=True, input='F:\\.....\\Training_test\\sample_point.shp', output='samples')
## Print
print "Point sample imported on "+time.ctime()
###Output
_____no_output_____
###Markdown
**Creation of training set as all available samples outside the image subset**
###Code
## Saving current time for processing time management
print ("Start building training set on " + time.ctime())
begintime_trainingset=time.time()
## Erase existing vector
grass.run_command('g.remove', flags="rf", type="vector", pattern="training_set")
## Select sample points falling outside the image subset (reverse selection with flag "r")
grass.run_command('v.select', overwrite=True, ainput="samples", atype="point", binput="GEOBIA_subset", btype="area",
output="training_set", operator="overlap", flags="r")
## Save number of records in the training set
nbtraining=len(grass.parse_command('v.db.select', map='training_set', columns='Id', flags='c'))
## Print number of records in the training set and processing time
print(str(nbtraining)+" points in the training set")
print_processing_time(begintime_trainingset, "Training set built in ")
###Output
_____no_output_____
###Markdown
**Creation of test set as the opposite of training set**
###Code
## Saving current time for processing time management
print ("Start building test set on " + time.ctime())
begintime_testset=time.time()
## Erase existing vector
grass.run_command('g.remove', flags="rf", type="vector", pattern="test_set")
## Save the id of training point
list_id=[]
for point_id in grass.parse_command('v.db.select', map='training_set', columns='Id', flags='c'):
list_id.append(str(point_id))
## Build SQL statement for v.extract
condition="Id not in ("+str(list_id[0])
for point_id in list_id[1:]:
condition+=","+str(point_id)
condition+=")"
## Extract point not in training from all sample points
grass.run_command('v.extract', overwrite=True, input="samples", type="point", where=condition, output="test_set")
## Save number of records in the test set
nbvalidation=len(grass.parse_command('v.db.select', map='test_set', columns='Id', flags='c'))
## Print number of records in the test set and processing time
print(str(nbvalidation)+" points in the test set")
print_processing_time(begintime_testset, "Test set built in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Display by-class sample points distribution The following part is used to count up the number of points in training and test set.
###Code
## Create temporary .csv file with attribute table (all columns) of "training_set" vector layer
grass.run_command('v.db.select', overwrite=True, map="training_set@CLASSIFICATION",
file=os.path.join(tempfile.gettempdir(),"tempfile.csv"),separator="comma")
## Import .csv file into the Jupyter notebook (with pandas)
dataframe=pd.read_csv(os.path.join(tempfile.gettempdir(),"tempfile.csv"), sep=',',header=0)
print str(len(dataframe))+" points in training_set layer\n"
## Delete temporary .csv file
os.remove(os.path.join(tempfile.gettempdir(),"tempfile.csv"))
## Display the number of points per class in sample
print "Number of points per class in training_set"
print dataframe.groupby("Class_num").size()
## Create temporary .csv file with attribute table (all columns) of "test_set" vector layer
grass.run_command('v.db.select', overwrite=True, map="test_set@CLASSIFICATION",
file=os.path.join(tempfile.gettempdir(),"tempfile.csv"),separator="comma")
## Import .csv file into the Jupyter notebook (with pandas)
dataframe=pd.read_csv(os.path.join(tempfile.gettempdir(),"tempfile.csv"), sep=',',header=0)
print str(len(dataframe))+" points in test_set layer\n"
## Delete temporary .csv file
os.remove(os.path.join(tempfile.gettempdir(),"tempfile.csv"))
## Display the number of points per class in sample
print "Number of points per class in test_set"
print dataframe.groupby("Class_num").size()
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Export of training and test sets in shapefile This part is optional. If you built the training and test sets from a simple set of points (Method B or C), you can run the following cells to export those sets as shapefiles into the desired folder.
###Code
## Export training set
grass.run_command('v.out.ogr', flags="sc", overwrite=True, input="training_set",
output="F:\\.....\\sample_shapefiles\\new_training_set.shp",
format="ESRI_Shapefile")
## Export test set
grass.run_command('v.out.ogr', flags="sc", overwrite=True, input="test_set",
output="F:\\.....\\sample_shapefiles\\new_test_set.shp",
format="ESRI_Shapefile")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Compute statistics of training objects Identify which segment corresponds to each training point In our processing chain, the training and test sets are made of points. However, objects are needed to train the supervised classification in an OBIA context. In this section, each point of the training set is used to identify the underlying object in the segmentation layer and to save its unique ID. We use the ['v.db.addcolumn'](https://grass.osgeo.org/grass72/manuals/v.db.addcolumn.html) and ['v.what.rast'](https://grass.osgeo.org/grass72/manuals/v.what.rast.html) commands for this purpose.
###Code
## Saving current time for processing time management
begintime_whatrast=time.time()
## Add a column "seg_id" in training_set layer
grass.run_command('v.db.addcolumn', map="training_set", columns="seg_id int")
## Set computational region to the default region
grass.run_command('g.region', flags="d")
## For each training point, add the value of the underlying segmentation raster pixel in column "seg_id"
grass.run_command('v.what.rast', map="training_set", raster="segments", column="seg_id")
## Compute processing time and print it
print_processing_time(begintime_whatrast, "Segment ID added to the attribute table of the 'training_set' vector layer in ")
###Output
_____no_output_____
###Markdown
Create dataframe with "seg_id" and "class" of segments in training set Here, we create a [Pandas' dataframe](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html) containing only two columns, the segment ID (named 'cat') and class. This dataframe will be used further for joint with the computed statistics of each segment. Please notice that the number of (distinct) segments to be used for training could be different of the number of points in initial training sample, as some points could refer to the same segment depending of the segmentation results.**In the 'columns' parameter, please set only the segment iD and the class to be used in the classification process (only two columns).**
###Code
## Create a temporary .csv file containing segment iD and class of each training point
grass.run_command('v.db.select', overwrite=True, map="training_set@CLASSIFICATION", columns="seg_id,Class_num",
file=os.path.join(tempfile.gettempdir(),"temp_train_segid_class.csv"),separator="comma")
## Import .csv file in a temporary Pandas' dataframe
temp=pd.read_csv(os.path.join(tempfile.gettempdir(),"temp_train_segid_class.csv"), sep=',',header=0)
## Erase the temporary .csv file
os.remove(os.path.join(tempfile.gettempdir(),"temp_train_segid_class.csv"))
## Rename column "seg_id" to "cat" for the join performed later
temp.rename(columns={'seg_id': 'cat'}, inplace=True)
## Keep only segments whose "cat" value appears once (segments referred to by several points are dropped)
seg_id_class=temp.drop_duplicates(subset='cat', keep=False)
## Print
print "Dataframe created with "+str(len(seg_id_class))+" distinct segments' ID for training set from the "+str(len(temp))+" point provided in the initial training sample"
## Display table
seg_id_class.head()
###Output
_____no_output_____
###Markdown
Create a new raster layer with segments to be used for training Here, we build a new raster layer containing only the segments to be used for training. It will be used afterwards to compute the statistics of the training objects. This raster is created by reclassifying the original segmentation layer (with segments for the whole area). For that, segments not included in the training set are replaced with *NULL* values. The ['r.reclass' command](https://grass.osgeo.org/grass72/manuals/r.reclass.html) is used for this purpose. Before reclassification, a 'reclass rule file' containing the instructions for reclassification is created (see the small example below). In GRASS GIS, a reclassified raster is only a specific rule assigned to another existing raster. When dealing with a very large dataset, displaying a reclassified raster can be very slow. If you want to ensure a faster display of a reclassified raster, you can write a new raster based on the reclassified one. Please note that writing a new raster will use more disk space. The last part of the following cell is dedicated to this purpose. It is optional. **Compute reclassification rules and build a raster of training segments**
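For illustration, the temporary rules file generated by the next cell looks like the following (the segment IDs shown here are hypothetical):

```
1042=1042
2318=2318
5870=5870
*=NULL
```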
###Code
## Saving current time for processing time management
print ("Bulding a raster map with training segments on " + time.ctime())
begintime_reclassify=time.time()
## Define reclass rule
rule=""
for seg_id in grass.parse_command('v.db.select', map='training_set', columns='seg_id', flags='c'): #note that parse_command provide a list of DISTINCT values
rule+=str(seg_id)
rule+="="
rule+=str(seg_id)
rule+="\n"
rule+="*"
rule+="="
rule+="NULL"
## Create a temporary 'reclass_rule.csv' file
outputcsv=os.path.join(tempfile.gettempdir(),"reclass_rules.csv") # Define the csv output file name
f = open(outputcsv, 'w')
f.write(rule)
f.close()
## Set computational region to the default region
grass.run_command('g.region', flags="d")
## Reclass the segments raster layer to keep only training segments, using the reclass_rules.csv file
grass.run_command('r.reclass', overwrite=True, input="segments", output="segments_training", rules=outputcsv)
## Erase the temporary 'reclass_rule.csv' file
os.remove(outputcsv)
## Create the same raster with r.mapcalc (to ensure fast display)
##### Comment the following lines if you want to save disk space instead of fast display
formula="segments_training_temp=segments_training"
grass.mapcalc(formula, overwrite=True)
## Rename the new raster with the name of the original one (which will be overwritten)
grass.run_command('g.rename', overwrite=True, raster="segments_training_temp,segments_training")
# Remove the existing GRASS colortable (for faster display in GRASS map display)
grass.run_command('r.colors', flags="r", map="segments_training", color="random")
## Compute processing time and print it
print_processing_time(begintime_reclassify, "Raster map with training segments built in ")
###Output
_____no_output_____
###Markdown
Compute statistics on training segments with i.segment.stats We use here the ['i.segment.stats' add-on](https://grass.osgeo.org/grass70/manuals/addons/i.segment.stats.html) to compute statistics for each object. As this add-on is not installed by default, the first cell installs it with the ['g.extension' command](https://grass.osgeo.org/grass72/manuals/g.extension.html). Another add-on, ['r.object.geometry'](https://grass.osgeo.org/grass70/manuals/addons/r.object.geometry.html), is also installed, as it is required by i.segment.stats for computing morphological statistics.
###Code
## Install i.segment.stats if not yet installed
if "i.segment.stats" not in grass.parse_command('g.extension', flags="a"):
    grass.run_command('g.extension', extension="i.segment.stats")
    print "i.segment.stats has been installed on your computer"
else: print "i.segment.stats is already installed on your computer"
## Install r.object.geometry if not yet installed
if "r.object.geometry" not in grass.parse_command('g.extension', flags="a"):
    grass.run_command('g.extension', extension="r.object.geometry")
    print "r.object.geometry has been installed on your computer"
else: print "r.object.geometry is already installed on your computer"
###Output
_____no_output_____
###Markdown
**Set list of rasters from which to compute statistics with i.segment.stats** Hereafter, a list of the raster layers on which statistics will be computed is saved. Please adapt this list according to the rasters you want to use for the object statistics.
###Code
## Display the name of rasters available in PERMANENT and CLASSIFICATION mapset
print grass.list_strings("raster", mapset="PERMANENT", flag='r')
print grass.list_strings("raster", mapset="CLASSIFICATION", flag='r')
## Define the list of raster layers for which statistics will be computed
inputstats="opt_blue@PERMANENT"
inputstats+=",opt_green@PERMANENT"
inputstats+=",opt_nir@PERMANENT"
inputstats+=",opt_red@PERMANENT"
inputstats+=",NDVI@PERMANENT"
inputstats+=",Brightness@PERMANENT"
inputstats+=",nDSM@CLASSIFICATION"
print inputstats
###Output
_____no_output_____
###Markdown
**Compute statistics of segments with i.segment.stats** In the following section, ['i.segment.stats' add-on](https://grass.osgeo.org/grass70/manuals/addons/i.segment.stats.html) is used to compute object statistics. Please refer to the official help if you want to modify the parameters. Other raster statistics and morphological features could be used according to your needs.
###Code
## Define computational region to match the extent of the segmentation raster
grass.run_command('g.region', overwrite=True, raster="segments@CLASSIFICATION")
## Saving current time for processing time management
print ("Start computing statistics for training segments, using i.segment.stats on " + time.ctime())
begintime_isegmentstats=time.time()
## Compute statistics of objects using i.segment.stats, with .csv output only (no vector map output)
grass.run_command('i.segment.stats', overwrite=True, map="segments_training@CLASSIFICATION",
rasters=inputstats,
raster_statistics="min,max,range,mean,stddev,sum,coeff_var,first_quart,median,third_quart,perc_90",
area_measures="area,perimeter,compact_circle",
csvfile="F:\\.....\\Classification\\i.segment.stats\\stats_training_sample.csv",
processes=str(user["nb_proc"]))
## Compute processing time and print it
print_processing_time(begintime_isegmentstats, "Segment statistics computed in ")
###Output
_____no_output_____
###Markdown
**Remove temporary raster layer**
###Code
## Remove "segment_training" raster layer
grass.run_command('g.remove', flags="f", type="raster", name="segments_training@CLASSIFICATION")
###Output
_____no_output_____
###Markdown
Check for unwanted values (Null/Inf values) in data The purpose of the following section is to check for the presence of unwanted values, like *null values* or *infinite values*, in the statistics previously computed. The CSV file with object statistics just created by i.segment.stats is imported into a Pandas dataframe.
###Code
## Import .csv file
temp_stat_train=pd.read_csv("F:\\.....\\Classification\\i.segment.stats\\stats_training_sample.csv", sep=',',header=0)
print "The .csv file with results of i.segment.stats for "+str(len(temp_stat_train))+" training segments imported in a new dataframe"
## Check and count for NaN values by column in the table
if temp_stat_train.isnull().any().any():
for colomn in list(temp_stat_train.columns.values):
if temp_stat_train[colomn].isnull().any():
print "Column '"+str(colomn)+"' have "+str(temp_stat_train[colomn].isnull().sum())+" NULL values"
else: print "No missing values in dataframe"
## Check and count for Inf values by column in the table
if np.isinf(temp_stat_train).any().any():
for colomn in list(temp_stat_train.columns.values):
if np.isinf(temp_stat_train[colomn]).any():
print "Column '"+str(colomn)+"' have "+str(np.isinf(temp_stat_train[colomn]).sum())+" Infinite values"
else: print "No infinite values in dataframe"
## Display table
temp_stat_train.head()
###Output
_____no_output_____
###Markdown
Replace Null values in data with zero values (PLEASE USE CAREFULLY) FOR EXPERIENCED USERS ONLY! The following section is dedicated to replacing Null values in the data with zero values (or other values, according to your needs). Use it only if you are sure that the missing values in your data can be replaced by another value! If you want to use the following cell, change its type to "code" instead of "Markdown" in the Jupyter notebook interface. Also, you will have to remove the first and the last line.

```python
## Fill NaN values with zero values
if temp_stat_train.isnull().any().any():
    nbnan=temp_stat_train.isnull().sum().sum()
    temp_stat_train.fillna(0, inplace=True)
    print str(nbnan)+" NaN values have been filled with zero values"
else: print "No missing values in dataframe"
## Check and count NaN values by column in the table
if temp_stat_train.isnull().any().any():
    for colomn in list(temp_stat_train.columns.values):
        if temp_stat_train[colomn].isnull().any():
            print "Column '"+str(colomn)+"' still has "+str(temp_stat_train[colomn].isnull().sum())+" NULL values"
else: print "No more missing values in dataframe"
## Display table
temp_stat_train.head()
```

Inf values in data If you have infinite values in your data, please find and solve the problem: the dataset cannot contain any Null or infinite values for the classification process.
###Code
## Check and count for Inf values by column in the table
if np.isinf(temp_stat_train).any().any():
for colomn in list(temp_stat_train.columns.values):
if np.isinf(temp_stat_train[colomn]).any():
print "Column '"+str(colomn)+"' still have "+str(np.isinf(temp_stat_train[colomn]).sum())+" Infinite values"
else: print "No more infinite values in dataframe"
###Output
_____no_output_____
###Markdown
Building final training set table Hereafter, the dataframe of training segments' classes and the dataframe of training segments' statistics are merged together and saved into a .csv file. This file will be used later by the machine learning classification add-on 'v.class.mlR'. [The merge function of Pandas](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.merge.html) is used to perform the join between the dataframes.
###Code
## Join between tables (pandas dataframe) on column 'cat'
training_sample=pd.merge(seg_id_class, temp_stat_train, on='cat')
## Check if there are NaN values in the table and print basic information
if training_sample.isnull().any().any():
print "WARNING: Some values are missing in the dataset"
else:
# Write dataframe in a .csv file
training_sample.to_csv(path_or_buf="F:\\.....\\Classification\\i.segment.stat\\stats_training_set.csv",
sep=',', header=True, quoting=None, decimal='.', index=False)
print "A new csv table called 'stats_training_set', to be used for training, have been created with "+str(len(training_sample))+" rows."
## Display table
training_sample.head()
###Output
_____no_output_____
###Markdown
Compute number of points per class in training sample The following cell could be used to see the distribution of training segments by LULC classes.
###Code
## Number of points per class in training sample
print "Number of segments per class in training sample\n"
print training_sample.groupby("Class_num").size()
###Output
_____no_output_____
###Markdown
Exclude some specific classes from training sample This section is optional and has to be used only if some specific classes should not be used in the classification process. If you want to use the following cells, change their type to "code" instead of "Markdown" in the Jupyter notebook interface. Also, you will have to remove the first and the last line.

```python
## Import
samples_attributes=pd.read_csv("F:\\MAUPP\\.....\\Classification\\i.segment.stat\\stats_training_set.csv", sep=',',header=0)
## Exclude the specific rows corresponding to some classes
samples_attributes.drop(samples_attributes[samples_attributes.Class_num==12].index, inplace=True)
## Write .csv file
samples_attributes.to_csv(path_or_buf="F:\\MAUPP\\.....\\Classification\\i.segment.stat\\stats_training_set.csv",
                          sep=',', header=True, quoting=None, decimal='.', index=False)
print "A new csv table called 'stats_training_set', with samples to be used for training, has been created with "+str(len(samples_attributes))+" rows."
## Display table
samples_attributes.head()
```

```python
## Number of points per class in training sample
print "Number of segments per class in training sample\n"
print samples_attributes.groupby("Class_num").size()
```

**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Compute statistics for segments to be classified This section uses the ['i.segment.stats' add-on](https://grass.osgeo.org/grass70/manuals/addons/i.segment.stats.html) to compute statistics for each object to be classified. In the context of the GEOBIA conference, only an image subset (GEOBIA_subset) has been classified. You can adapt this part of the script according to your own needs. **Please be careful that the statistics you compute for the objects to be classified must be the same as those computed previously for the training set (a small check is sketched below)!** **Create raster of segments to be classified which are in the image subset**
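As a safeguard for the warning above, here is a minimal sketch (to be run once both statistics .csv files have been created) that compares the column names of the training statistics and of the segment statistics. The file paths reuse the elided placeholders used throughout this notebook; adapt them to your own folders.

```python
## Sketch: check that training and segment statistics share the same feature columns.
## Run it after both .csv files exist; adapt the (elided) paths to your own folders.
train_cols = set(pd.read_csv("F:\\.....\\Classification\\i.segment.stat\\stats_training_set.csv", nrows=1).columns)
segm_cols = set(pd.read_csv("F:\\.....\\Classification\\i.segment.stat\\stats_segments.csv", nrows=1).columns)
## The training table has one extra column holding the class label
missing = (train_cols - segm_cols) - set(["Class_num"])
extra = segm_cols - train_cols
if missing or extra:
    print "WARNING: the two tables do not share the same feature columns: "+str(missing | extra)
else:
    print "Training and segment statistics share the same feature columns"
```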
###Code
# Define computational region to match the extention of image Subset
grass.run_command('g.region', overwrite=True, vector="GEOBIA_subset@CLASSIFICATION", align="segments@CLASSIFICATION")
# Create a new raster layer with segments inside the current computational region, using r.map.calc
formula="GEOBIA_segments=segments@CLASSIFICATION"
grass.mapcalc(formula, overwrite=True)
###Output
_____no_output_____
###Markdown
**Set list of raster from which to compute statistics with i.segment.stats**
###Code
## Display the name of rasters available in PERMANENT and CLASSIFICATION mapset
print grass.read_command('g.list',type="raster", mapset="PERMANENT", flags='rp')
print grass.read_command('g.list',type="raster", mapset="CLASSIFICATION", flags='rp')
## Define the list of raster layers for which statistics will be computed
inputstats="opt_blue@PERMANENT"
inputstats+=",opt_green@PERMANENT"
inputstats+=",opt_nir@PERMANENT"
inputstats+=",opt_red@PERMANENT"
inputstats+=",NDVI@PERMANENT"
inputstats+=",Brightness@PERMANENT"
inputstats+=",nDSM@CLASSIFICATION"
print inputstats
###Output
_____no_output_____
###Markdown
**Compute statistics of segments to be classified (with i.segment.stats)**
###Code
## Define computational region to match the extent of the segmentation raster
grass.run_command('g.region', overwrite=True, raster="GEOBIA_segments@CLASSIFICATION")
## Saving current time for processing time management
print ("Start computing statistics for segments to be classified, using i.segment.stats on " + time.ctime())
begintime_isegmentstats=time.time()
## Compute statistics of objects using i.segment.stats, with .csv output only (no vector map output).
grass.run_command('i.segment.stats', overwrite=True, map="GEOBIA_segments@CLASSIFICATION",
rasters=inputstats,
raster_statistics="min,max,range,mean,stddev,sum,coeff_var,first_quart,median,third_quart,perc_90",
area_measures="area,perimeter,compact_circle",
csvfile="F:\\.....\\Classification\\i.segment.stat\\stats_segments.csv",
processes=str(user["nb_proc"]))
## Compute processing time and print it
print_processing_time(begintime_isegmentstats, "Segment statistics computed in ")
###Output
_____no_output_____
###Markdown
Check for unwanted values (Null/NaN/Inf values) in data The purpose of the following section is to check for the presence of unwanted values, like *null values* or *infinite values*, in the statistics previously computed. The CSV file with object statistics just created by i.segment.stats is imported into a Pandas dataframe.
###Code
## Import .csv file
stats_segments=pd.read_csv("F:\\.....\\Classification\\i.segment.stat\\stats_segments.csv", sep=',',header=0)
print "The .csv file with results of i.segment.stats for the "+str(len(stats_segments))+" segments to be classified imported in a new dataframe"
## Check and count for NaN values by column in the table
if stats_segments.isnull().any().any():
for colomn in list(stats_segments.columns.values):
if stats_segments[colomn].isnull().any():
print "Column '"+str(colomn)+"' have "+str(stats_segments[colomn].isnull().sum())+" NULL values"
else: print "No missing values in dataframe"
## Check and count for Inf values by column in the table
if np.isinf(stats_segments).any().any():
for colomn in list(stats_segments.columns.values):
if np.isinf(stats_segments[colomn]).any():
print "Column '"+str(colomn)+"' have "+str(np.isinf(stats_segments[colomn]).sum())+" Infinite values"
else: print "No infinite values in dataframe"
## Display table
stats_segments.head()
###Output
_____no_output_____
###Markdown
Replace Null/NaN values in data with zero values (PLEASE USE CAREFULLY) FOR EXPERIENCED USERS ONLY! The following section is dedicated to replacing Null values in the data with zero values (or other values, according to your needs). Use it only if you are sure that the missing values in your data can be replaced by another value! If you want to use the following cell, change its type to "code" instead of "Markdown" in the Jupyter notebook interface. Also, you will have to remove the first and the last line.

```python
## Fill NaN values with zero values
if stats_segments.isnull().any().any():
    nbnan=stats_segments.isnull().sum().sum()
    stats_segments.fillna(0, inplace=True)
    print str(nbnan)+" NaN values have been filled with zero values"
else: print "No missing values in dataframe"
## Check and count NaN values by column in the table
if stats_segments.isnull().any().any():
    for colomn in list(stats_segments.columns.values):
        if stats_segments[colomn].isnull().any():
            print "Column '"+str(colomn)+"' still has "+str(stats_segments[colomn].isnull().sum())+" NULL values"
else: print "No more missing values in dataframe"
## Display table
stats_segments.head()
```

Inf values in data If you have infinite values in your data, please find and solve the problem: the dataset cannot contain any Null or infinite values for the classification process (if they can legitimately be treated as missing data, see the sketch below).
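If (and only if) you have checked that the infinite values in your data can legitimately be treated as missing data — this is an assumption about your dataset, not a general rule — a minimal sketch to convert them to NaN so they can then be handled like the NULL values above:

```python
## Sketch, only valid under the assumption that +/- infinity can be treated as missing data:
## convert infinite values to NaN so they can be handled like the NULL values above.
stats_segments.replace([np.inf, -np.inf], np.nan, inplace=True)
print str(stats_segments.isnull().sum().sum())+" values are now flagged as missing (NaN)"
```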
###Code
## Check and count for Inf values by column in the table
if np.isinf(stats_segments).any().any():
    for column in list(stats_segments.columns.values):
        if np.isinf(stats_segments[column]).any():
            print "Column '"+str(column)+"' still has "+str(np.isinf(stats_segments[column]).sum())+" Infinite values"
else: print "No infinite values in dataframe"
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Classification using multiple machine learning classifiers system Classification with v.class.mlR The ['v.class.mlR' add-on](https://grass.osgeo.org/grass70/manuals/addons/v.class.mlR.html) is used here in order to classify the segments using the training data. You can choose between several machine learning classifiers and several majority-voting systems. Please read the [add-on's help](https://grass.osgeo.org/grass70/manuals/addons/v.class.mlR.html) for more details.
###Code
## Install v.class.mlR if not yet installed
if "v.class.mlR" not in grass.parse_command('g.extension', flags="a"):
    grass.run_command('g.extension', extension="v.class.mlR")
    print "v.class.mlR has been installed on your computer"
else: print "v.class.mlR is already installed on your computer"
###Output
_____no_output_____
###Markdown
**Classification** Please notice that if the classification process fails, it could be due to one of the following:
- R software is not installed on your computer.
- The "R_LIBS_USER" environment variable defined in this notebook and in R is not the same. Please return to the beginning of this notebook and read the instructions.
- You didn't respect the syntax of the folder paths in the following cell (/, //), resulting in a failure when running the R script.
- The .csv files containing the statistics of the objects to be classified and of the training objects contain some unaccepted values like *Null* or *infinite*. The dataset cannot contain any 'holes'.

Please read the [official help](https://grass.osgeo.org/grass70/manuals/addons/v.class.mlR.html) to know which parameters to adapt according to your needs.
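Two of the points above can be checked directly from this notebook before launching the classification; this is only a minimal sketch using the Python standard library, and the value expected for "R_LIBS_USER" depends on your own R setup:
```python
## Minimal sketch: check that R is reachable and that R_LIBS_USER is defined in this environment
import os, subprocess
print os.environ.get('R_LIBS_USER', "R_LIBS_USER is not defined in this Python environment")
try:
    subprocess.call(['Rscript', '--version'])  # succeeds if R is installed and on the PATH
except OSError:
    print "Rscript was not found on the PATH: R may not be installed or not visible from this shell"
```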
###Code
## Saving current time for processing time management
print ("Start classification process, using v.class.mlR on " + time.ctime())
begintime_vclassmlr=time.time()
## Classification using v.class.mlR
grass.run_command('v.class.mlR', flags="fi", overwrite=True,
separator="comma",
segments_file="F:/...../Classification/i.segment.stat/stats_segments.csv",
training_file="F:/...../Classification/i.segment.stat/training_sample.csv",
raster_segments_map="GEOBIA_segments@CLASSIFICATION",
classified_map="indiv_classification",
train_class_column="Class_num",
output_class_column="vote",
output_prob_column="prob",
classifiers="svmRadial,rf,rpart,knn",
folds="5",
partitions="10",
tunelength="10",
weighting_modes="smv,swv,bwwv,qbwwv",
weighting_metric="accuracy",
classification_results="F://.....//Classification//all_results.csv",
accuracy_file="F://.....//Classification//accuracy.csv",
model_details="F://.....//Classification//classifier_runs.txt",
bw_plot_file="F://.....//Classification//box_whisker",
r_script_file="F://.....//Classification//Rscript_mlR.R",
processes="2")
## Compute processing time and print it
print_processing_time(begintime_vclassmlr, "Classification process achieved in ")
###Output
_____no_output_____
###Markdown
Import results of v.class.mlR **Import accuracy results of individual classifiers, resulting from cross-validation of tuning**
###Code
## Import .csv file
accuracy=pd.read_csv("F:\\.....\\Classification\\accuracy.csv", sep=',',header=0)
## Display table
accuracy.head(15)
###Output
_____no_output_____
###Markdown
**Import classifiers tuning parameters and individual classifier confusion matrix**
###Code
## Open file
classifier_runs = open('F:\\.....\\Classification\\classifier_runs.txt', 'r')
## Read file
print classifier_runs.read()
###Output
_____no_output_____
###Markdown
Copy classified raster as 'real raster' in the current mapset As the classified maps from v.class.mlR are reclassified versions of the original segmented map, their display in GRASS GIS can be quite slow. If you want, you can copy these classified maps as 'real rasters' in the current mapset. Please notice that this will use more disk space!
###Code
## Display the list of raster available in the current mapset
print grass.read_command('g.list', type="raster", mapset="CLASSIFICATION")
###Output
_____no_output_____
###Markdown
When copying the classified raster, we change the color table at the same time, using [r.colors](https://grass.osgeo.org/grass72/manuals/r.colors.html). You can adapt the R:G:B values of the color table according to the color you want for each class.
###Code
## Make a copy of the classified maps for faster display in GRASS GIS
## Saving current time for processing time management
print ("Making a copy of classified maps in current mapset on " + time.ctime())
begintime_copyraster=time.time()
for classif in grass.list_strings("rast", pattern="indiv_classification_", flag='r'):
## Create the same raster with r.mapcalc
formula=str(classif[:-15])+"_temp="+str(classif[:-15])
grass.mapcalc(formula, overwrite=True)
    ## Rename the new raster with the name of the original one (will be overwritten)
renameformula=str(classif[:-15])+"_temp,"+str(classif[:-15])
grass.run_command('g.rename', overwrite=True, raster=renameformula)
## Define color table. Replace with the RGB values of wanted colors of each class
color_table="11 227:26:28"+"\n"
color_table+="12 255:141:1"+"\n"
color_table+="13 94:221:227"+"\n"
color_table+="14 102:102:102"+"\n"
color_table+="21 246:194:142"+"\n"
color_table+="22 211:217:173"+"\n"
color_table+="31 0:128:0"+"\n"
color_table+="32 189:255:185"+"\n"
color_table+="33 88:190:141"+"\n"
color_table+="34 29:220:0"+"\n"
color_table+="41 30:30:192"+"\n"
color_table+="51 0:0:0"+"\n"
## Create a temporary 'color_table.txt' file
outputcsv="F:\\.....\\Classification\\Results_maps\\temp_color_table.txt" # Define the csv output file name
f = open(outputcsv, 'w')
f.write(color_table)
f.close()
    ## Apply the new color table to the classified raster (for faster display in the GRASS map display)
grass.run_command('r.colors', map=classif, rules=outputcsv)
## Erase the temporary 'color_table.txt' file
os.remove("F:\\.....\\Classification\\Results_maps\\temp_color_table.txt")
## Compute processing time and print it
print_processing_time(begintime_copyraster, "Classified raster maps have been copied in current mapset in ")
###Output
_____no_output_____
###Markdown
Export of classification raster
###Code
## Saving current time for processing time management
print ("Export classified raster maps on " + time.ctime())
begintime_exportraster=time.time()
for classif in grass.list_strings("rast", pattern="indiv_classification_", flag='r'):
outputname="F:\\.....\\Classification\\classified_raster\\"+str(classif[21:-15])+".tif"
grass.run_command('r.out.gdal', overwrite=True, input=classif, output=outputname, format='GTiff')
## Compute processing time and print it
print_processing_time(begintime_exportraster, "Classified raster maps exported in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** End of part 4
###Code
print("The script ends at "+ time.ctime())
print_processing_time(begintime_classif_full, "Entire process has been achieved in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** *-*-*-*-*-*-*-*-*-*-*-* *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*- *-*-*-*-*-*-*-*-*-*-*-* **-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** 5 - Performance evaluation **Launch GRASS GIS working session**
###Code
## Set the name of the mapset in which to work
mapsetname=user["classification_mapsetname"]
## Launch GRASS GIS working session in the mapset
if os.path.exists(os.path.join(user["gisdb"],user["location"],mapsetname)):
gsetup.init(os.environ['GISBASE'], user["gisdb"], user["location"], mapsetname)
print "You are now working in mapset '"+mapsetname+"'"
else:
print "'"+mapsetname+"' mapset doesn't exists in "+user["gisdb"]
## Saving current time for processing time management
begintime_perform=time.time()
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Import validation sample The following section is dedicated to importing the test set points. Please adapt the path to your own data.
###Code
## Set computational region
grass.run_command('g.region', overwrite=True, raster="segments")
## Import points sample
grass.run_command('v.in.ogr', overwrite=True,
input='F:\\.....\\Training_test\\test_set.shp', output='test_set')
## Print
print "Validation sample imported on "+time.ctime()
###Output
_____no_output_____
###Markdown
You can run the next cell if you want to see the attribute table of the "test_set" vector layer.
###Code
## Create temporary .csv file with columns of "test_set" vector layer
grass.run_command('v.db.select', overwrite=True, map="test_set@CLASSIFICATION",
file="F:\\.....\\Training_validation\\test_set.csv",separator="comma")
## Import .csv file into Jupyter notebook (with panda)
validation_samples_attributes=pd.read_csv("F:\\.....\\Training_validation\\test_set.csv", sep=',',header=0)
print str(len(validation_samples_attributes))+" points in sample layer imported"
## Delete temporary .csv file
os.remove("F:\\.....\\Training_validation\\test_set.csv")
## Display table
validation_samples_attributes.head()
## Number of points per class in validation sample
print "Number of points per class in validation sample\n"
print validation_samples_attributes.groupby("Class_num").size()
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Build dataframe with predicted class for each validation point Add prediction of each classifier for validation points Here, the predictions of each classifier are saved in the attribute table of the 'test_set' vector layer. The ['v.db.addcolumn' command](https://grass.osgeo.org/grass72/manuals/v.db.addcolumn.html) and the ['v.what.rast' command](https://grass.osgeo.org/grass72/manuals/v.what.rast.html) are used for this purpose.
###Code
## Saving current time for processing time management
begintime_whatrast=time.time()
## Initialize a empty list
allclassif=[]
## Loop through all individual classification results
for classif in grass.list_strings("rast", pattern="indiv_classification_", flag='r'):
nameclassif=str(classif[21:-15]) # Save the name of classifier
allclassif.append(nameclassif) # Add the name of classifier in the list
## Add a "int" column in test_set layer, for each classification result
grass.run_command('v.db.addcolumn', map="test_set", columns=nameclassif+" int")
    ## For each validation point, add the value of the underlying classifier raster pixel in the corresponding classifier column
grass.run_command('v.what.rast', map="test_set", raster="indiv_classification_"+nameclassif+"@CLASSIFICATION", column=nameclassif)
## Compute processing time and print it
print("Predicted classes for '"+', '.join(allclassif)+"' added in the 'test_set' layer")
print_processing_time(begintime_whatrast, "Prossess achieved in ")
###Output
_____no_output_____
###Markdown
In the next cell, the 'test_set' vector layer attribute table is exported to a .csv file. Please replace the "columnstoexport" variable according to your own data.
###Code
## Export 'test_set' vector layer attribute table in .csv file.
columnstoexport="Class_num"+","
columnstoexport+=', '.join(allclassif)
grass.run_command('v.db.select', overwrite=True, map="test_set@CLASSIFICATION", columns=columnstoexport,
file="F:\\.....\\Classification\\Validation\\predicted_gtruth.csv",separator="comma")
###Output
_____no_output_____
###Markdown
Exclude some specific classes from test set This section is optional and has to be used only if some specific classes should not be used in the performance evaluation process. Please notice that you need to have the same classes in your training set and your test set. If you want to use the following cell, change its type to "code" instead of "Markdown" in the Jupyter notebook interface. Also, you will have to remove the first and the last line.
```python
## Extract only the test set points which have to be used for performance evaluation.
## PLEASE REPLACE THE "WHERE" CONDITION ACCORDING TO YOUR OWN DATA
grass.run_command('v.extract', overwrite=True, input="test_set", where="Class_num is not 12", output="test_set_filter")
## Export 'test_set' vector layer attribute table.
## PLEASE REPLACE THE "COLUMNS" PARAMETER ACCORDING TO YOUR OWN DATA
columnstoexport="Class_num"+","
columnstoexport+=', '.join(allclassif)
grass.run_command('v.db.select', overwrite=True, map="test_set_filter", columns=columnstoexport,
                  file="F:\\.....\\Classification\\Validation\\predicted_gtruth.csv",separator="comma")
## Import data in dataframe
validation_samples_attributes=pd.read_csv("F:\\.....\\Classification\\Validation\\predicted_gtruth.csv", sep=',',header=0)
## Number of points per class in validation sample
print "Number of points per class in validation sample: "+str(len(validation_samples_attributes))+"\n"
print validation_samples_attributes.groupby("Class_num").size()
```
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** At the end of this section, the performance evaluation of the classifications is performed. As mentioned in our article, our legend scheme is designed in two hierarchical levels. The second level is the most detailed one (11 classes). The first-level results are derived from the second-level classification results. Create inputs for classification performance evaluation (Level-2)
###Code
## Define computational region to match the extent of the segmentation raster
grass.run_command('g.region', overwrite=True, raster="segments@CLASSIFICATION")
## Create raster layers with one pixel corresponding to each object. Pixel values represent either the ground truth or the prediction of a specific classifier
grass.run_command('v.to.rast', overwrite=True, input='test_set_filter', output='PE_L2_Class_num', use='attr', attribute_column='Class_num')
for result in allclassif:
outputname="PE_L2_"+str(result)
grass.run_command('v.to.rast', overwrite=True, input='test_set_filter', output=outputname, use='attr', attribute_column=result)
###Output
_____no_output_____
###Markdown
Create inputs for classification performance evaluation (Level-1)
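For reference, based on the rule-building code in the next cell and on the level-2 class codes used earlier for the color table (11-14, 21-22, 31-34, 41, 51), the temporary reclass rules file passed to r.reclass should look roughly like the sketch below; the exact lines depend on the classes actually present in your own test set:
```
11=1
12=1
13=1
14=1
21=2
22=2
31=3
32=3
33=3
34=3
41=4
51=5
*=NULL
```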
###Code
## Loop through all_raster used for PE at level2
for result in grass.list_strings('rast', pattern="PE_L2", flag="r"):
## Reclass the pixels of inputs of level2 to match the level1 classes
rule=""
for pixel_value in grass.parse_command('v.db.select', map='test_set_filter', columns=result[6:-15], flags='c'): #note that parse_command provide a list of distinct values
rule+=str(pixel_value)
rule+="="
rule+=str(pixel_value[:-1])
rule+="\n"
rule+="*"
rule+="="
rule+="NULL"
## Create a temporary 'reclass_rule.csv' file
outputcsv="F:\\.....\\Classification\\Validation\\reclass_rules.csv" # Define the csv output file name
f = open(outputcsv, 'w')
f.write(rule)
f.close()
#### Reclass level2 classes to match level1 classes
outputname="PE_L1_"+str(result[6:-15])
grass.run_command('r.reclass', overwrite=True, input=result, output=outputname, rules=outputcsv)
## Erase the temporary 'reclass_rule.csv' file
os.remove("F:\\.....\\Classification\\Validation\\reclass_rules.csv")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Classification performance evaluation Performance evaluation is conducted here with the ['r.kappa' module](https://grass.osgeo.org/grass73/manuals/r.kappa.html). **Level 2**
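As an optional complement to r.kappa, a confusion matrix can also be computed directly from the 'predicted_gtruth.csv' file exported earlier; this is a minimal sketch with pandas, and the column name 'rf' is only an example (use any of the classifier or voting columns you exported):
```python
## Minimal sketch: confusion matrix from the exported predictions (column names are examples)
import pandas as pd
pred = pd.read_csv("F:\\.....\\Classification\\Validation\\predicted_gtruth.csv", sep=',', header=0)
print pd.crosstab(pred["Class_num"], pred["rf"], rownames=["ground truth"], colnames=["predicted"])
```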
###Code
## Saving current time for processing time management
begintime_kappa_L2=time.time()
## Classification performance evaluation using r.kappa (compute per-class kappa)
for result in grass.list_strings('rast', pattern="PE_L2", flag="r", exclude="PE_L2_Class_num"):
outputfile="F:\\.....\\Classification\\Validation\\rkappa_"+str(result[3:-15])+".txt"
grass.run_command('r.kappa', flags="w", overwrite=True, classification=result, reference="PE_L2_Class_num", output=outputfile)
## Compute processing time and print it
print_processing_time(begintime_kappa_L2, "Performance evaluation for Level 2 achieved in :")
###Output
_____no_output_____
###Markdown
**Level 1**
###Code
## Saving current time for processing time management
begintime_kappa_L1=time.time()
## Classification performance evaluation using r.kappa (compute per-class kappa)
for result in grass.list_strings('rast', pattern="PE_L1", flag="r", exclude="PE_L1_Class_num"):
outputfile="F:\\.....\\Classification\\Validation\\rkappa_"+str(result[3:-15])+".txt"
grass.run_command('r.kappa', flags="w", overwrite=True, classification=result, reference="PE_L1_Class_num", output=outputfile)
## Compute processing time and print it
print_processing_time(begintime_kappa_L1, "Performance evaluation for Level 1 achieved in :")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** Clean mapset
###Code
## Erase temporary files that are no longer needed
grass.run_command('g.remove', flags="rf", type="raster", pattern="PE_")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** End of part 5
###Code
print("The script ends at "+ time.ctime())
print_processing_time(begintime_perform, "Entire process has been achieved in ")
###Output
_____no_output_____
###Markdown
**-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-_-** End of this Jupyter notebook
###Code
print("The script ends at "+ time.ctime())
print_processing_time(begintime_full,"Entire process has been achieved in ")
###Output
_____no_output_____ |
stochastic-optimal-control-model.ipynb | ###Markdown
Real Time Bidding with Stochastic Optimal ControlIn this notebook we implement part of the paper [Optimal Real-Time Bidding Strategies](https://arxiv.org/abs/1511.08409) by Fernandez-Tapia et al. 2016. The setting is that a Demand Side Platform (DSP) is running a campaign with a given time horizon and a given budget. The goal is to maximize the number of impressions obtained during the allotted time while staying on budget. The ModelOur agent participates in a second-price auction environment. Bid requests arrive at the exchange at random times; their arrival count can be modeled as a Poisson process of constant rate $\lambda$. The other participants then post their bids for the advertisement space; we don't model each of them individually, rather we suppose we know the distribution of their bids. As we win bids, our remaining cash decreases and our inventory increases. The model can be summarized as follows, with some reference values for the different parameters:
* Bid request arrival process:
  * $N_t \sim Poisson(\lambda t)$
  * $\lambda = 10^3$ bid requests/second
* Best bid among other participants:
  * $p_{N_t} \sim Exp(\mu)$
  * $\mu = 2 \cdot 10^3$ (corresponding to a mean bid of 0.0005 euro)
* Remaining cash process:
  * $S_0 = \bar{S} = 500$ euro (initial budget)
  * $dS_t = -p_{N_t}\mathbb{1}_{(b_t>p_{N_t})}d{N_t}$, where $b_t$ is our bid
* Inventory process:
  * $I_0 = 0$ (initial inventory)
  * $dI_t = \mathbb{1}_{(b_t>p_{N_t})}d{N_t}$
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
We start with a pretty straightforward implementation of the simulation environment:
* We'll make our first agent naive, always posting the mean bid.
* We'll take pretty small steps and treat the Poisson process as the limit of a Bernoulli process: if bid requests arrive with a frequency of $\lambda$ requests per second, then the probability of observing a bid request arriving in $dt$ seconds (for $dt$ small) is $\lambda dt$.
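As a quick sanity check on what to expect from this naive agent (my own back-of-the-envelope calculation, not taken from the paper): bidding the mean $1/\mu$ against an $Exp(\mu)$ best bid wins each auction with probability $P(p_{N_t} < 1/\mu) = 1 - e^{-1} \approx 0.63$, and the expected payment per incoming request is $$\mathbb{E}\left[p_{N_t}\mathbb{1}_{(p_{N_t}<1/\mu)}\right] = \frac{1}{\mu}\left(1 - 2e^{-1}\right) \approx 1.3\cdot 10^{-4} \ \text{euro},$$ so over a horizon $T$ the naive agent should win roughly $0.63\,\lambda T$ impressions and spend about $0.26\,\lambda T/\mu$ euro, far below the 500 euro budget for the horizons simulated below.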
###Code
def run_sim(T, dt, lam, mu, budget):
mean_bid = 1./mu
cash_process = [budget]
inventory_process = [0]
t = 0
times = [t]
while t < T:
curr_cash = cash_process[-1]
curr_inventory = inventory_process[-1]
arrival = np.random.binomial(n=1, p=lam*dt)
if arrival:
my_bid = mean_bid #or sthg else
best_bid = np.random.exponential(scale=mean_bid)
if my_bid > best_bid:
curr_cash = cash_process[-1] - best_bid
curr_inventory = inventory_process[-1] + 1
cash_process.append(curr_cash)
inventory_process.append(curr_inventory)
t+=dt
times.append(t)
return {"times":np.array(times), "cash_process": np.array(cash_process), "inventory_process": np.array(inventory_process)}
T = 1 #final time in seconds
dt = 1e-6
lam = 1e3
mu = 2*1e3
budget = 500
%timeit res = run_sim(T, dt, lam, mu, budget)
###Output
3.37 s ± 9.8 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
That's a bit too slow for experimenting. Typically I'd vectorize it but I've been wanting to use just in time compilation more. So let's use numba to speed up that code.
###Code
from numba import jit
@jit
def run_sim_jit(T, dt, lam, mu, budget):
mean_bid = 1./mu
N = int(T/dt)
times = np.linspace(0,T, N)
cash_process = np.zeros(N)
cash_process[0] = budget
inventory_process = np.zeros(N)
for i in range(N-1):
curr_cash = cash_process[i]
curr_inventory = inventory_process[i]
arrival = np.random.binomial(n=1, p=lam*dt)
if arrival:
my_bid = mean_bid #or sthg else
best_bid = np.random.exponential(scale=mean_bid)
if my_bid > best_bid:
curr_cash = cash_process[i] - best_bid
curr_inventory = inventory_process[i] + 1
cash_process[i+1] = curr_cash
inventory_process[i+1] = curr_inventory
return times, cash_process, inventory_process
#before adding jit decorator: 3.37 s
%timeit res = run_sim_jit(T, dt, lam, mu, budget)
#after adding jit decorator: 38 ms
%timeit res = run_sim_jit(T, dt, lam, mu, budget)
###Output
37.8 ms ± 283 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
Great, got it down from 3.4 seconds to 38 milliseconds. Now we can afford running it longer.
###Code
T = 100 #final time in seconds
dt = 1e-4
lam = 1e3
mu = 2*1e3
budget = 500
times, cash, inventory = run_sim_jit(T, dt, lam, mu, budget)
plt.plot(times, cash)
plt.xlabel('time(s)')
plt.ylabel('remaining cash (euro)')
plt.plot(times, inventory)
plt.xlabel('time(s)')
plt.ylabel('Acquired inventory')
#zooming in:
plt.plot(times[:1000], inventory[:1000])
plt.xlabel('time(s)')
plt.ylabel('Acquired inventory')
###Output
_____no_output_____
###Markdown
Now let's use the bidding strategy from the paper, outlined in its section 2.3.2. There the optimal bid is expressed through a function $H'$ built from the best-bid density $f$, which in our case is the density of $Exp(\mu)$; see the paper for the exact expressions. I could compute $H'$ analytically but not its inverse, so I implemented the inverse numerically via root finding at the end of the notebook.
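For reference, the piecewise expression that the numerical implementation at the end of this notebook corresponds to is the following (reconstructed from that code, so treat it as documentation of the implementation rather than a quote of the paper):$$H'(x) = \begin{cases} \dfrac{\lambda}{\mu}, & x \geq 0 \\ \lambda\left(e^{\mu/x}\left(\dfrac{1}{x} - \dfrac{1}{\mu}\right) + \dfrac{1}{\mu}\right), & x < 0 \end{cases}$$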
###Code
@jit
def run_sim_tapia(T, dt, lam, mu, budget):
mean_bid = 1./mu
N = int(T/dt)
times = np.linspace(0,T, N)
cash_process = np.zeros(N)
cash_process[0] = budget
inventory_process = np.zeros(N)
num_auctions = 0
my_bids, best_bids = np.zeros(N), np.zeros(N)
for i in range(N-1):
curr_cash = cash_process[i]
curr_inventory = inventory_process[i]
arrival = np.random.binomial(n=1, p=lam*dt)
if arrival:
num_auctions+=1
#section 2.3.2 from the paper
expected_num_auctions = lam*(T-times[i])
if curr_cash > expected_num_auctions * mean_bid:
my_bid = budget # it's +infinity in the paper but it wouldnt make sense to bid more than we have as a budget in practice
else:
# we don't have an analytical expression for the inverse of hprime,
# but a numerical implementation is at the end of the notebook
hpinv = hprime_inverse(curr_cash/(T-times[i]))
my_bid = -1./hpinv
best_bid = np.random.exponential(scale=mean_bid)
if my_bid > best_bid:
curr_cash = cash_process[i] - best_bid
curr_inventory = inventory_process[i] + 1
my_bids[i] = my_bid
best_bids[i] = best_bid
cash_process[i+1] = curr_cash
inventory_process[i+1] = curr_inventory
return times, cash_process, inventory_process, num_auctions, my_bids, best_bids
T = 100 #final time in seconds
dt = 1e-4
lam = 1e3
mu = 2*1e3
budget = 15
times, cash, inventory, num_actions, my_bids, best_bids = run_sim_tapia(T, dt, lam, mu, budget)
plt.plot(times, cash)
plt.xlabel('time(s)')
plt.ylabel('remaining cash (euro)')
###Output
/home/alejandro/.conda/envs/pymc/lib/python3.7/site-packages/ipykernel_launcher.py:5: RuntimeWarning: invalid value encountered in sqrt
"""
###Markdown
Look at that! It takes the budget exactly down to zero in the allotted time. A naive agent would either stay way below and lose impressions or run out of budget too early and also lose impressions. The optimal control agent does it perfectly. Of course, here we suppose we know the environment perfectly well, which in practice won't really happen. Furthermore, just winning as many impressions as possible is not the ultimate goal of DSPs. In the rest of the notebook we inspect the bidding behavior of this strategy in more detail. We can see in particular how the bidding radically changes and spending increases at the end of the horizon.
###Code
num_actions
inventory[-1]
inventory[-1]/num_actions
sec = 0.1
plt.plot(times[times<sec], inventory[times<sec])
fig = plt.figure(figsize=(16,9))
plt.scatter(times[best_bids>0], best_bids[best_bids>0], label='other_bids')
plt.scatter(times[best_bids>0], my_bids[best_bids>0], label='my_bids')
plt.legend()
num = 200
fig = plt.figure(figsize=(16,9))
plt.scatter(times[best_bids>0][-num:], best_bids[best_bids>0][-num:], label='other_bids')
plt.scatter(times[best_bids>0][-num:], my_bids[best_bids>0][-num:], label='my_bids')
plt.legend()
num = 90000
plt.plot(budget - cash[best_bids>0][:num], my_bids[best_bids>0][:num])
plt.title('My Bids vs Cash spent')
plt.xlabel('Cash spent')
plt.ylabel('My Bid price')
#plt.hlines(y=1/mu, xmin=cash[best_bids>0][:num].min(), xmax = cash[best_bids>0][:num].max(), label='average bid price')
plt.legend()
num = 1000
plt.plot(budget - cash[best_bids>0][-num:], my_bids[best_bids>0][-num:])
plt.title('My Bids vs Cash spent')
plt.xlabel('Cash spent')
plt.ylabel('My Bid price')
#plt.hlines(y=1/mu, xmin=cash[best_bids>0][:num].min(), xmax = cash[best_bids>0][:num].max(), label='average bid price')
plt.legend()
cash[-1]
xx = np.linspace(-100,100)
def hprime(x):
res = np.zeros_like(x)
res[x>=0] = lam/mu
xneg = x[x<0]
res[x<0] = lam * (np.exp(mu/xneg)*(1/xneg - 1/mu) + 1/mu)
return res
hp = hprime(xx)
plt.plot(xx, hp)
plt.title('H prime')
-1/mu
from scipy.optimize import fsolve
def residual(x,y):
return hprime(x)-y
def hprime_inverse(y):
res, dic, ier, msg = fsolve(residual, x0=np.sqrt(lam/y), args=(y), full_output=True)
#print(res)
#print(dic)
#print(ier, msg)
return res[0]
y = 1
hprime_inverse(y)
ys = np.linspace(0.01, lam/mu, 1000)
inverses = []
for y in ys:
inverses.append(hprime_inverse(y))
plt.plot(ys, inverses)
lam/mu
###Output
_____no_output_____ |
examples/encoding/PRatioEncoder.ipynb | ###Markdown
PRatioEncoderThe PRatioEncoder() replaces categories by the ratio of the probability of the target = 1 and the probability of the target = 0. The target probability ratio is given by: p(1) / p(0). The log of the target probability ratio is: np.log( p(1) / p(0) ). It only works for binary classification.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from feature_engine.encoding import PRatioEncoder
from feature_engine.encoding import RareLabelEncoder #to reduce cardinality
# Load titanic dataset from OpenML
def load_titanic():
data = pd.read_csv('https://www.openml.org/data/get_csv/16826755/phpMYEkMl')
data = data.replace('?', np.nan)
data['cabin'] = data['cabin'].astype(str).str[0]
data['pclass'] = data['pclass'].astype('O')
data['age'] = data['age'].astype('float')
data['fare'] = data['fare'].astype('float')
data['embarked'].fillna('C', inplace=True)
data.drop(labels=['boat', 'body', 'home.dest'], axis=1, inplace=True)
return data
data = load_titanic()
data.head()
X = data.drop(['survived', 'name', 'ticket'], axis=1)
y = data.survived
# we will encode the below variables, they have no missing values
X[['cabin', 'pclass', 'embarked']].isnull().sum()
''' Make sure that the variables are of type object.
If not, cast them as object; otherwise the transformer will either raise an error (if we pass them as argument)
or not pick them up (if we leave variables=None). '''
X[['cabin', 'pclass', 'embarked']].dtypes
# let's separate into training and testing set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
X_train.shape, X_test.shape
## Rare value encoder first to reduce the cardinality
# see RareLabelEncoder jupyter notebook for more details on this encoder
rare_encoder = RareLabelEncoder(tol=0.03,
n_categories=2,
variables=['cabin', 'pclass', 'embarked'])
rare_encoder.fit(X_train)
# transform
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_test)
###Output
_____no_output_____
###Markdown
The PRatioEncoder() replaces categories by the ratio of the probability of the target = 1 and the probability of the target = 0. The target probability ratio is given by: p(1) / p(0). The log of the target probability ratio is: np.log( p(1) / p(0) ). Note: This categorical encoding is exclusive for binary classification. For example, in the variable colour, if the mean of the target = 1 for blue is 0.8 and the mean of the target = 0 is 0.2, blue will be replaced by: 0.8 / 0.2 = 4 if ratio is selected, or log(0.8/0.2) = 1.386 if log_ratio is selected. Note: the division by 0 is not defined and the log(0) is not defined. Thus, if p(0) = 0 for the ratio encoder, or either p(0) = 0 or p(1) = 0 for log_ratio, in any of the variables, the encoder will return an error. The encoder will encode only categorical variables (type 'object'). A list of variables can be passed as an argument. If no variables are passed as argument, the encoder will find and encode all categorical variables (object type). Ratio
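To make the mapping concrete, here is a minimal hand-rolled sketch (plain pandas, not feature-engine internals) of what the 'ratio' encoding computes for a single variable; the result can be compared with the `encoder_dict_` fitted below:
```python
# Minimal sketch of the 'ratio' computation for one variable (not feature-engine internals)
p1 = y_train.groupby(train_t['pclass']).mean()  # p(target = 1) per category
p0 = 1 - p1                                     # p(target = 0) per category
print(p1 / p0)                                  # probability ratio; np.log(p1 / p0) gives 'log_ratio'
```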
###Code
'''
Parameters
----------
encoding_method : str, default=woe
Desired method of encoding.
'ratio' : probability ratio
'log_ratio' : log probability ratio
variables : list, default=None
The list of categorical variables that will be encoded. If None, the
encoder will find and select all object type variables.
'''
Ratio_enc = PRatioEncoder(encoding_method='ratio',
variables=['cabin', 'pclass', 'embarked'])
# to fit you need to pass the target y
Ratio_enc.fit(train_t, y_train)
Ratio_enc.encoder_dict_
# transform and visualise the data
train_t = Ratio_enc.transform(train_t)
test_t = Ratio_enc.transform(test_t)
test_t.sample(5)
###Output
_____no_output_____
###Markdown
log ratio
###Code
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_test)
logRatio_enc = PRatioEncoder(encoding_method='log_ratio',
variables=['cabin', 'pclass', 'embarked'])
# to fit you need to pass the target y
logRatio_enc.fit(train_t, y_train)
logRatio_enc.encoder_dict_
# transform and visualise the data
train_t = logRatio_enc.transform(train_t)
test_t = logRatio_enc.transform(test_t)
test_t.sample(5)
''' The PRatioEncoder(encoding_method='ratio' or 'log_ratio') has the characteristic that it returns monotonic
variables, that is, encoded variables whose values increase as the target increases'''
# let's explore the monotonic relationship
plt.figure(figsize=(7,5))
pd.concat([test_t,y_test], axis=1).groupby("pclass")["survived"].mean().plot()
#plt.xticks([0,1,2])
plt.yticks(np.arange(0,1.1,0.1))
plt.title("Relationship between pclass and target")
plt.xlabel("Pclass")
plt.ylabel("Mean of target")
plt.show()
###Output
_____no_output_____
###Markdown
Automatically select the variablesThis encoder will select all categorical variables to encode when no variables are specified in the call to the encoder.
###Code
train_t = rare_encoder.transform(X_train)
test_t = rare_encoder.transform(X_test)
logRatio_enc = PRatioEncoder(encoding_method='log_ratio')
# to fit you need to pass the target y
logRatio_enc.fit(train_t, y_train)
# transform and visualise the data
train_t = logRatio_enc.transform(train_t)
test_t = logRatio_enc.transform(test_t)
test_t.sample(5)
###Output
_____no_output_____ |
LSSVM.ipynb | ###Markdown
Least-Squares Support Vector Machine Summary:1. [Introduction](introduction)2. [LSSVM CPU implementation](lssvm_cpu) 3. [LSSVM GPU implementation](lssvm_gpu)4. [Discussing performance](discussing_performance) 1. Introduction The Least-Squares Support Vector Machine (LSSVM) is a variation of the original Support Vector Machine (SVM) in which a slight change in the objective and constraint functions results in a big simplification of the optimization problem. First, let's see the optimization problem of an SVM:$$ \begin{align} minimize && f_o(\vec{w},\vec{\xi})=\frac{1}{2} \vec{w}^T\vec{w} + C \sum_{i=1}^{n} \xi_i &&\\ s.t. && d_i(\vec{w}^T\vec{x}_i+b)\geq 1 - \xi_i, && i = 1,..., n \\ && \xi_i \geq 0, && i = 1,..., n\end{align}$$In this case, we have a set of inequality constraints, and when solving the optimization problem through its dual we find a discriminative function, adding the kernel trick, of the type:$$ f(\vec{x}) = sign \ \Big( \sum_{i=1}^{n} \alpha_i^o d_i K(\vec{x}_i,\vec{x}) + b_o \Big) $$Where $\alpha_i^o$ and $b_o$ denote optimum values. Given enough regularization (smaller values of $C$), many of the $\alpha_i^o$ are null, resulting in a sparse model in which we only need to save the pairs $(\vec{x}_i,d_i)$ whose optimum dual variables are not null. The vectors $\vec{x}_i$ with non-null $\alpha_i^o$ are known as support vectors (SV).In the LSSVM case, we change the inequality constraints to equality constraints. As the $\xi_i$ may be negative, we square their values in the objective function:$$ \begin{align} minimize && f_o(\vec{w},\vec{\xi})=\frac{1}{2} \vec{w}^T\vec{w} + \gamma \frac{1}{2}\sum_{i=1}^{n} \xi_i^2 &&\\ s.t. && d_i(\vec{w}^T\vec{x}_i+b) = 1 - \xi_i, && i = 1,..., n\end{align}$$The dual of this optimization problem results in a system of linear equations, a set of Karush-Kuhn-Tucker (KKT) equations:$$\begin{bmatrix} 0 & \vec{d}^T \\ \vec{d} & \Omega + \gamma^{-1} I \end{bmatrix}\begin{bmatrix} b \\ \vec{\alpha}\end{bmatrix}=\begin{bmatrix} 0 \\ \vec{1}\end{bmatrix}$$Where, with the kernel trick, $\Omega_{i,j} = d_i d_j K(\vec{x}_i,\vec{x}_j)$, $\vec{d} = [d_1 \ d_2 \ ... \ d_n]^T$, $\vec{\alpha} = [\alpha_1 \ \alpha_2 \ ... \ \alpha_n]^T$ and $\vec{1} = [1 \ 1 \ ... \ 1]^T$.The discriminative function of the LSSVM has the same form as the SVM's, but the $\alpha_i^o$ aren't usually null, resulting in a bigger model. The big advantage of the LSSVM is in finding its parameters, which reduces to solving a linear system of the type:$$ A\vec{x} = \vec{b} $$A well-known solution of the linear system is obtained by minimizing the square of the residuals, which can be written as the optimization problem:$$\begin{align} minimize && f_o(\vec{x})=\frac{1}{2}||A\vec{x} - \vec{b}||^2\\\end{align}$$And has the analytical solution:$$ \vec{x} = A^{\dagger} \vec{b} $$Where $A^{\dagger}$ is the pseudo-inverse defined as:$$ A^{\dagger} = (A^T A)^{-1} A^T$$
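Since the KKT matrix above is square and, for $\gamma > 0$, generally nonsingular, the system can also be solved directly rather than through an explicit pseudo-inverse; below is a minimal sketch with illustrative names (`d` a column vector with the $d_i$, `Omega` the matrix $\Omega$ defined above, `gamma` the regularization parameter), not the implementation used in this notebook:
```python
# Minimal sketch: solve the KKT system directly instead of forming the pseudo-inverse
import numpy as np
A = np.block([[0, d.T],
              [d, Omega + np.eye(len(d)) / gamma]])
rhs = np.array([0] + [1] * len(d))
solution = np.linalg.solve(A, rhs)  # typically cheaper than np.linalg.pinv(A) @ rhs
b, alpha = solution[0], solution[1:]
```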
###Code
%run -i 'load_dataset.py' # loading dataset
%run -i 'aux_func.py' # loading auxilary functions
###Output
Dataset: Features.shape: # of classes:
vc2c (310, 6) 2
vc3c (310, 6) 3
wf24f (5456, 24) 4
wf4f (5456, 4) 4
wf2f (5456, 2) 4
pk (195, 22) 2
###Markdown
2. LSSVM CPU implementation
###Code
import numpy as np
from numpy import dot, exp
from scipy.spatial.distance import cdist
class LSSVM:
'Class that implements the Least-Squares Support Vector Machine.'
def __init__(self, gamma=1, kernel='rbf', **kernel_params):
self.gamma = gamma
self.x = None
self.y = None
self.y_labels = None
# model params
self.alpha = None
self.b = None
self.kernel = LSSVM.get_kernel(kernel, **kernel_params)
@staticmethod
def get_kernel(name, **params):
def linear(x_i, x_j):
return dot(x_i, x_j.T)
def poly(x_i, x_j, d=params.get('d',3)):
return ( dot(x_i, x_j.T) + 1 )**d
def rbf(x_i, x_j, sigma=params.get('sigma',1)):
            if x_i.ndim==x_j.ndim and x_i.ndim==2: # both matrices
                return exp( -cdist(x_i,x_j)**2 / sigma**2 )
            else: # both vectors or a vector and a matrix
                return exp( -( dot(x_i,x_i.T) + dot(x_j,x_j.T) - 2*dot(x_i,x_j) ) / sigma**2 )
# temp = x_i.T - X
# return exp( -dot(temp.temp) / sigma**2 )
kernels = {'linear': linear, 'poly': poly, 'rbf': rbf}
if kernels.get(name) is None:
raise KeyError("Kernel '{}' is not defined, try one in the list: {}.".format(
name, list(kernels.keys())))
else: return kernels[name]
def opt_params(self, X, y_values):
sigma = np.multiply( y_values*y_values.T, self.kernel(X,X) )
A_cross = np.linalg.pinv(np.block([
[0, y_values.T ],
[y_values, sigma + self.gamma**-1 * np.eye(len(y_values))]
]))
B = np.array([0]+[1]*len(y_values))
solution = dot(A_cross, B)
b = solution[0]
alpha = solution[1:]
return (b, alpha)
def fit(self, X, Y, verboses=0):
self.x = X
self.y = Y
self.y_labels = np.unique(Y, axis=0)
if len(self.y_labels)==2: # binary classification
# converting to -1/+1
y_values = np.where(
(Y == self.y_labels[0]).all(axis=1)
,-1,+1)[:,np.newaxis] # making it a column vector
self.b, self.alpha = self.opt_params(X, y_values)
else: # multiclass classification
# ONE-VS-ALL APPROACH
n_classes = len(self.y_labels)
self.b = np.zeros(n_classes)
self.alpha = np.zeros((n_classes, len(Y)))
for i in range(n_classes):
# converting to +1 for the desired class and -1 for all other classes
y_values = np.where(
(Y == self.y_labels[i]).all(axis=1)
,+1,-1)[:,np.newaxis] # making it a column vector
self.b[i], self.alpha[i] = self.opt_params(X, y_values)
def predict(self, X):
K = self.kernel(self.x, X)
if len(self.y_labels)==2: # binary classification
y_values = np.where(
(self.y == self.y_labels[0]).all(axis=1),
-1,+1)[:,np.newaxis] # making it a column vector
Y = np.sign( dot( np.multiply(self.alpha, y_values.flatten()), K ) + self.b)
y_pred_labels = np.where(Y==-1, self.y_labels[0],
self.y_labels[1])
else: # multiclass classification, ONE-VS-ALL APPROACH
Y = np.zeros((len(self.y_labels), len(X)))
for i in range(len(self.y_labels)):
y_values = np.where(
(self.y == self.y_labels[i]).all(axis=1),
+1, -1)[:,np.newaxis] # making it a column vector
Y[i] = dot( np.multiply(self.alpha[i], y_values.flatten()), K ) + self.b[i] # no sign function applied
predictions = np.argmax(Y, axis=0)
y_pred_labels = np.array([self.y_labels[i] for i in predictions])
return y_pred_labels
###Output
_____no_output_____
###Markdown
Running a single test in all data sets:
###Code
%%time
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
for dataset_name in datasets:
print(dataset_name)
X = datasets[dataset_name]['features'].values
Y = datasets[dataset_name]['labels'].values
X_train, X_test, y_train, y_test = train_test_split(X,Y,test_size=0.5) # Train/Test split
X_tr_norm, X_ts_norm = scale_feat(X_train, X_test, scaleType='min-max') # scaling features
print('linear kernel')
lssvm = LSSVM(gamma=1, kernel='linear')
lssvm.fit(X_tr_norm, y_train)
print('acc_test = ', accuracy_score(dummie_to_multilabel(y_test),
dummie_to_multilabel(lssvm.predict(X_ts_norm))))
print('poly kernel')
lssvm = LSSVM(gamma=1, kernel='poly', d=2)
lssvm.fit(X_tr_norm, y_train)
print('acc_test = ',accuracy_score(dummie_to_multilabel(y_test),
dummie_to_multilabel(lssvm.predict(X_ts_norm))))
print('rbf kernel')
lssvm = LSSVM(gamma=1, kernel='rbf', sigma=1)
lssvm.fit(X_tr_norm, y_train)
print('acc_test = ',accuracy_score(dummie_to_multilabel(y_test),
dummie_to_multilabel(lssvm.predict(X_ts_norm))))
print('\n','#'*100,'\n')
###Output
vc2c
linear kernel
acc_test = 0.8258064516129032
poly kernel
acc_test = 0.8387096774193549
rbf kernel
acc_test = 0.8258064516129032
####################################################################################################
vc3c
linear kernel
acc_test = 0.7354838709677419
poly kernel
acc_test = 0.7677419354838709
rbf kernel
acc_test = 0.7870967741935484
####################################################################################################
wf24f
linear kernel
acc_test = 0.6502932551319648
poly kernel
acc_test = 0.8680351906158358
rbf kernel
acc_test = 0.8830645161290323
####################################################################################################
wf4f
linear kernel
acc_test = 0.655791788856305
poly kernel
acc_test = 0.7096774193548387
rbf kernel
acc_test = 0.7291055718475073
####################################################################################################
wf2f
linear kernel
acc_test = 0.6279325513196481
poly kernel
acc_test = 0.6719208211143695
rbf kernel
acc_test = 0.6876832844574781
####################################################################################################
pk
linear kernel
acc_test = 0.8673469387755102
poly kernel
acc_test = 0.8877551020408163
rbf kernel
acc_test = 0.8775510204081632
####################################################################################################
CPU times: user 47min 19s, sys: 9min 52s, total: 57min 12s
Wall time: 8min
###Markdown
3. LSSVM GPU implementation This implementation uses `PyTorch`.
###Code
import torch
class LSSVM_GPU:
'Class that implements the Least-Squares Support Vector Machine on GPU.'
def __init__(self, gamma=1, kernel='rbf', **kernel_params):
self.device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
self.gamma = gamma
self.x = None
self.y = None
self.y_labels = None
# model params
self.alpha = None
self.b = None
self.kernel = LSSVM_GPU.get_kernel(kernel, **kernel_params) # saving kernel function
@staticmethod
def get_kernel(name, **params):
def linear(x_i, x_j):
return torch.mm(x_i, torch.t(x_j))
def poly(x_i, x_j, d=params.get('d',3)):
return ( torch.mm(x_i, torch.t(x_j)) + 1 )**d
def rbf(x_i, x_j, sigma=params.get('sigma',1)):
            if x_i.ndim==x_j.ndim and x_i.ndim==2: # both matrices
                return torch.exp( -torch.cdist(x_i,x_j)**2 / sigma**2 )
            else: # both vectors or a vector and a matrix
                return torch.exp( -( torch.dot(x_i,torch.t(x_i)) + torch.dot(x_j,torch.t(x_j)) - 2*torch.dot(x_i,x_j) ) / sigma**2 )
# temp = x_i.T - X
# return exp( -dot(temp.temp) / sigma**2 )
kernels = {'linear': linear, 'poly': poly, 'rbf': rbf}
if kernels.get(name) is None:
raise KeyError("Kernel '{}' is not defined, try one in the list: {}.".format(
name, list(kernels.keys())))
else: return kernels[name]
def opt_params(self, X, y_values):
sigma = ( torch.mm(y_values, torch.t(y_values)) ) * self.kernel(X,X)
A_cross = torch.pinverse(torch.cat((
# block matrix
torch.cat(( torch.tensor(0, dtype=X.dtype, device=self.device).view(1,1),
torch.t(y_values)
),dim=1),
torch.cat(( y_values,
sigma + self.gamma**-1 * torch.eye(len(y_values), dtype=X.dtype, device=self.device)
),dim=1)
),dim=0))
B = torch.tensor([0]+[1]*len(y_values), dtype=X.dtype, device=self.device).view(-1,1)
solution = torch.mm(A_cross, B)
b = solution[0]
alpha = solution[1:].view(-1) # 1D array form
return (b, alpha)
def fit(self, X, Y, verboses=0):
# converting to tensors and passing to GPU
X = torch.from_numpy(X).to(self.device)
Y = torch.from_numpy(Y).to(self.device)
self.x = X
self.y = Y
self.y_labels = torch.unique(Y, dim=0)
if len(self.y_labels)==2: # binary classification
# converting to -1/+1
y_values = torch.where(
(Y == self.y_labels[0]).all(axis=1)
,torch.tensor(-1, dtype=X.dtype, device=self.device)
,torch.tensor(+1, dtype=X.dtype, device=self.device)
).view(-1,1) # making it a column vector
self.b, self.alpha = self.opt_params(X, y_values)
else: # multiclass classification
# ONE-VS-ALL APPROACH
n_classes = len(self.y_labels)
self.b = torch.empty(n_classes, dtype=X.dtype, device=self.device)
self.alpha = torch.empty(n_classes, len(Y), dtype=X.dtype, device=self.device)
for i in range(n_classes):
# converting to +1 for the desired class and -1 for all other classes
y_values = torch.where(
(Y == self.y_labels[i]).all(axis=1)
,torch.tensor(+1, dtype=X.dtype, device=self.device)
,torch.tensor(-1, dtype=X.dtype, device=self.device)
).view(-1,1) # making it a column vector
self.b[i], self.alpha[i] = self.opt_params(X, y_values)
def predict(self, X):
X = torch.from_numpy(X).to(self.device)
K = self.kernel(self.x, X)
if len(self.y_labels)==2: # binary classification
y_values = torch.where(
(self.y == self.y_labels[0]).all(axis=1)
,torch.tensor(-1, dtype=X.dtype, device=self.device)
,torch.tensor(+1, dtype=X.dtype, device=self.device)
)
Y = torch.sign( torch.mm( (self.alpha*y_values).view(1,-1), K ) + self.b)
y_pred_labels = torch.where(Y==-1, self.y_labels[0],
self.y_labels[1]
).view(-1) # convert to flat array
else: # multiclass classification, ONE-VS-ALL APPROACH
Y = torch.empty((len(self.y_labels), len(X)), dtype=X.dtype, device=self.device)
for i in range(len(self.y_labels)):
y_values = torch.where(
(self.y == self.y_labels[i]).all(axis=1)
,torch.tensor(+1, dtype=X.dtype, device=self.device)
,torch.tensor(-1, dtype=X.dtype, device=self.device)
)
Y[i] = torch.mm( (self.alpha[i]*y_values).view(1,-1), K ) + self.b[i] # no sign function applied
predictions = torch.argmax(Y, axis=0)
y_pred_labels = torch.stack([self.y_labels[i] for i in predictions])
return y_pred_labels
###Output
_____no_output_____
###Markdown
Running a single test in all data sets:
###Code
%%time
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
train_size = 0.5
for dataset_name in datasets:
print(dataset_name)
X = datasets[dataset_name]['features'].values
Y = datasets[dataset_name]['labels'].values
X_train, X_test, y_train, y_test = train_test_split(X,Y,train_size=train_size) # Train/Test split
X_tr_norm, X_ts_norm = scale_feat(X_train, X_test, scaleType='min-max') # scaling features
print('linear kernel')
lssvm = LSSVM_GPU(gamma=1, kernel='linear')
lssvm.fit(X_tr_norm, y_train)
print('acc_test = ', accuracy_score(dummie_to_multilabel(y_test),
dummie_to_multilabel(lssvm.predict(X_ts_norm).cpu().numpy())))
print('poly kernel')
lssvm = LSSVM_GPU(gamma=1, kernel='poly', d=2)
lssvm.fit(X_tr_norm, y_train)
print('acc_test = ',accuracy_score(dummie_to_multilabel(y_test),
dummie_to_multilabel(lssvm.predict(X_ts_norm).cpu().numpy())))
print('rbf kernel')
lssvm = LSSVM_GPU(gamma=1, kernel='rbf', sigma=1)
lssvm.fit(X_tr_norm, y_train)
print('acc_test = ',accuracy_score(dummie_to_multilabel(y_test),
dummie_to_multilabel(lssvm.predict(X_ts_norm).cpu().numpy())))
print('\n','#'*100,'\n')
###Output
vc2c
linear kernel
acc_test = 0.8258064516129032
poly kernel
acc_test = 0.8580645161290322
rbf kernel
acc_test = 0.8709677419354839
####################################################################################################
vc3c
linear kernel
acc_test = 0.8580645161290322
poly kernel
acc_test = 0.8387096774193549
rbf kernel
acc_test = 0.8387096774193549
####################################################################################################
wf24f
linear kernel
acc_test = 0.6543255131964809
poly kernel
acc_test = 0.8779325513196481
rbf kernel
acc_test = 0.8947947214076246
####################################################################################################
wf4f
linear kernel
acc_test = 0.6429618768328446
poly kernel
acc_test = 0.7111436950146628
rbf kernel
acc_test = 0.7415689149560117
####################################################################################################
wf2f
linear kernel
acc_test = 0.623900293255132
poly kernel
acc_test = 0.6704545454545454
rbf kernel
acc_test = 0.6909824046920822
####################################################################################################
pk
linear kernel
acc_test = 0.8571428571428571
poly kernel
acc_test = 0.8673469387755102
rbf kernel
acc_test = 0.8571428571428571
####################################################################################################
CPU times: user 12min 1s, sys: 1min 22s, total: 13min 24s
Wall time: 6min 9s
###Markdown
4. Discussing performance The code below was used to evaluate processing time in several data sets and using different kernels:
###Code
%%time
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
import pandas as pd
import time
import datetime
small_datasets = ['vc2c', 'vc3c', 'pk']
train_size = 0.5
n_runs = 20
kernels = ['linear', 'poly', 'rbf']
header = ['kernel', 'data set', 'CPU time (mean ± std)', 'GPU time (mean ± std)']
data = np.empty((len(kernels)*len(datasets), 4), dtype=object)
count=0
for dataset_name in datasets:
X = datasets[dataset_name]['features'].values
Y = datasets[dataset_name]['labels'].values
X_train, X_test, y_train, y_test = train_test_split(X,Y,train_size=train_size) # Train/Test split
X_tr_norm, X_ts_norm = scale_feat(X_train, X_test, scaleType='min-max') # scaling features
for kernel in kernels:
temp_cpu = np.empty(n_runs)
temp_gpu = np.empty(n_runs)
for i in range(n_runs):
lssvm_cpu = LSSVM(gamma=1, kernel=kernel)
t0 = time.time()
lssvm_cpu.fit(X_tr_norm, y_train)
accuracy_score(dummie_to_multilabel(y_test), dummie_to_multilabel(lssvm_cpu.predict(X_ts_norm)))
t1 = time.time()
temp_cpu[i] = t1-t0
lssvm_gpu = LSSVM_GPU(gamma=1, kernel=kernel)
t0 = time.time()
lssvm_gpu.fit(X_tr_norm, y_train)
accuracy_score(dummie_to_multilabel(y_test), dummie_to_multilabel(lssvm_gpu.predict(X_ts_norm).cpu().numpy()))
t1 = time.time()
temp_gpu[i] = t1-t0
data[count] = np.array([kernel, dataset_name,
'{:.2f} ms ± {:.2f} ms'.format(np.mean(temp_cpu)*1e3, np.std(temp_cpu)*1e3),
'{:.2f} ms ± {:.2f} ms'.format(np.mean(temp_gpu)*1e3, np.std(temp_gpu)*1e3)])
count+=1
print("Done {}/{} at {}.".format(count, len(data), datetime.datetime.now()))
df = pd.DataFrame(data, columns=header)
# saving results
# filename = "df.csv"
# df.to_csv(filename, sep='\t', index=False)
# loading results
df = pd.read_csv("df.csv", sep='\t')
df.sort_values(by=['data set', 'kernel'])
###Output
_____no_output_____ |
docs/_build/html/code_examples/oberpfaffenhofen.ipynb | ###Markdown
PolSAR Oberpfaffenhofen example- Download the dataset from the [ESA official website](https://earth.esa.int/web/polsarpro/data-sources/sample-datasets).- The ground truth can be found in this [GitHub repository](https://github.com/fudanxu/CV-CNN/blob/master/Label_Germany.mat). DatasetFirst we open the dataset.
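A note on the 21 channels stacked below (my own bookkeeping, assuming the usual Hermitian coherency-matrix storage of PolSARpro): the $6\times6$ coherency matrix $T_6$ is Hermitian, so it is fully described by its $6$ real diagonal entries plus its $6 \cdot 5/2 = 15$ complex upper-triangular entries, which is why $6 + 15 = 21$ values are read per pixel.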
###Code
from pathlib import Path
import scipy.io
import numpy as np
import spectral.io.envi as envi
from cvnn.utils import standarize, randomize
raw_labels = scipy.io.loadmat('/media/barrachina/data/datasets/PolSar/Oberpfaffenhofen/Label_Germany.mat')['label']
path = Path('/media/barrachina/data/datasets/PolSar/Oberpfaffenhofen/ESAR_Oberpfaffenhofen_T6/Master_Track_Slave_Track/T6')
T = np.zeros(raw_labels.shape + (21,), dtype=complex)
T[:, :, 0] = standarize(envi.open(path / 'T11.bin.hdr', path / 'T11.bin').read_band(0))
T[:, :, 1] = standarize(envi.open(path / 'T22.bin.hdr', path / 'T22.bin').read_band(0))
T[:, :, 2] = standarize(envi.open(path / 'T33.bin.hdr', path / 'T33.bin').read_band(0))
T[:, :, 3] = standarize(envi.open(path / 'T44.bin.hdr', path / 'T44.bin').read_band(0))
T[:, :, 4] = standarize(envi.open(path / 'T55.bin.hdr', path / 'T55.bin').read_band(0))
T[:, :, 5] = standarize(envi.open(path / 'T66.bin.hdr', path / 'T66.bin').read_band(0))
T[:, :, 6] = standarize(envi.open(path / 'T12_real.bin.hdr', path / 'T12_real.bin').read_band(0) + \
1j * envi.open(path / 'T12_imag.bin.hdr', path / 'T12_imag.bin').read_band(0))
T[:, :, 7] = standarize(envi.open(path / 'T13_real.bin.hdr', path / 'T13_real.bin').read_band(0) + \
1j * envi.open(path / 'T13_imag.bin.hdr', path / 'T13_imag.bin').read_band(0))
T[:, :, 8] = standarize(envi.open(path / 'T14_real.bin.hdr', path / 'T14_real.bin').read_band(0) + \
1j * envi.open(path / 'T14_imag.bin.hdr', path / 'T14_imag.bin').read_band(0))
T[:, :, 9] = standarize(envi.open(path / 'T15_real.bin.hdr', path / 'T15_real.bin').read_band(0) + \
1j * envi.open(path / 'T15_imag.bin.hdr', path / 'T15_imag.bin').read_band(0))
T[:, :, 10] = standarize(envi.open(path / 'T16_real.bin.hdr', path / 'T16_real.bin').read_band(0) + \
1j * envi.open(path / 'T16_imag.bin.hdr', path / 'T16_imag.bin').read_band(0))
T[:, :, 11] = standarize(envi.open(path / 'T23_real.bin.hdr', path / 'T23_real.bin').read_band(0) + \
1j * envi.open(path / 'T23_imag.bin.hdr', path / 'T23_imag.bin').read_band(0))
T[:, :, 12] = standarize(envi.open(path / 'T24_real.bin.hdr', path / 'T24_real.bin').read_band(0) + \
1j * envi.open(path / 'T24_imag.bin.hdr', path / 'T24_imag.bin').read_band(0))
T[:, :, 13] = standarize(envi.open(path / 'T25_real.bin.hdr', path / 'T25_real.bin').read_band(0) + \
1j * envi.open(path / 'T25_imag.bin.hdr', path / 'T25_imag.bin').read_band(0))
T[:, :, 14] = standarize(envi.open(path / 'T26_real.bin.hdr', path / 'T26_real.bin').read_band(0) + \
1j * envi.open(path / 'T26_imag.bin.hdr', path / 'T26_imag.bin').read_band(0))
T[:, :, 15] = standarize(envi.open(path / 'T34_real.bin.hdr', path / 'T34_real.bin').read_band(0) + \
1j * envi.open(path / 'T34_imag.bin.hdr', path / 'T34_imag.bin').read_band(0))
T[:, :, 16] = standarize(envi.open(path / 'T35_real.bin.hdr', path / 'T35_real.bin').read_band(0) + \
1j * envi.open(path / 'T35_imag.bin.hdr', path / 'T35_imag.bin').read_band(0))
T[:, :, 17] = standarize(envi.open(path / 'T36_real.bin.hdr', path / 'T36_real.bin').read_band(0) + \
1j * envi.open(path / 'T36_imag.bin.hdr', path / 'T36_imag.bin').read_band(0))
T[:, :, 18] = standarize(envi.open(path / 'T45_real.bin.hdr', path / 'T45_real.bin').read_band(0) + \
1j * envi.open(path / 'T45_imag.bin.hdr', path / 'T45_imag.bin').read_band(0))
T[:, :, 19] = standarize(envi.open(path / 'T46_real.bin.hdr', path / 'T46_real.bin').read_band(0) + \
1j * envi.open(path / 'T46_imag.bin.hdr', path / 'T46_imag.bin').read_band(0))
T[:, :, 20] = standarize(envi.open(path / 'T56_real.bin.hdr', path / 'T56_real.bin').read_band(0) + \
1j * envi.open(path / 'T56_imag.bin.hdr', path / 'T56_imag.bin').read_band(0))
print("T shape " + str(T.shape) + "; labels shape " + str(raw_labels.shape))
###Output
T shape (1300, 1200, 21); labels shape (1300, 1200)
###Markdown
Let's check that the ground truth was opened correctly:
###Code
import matplotlib.pyplot as plt
import tikzplotlib
def show_ground_truth(labels, savefile=None):
colors = np.array([
[1, 0.349, 0.392],
[0.086, 0.858, 0.576],
[0.937, 0.917, 0.352]
])
ground_truth = np.zeros(labels.shape + (3,), dtype=float)
for i in range(labels.shape[0]):
for j in range(labels.shape[1]):
if labels[i, j] != 0:
ground_truth[i, j] = colors[labels[i, j] - 1]
plt.imshow(ground_truth)
plt.show()
if savefile is not None:
savefile = Path(savefile)
plt.imsave(savefile / "ground_truth.pdf", ground_truth)
tikzplotlib.save(savefile / "ground_truth.tex")
show_ground_truth(raw_labels)
###Output
_____no_output_____
###Markdown
Preprocess dataset
###Code
def remove_unlabeled(x, y):
mask = y != 0
return x[mask], y[mask]
T, labels = remove_unlabeled(T, raw_labels) # Remove unlabeled pixels
labels -= 1 # map [1, 3] to [0, 2]
labels.shape
###Output
_____no_output_____
###Markdown
Separate Test, Train and validation
###Code
from cvnn.dataset import Dataset
def separate_train_test(x, y, ratio=0.1):
classes = set(y)
x_ordered_database = []
y_ordered_database = []
for cls in classes:
mask = y == cls
x_ordered_database.append(x[mask])
y_ordered_database.append(y[mask])
len_train = int(y.shape[0]*ratio/len(classes))
x_train = x_ordered_database[0][:len_train]
x_test = x_ordered_database[0][len_train:]
y_train = y_ordered_database[0][:len_train]
y_test = y_ordered_database[0][len_train:]
for i in range(len(y_ordered_database)):
assert (y_ordered_database[i] == i).all()
assert len(y_ordered_database[i]) == len(x_ordered_database[i])
if i != 0:
x_train = np.concatenate((x_train, x_ordered_database[i][:len_train]))
x_test = np.concatenate((x_test, x_ordered_database[i][len_train:]))
y_train = np.concatenate((y_train, y_ordered_database[i][:len_train]))
y_test = np.concatenate((y_test, y_ordered_database[i][len_train:]))
x_train, y_train = randomize(x_train, y_train)
x_test, y_test = randomize(x_test, y_test)
return x_train, y_train, x_test, y_test
T_rand, labels_rand = randomize(T, labels)
x_train, y_train, x_test, y_test = separate_train_test(T_rand, labels_rand, ratio=0.1)
x_train, y_train, x_val, y_val = separate_train_test(x_train, y_train, ratio=0.8)
y_train = Dataset.sparse_into_categorical(y_train)
y_test = Dataset.sparse_into_categorical(y_test)
y_val = Dataset.sparse_into_categorical(y_val)
dataset = Dataset(x_train.astype(np.complex64), y_train, dataset_name='Oberpfaffenhofen')
print("Sizes:\n\t- Train shape: " + str(x_train.shape) + "\n\t- Test shape: " + str(x_test.shape) + "\n\t- Validation shape: " + str(x_val.shape))
###Output
Sizes:
- Train shape: (104928, 21)
- Test shape: (1180458, 21)
- Validation shape: (26232, 21)
###Markdown
For training we use the same number of examples per class in the train and validation sets
###Code
def get_number_of_each_class(x, name):
x = np.array(x)
x = Dataset.categorical_to_sparse(x)
print(name + " set")
for cls in range(min(x), max(x)+1):
print("\t" + str(np.sum(x == cls)) + " examples of class " + str(cls))
get_number_of_each_class(y_train, "Train")
get_number_of_each_class(y_test, "Test")
get_number_of_each_class(y_val, "Validation")
###Output
Train set
34976 examples of class 0
34976 examples of class 1
34976 examples of class 2
Test set
284331 examples of class 0
202953 examples of class 1
693174 examples of class 2
Validation set
8744 examples of class 0
8744 examples of class 1
8744 examples of class 2
###Markdown
Training
###Code
# Select hyper-parameters
from cvnn.layers import Dense
from cvnn import layers
shape_raw = [50, 50]
input_size = dataset.x.shape[1] # Size of input
output_size = dataset.y.shape[1] # Size of output
layers.ComplexLayer.last_layer_output_dtype = None
layers.ComplexLayer.last_layer_output_size = None
if len(shape_raw) == 0:
print("No hidden layers are used. activation and dropout will be ignored")
shape = [
Dense(input_size=input_size, output_size=output_size, activation='softmax_real',
input_dtype=np.complex64, dropout=None)
]
else: # len(shape_raw) > 0:
shape = [Dense(input_size=input_size, output_size=shape_raw[0], activation='cart_relu',
input_dtype=np.complex64, dropout=0.5)]
for i in range(1, len(shape_raw)):
shape.append(Dense(output_size=shape_raw[i], activation='cart_relu', dropout=0.5))
shape.append(Dense(output_size=output_size, activation='softmax_real', dropout=None))
from cvnn.cvnn_model import CvnnModel
from tensorflow.keras.losses import categorical_crossentropy
complex_network = CvnnModel(name="complex_network", shape=shape, loss_fun=categorical_crossentropy, optimizer='sgd', verbose=False, tensorboard=False)
complex_network.fit(dataset.x, dataset.y, validation_data = (x_val.astype(np.complex64), y_val), epochs = 200, batch_size=100, verbose=2, save_csv_history=True)
###Output
Epoch 1/200
1050/Unknown - 1s 968us/step - loss: 0.2360 - accuracy: 0.9300 - val_loss: 0.3063 - val_accuracy: 0.8875
Epoch 2/200
1050/1050 [==============================] - 1s 613us/step - loss: 0.2090 - accuracy: 0.9200 - val_loss: 0.2943 - val_accuracy: 0.8868
Epoch 3/200
1050/1050 [==============================] - 1s 639us/step - loss: 0.3087 - accuracy: 0.8800 - val_loss: 0.2925 - val_accuracy: 0.8879
Epoch 4/200
1050/1050 [==============================] - 1s 597us/step - loss: 0.2778 - accuracy: 0.8700 - val_loss: 0.2893 - val_accuracy: 0.8915
Epoch 5/200
1050/1050 [==============================] - 1s 631us/step - loss: 0.3104 - accuracy: 0.8700 - val_loss: 0.3285 - val_accuracy: 0.8782
Epoch 6/200
1050/1050 [==============================] - 1s 628us/step - loss: 0.2151 - accuracy: 0.9100 - val_loss: 0.2829 - val_accuracy: 0.8939
Epoch 7/200
1050/1050 [==============================] - 1s 609us/step - loss: 0.2457 - accuracy: 0.9100 - val_loss: 0.2879 - val_accuracy: 0.8918
Epoch 8/200
1050/1050 [==============================] - 1s 611us/step - loss: 0.2393 - accuracy: 0.9000 - val_loss: 0.2885 - val_accuracy: 0.8900
Epoch 9/200
1050/1050 [==============================] - 1s 601us/step - loss: 0.2586 - accuracy: 0.9000 - val_loss: 0.2832 - val_accuracy: 0.8938
Epoch 10/200
1050/1050 [==============================] - 1s 600us/step - loss: 0.3311 - accuracy: 0.8500 - val_loss: 0.2898 - val_accuracy: 0.8928
Epoch 11/200
1050/1050 [==============================] - 1s 643us/step - loss: 0.1708 - accuracy: 0.9200 - val_loss: 0.2857 - val_accuracy: 0.8928
Epoch 12/200
1050/1050 [==============================] - 1s 626us/step - loss: 0.1962 - accuracy: 0.9200 - val_loss: 0.2849 - val_accuracy: 0.8935
Epoch 13/200
1050/1050 [==============================] - 1s 588us/step - loss: 0.2393 - accuracy: 0.8900 - val_loss: 0.2955 - val_accuracy: 0.8874
Epoch 14/200
1050/1050 [==============================] - 1s 619us/step - loss: 0.3850 - accuracy: 0.8500 - val_loss: 0.2904 - val_accuracy: 0.8892
Epoch 15/200
1050/1050 [==============================] - 1s 615us/step - loss: 0.2327 - accuracy: 0.9000 - val_loss: 0.2902 - val_accuracy: 0.8922
Epoch 16/200
1050/1050 [==============================] - 1s 661us/step - loss: 0.3237 - accuracy: 0.8900 - val_loss: 0.3008 - val_accuracy: 0.8881
Epoch 17/200
1050/1050 [==============================] - 1s 618us/step - loss: 0.2163 - accuracy: 0.9200 - val_loss: 0.2847 - val_accuracy: 0.8937
Epoch 18/200
1050/1050 [==============================] - 1s 647us/step - loss: 0.2519 - accuracy: 0.8800 - val_loss: 0.2795 - val_accuracy: 0.8952
Epoch 19/200
1050/1050 [==============================] - 1s 664us/step - loss: 0.3139 - accuracy: 0.8800 - val_loss: 0.2862 - val_accuracy: 0.8918
Epoch 20/200
1050/1050 [==============================] - 1s 634us/step - loss: 0.3106 - accuracy: 0.9000 - val_loss: 0.2825 - val_accuracy: 0.8947
Epoch 21/200
1050/1050 [==============================] - 1s 649us/step - loss: 0.3095 - accuracy: 0.8200 - val_loss: 0.2855 - val_accuracy: 0.8943
Epoch 22/200
1050/1050 [==============================] - 1s 634us/step - loss: 0.2187 - accuracy: 0.9100 - val_loss: 0.2807 - val_accuracy: 0.8968
Epoch 23/200
1050/1050 [==============================] - 1s 641us/step - loss: 0.3254 - accuracy: 0.8600 - val_loss: 0.2878 - val_accuracy: 0.8927
Epoch 24/200
1050/1050 [==============================] - 1s 642us/step - loss: 0.2714 - accuracy: 0.8700 - val_loss: 0.2808 - val_accuracy: 0.8945
Epoch 25/200
1050/1050 [==============================] - 1s 610us/step - loss: 0.4991 - accuracy: 0.8500 - val_loss: 0.2889 - val_accuracy: 0.8911
Epoch 26/200
1050/1050 [==============================] - 1s 624us/step - loss: 0.2370 - accuracy: 0.9000 - val_loss: 0.2892 - val_accuracy: 0.8915
Epoch 27/200
1050/1050 [==============================] - 1s 638us/step - loss: 0.3302 - accuracy: 0.8800 - val_loss: 0.2835 - val_accuracy: 0.8944
Epoch 28/200
1050/1050 [==============================] - 1s 644us/step - loss: 0.2492 - accuracy: 0.9100 - val_loss: 0.2792 - val_accuracy: 0.8975
Epoch 29/200
1050/1050 [==============================] - 1s 647us/step - loss: 0.2734 - accuracy: 0.9300 - val_loss: 0.2819 - val_accuracy: 0.8973
Epoch 30/200
1050/1050 [==============================] - 1s 637us/step - loss: 0.3736 - accuracy: 0.8800 - val_loss: 0.2879 - val_accuracy: 0.8913
Epoch 31/200
1050/1050 [==============================] - 1s 643us/step - loss: 0.2635 - accuracy: 0.8800 - val_loss: 0.2884 - val_accuracy: 0.8936
Epoch 32/200
1050/1050 [==============================] - 1s 631us/step - loss: 0.2337 - accuracy: 0.9100 - val_loss: 0.2907 - val_accuracy: 0.8903
Epoch 33/200
1050/1050 [==============================] - 1s 669us/step - loss: 0.2525 - accuracy: 0.8600 - val_loss: 0.2860 - val_accuracy: 0.8921
Epoch 34/200
1050/1050 [==============================] - 1s 658us/step - loss: 0.2771 - accuracy: 0.8900 - val_loss: 0.2857 - val_accuracy: 0.8970
Epoch 35/200
1050/1050 [==============================] - 1s 645us/step - loss: 0.3795 - accuracy: 0.8800 - val_loss: 0.2815 - val_accuracy: 0.8924
Epoch 36/200
1050/1050 [==============================] - 1s 615us/step - loss: 0.2411 - accuracy: 0.9100 - val_loss: 0.2940 - val_accuracy: 0.8878
Epoch 37/200
1050/1050 [==============================] - 1s 641us/step - loss: 0.3555 - accuracy: 0.8900 - val_loss: 0.2895 - val_accuracy: 0.8920
Epoch 38/200
1050/1050 [==============================] - 1s 627us/step - loss: 0.2261 - accuracy: 0.9300 - val_loss: 0.3025 - val_accuracy: 0.8911
Epoch 39/200
1050/1050 [==============================] - 1s 670us/step - loss: 0.2432 - accuracy: 0.8800 - val_loss: 0.2823 - val_accuracy: 0.8932
Epoch 40/200
1050/1050 [==============================] - 1s 616us/step - loss: 0.2621 - accuracy: 0.9200 - val_loss: 0.2850 - val_accuracy: 0.8917
Epoch 41/200
1050/1050 [==============================] - 1s 626us/step - loss: 0.3030 - accuracy: 0.9100 - val_loss: 0.3016 - val_accuracy: 0.8862
Epoch 42/200
1050/1050 [==============================] - 1s 660us/step - loss: 0.3223 - accuracy: 0.8600 - val_loss: 0.3014 - val_accuracy: 0.8880
Epoch 43/200
1050/1050 [==============================] - 1s 634us/step - loss: 0.3168 - accuracy: 0.9400 - val_loss: 0.2828 - val_accuracy: 0.8948
Epoch 44/200
1050/1050 [==============================] - 1s 647us/step - loss: 0.2825 - accuracy: 0.8900 - val_loss: 0.2792 - val_accuracy: 0.8960
Epoch 45/200
1050/1050 [==============================] - 1s 654us/step - loss: 0.2115 - accuracy: 0.9400 - val_loss: 0.2795 - val_accuracy: 0.8949
Epoch 46/200
1050/1050 [==============================] - 1s 635us/step - loss: 0.3370 - accuracy: 0.8700 - val_loss: 0.3030 - val_accuracy: 0.8868
Epoch 47/200
1050/1050 [==============================] - 1s 648us/step - loss: 0.2037 - accuracy: 0.9000 - val_loss: 0.2837 - val_accuracy: 0.8908
Epoch 48/200
1050/1050 [==============================] - 1s 658us/step - loss: 0.2455 - accuracy: 0.9500 - val_loss: 0.2795 - val_accuracy: 0.8973
Epoch 49/200
1050/1050 [==============================] - 1s 650us/step - loss: 0.3444 - accuracy: 0.9000 - val_loss: 0.2822 - val_accuracy: 0.8966
Epoch 50/200
1050/1050 [==============================] - 1s 653us/step - loss: 0.4131 - accuracy: 0.8800 - val_loss: 0.2965 - val_accuracy: 0.8893
Epoch 51/200
1050/1050 [==============================] - 1s 621us/step - loss: 0.2427 - accuracy: 0.8900 - val_loss: 0.2819 - val_accuracy: 0.8960
Epoch 52/200
1050/1050 [==============================] - 1s 638us/step - loss: 0.3302 - accuracy: 0.8800 - val_loss: 0.3334 - val_accuracy: 0.8803
Epoch 53/200
1050/1050 [==============================] - 1s 632us/step - loss: 0.2404 - accuracy: 0.8900 - val_loss: 0.2782 - val_accuracy: 0.8976
Epoch 54/200
1050/1050 [==============================] - 1s 631us/step - loss: 0.3016 - accuracy: 0.8800 - val_loss: 0.2847 - val_accuracy: 0.8951
Epoch 55/200
1050/1050 [==============================] - 1s 639us/step - loss: 0.2486 - accuracy: 0.9100 - val_loss: 0.2752 - val_accuracy: 0.8965
Epoch 56/200
1050/1050 [==============================] - 1s 623us/step - loss: 0.3040 - accuracy: 0.9300 - val_loss: 0.2834 - val_accuracy: 0.8928
###Markdown
Results
###Code
prediction = complex_network.predict(T.astype(np.complex64)).numpy()
prediction.shape
prediction_image = np.zeros(raw_labels.shape, dtype=int)
p_index = 0
for i in range(raw_labels.shape[0]):
for j in range(raw_labels.shape[1]):
if raw_labels[i, j] != 0:
prediction_image[i, j] = prediction[p_index] + 1
p_index += 1
assert p_index == len(prediction)
show_ground_truth(prediction_image, "./")
loss, acc = complex_network.evaluate(x_test.astype(np.complex64), y_test)
print("Test Accuracy: {0:.2%}; Loss: {1:.4}".format(acc, loss))
complex_network.get_confusion_matrix(x_test.astype(np.complex64), y_test)
###Output
_____no_output_____ |
04_training_linear_models v1.ipynb | ###Markdown
Chapter 4 – Training Linear Models _This notebook contains all the sample code and solutions to the exercises in chapter 4._ Based on https://github.com/ageron/handson-ml Setup First, let's make sure this notebook works well in both Python 2 and 3, import a few common modules, ensure Matplotlib plots figures inline and prepare a function to save the figures:
###Code
# To support both python 2 and python 3
from __future__ import division, print_function, unicode_literals
# Common imports
import numpy as np
import os
# to make this notebook's output stable across runs
np.random.seed(42)
# To plot pretty figures
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
mpl.rc('axes', labelsize=14)
mpl.rc('xtick', labelsize=12)
mpl.rc('ytick', labelsize=12)
# Where to save the figures
PROJECT_ROOT_DIR = "D:\handson-ml\\"
CHAPTER_ID = "training_linear_models\\"
def save_fig(fig_id, tight_layout=True):
path1 = PROJECT_ROOT_DIR + "images\\" + CHAPTER_ID + fig_id + ".png"
print("Saving figure: ", fig_id)
if tight_layout:
plt.tight_layout()
print(path1)
plt.savefig(path1, format='png', dpi=300)
# Ignore useless warnings (see SciPy issue #5998)
import warnings
warnings.filterwarnings(action="ignore", message="^internal gelsd")
###Output
_____no_output_____
###Markdown
Linear regression using the Normal Equation
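For reference, the closed-form solution computed in the next cell is the Normal Equation: $\hat{\boldsymbol{\theta}} = \left(\mathbf{X}^T \mathbf{X}\right)^{-1} \mathbf{X}^T \mathbf{y}$, where $\hat{\boldsymbol{\theta}}$ is the parameter vector that minimizes the MSE cost function, $\mathbf{X}$ is the matrix of input features (with a bias column $x_0 = 1$) and $\mathbf{y}$ is the vector of targets.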
###Code
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([0, 2, 0, 15])
save_fig("generated_data_plot")
plt.show()
X_b = np.c_[np.ones((100, 1)), X] # add x0 = 1 to each instance
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
theta_best
X_new = np.array([[0], [2]])
X_new_b = np.c_[np.ones((2, 1)), X_new] # add x0 = 1 to each instance
y_predict = X_new_b.dot(theta_best)
y_predict
plt.plot(X_new, y_predict, "r-")
plt.plot(X, y, "b.")
plt.axis([0, 2, 0, 15])
plt.show()
###Output
_____no_output_____
###Markdown
The figure in the book actually corresponds to the following code, with a legend and axis labels:
###Code
plt.plot(X_new, y_predict, "r-", linewidth=2, label="Predictions")
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 2, 0, 15])
save_fig("linear_model_predictions")
plt.show()
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X, y)
lin_reg.intercept_, lin_reg.coef_
lin_reg.predict(X_new)
###Output
_____no_output_____
###Markdown
The `LinearRegression` class is based on the `scipy.linalg.lstsq()` function (the name stands for "least squares"), which you could call directly:
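The next cell actually uses the NumPy equivalent, `np.linalg.lstsq()`. If you prefer to call the SciPy function mentioned above directly, a minimal sketch (assuming SciPy is installed) looks like this:
```Python
from scipy.linalg import lstsq

# returns the least-squares solution, the residues, the rank of X_b and its singular values
theta_scipy, res_scipy, rank_scipy, sv_scipy = lstsq(X_b, y)
theta_scipy
```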
###Code
theta_best_svd, residuals, rank, s = np.linalg.lstsq(X_b, y, rcond=1e-6)
theta_best_svd
###Output
_____no_output_____
###Markdown
This function computes $\mathbf{X}^+\mathbf{y}$, where $\mathbf{X}^{+}$ is the _pseudoinverse_ of $\mathbf{X}$ (specifically the Moore-Penrose inverse). You can use `np.linalg.pinv()` to compute the pseudoinverse directly:
###Code
np.linalg.pinv(X_b).dot(y)
###Output
_____no_output_____
###Markdown
**Note**: the first releases of the book implied that the `LinearRegression` class was based on the Normal Equation. This was an error, my apologies: as explained above, it is based on the pseudoinverse, which ultimately relies on the SVD matrix decomposition of $\mathbf{X}$ (see chapter 8 for details about the SVD decomposition). Its time complexity is $O(n^2)$ and it works even when $m < n$ or when some features are linear combinations of other features (in these cases, $\mathbf{X}^T \mathbf{X}$ is not invertible so the Normal Equation fails), see [issue 184](https://github.com/ageron/handson-ml/issues/184) for more details. However, this does not change the rest of the description of the `LinearRegression` class, in particular, it is based on an analytical solution, it does not scale well with the number of features, it scales linearly with the number of instances, all the data must fit in memory, it does not require feature scaling and the order of the instances in the training set does not matter. Linear regression using batch gradient descent
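Before moving on, here is a small sketch illustrating the collinearity point from the note above (the duplicated feature column is an assumption made purely for demonstration): with an exactly repeated column, $\mathbf{X}^T \mathbf{X}$ is singular, so inverting it typically raises an error (or yields meaningless values), while the pseudoinverse still returns a solution.
```Python
X_dup = np.c_[X_b, X_b[:, 1:]]  # duplicate the x1 column (demo assumption)
try:
    theta_ne = np.linalg.inv(X_dup.T.dot(X_dup)).dot(X_dup.T).dot(y)  # Normal Equation
except np.linalg.LinAlgError as err:
    print("Normal Equation fails:", err)
theta_pinv = np.linalg.pinv(X_dup).dot(y)  # pseudoinverse-based solution still works
theta_pinv
```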
###Code
eta = 0.1
n_iterations = 1000
m = 100
theta = np.random.randn(2,1)
for iteration in range(n_iterations):
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
theta
X_new_b.dot(theta)
theta_path_bgd = []
def plot_gradient_descent(theta, eta, theta_path=None):
m = len(X_b)
plt.plot(X, y, "b.")
n_iterations = 1000
for iteration in range(n_iterations):
if iteration < 10:
y_predict = X_new_b.dot(theta)
style = "b-" if iteration > 0 else "r--"
plt.plot(X_new, y_predict, style)
gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
theta = theta - eta * gradients
if theta_path is not None:
theta_path.append(theta)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 2, 0, 15])
plt.title(r"$\eta = {}$".format(eta), fontsize=16)
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
plt.figure(figsize=(10,4))
plt.subplot(131); plot_gradient_descent(theta, eta=0.02)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(132); plot_gradient_descent(theta, eta=0.1, theta_path=theta_path_bgd)
plt.subplot(133); plot_gradient_descent(theta, eta=0.5)
save_fig("gradient_descent_plot")
plt.show()
###Output
Saving figure: gradient_descent_plot
D:\handson-ml\images\training_linear_models\gradient_descent_plot.png
###Markdown
Stochastic Gradient Descent
###Code
theta_path_sgd = []
m = len(X_b)
np.random.seed(42)
n_epochs = 50
t0, t1 = 5, 50 # learning schedule hyperparameters
def learning_schedule(t):
return t0 / (t + t1)
theta = np.random.randn(2,1) # random initialization
for epoch in range(n_epochs):
for i in range(m):
if epoch == 0 and i < 20: # not shown in the book
y_predict = X_new_b.dot(theta) # not shown
style = "b-" if i > 0 else "r--" # not shown
plt.plot(X_new, y_predict, style) # not shown
random_index = np.random.randint(m)
xi = X_b[random_index:random_index+1]
yi = y[random_index:random_index+1]
gradients = 2 * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(epoch * m + i)
theta = theta - eta * gradients
theta_path_sgd.append(theta) # not shown
plt.plot(X, y, "b.") # not shown
plt.xlabel("$x_1$", fontsize=18) # not shown
plt.ylabel("$y$", rotation=0, fontsize=18) # not shown
plt.axis([0, 2, 0, 15]) # not shown
save_fig("sgd_plot") # not shown
plt.show() # not shown
theta
from sklearn.linear_model import SGDRegressor
sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty=None, eta0=0.1, random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.intercept_, sgd_reg.coef_
###Output
_____no_output_____
###Markdown
Mini-batch gradient descent
###Code
theta_path_mgd = []
n_iterations = 50
minibatch_size = 20
np.random.seed(42)
theta = np.random.randn(2,1) # random initialization
t0, t1 = 200, 1000
def learning_schedule(t):
return t0 / (t + t1)
t = 0
for epoch in range(n_iterations):
shuffled_indices = np.random.permutation(m)
X_b_shuffled = X_b[shuffled_indices]
y_shuffled = y[shuffled_indices]
for i in range(0, m, minibatch_size):
t += 1
xi = X_b_shuffled[i:i+minibatch_size]
yi = y_shuffled[i:i+minibatch_size]
gradients = 2/minibatch_size * xi.T.dot(xi.dot(theta) - yi)
eta = learning_schedule(t)
theta = theta - eta * gradients
theta_path_mgd.append(theta)
theta
theta_path_bgd = np.array(theta_path_bgd)
theta_path_sgd = np.array(theta_path_sgd)
theta_path_mgd = np.array(theta_path_mgd)
plt.figure(figsize=(7,4))
plt.plot(theta_path_sgd[:, 0], theta_path_sgd[:, 1], "r-s", linewidth=1, label="Stochastic")
plt.plot(theta_path_mgd[:, 0], theta_path_mgd[:, 1], "g-+", linewidth=2, label="Mini-batch")
plt.plot(theta_path_bgd[:, 0], theta_path_bgd[:, 1], "b-o", linewidth=3, label="Batch")
plt.legend(loc="upper left", fontsize=16)
plt.xlabel(r"$\theta_0$", fontsize=20)
plt.ylabel(r"$\theta_1$ ", fontsize=20, rotation=0)
plt.axis([2.5, 4.5, 2.3, 3.9])
save_fig("gradient_descent_paths_plot")
plt.show()
###Output
Saving figure: gradient_descent_paths_plot
D:\handson-ml\images\training_linear_models\gradient_descent_paths_plot.png
###Markdown
Polynomial regression
###Code
import numpy as np
import numpy.random as rnd
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 0.5 * X**2 + X + 2 + np.random.randn(m, 1)
plt.plot(X, y, "b.")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_data_plot")
plt.show()
from sklearn.preprocessing import PolynomialFeatures
poly_features = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly_features.fit_transform(X)
X[0]
X_poly[0]
lin_reg = LinearRegression()
lin_reg.fit(X_poly, y)
lin_reg.intercept_, lin_reg.coef_
X_new=np.linspace(-3, 3, 100).reshape(100, 1)
X_new_poly = poly_features.transform(X_new)
y_new = lin_reg.predict(X_new_poly)
plt.plot(X, y, "b.")
plt.plot(X_new, y_new, "r-", linewidth=2, label="Predictions")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.legend(loc="upper left", fontsize=14)
plt.axis([-3, 3, 0, 10])
save_fig("quadratic_predictions_plot")
plt.show()
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
for style, width, degree in (("g-", 1, 300), ("b--", 2, 2), ("r-+", 2, 1)):
polybig_features = PolynomialFeatures(degree=degree, include_bias=False)
std_scaler = StandardScaler()
lin_reg = LinearRegression()
polynomial_regression = Pipeline([
("poly_features", polybig_features),
("std_scaler", std_scaler),
("lin_reg", lin_reg),
])
polynomial_regression.fit(X, y)
y_newbig = polynomial_regression.predict(X_new)
plt.plot(X_new, y_newbig, style, label=str(degree), linewidth=width)
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left")
plt.xlabel("$x_1$", fontsize=18)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.axis([-3, 3, 0, 10])
save_fig("high_degree_polynomials_plot")
plt.show()
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
def plot_learning_curves(model, X, y):
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=10)
train_errors, val_errors = [], []
for m in range(1, len(X_train)):
model.fit(X_train[:m], y_train[:m])
y_train_predict = model.predict(X_train[:m])
y_val_predict = model.predict(X_val)
train_errors.append(mean_squared_error(y_train[:m], y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
plt.plot(np.sqrt(train_errors), "r-+", linewidth=2, label="train")
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="val")
plt.legend(loc="upper right", fontsize=14) # not shown in the book
plt.xlabel("Training set size", fontsize=14) # not shown
plt.ylabel("RMSE", fontsize=14) # not shown
lin_reg = LinearRegression()
plot_learning_curves(lin_reg, X, y)
plt.axis([0, 80, 0, 3]) # not shown in the book
save_fig("underfitting_learning_curves_plot") # not shown
plt.show() # not shown
from sklearn.pipeline import Pipeline
polynomial_regression = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("lin_reg", LinearRegression()),
])
plot_learning_curves(polynomial_regression, X, y)
plt.axis([0, 80, 0, 3]) # not shown
save_fig("learning_curves_plot") # not shown
plt.show() # not shown
###Output
Saving figure: learning_curves_plot
D:\handson-ml\images\training_linear_models\learning_curves_plot.png
###Markdown
Regularized models
###Code
from sklearn.linear_model import Ridge
np.random.seed(42)
m = 20
X = 3 * np.random.rand(m, 1)
y = 1 + 0.5 * X + np.random.randn(m, 1) / 1.5
X_new = np.linspace(0, 3, 100).reshape(100, 1)
def plot_model(model_class, polynomial, alphas, **model_kargs):
for alpha, style in zip(alphas, ("b-", "g--", "r:")):
model = model_class(alpha, **model_kargs) if alpha > 0 else LinearRegression()
if polynomial:
model = Pipeline([
("poly_features", PolynomialFeatures(degree=10, include_bias=False)),
("std_scaler", StandardScaler()),
("regul_reg", model),
])
model.fit(X, y)
y_new_regul = model.predict(X_new)
lw = 2 if alpha > 0 else 1
plt.plot(X_new, y_new_regul, style, linewidth=lw, label=r"$\alpha = {}$".format(alpha))
plt.plot(X, y, "b.", linewidth=3)
plt.legend(loc="upper left", fontsize=15)
plt.xlabel("$x_1$", fontsize=18)
plt.axis([0, 3, 0, 4])
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model(Ridge, polynomial=False, alphas=(0, 10, 100), random_state=42)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model(Ridge, polynomial=True, alphas=(0, 10**-5, 1), random_state=42)
save_fig("ridge_regression_plot")
plt.show()
from sklearn.linear_model import Ridge
ridge_reg = Ridge(alpha=1, solver="cholesky", random_state=42)
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
sgd_reg = SGDRegressor(max_iter=50, tol=-np.infty, penalty="l2", random_state=42)
sgd_reg.fit(X, y.ravel())
sgd_reg.predict([[1.5]])
ridge_reg = Ridge(alpha=1, solver="sag", random_state=42)
ridge_reg.fit(X, y)
ridge_reg.predict([[1.5]])
from sklearn.linear_model import Lasso
plt.figure(figsize=(8,4))
plt.subplot(121)
plot_model(Lasso, polynomial=False, alphas=(0, 0.1, 1), random_state=42)
plt.ylabel("$y$", rotation=0, fontsize=18)
plt.subplot(122)
plot_model(Lasso, polynomial=True, alphas=(0, 10**-7, 1), tol=1, random_state=42)
save_fig("lasso_regression_plot")
plt.show()
from sklearn.linear_model import Lasso
lasso_reg = Lasso(alpha=0.1)
lasso_reg.fit(X, y)
lasso_reg.predict([[1.5]])
from sklearn.linear_model import ElasticNet
elastic_net = ElasticNet(alpha=0.1, l1_ratio=0.5, random_state=42)
elastic_net.fit(X, y)
elastic_net.predict([[1.5]])
np.random.seed(42)
m = 100
X = 6 * np.random.rand(m, 1) - 3
y = 2 + X + 0.5 * X**2 + np.random.randn(m, 1)
X_train, X_val, y_train, y_val = train_test_split(X[:50], y[:50].ravel(), test_size=0.5, random_state=10)
poly_scaler = Pipeline([
("poly_features", PolynomialFeatures(degree=90, include_bias=False)),
("std_scaler", StandardScaler()),
])
X_train_poly_scaled = poly_scaler.fit_transform(X_train)
X_val_poly_scaled = poly_scaler.transform(X_val)
sgd_reg = SGDRegressor(max_iter=1,
tol=-np.infty,
penalty=None,
eta0=0.0005,
warm_start=True,
learning_rate="constant",
random_state=42)
n_epochs = 500
train_errors, val_errors = [], []
for epoch in range(n_epochs):
sgd_reg.fit(X_train_poly_scaled, y_train)
y_train_predict = sgd_reg.predict(X_train_poly_scaled)
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
train_errors.append(mean_squared_error(y_train, y_train_predict))
val_errors.append(mean_squared_error(y_val, y_val_predict))
best_epoch = np.argmin(val_errors)
best_val_rmse = np.sqrt(val_errors[best_epoch])
plt.annotate('Best model',
xy=(best_epoch, best_val_rmse),
xytext=(best_epoch, best_val_rmse + 1),
ha="center",
arrowprops=dict(facecolor='black', shrink=0.05),
fontsize=16,
)
best_val_rmse -= 0.03 # just to make the graph look better
plt.plot([0, n_epochs], [best_val_rmse, best_val_rmse], "k:", linewidth=2)
plt.plot(np.sqrt(val_errors), "b-", linewidth=3, label="Validation set")
plt.plot(np.sqrt(train_errors), "r--", linewidth=2, label="Training set")
plt.legend(loc="upper right", fontsize=14)
plt.xlabel("Epoch", fontsize=14)
plt.ylabel("RMSE", fontsize=14)
save_fig("early_stopping_plot")
plt.show()
from sklearn.base import clone
sgd_reg = SGDRegressor(max_iter=1, tol=-np.infty, warm_start=True, penalty=None,
learning_rate="constant", eta0=0.0005, random_state=42)
minimum_val_error = float("inf")
best_epoch = None
best_model = None
for epoch in range(1000):
sgd_reg.fit(X_train_poly_scaled, y_train) # continues where it left off
y_val_predict = sgd_reg.predict(X_val_poly_scaled)
val_error = mean_squared_error(y_val, y_val_predict)
if val_error < minimum_val_error:
minimum_val_error = val_error
best_epoch = epoch
best_model = clone(sgd_reg)
best_epoch, best_model
t1a, t1b, t2a, t2b = -1, 3, -1.5, 1.5
# ignoring bias term
t1s = np.linspace(t1a, t1b, 500)
t2s = np.linspace(t2a, t2b, 500)
t1, t2 = np.meshgrid(t1s, t2s)
T = np.c_[t1.ravel(), t2.ravel()]
Xr = np.array([[-1, 1], [-0.3, -1], [1, 0.1]])
yr = 2 * Xr[:, :1] + 0.5 * Xr[:, 1:]
J = (1/len(Xr) * np.sum((T.dot(Xr.T) - yr.T)**2, axis=1)).reshape(t1.shape)
N1 = np.linalg.norm(T, ord=1, axis=1).reshape(t1.shape)
N2 = np.linalg.norm(T, ord=2, axis=1).reshape(t1.shape)
t_min_idx = np.unravel_index(np.argmin(J), J.shape)
t1_min, t2_min = t1[t_min_idx], t2[t_min_idx]
t_init = np.array([[0.25], [-1]])
def bgd_path(theta, X, y, l1, l2, core = 1, eta = 0.1, n_iterations = 50):
path = [theta]
for iteration in range(n_iterations):
gradients = core * 2/len(X) * X.T.dot(X.dot(theta) - y) + l1 * np.sign(theta) + 2 * l2 * theta
theta = theta - eta * gradients
path.append(theta)
return np.array(path)
plt.figure(figsize=(12, 8))
for i, N, l1, l2, title in ((0, N1, 0.5, 0, "Lasso"), (1, N2, 0, 0.1, "Ridge")):
JR = J + l1 * N1 + l2 * N2**2
tr_min_idx = np.unravel_index(np.argmin(JR), JR.shape)
t1r_min, t2r_min = t1[tr_min_idx], t2[tr_min_idx]
levelsJ=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(J) - np.min(J)) + np.min(J)
levelsJR=(np.exp(np.linspace(0, 1, 20)) - 1) * (np.max(JR) - np.min(JR)) + np.min(JR)
levelsN=np.linspace(0, np.max(N), 10)
path_J = bgd_path(t_init, Xr, yr, l1=0, l2=0)
path_JR = bgd_path(t_init, Xr, yr, l1, l2)
path_N = bgd_path(t_init, Xr, yr, np.sign(l1)/3, np.sign(l2), core=0)
plt.subplot(221 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, J, levels=levelsJ, alpha=0.9)
plt.contour(t1, t2, N, levels=levelsN)
plt.plot(path_J[:, 0], path_J[:, 1], "w-o")
plt.plot(path_N[:, 0], path_N[:, 1], "y-^")
plt.plot(t1_min, t2_min, "rs")
plt.title(r"$\ell_{}$ penalty".format(i + 1), fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
if i == 1:
plt.xlabel(r"$\theta_1$", fontsize=20)
plt.ylabel(r"$\theta_2$", fontsize=20, rotation=0)
plt.subplot(222 + i * 2)
plt.grid(True)
plt.axhline(y=0, color='k')
plt.axvline(x=0, color='k')
plt.contourf(t1, t2, JR, levels=levelsJR, alpha=0.9)
plt.plot(path_JR[:, 0], path_JR[:, 1], "w-o")
plt.plot(t1r_min, t2r_min, "rs")
plt.title(title, fontsize=16)
plt.axis([t1a, t1b, t2a, t2b])
if i == 1:
plt.xlabel(r"$\theta_1$", fontsize=20)
save_fig("lasso_vs_ridge_plot")
plt.show()
###Output
Saving figure: lasso_vs_ridge_plot
D:\handson-ml\images\training_linear_models\lasso_vs_ridge_plot.png
###Markdown
Logistic regression
###Code
t = np.linspace(-10, 10, 100)
sig = 1 / (1 + np.exp(-t))
plt.figure(figsize=(9, 3))
plt.plot([-10, 10], [0, 0], "k-")
plt.plot([-10, 10], [0.5, 0.5], "k:")
plt.plot([-10, 10], [1, 1], "k:")
plt.plot([0, 0], [-1.1, 1.1], "k-")
plt.plot(t, sig, "b-", linewidth=2, label=r"$\sigma(t) = \frac{1}{1 + e^{-t}}$")
plt.xlabel("t")
plt.legend(loc="upper left", fontsize=20)
plt.axis([-10, 10, -0.1, 1.1])
save_fig("logistic_function_plot")
plt.show()
from sklearn import datasets
iris = datasets.load_iris()
list(iris.keys())
print(iris.DESCR)
X = iris["data"][:, 3:] # petal width
y = (iris["target"] == 2).astype(int)  # 1 if Iris-Virginica, else 0 (np.int is removed in recent NumPy)
from sklearn.linear_model import LogisticRegression
log_reg = LogisticRegression(solver="liblinear", random_state=42)
log_reg.fit(X, y)
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica")
plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica")
###Output
_____no_output_____
###Markdown
The figure in the book is actually a bit fancier:
###Code
X_new = np.linspace(0, 3, 1000).reshape(-1, 1)
y_proba = log_reg.predict_proba(X_new)
decision_boundary = X_new[y_proba[:, 1] >= 0.5][0]
plt.figure(figsize=(8, 3))
plt.plot(X[y==0], y[y==0], "bs")
plt.plot(X[y==1], y[y==1], "g^")
plt.plot([decision_boundary, decision_boundary], [-1, 2], "k:", linewidth=2)
plt.plot(X_new, y_proba[:, 1], "g-", linewidth=2, label="Iris-Virginica")
plt.plot(X_new, y_proba[:, 0], "b--", linewidth=2, label="Not Iris-Virginica")
plt.text(decision_boundary+0.02, 0.15, "Decision boundary", fontsize=14, color="k", ha="center")
plt.arrow(decision_boundary, 0.08, -0.3, 0, head_width=0.05, head_length=0.1, fc='b', ec='b')
plt.arrow(decision_boundary, 0.92, 0.3, 0, head_width=0.05, head_length=0.1, fc='g', ec='g')
plt.xlabel("Petal width (cm)", fontsize=14)
plt.ylabel("Probability", fontsize=14)
plt.legend(loc="center left", fontsize=14)
plt.axis([0, 3, -0.02, 1.02])
save_fig("logistic_regression_plot")
plt.show()
decision_boundary
log_reg.predict([[1.7], [1.5]])
from sklearn.linear_model import LogisticRegression
X = iris["data"][:, (2, 3)] # petal length, petal width
y = (iris["target"] == 2).astype(int)
log_reg = LogisticRegression(solver="liblinear", C=10**10, random_state=42)
log_reg.fit(X, y)
x0, x1 = np.meshgrid(
np.linspace(2.9, 7, 500).reshape(-1, 1),
np.linspace(0.8, 2.7, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = log_reg.predict_proba(X_new)
plt.figure(figsize=(10, 4))
plt.plot(X[y==0, 0], X[y==0, 1], "bs")
plt.plot(X[y==1, 0], X[y==1, 1], "g^")
zz = y_proba[:, 1].reshape(x0.shape)
contour = plt.contour(x0, x1, zz, cmap=plt.cm.brg)
left_right = np.array([2.9, 7])
boundary = -(log_reg.coef_[0][0] * left_right + log_reg.intercept_[0]) / log_reg.coef_[0][1]
plt.clabel(contour, inline=1, fontsize=12)
plt.plot(left_right, boundary, "k--", linewidth=3)
plt.text(3.5, 1.5, "Not Iris-Virginica", fontsize=14, color="b", ha="center")
plt.text(6.5, 2.3, "Iris-Virginica", fontsize=14, color="g", ha="center")
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.axis([2.9, 7, 0.8, 2.7])
save_fig("logistic_regression_contour_plot")
plt.show()
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
softmax_reg = LogisticRegression(multi_class="multinomial",solver="lbfgs", C=10, random_state=42)
softmax_reg.fit(X, y)
x0, x1 = np.meshgrid(
np.linspace(0, 8, 500).reshape(-1, 1),
np.linspace(0, 3.5, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
y_proba = softmax_reg.predict_proba(X_new)
y_predict = softmax_reg.predict(X_new)
zz1 = y_proba[:, 1].reshape(x0.shape)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica")
plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor")
plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa")
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap)
contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)
plt.clabel(contour, inline=1, fontsize=12)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="center left", fontsize=14)
plt.axis([0, 7, 0, 3.5])
save_fig("softmax_regression_contour_plot")
plt.show()
softmax_reg.predict([[5, 2]])
softmax_reg.predict_proba([[5, 2]])
###Output
_____no_output_____
###Markdown
Exercise solutions 1. to 11. See appendix A. 12. Batch Gradient Descent with early stopping for Softmax Regression (without using Scikit-Learn) Let's start by loading the data. We will just reuse the Iris dataset we loaded earlier.
###Code
X = iris["data"][:, (2, 3)] # petal length, petal width
y = iris["target"]
###Output
_____no_output_____
###Markdown
We need to add the bias term for every instance ($x_0 = 1$):
###Code
X_with_bias = np.c_[np.ones([len(X), 1]), X]
###Output
_____no_output_____
###Markdown
And let's set the random seed so the output of this exercise solution is reproducible:
###Code
np.random.seed(2042)
###Output
_____no_output_____
###Markdown
The easiest option to split the dataset into a training set, a validation set and a test set would be to use Scikit-Learn's `train_test_split()` function, but the point of this exercise is to try to understand the algorithms by implementing them manually. So here is one possible implementation:
###Code
test_ratio = 0.2
validation_ratio = 0.2
total_size = len(X_with_bias)
test_size = int(total_size * test_ratio)
validation_size = int(total_size * validation_ratio)
train_size = total_size - test_size - validation_size
rnd_indices = np.random.permutation(total_size)
X_train = X_with_bias[rnd_indices[:train_size]]
y_train = y[rnd_indices[:train_size]]
X_valid = X_with_bias[rnd_indices[train_size:-test_size]]
y_valid = y[rnd_indices[train_size:-test_size]]
X_test = X_with_bias[rnd_indices[-test_size:]]
y_test = y[rnd_indices[-test_size:]]
###Output
_____no_output_____
###Markdown
The targets are currently class indices (0, 1 or 2), but we need target class probabilities to train the Softmax Regression model. Each instance will have target class probabilities equal to 0.0 for all classes except for the target class which will have a probability of 1.0 (in other words, the vector of class probabilities for any given instance is a one-hot vector). Let's write a small function to convert the vector of class indices into a matrix containing a one-hot vector for each instance:
###Code
def to_one_hot(y):
n_classes = y.max() + 1
m = len(y)
Y_one_hot = np.zeros((m, n_classes))
Y_one_hot[np.arange(m), y] = 1
return Y_one_hot
###Output
_____no_output_____
###Markdown
Let's test this function on the first 10 instances:
###Code
y_train[:10]
to_one_hot(y_train[:10])
###Output
_____no_output_____
###Markdown
Looks good, so let's create the target class probabilities matrices for the training set, the validation set and the test set:
###Code
Y_train_one_hot = to_one_hot(y_train)
Y_valid_one_hot = to_one_hot(y_valid)
Y_test_one_hot = to_one_hot(y_test)
###Output
_____no_output_____
###Markdown
Now let's implement the Softmax function. Recall that it is defined by the following equation:$\sigma\left(\mathbf{s}(\mathbf{x})\right)_k = \dfrac{\exp\left(s_k(\mathbf{x})\right)}{\sum\limits_{j=1}^{K}{\exp\left(s_j(\mathbf{x})\right)}}$
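The implementation in the next cell follows this equation directly. As a side note, a numerically more robust variant (a sketch, not the version used below) subtracts the row-wise maximum from the logits before exponentiating; this leaves the result unchanged but avoids overflow for large scores:
```Python
def softmax_stable(logits):
    shifted = logits - np.max(logits, axis=1, keepdims=True)  # does not change the ratios
    exps = np.exp(shifted)
    return exps / np.sum(exps, axis=1, keepdims=True)
```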
###Code
def softmax(logits):
exps = np.exp(logits)
exp_sums = np.sum(exps, axis=1, keepdims=True)
return exps / exp_sums
###Output
_____no_output_____
###Markdown
We are almost ready to start training. Let's define the number of inputs and outputs:
###Code
n_inputs = X_train.shape[1] # == 3 (2 features plus the bias term)
n_outputs = len(np.unique(y_train)) # == 3 (3 iris classes)
###Output
_____no_output_____
###Markdown
Now here comes the hardest part: training! Theoretically, it's simple: it's just a matter of translating the math equations into Python code. But in practice, it can be quite tricky: in particular, it's easy to mix up the order of the terms, or the indices. You can even end up with code that looks like it's working but is actually not computing exactly the right thing. When unsure, you should write down the shape of each term in the equation and make sure the corresponding terms in your code match closely. It can also help to evaluate each term independently and print them out. The good news is that you won't have to do this every day, since all this is well implemented by Scikit-Learn, but it will help you understand what's going on under the hood. So the equations we will need are the cost function: $J(\mathbf{\Theta}) = - \dfrac{1}{m}\sum\limits_{i=1}^{m}\sum\limits_{k=1}^{K}{y_k^{(i)}\log\left(\hat{p}_k^{(i)}\right)}$ And the equation for the gradients: $\nabla_{\mathbf{\theta}^{(k)}} \, J(\mathbf{\Theta}) = \dfrac{1}{m} \sum\limits_{i=1}^{m}{ \left ( \hat{p}^{(i)}_k - y_k^{(i)} \right ) \mathbf{x}^{(i)}}$ Note that $\log\left(\hat{p}_k^{(i)}\right)$ may not be computable if $\hat{p}_k^{(i)} = 0$, so we will add a tiny value $\epsilon$ inside the logarithm, i.e. compute $\log\left(\hat{p}_k^{(i)} + \epsilon\right)$, to avoid getting `nan` values.
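As a concrete illustration of the shape-checking advice above, here is a small sketch (the randomly initialized `Theta_check` is an assumption, mirroring the initialization used in the next cell):
```Python
Theta_check = np.random.randn(n_inputs, n_outputs)                # (3, 3)
logits_check = X_train.dot(Theta_check)                           # (m, 3)
Y_proba_check = softmax(logits_check)                             # (m, 3)
error_check = Y_proba_check - Y_train_one_hot                     # (m, 3)
gradients_check = 1 / len(X_train) * X_train.T.dot(error_check)   # (3, 3), same shape as Theta_check
for name, arr in (("logits", logits_check), ("Y_proba", Y_proba_check),
                  ("error", error_check), ("gradients", gradients_check)):
    print(name, arr.shape)
```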
###Code
eta = 0.01
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
error = Y_proba - Y_train_one_hot
if iteration % 500 == 0:
print(iteration, loss)
gradients = 1/m * X_train.T.dot(error)
Theta = Theta - eta * gradients
###Output
0 5.446205811872683
500 0.8350062641405651
1000 0.6878801447192402
1500 0.6012379137693313
2000 0.5444496861981873
2500 0.5038530181431525
3000 0.4729228972192248
3500 0.4482424418895776
4000 0.4278651093928793
4500 0.41060071429187134
5000 0.3956780375390373
###Markdown
And that's it! The Softmax model is trained. Let's look at the model parameters:
###Code
Theta
###Output
_____no_output_____
###Markdown
Let's make predictions for the validation set and check the accuracy score:
###Code
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
###Output
_____no_output_____
###Markdown
Well, this model looks pretty good. For the sake of the exercise, let's add a bit of $\ell_2$ regularization. The following training code is similar to the one above, but the loss now has an additional $\ell_2$ penalty, and the gradients have the proper additional term (note that we don't regularize the first element of `Theta` since this corresponds to the bias term). Also, let's try increasing the learning rate `eta`.
###Code
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1 # regularization hyperparameter
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
error = Y_proba - Y_train_one_hot
if iteration % 500 == 0:
print(iteration, loss)
gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
Theta = Theta - eta * gradients
###Output
0 6.629842469083912
500 0.5339667976629506
1000 0.503640075014894
1500 0.49468910594603216
2000 0.4912968418075477
2500 0.48989924700933296
3000 0.48929905984511984
3500 0.48903512443978603
4000 0.4889173621830818
4500 0.4888643337449303
5000 0.4888403120738818
###Markdown
Because of the additional $\ell_2$ penalty, the loss seems greater than earlier, but perhaps this model will perform better? Let's find out:
###Code
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
###Output
_____no_output_____
###Markdown
Cool, perfect accuracy! We probably just got lucky with this validation set, but still, it's pleasant. Now let's add early stopping. For this we just need to measure the loss on the validation set at every iteration and stop when the error starts growing.
###Code
eta = 0.1
n_iterations = 5001
m = len(X_train)
epsilon = 1e-7
alpha = 0.1 # regularization hyperparameter
best_loss = np.infty
Theta = np.random.randn(n_inputs, n_outputs)
for iteration in range(n_iterations):
logits = X_train.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_train_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
error = Y_proba - Y_train_one_hot
gradients = 1/m * X_train.T.dot(error) + np.r_[np.zeros([1, n_outputs]), alpha * Theta[1:]]
Theta = Theta - eta * gradients
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
xentropy_loss = -np.mean(np.sum(Y_valid_one_hot * np.log(Y_proba + epsilon), axis=1))
l2_loss = 1/2 * np.sum(np.square(Theta[1:]))
loss = xentropy_loss + alpha * l2_loss
if iteration % 500 == 0:
print(iteration, loss)
if loss < best_loss:
best_loss = loss
else:
print(iteration - 1, best_loss)
print(iteration, loss, "early stopping!")
break
logits = X_valid.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_valid)
accuracy_score
###Output
_____no_output_____
###Markdown
Still perfect, but faster. Now let's plot the model's predictions on the whole dataset:
###Code
x0, x1 = np.meshgrid(
np.linspace(0, 8, 500).reshape(-1, 1),
np.linspace(0, 3.5, 200).reshape(-1, 1),
)
X_new = np.c_[x0.ravel(), x1.ravel()]
X_new_with_bias = np.c_[np.ones([len(X_new), 1]), X_new]
logits = X_new_with_bias.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
zz1 = Y_proba[:, 1].reshape(x0.shape)
zz = y_predict.reshape(x0.shape)
plt.figure(figsize=(10, 4))
plt.plot(X[y==2, 0], X[y==2, 1], "g^", label="Iris-Virginica")
plt.plot(X[y==1, 0], X[y==1, 1], "bs", label="Iris-Versicolor")
plt.plot(X[y==0, 0], X[y==0, 1], "yo", label="Iris-Setosa")
from matplotlib.colors import ListedColormap
custom_cmap = ListedColormap(['#fafab0','#9898ff','#a0faa0'])
plt.contourf(x0, x1, zz, cmap=custom_cmap)
contour = plt.contour(x0, x1, zz1, cmap=plt.cm.brg)
plt.clabel(contour, inline=1, fontsize=12)
plt.xlabel("Petal length", fontsize=14)
plt.ylabel("Petal width", fontsize=14)
plt.legend(loc="upper left", fontsize=14)
plt.axis([0, 7, 0, 3.5])
plt.show()
###Output
_____no_output_____
###Markdown
And now let's measure the final model's accuracy on the test set:
###Code
logits = X_test.dot(Theta)
Y_proba = softmax(logits)
y_predict = np.argmax(Y_proba, axis=1)
accuracy_score = np.mean(y_predict == y_test)
accuracy_score
###Output
_____no_output_____ |
notebooks/7_WordNet.ipynb | ###Markdown
Accessing WordNet through the NLTK interface >- [Accessing WordNet](Accessing-WordNet)>>>- [WN-based Semantic Similarity](WN-based-Semantic-Similarity) --- Accessing WordNet WordNet 3.0 can be accessed from NLTK by calling the appropriate NLTK corpus reader
###Code
from nltk.corpus import wordnet as wn
###Output
_____no_output_____
###Markdown
Retrieving Synsets The easiest way to retrieve synsets is by submitting the relevant lemma to the `synsets()` method, that returns the list of all the synsets containing it:
###Code
print(wn.synsets('dog'))
###Output
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01'), Synset('chase.v.01')]
###Markdown
The optional parameter `pos` allows you to constrain the search to a given part of speech - available options: `wn.NOUN`, `wn.VERB`, `wn.ADJ`, `wn.ADV`
###Code
# let's ignore the verbal synsets from our previous results
print(wn.synsets('dog', pos = wn.NOUN))
###Output
[Synset('dog.n.01'), Synset('frump.n.01'), Synset('dog.n.03'), Synset('cad.n.01'), Synset('frank.n.02'), Synset('pawl.n.01'), Synset('andiron.n.01')]
###Markdown
You can use the `synset()` method together with the notation `lemma.pos.number` (e.g. `dog.n.01`) to access a given synset
###Code
# retrive the gloss of a given synset
wn.synset('dog.n.01').definition()
# let's see some examples
wn.synset('dog.n.01').examples()
###Output
_____no_output_____
###Markdown
Did anyone notice something weird in these results? Why did I get `frank.n.02`?
###Code
# let's retrieve the lemmas associated with a given synset
wn.synset('frank.n.02').lemmas()
###Output
_____no_output_____
###Markdown
What's the definition?
###Code
wn.synset('frank.n.02').definition()
###Output
_____no_output_____
###Markdown
The notation `lemma.pos.number` is used to identify the **name** of the synset, that is, the unique id that is used to store it in the semantic resources - note that it is different from the notation used to refer to synset lemmas, e.g. `frank.n.02.frank`
###Code
wn.synset('frank.n.02').name()
###Output
_____no_output_____
###Markdown
Applied to our original query...
###Code
# synsets for a given word
wn.synsets('dog', pos = wn.NOUN)
# synonyms for a particular meaning of a word
wn.synset('dog.n.01').lemmas()
wn.synset('dog.n.01').definition()
wn.synset('dog.n.03').lemmas()
wn.synset('dog.n.03').definition()
###Output
_____no_output_____
###Markdown
**Q. How are the senses in WordNet ordered?** A. *WordNet senses are ordered using sparse data from semantically tagged text. The order of the senses is given simply so that some of the most common uses are listed above others (and those for which there is no data are randomly ordered). The sense numbers and ordering of senses in WordNet should be considered random for research purposes.* (source: the [FAQ section](https://wordnet.princeton.edu/frequently-asked-questions) of the official WordNet web page) Finally, the method `all_synsets()` allows you to retrieve all the synsets in the resource:
###Code
for synset in list(wn.all_synsets())[:10]:
print(synset)
###Output
Synset('able.a.01')
Synset('unable.a.01')
Synset('abaxial.a.01')
Synset('adaxial.a.01')
Synset('acroscopic.a.01')
Synset('basiscopic.a.01')
Synset('abducent.a.01')
Synset('adducent.a.01')
Synset('nascent.a.01')
Synset('emergent.s.02')
###Markdown
... again, you can use the optional `pos` parameter to constrain your search:
###Code
for synset in list(wn.all_synsets(wn.ADV))[:10]:
print(synset)
###Output
Synset('a_cappella.r.01')
Synset('ad.r.01')
Synset('ce.r.01')
Synset('bc.r.01')
Synset('bce.r.01')
Synset('horseback.r.01')
Synset('barely.r.01')
Synset('just.r.06')
Synset('hardly.r.02')
Synset('anisotropically.r.01')
###Markdown
Retrieving Semantic and Lexical Relations the Nouns sub-net NLTK makes it easy to explore the WordNet hierarchy. The `hyponyms()` method allows you to retrieve all the immediate hyponyms of our target synset
###Code
wn.synset('dog.n.01').hyponyms()
###Output
_____no_output_____
###Markdown
to move in the opposite direction (i.e. towards more general synsets) we can use:- either the `hypernyms()` method to retrieve the immediate hypernym (or hypernyms in the following case)
###Code
wn.synset('dog.n.01').hypernyms()
###Output
_____no_output_____
###Markdown
- or the `hypernym_paths()` method to retrieve the full hyperonymic chain **up to the root node**
###Code
wn.synset('dog.n.01').hypernym_paths()
###Output
_____no_output_____
###Markdown
Another important semantic relation for the nouns sub-net is **meronymy**, that links an object (holonym) with its parts (meronym). There are three semantic relations of this kind in WordNet:- **Part meronymy**: the relation between an object and its separable components:
###Code
wn.synset('tree.n.01').part_meronyms()
###Output
_____no_output_____
###Markdown
- **Substance meronymy**: the relation between an object and the substance it is made of
###Code
wn.synset('tree.n.01').substance_meronyms()
###Output
_____no_output_____
###Markdown
- **Member meronymy**: the relation between a group and its members
###Code
wn.synset('tree.n.01').member_holonyms()
###Output
_____no_output_____
###Markdown
**Instances** do not have hypernyms, but **instance_hypernyms**:
###Code
# amsterdam is a national capital vs *Amsterdam is a kind of a national capital
wn.synset('amsterdam.n.01').instance_hypernyms()
wn.synset('amsterdam.n.01').hypernyms()
###Output
_____no_output_____
###Markdown
the Verbs sub-net Moving in the Verbs sub-net, the **troponymy** relation can be navigated by using the same methods used to navigate the nominal hyperonymic relations
###Code
wn.synset('sleep.v.01').hypernyms()
wn.synset('sleep.v.01').hypernym_paths()
###Output
_____no_output_____
###Markdown
The other central relation in the organization of the verbs is the **entailment** one:
###Code
wn.synset('eat.v.01').entailments()
###Output
_____no_output_____
###Markdown
Adjective clusters Adjectives are organized in clusters of **satellite** adjectives (labeled as `lemma.s.number`) connected to a central adjective (labeled as `lemma.a.number`) by means of the **similar_to** relation
###Code
# a satellite adjective is linked just to one central adjective
wn.synset('quick.s.01').similar_tos()
# a central adjective is linked to many satellite adjectives
wn.synset('fast.a.01').similar_tos()
###Output
_____no_output_____
###Markdown
The **lemmas** of the central adjective of each cluster, moreover, are connected to their **antonyms**, that is to lemmas that have the opposite meaning
###Code
wn.lemma('fast.a.01.fast').antonyms()
###Output
_____no_output_____
###Markdown
But take note:
###Code
try:
wn.synset('fast.a.01').antonyms()
except AttributeError:
print("antonymy is a LEXICAL relation, it cannot involve synsets")
###Output
antonymy is a LEXICAL relation, it cannot involve synsets
###Markdown
WN-based Semantic Similarity Simulating the human ability to estimate semantic distances between concepts is crucial for:- Psycholinguistics: for a long time the study of human semantic memory has been tied to the study of concept similarity- Natural Language Processing: for any task that requires some sort of semantic comprehension Classes of Semantic Distance Measures Relatedness- two concepts are related if **a relation of any sort** holds between them- information can be extracted from: - semantic networks - dictionaries - corpora Similarity- it is a special case of relatedness- the relation holding between two concepts **by virtue of their ontological status**, i.e. by virtue of their taxonomic positions (Resnik, 1995) - car - bicycle - \*car - fuel- information can be extracted from - hierarchical networks - taxonomies WordNet-based Similarity Measures
###Code
dog = wn.synset('dog.n.01')
cat = wn.synset('cat.n.01')
hit = wn.synset('hit.v.01')
slap = wn.synset('slap.v.01')
fish = wn.synset('fish.n.01')
bird = wn.synset('bird.n.01')
###Output
_____no_output_____
###Markdown
Path Length-based measures These measures are based on $pathlen(c_1, c_2)$: - i.e. the number of arcs in the shortest path connecting two nodes $c_1$ and $c_2$. You can use the `shortest_path_distance()` method to count the number of arcs
###Code
fish.shortest_path_distance(bird)
dog.shortest_path_distance(cat)
###Output
_____no_output_____
###Markdown
When two nodes belong to different sub-nets, it does not return any value...
###Code
print(dog.shortest_path_distance(hit))
###Output
None
###Markdown
... unless you simulate the existence of a **dummy root** by setting the `simulate_root` option to `True`
###Code
print(dog.shortest_path_distance(hit, simulate_root = True))
###Output
12
###Markdown
This is quite handy, especially when working on the **verb sub-net**, which **does not have a unique root node** (differently from what happens in the nouns sub-net)
###Code
print(hit.shortest_path_distance(slap))
print(hit.shortest_path_distance(slap, simulate_root = True))
###Output
6
###Markdown
**Simple Path Length**:$$sim_{simple}(c_1,c_2) = \frac{1}{pathlen(c_1,c_2) + 1}$$ use the `path_similarity()` method to calculate this measure
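As a quick sanity check (a sketch using only the methods shown above), the value returned by `path_similarity()` should correspond to $\frac{1}{pathlen + 1}$ computed from `shortest_path_distance()`:
```Python
print(dog.path_similarity(cat), 1 / (dog.shortest_path_distance(cat) + 1))
```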
###Code
dog.path_similarity(cat)
###Output
_____no_output_____
###Markdown
**Leacock & Chodorow (1998)**$$sim_{L\&C}(c_1,c_2) = -log \left(\frac{pathlen(c_1,c_2)}{2 \times D}\right)$$ where $D$ is the maximum depth of the taxonomy- as a consequence, $2 \times D$ is the maximum possible pathlen
###Code
dog.lch_similarity(cat)
###Output
_____no_output_____
###Markdown
you cannot compare synsets belonging to different POS
###Code
try:
dog.lch_similarity(hit)
except Exception as e:
print(e)
###Output
Computing the lch similarity requires Synset('dog.n.01') and Synset('hit.v.01') to have the same part of speech.
###Markdown
Wu & Palmer (1994) This measure is based on the notion of **Least Common Subsumer**- i.e. the lowest node that dominates both synsets, e.g. `LCS({fish}, {bird}) = {vertebrate, craniate}`  NLTK allows you to use the `lowest_common_hypernyms()` method to identify the Least Common Subsumer of two nodes
###Code
dog.lowest_common_hypernyms(cat)
###Output
_____no_output_____
###Markdown
If necessary, use the `simulate_root` option to simulate the existence of a dummy root:
###Code
print(hit.lowest_common_hypernyms(slap, simulate_root = True))
###Output
[Synset('*ROOT*')]
###Markdown
Wu & Palmer (1994) proposed to measure the semantic similarity between concepts by contrasting the depth of the LCS with the depths of the nodes: $$sim_{W\&P}(c_1, c_2) = \frac{2 \times depth(LCS(c_1, c_2))}{depth(c_1) + depth(c_2)}$$ where $depth(s)$ is the number of arcs between the root node and the node $s$. The minimum and the maximum depths of each node can be calculated with the `min_depth()` and `max_depth()` methods
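As a rough sketch of the formula (an approximation: NLTK's `wup_similarity()` applies its own depth convention, counting a virtual root node, so the value may not match exactly), the ingredients can be combined by hand:
```Python
lcs = dog.lowest_common_hypernyms(cat)[0]
manual_wup = 2 * lcs.max_depth() / (dog.max_depth() + cat.max_depth())
print(lcs, manual_wup)
```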
###Code
print(dog.min_depth(), dog.max_depth())
###Output
8 13
###Markdown
...and the `wup_similarity()` method (named after the authors) to calculate this measure (option `simulate_root` available)
###Code
print(dog.wup_similarity(cat))
###Output
0.8571428571428571
###Markdown
Information Content-based measures- the **Information Content** of a concept $C$ is the probability of a randomly selected word to be an instance of the concept $C$ (i.e. the synset $c$ or one of its hyponyms) $$IC(C) = -log(P(C))$$ - Following Resnik (1995), corpus frequencies can be used to estimate this probability $$P(C) = \frac{freq(C)}{N} = \frac{\sum_{w \in words(c)}count(w)}{N}$$ - $words(c)$ = set of words that are hierarchically included by $C$ (i.e. its hyponyms)- N = number of corpus tokens for which there is a representation in WordNet A fragment of the WN nominal hierarchy, in which each node has been labeled with its $P(C)$ (from Lin, 1998)  **Resnik (1995)** $$sim_{resnik}(c_1,c_2) = IC(LCS(c_1,c_2)) = -log(P(LCS(c_1,c_2)))$$ Several Information Content dictionaries are available in NLTK...
###Code
from nltk.corpus import wordnet_ic
# the IC estimated from the brown corpus
brown_ic = wordnet_ic.ic('ic-brown.dat')
# the IC estimated from the semcor
semcor_ic = wordnet_ic.ic('ic-semcor.dat')
###Output
_____no_output_____
###Markdown
... or it can be estimated from an available corpus
###Code
from nltk.corpus import genesis
genesis_ic = wn.ic(genesis, False, 0.0)
###Output
_____no_output_____
###Markdown
Note that the value of the Resnik measure depends on the corpus used to estimate the information content
###Code
print(dog.res_similarity(cat, ic = brown_ic))
print(dog.res_similarity(cat, ic = semcor_ic))
print(dog.res_similarity(cat, ic = genesis_ic))
###Output
7.911666509036577
7.2549003421277245
7.204023991374837
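###Markdown
As a quick sanity check — an illustrative addition, not part of the original notebook — the Resnik score should simply be the information content of the Least Common Subsumer, which can be looked up with NLTK's `information_content()` helper (the `lcs` variable below is introduced here for illustration):
###Code
from nltk.corpus.reader.wordnet import information_content
lcs = dog.lowest_common_hypernyms(cat)[0]
# should match dog.res_similarity(cat, ic=brown_ic) printed above
print(lcs, information_content(lcs, brown_ic))
###Output
_____no_output_____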
###Markdown
**Lin (1998)** $$sim_{lin}(c_1,c_2) = \frac{log(P(common(c_1,c_2)))}{log(P(description(c_1,c_2)))} = \frac{2 \times IC(LCS(c_1,c_2))}{IC(c_1) + IC(c_2)}$$ - $common(c_1,c_2)$ = the information that is common between $c_1$ and $c_2$- $description(c_1,c_2)$ = the information that is needed to describe $c_1$ and $c_2$
###Code
print(dog.lin_similarity(cat, ic = brown_ic))
print(dog.lin_similarity(cat, ic = semcor_ic))
print(dog.lin_similarity(cat, ic = genesis_ic))
###Output
0.8768009843733973
0.8863288628086228
0.8043806652422293
###Markdown
**Jiang & Conrath (1997)** $$sim_{J\&C}(c_1,c_2) = \frac{1}{dist(c_1,c_2)} = \frac{1}{IC(c_1) + IC(c_2) - 2 \times IC(LCS(c_1, c_2))}$$
###Code
print(dog.jcn_similarity(cat, ic = brown_ic))
print(dog.jcn_similarity(cat, ic = semcor_ic))
print(dog.jcn_similarity(cat, ic = genesis_ic))
###Output
0.4497755285516739
0.537382154955756
0.28539390848096946
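###Markdown
To wrap up, a small summary sketch (an addition, not part of the original notebook): the path-based and IC-based measures can be printed side by side for the same pair of synsets, which makes their different scales easy to see. The `measures` dictionary below is introduced purely for illustration:
###Code
measures = {'path': lambda a, b: a.path_similarity(b),
            'lch': lambda a, b: a.lch_similarity(b),
            'wup': lambda a, b: a.wup_similarity(b),
            'res (brown)': lambda a, b: a.res_similarity(b, ic=brown_ic),
            'lin (brown)': lambda a, b: a.lin_similarity(b, ic=brown_ic),
            'jcn (brown)': lambda a, b: a.jcn_similarity(b, ic=brown_ic)}
for name, fn in measures.items():
    print(name, fn(dog, cat))
###Output
_____no_output_____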
|
tuning_google_colab.ipynb | ###Markdown
Create the data
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
try:
    from numba import jit, prange
except ImportError:
    # numba is optional here since @jit is commented out; plain range behaves the same
    prange = range
# @jit(nopython=True, parallel=True, fastmath=True)
def make_errors(errors, errorsi):
    for n in prange(0, 12 + 1):
        errors[n] = (fracs - eq_temps[n]) / eq_temps[n]
        errorsi[n] = (fracsi - eq_temps[12 - n]) / eq_temps[12 - n]
eq_temps = np.zeros(13, dtype=np.longdouble)
for n in range(0, 13):
eq_temps[n] = 2 ** (n / 12)
eq_temps
max_num = 1000 * 1
%%time
a_s = np.arange(1, max_num + 1, dtype=np.int32)
b_s = 1 / a_s
fracs = np.multiply(a_s.reshape(-1, 1), b_s.reshape(1, -1), dtype=np.float64) # change to float64 or float128 to gain precision
fracsi = 2 / fracs
%%time
ones = np.ones(max_num, dtype=np.int16)
a_c = np.multiply(a_s.reshape(-1, 1), ones.reshape(1, -1),
dtype=np.int16).reshape((max_num * max_num),)
b_r = np.multiply(a_s.reshape(1, -1), ones.reshape(-1, 1),
dtype=np.int16).reshape((max_num * max_num),)
del ones
%%time
fracs = fracs.reshape((max_num * max_num),)
fracsi = fracsi.reshape((max_num * max_num),)
%%time
errors = [[] for i in range(13)]
errorsi = [[] for i in range(13)]
make_errors(errors, errorsi)
%%time
errors = np.concatenate(errors)
errorsi = np.concatenate(errorsi)
%%time
ns = list()
for i in range(13):
ns.append(i * np.ones(max_num * max_num, dtype=np.int8))
ns = np.concatenate(ns)
%%time
a_c = np.tile(a_c, reps=13)
b_r = np.tile(b_r, reps=13)
%%time
fracs = np.tile(fracs, reps=13)
fracsi = np.tile(fracsi, reps=13)
# The number of rows in a dataframe of the values
f'{max_num * max_num * 13:,}'
# %%time
## (Takes up way too much RAM)
# df = pd.DataFrame({'a_s': a_c, 'b_s': b_r,'fracs': fracs, 'fracsi': fracsi,
# 'n': ns, 'errors': errors, 'errorsi': errorsi})
###Output
_____no_output_____
###Markdown
Iterate through and delete each time to minimize RAM
###Code
%%time
df = pd.DataFrame({'a_s': a_c, 'b_s': b_r,'fracs': fracs})
del a_c, b_r, fracs
%%time
df_temp = pd.DataFrame({'fracsi': fracsi, 'n': ns})
del fracsi, ns
%%time
df = pd.concat([df, df_temp], axis=1)
del df_temp
%%time
df_temp = pd.DataFrame({'errors': errors})
del errors
%%time
df = pd.concat([df, df_temp], axis=1)
del df_temp
%%time
df_temp = pd.DataFrame({'errorsi': errorsi})
del errorsi
%%time
df = pd.concat([df, df_temp], axis=1)
del df_temp
###Output
_____no_output_____
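###Markdown
A quick check — this cell is an addition for illustration, not part of the original run — confirms how much RAM the assembled frame actually takes, column by column:
###Code
print(df.dtypes)
print(df.memory_usage(deep=True) / 1024**2)   # MiB per column
print(f'total: {df.memory_usage(deep=True).sum() / 1024**3:.2f} GiB')
###Output
_____no_output_____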
###Markdown
Analysis
###Code
df['abs_errors'] = abs(df.errors)
df['abs_errorsi'] = abs(df.errorsi)
%%time
max_error = 0.15
# min_error was 0.1
min_error = 0
dfs = df.loc[(min_error <= abs(df.errors)) & (abs(df.errors) <= max_error)]
dfs.head()
# df.loc[np.gcd(df.a_s, df.b_s) == 1]
dfs = dfs.loc[np.gcd(dfs.a_s, dfs.b_s) == 1]
plt.scatter(dfs.a_s, dfs.b_s)
dfs.loc[dfs.n == 2].sort_values(by='abs_errors')
dfs.loc[dfs.n == 7].sort_values(by='abs_errors')
2 ** (7/12)
3 / 2
###Output
_____no_output_____
###Markdown
Fork analysis
###Code
# the pandas SettingWithCopy warnings here are false positives
dfs['mid'] = (1.056 * abs(dfs.errors) - (5 / 1000))
dfs['fork'] = dfs.errorsi > dfs.mid
dfs['fork'] = dfs.fork.astype('int8')
dfs = dfs.drop(columns=['mid'])
# abserrors = abs(dfs.errors).tolist()
# abserrorsi = abs(dfs.errorsi).tolist()
# plt.scatter(abserrors, abserrorsi)
dfs.corr()
dfs.shape
dfs.fork.value_counts()
21356665 / (10529625 + 21356665)
###Output
_____no_output_____
###Markdown
It turns out that the fork of the inverse error (higher or lower branch) is completely determined by the sign of the original error: when the starting fraction lies below the equal temperament value (negative original error), the inverse fraction overshoots its equal temperament value by a greater amount than it does when the starting fraction lies above it.
###Code
(1 - ((np.sign(dfs.errors) + 1) / 2).astype(np.int8) == dfs.fork).unique()
###Output
_____no_output_____
###Markdown
The error and the inverse error always have opposite signs.
###Code
(np.sign(dfs.errors) == np.sign(dfs.errorsi)).unique()
plt.scatter(dfs.errors, dfs.errorsi)
x = np.arange(0.1, 0.3, 0.001)
y_low = (13 / 16) * x + (7 / 800)
y_high = (13 / 10) * x - (19 / 1000)
y = 1.056 * x - (5 / 1000)
plt.scatter(x, y_low)
plt.scatter(x, y_high)
plt.scatter(x, y)
# plt.ylim(0.09, 0.4)
###Output
_____no_output_____ |
python/3 Learning.ipynb | ###Markdown
Table of Contents1 Load and Prepare Data2 Support Vector Machines3 Classification Learning Methodology Implementation
###Code
# Import Packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from IPython.display import display
import scipy.sparse
from sklearn.metrics import accuracy_score
from matplotlib import pyplot as plt
# Programming tools
import os
import sys
import gc
# Notebook options
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load and Prepare Data
###Code
# First load the training datasets
X_train = scipy.sparse.load_npz('X_train1.npz')
y_train = np.load('y_train1.npy')
# Check data type
display(type(X_train))
display(type(y_train))
# Prepare Train and Test Data
from sklearn.model_selection import train_test_split
trainX, testX, trainy, testy = train_test_split(X_train, y_train, test_size=0.33)
###Output
_____no_output_____
###Markdown
Support Vector Machines
###Code
# Import Packages
from sklearn import svm
# Check the unique values
unique, counts = np.unique(y_train, return_counts=True)
dict(zip(unique, counts))
# Parameters Preparation
Cs = (2**-5,2**-4,2**-3,2**-2,2**-1,1,2,2**3,2**5,2**7,2**9,2**11,2**13,2**15)
gammas = (2**-15,2**-13,2**-11,2**-9,2**-7,2**-5,2**-3,2**-1,1,2,2**3,2**5)
print(gammas, Cs)
# Linear Kernel: Tune parameters
accuracy_svm_li = []
for i in range(len(Cs)):
svm_linear = svm.SVC(kernel='linear', C=Cs[i], class_weight={1:99773/227})
svm_linear.fit(trainX, trainy)
y_predict_svm_linear = svm_linear.predict(testX)
accuracy_svm_linear=accuracy_score(y_predict_svm_linear,testy)
accuracy_svm_li.append(accuracy_svm_linear)
# Plot Linear Kernel: Penalty Parameter vs Accuracy
plt.plot(Cs, accuracy_svm_li)
plt.ylabel('Accuracy for SVC with Linear Kernel')
plt.xlabel('Penalty Parameter C')
plt.title('SVC with Linear Kernel')
plt.legend()
plt.show()
# RBF Kernel: Tune parameters (one accuracy value per gamma for the plot below)
accuracy_svm_rbf_ = []
for i in range(len(gammas)):
    svm_rbf = svm.SVC(kernel='rbf', gamma=gammas[i], C=Cs[0], class_weight={1:99773/227})
    svm_rbf.fit(trainX, trainy)
    y_predict_svm_rbf = svm_rbf.predict(testX)
    accuracy_svm_rbf = accuracy_score(y_predict_svm_rbf, testy)
    accuracy_svm_rbf_.append(accuracy_svm_rbf)
print(accuracy_svm_rbf_)
# Plot for Gammas and Accuracy
plt.plot(gammas, accuracy_svm_rbf_)
plt.ylabel('Accuracy for SVC with RBF Kernel')
plt.xlabel('Values of Gammas')
plt.title('SVC with RBF Kernel')
plt.legend()
plt.show()
###Output
_____no_output_____
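###Markdown
As an alternative to the manual tuning loops above, here is a sketch — with illustrative settings, not part of the original notebook — using scikit-learn's `GridSearchCV` to search C and gamma jointly with cross-validation; the `search` and `param_grid` names are introduced here for illustration:
###Code
from sklearn.model_selection import GridSearchCV
param_grid = {'C': Cs, 'gamma': gammas}
search = GridSearchCV(svm.SVC(kernel='rbf', class_weight={1: 99773/227}),
                      param_grid, cv=3, n_jobs=-1)
# search.fit(trainX, trainy)   # uncomment to run; the full grid can be very slow on this data
# print(search.best_params_, search.best_score_)
###Output
_____no_output_____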
###Markdown
Classification
###Code
# Load Packages and Prepare Parameters
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
estimators_sizes = [10,20,30,40,50,60,80,100,120,140,200]
max_depths = range(1,4)
learn_rate = [0.1,0.3,0.5,0.7,0.9,1.1,1.3,1.5,1.7,1.9]
sample_weight = [1/99773 if x==1 else 1/227 for x in trainy ]
#AdaBoostClassifier: Tune Parameters
accuracy_adab_ = []
for i in range(len(estimators_sizes)):
dt = DecisionTreeClassifier(max_depth=6)
adab = AdaBoostClassifier(n_estimators=estimators_sizes[i], base_estimator=dt)
adab.fit(trainX,trainy,sample_weight=None)
y_predict_adab = adab.predict(testX)
accuracy_adab=accuracy_score(y_predict_adab,testy)
accuracy_adab_.append(accuracy_adab)
# Plot Accuracy vs Number of Estimators
plt.plot(estimators_sizes, accuracy_adab_)
plt.ylabel('Accuracy for AdaBoostClassifier')
plt.xlabel('Estimators_sizes')
plt.title('AdaBoostClassifier')
plt.legend()
plt.show()
# Random Forest
feature_sizes = range(1,20)
accuracy_rf_=[]
# Tune the parameters
for i in range(len(feature_sizes)):
rf = RandomForestClassifier(n_estimators=10,max_depth=max_depths[2],max_features=feature_sizes[i],class_weight={1:99773/227})
rf.fit(trainX,trainy)
y_predict_rf = rf.predict(testX)
accuracy_rf=accuracy_score(y_predict_rf,testy)
accuracy_rf_.append(accuracy_rf)
print(accuracy_rf_)
# Plot Accuracy vs Maximum Number of Features
plt.plot(feature_sizes, accuracy_rf_)
plt.ylabel('Accuracy for RandomForest')
plt.xlabel('Feature_sizes')
plt.title('RandomForestClassifier')
plt.legend()
plt.show()
# LightGBM Model
import lightgbm as lgb
# Prepare Variables and Parameters
# predictors is assembled below, after cat_cols and num_cols are defined (list.extend returns None)
leaf_sizes = range(25,40)
learn_rate = range(1,110)
min_data_in_leaf_sizes = range(30,100,10)
params = {}
params['learning_rate'] = 0.003
params['boosting_type'] = 'gbdt'
params['objective'] = 'binary'
params['metric'] = 'binary_logloss'
params['sub_feature'] = 0.5
params['num_leaves'] = 45
params['min_data_in_leaf'] = min_data_in_leaf_sizes[0]
params['max_depth'] = max_depths[2]
cat_cols = ['ip', 'app', 'device', 'os', 'channel', 'click_minute_mod15', 'click_second_mod5']
num_cols = ['click_hour', 'click_minute', 'click_second',
'clicks_by_ip', 'downloads_by_ip', 'download_ratio_by_ip',
'clicks_by_app', 'downloads_by_app', 'download_ratio_by_app',
'clicks_by_device', 'downloads_by_device', 'download_ratio_by_device',
'clicks_by_os', 'downloads_by_os', 'download_ratio_by_os',
'clicks_by_channel', 'downloads_by_channel', 'download_ratio_by_channel']
target_col = 'is_attributed'
predictors = cat_cols + num_cols
# Implement Model
d_train = lgb.Dataset(trainX, feature_name=predictors, label=trainy)
lightgbm = lgb.train(params, d_train, 100)
accuracy_lightgbm_=[]
learn = []
# Tune Parameters
for j in range(105):
params['learning_rate'] = learn_rate[j]/1000.0
learn.append(params['learning_rate'])
lightgbm = lgb.train(params, d_train, 100)
y_predict_lightgbm = lightgbm.predict(testX)
for i in range(0,len(y_predict_lightgbm)):
if y_predict_lightgbm[i]>=.5: # setting threshold to .5
y_predict_lightgbm[i]=1
else:
y_predict_lightgbm[i]=0
accuracy_lightgbm=accuracy_score(y_predict_lightgbm,testy)
accuracy_lightgbm_.append(accuracy_lightgbm)
# Plot accuracy vs learning rate
plt.plot(learn, accuracy_lightgbm_)
plt.ylabel('Accuracy for LightGBM')
plt.xlabel('Learning_rate')
plt.title('LightGBM')
plt.legend()
plt.show()
###Output
_____no_output_____ |
jupyter/MLP_Prototype.ipynb | ###Markdown
Variables to be changed
###Code
taskname = "verytoxic" ## Specify task to be predicted
tasktype = "classification" ## Specify either classification or regression
datarep = "image" ## Specify data representation
# Specify dataset name
jobname = "tox_niehs"
# Specify location of data
homedir = os.path.dirname(os.path.realpath('__file__'))+"/data/"
network_name = ""
if datarep == "image":
network_name = "cnn"
archdir = os.path.dirname(os.path.realpath('__file__'))+"/data/archive/"
K.set_image_dim_ordering('tf')
pixel = 80
num_channel = 4
channel = "engA"
elif datarep == "tabular":
network_name = "mlp"
archdir = os.path.dirname(os.path.realpath('__file__'))+"/data/"
elif datarep == "text":
network_name = "rnn"
archdir = os.path.dirname(os.path.realpath('__file__'))+"/data/"
###Output
_____no_output_____
###Markdown
Loading Data
###Code
from chem_scripts import cs_load_csv, cs_load_smiles, cs_load_image, cs_create_dict, cs_prep_data_X, cs_prep_data_y, cs_data_balance
from chem_scripts import cs_compute_results, cs_keras_to_seaborn, cs_make_plots
from chem_scripts import cs_setup_mlp, cs_setup_rnn, cs_setup_cnn
if datarep == "tabular":
# Load training + validation data
filename=archdir+jobname+"_tv_"+taskname+"_rdkit.csv"
X, y = cs_load_csv(filename)
# Load test data
filename=archdir+jobname+"_int_"+taskname+"_rdkit.csv"
X_test, y_test = cs_load_csv(filename)
if tasktype == "classification":
y_test, _ = cs_prep_data_y(y_test, tasktype=tasktype)
elif datarep == "text":
# Load training + validation data
filename=archdir+jobname+"_tv_"+taskname+"_smiles.csv"
X, y = cs_load_smiles(filename)
# Load test data
filename=archdir+jobname+"_int_"+taskname+"_smiles.csv"
X_test, y_test = cs_load_smiles(filename)
if tasktype == "classification":
y_test, _ = cs_prep_data_y(y_test, tasktype=tasktype)
# Create dictionary
characters, char_table, char_lookup = cs_create_dict(X, X_test)
# Map chars to integers
X = cs_prep_data_X(X, datarep=datarep, char_table=char_table)
X_test = cs_prep_data_X(X_test, datarep=datarep, char_table=char_table)
elif datarep == "image":
# Load training + validation data
filename=archdir+jobname+"_tv_"+taskname
X, y = cs_load_image(filename, channel=channel)
# Load test data
filename=archdir+jobname+"_int_"+taskname
X_test, y_test = cs_load_image(filename, channel=channel)
if tasktype == "classification":
y_test, _ = cs_prep_data_y(y_test, tasktype=tasktype)
# Reshape X to be [samples][channels][width][height]
X = X.reshape(X.shape[0], pixel, pixel, num_channel).astype("float32")
X_test = X_test.reshape(X_test.shape[0], pixel, pixel, num_channel).astype("float32")
def f_nn(train_default=True):
# Define counter for hyperparam iterations
global run_counter
run_counter += 1
print('*** TRIAL: '+str(run_counter))
print('*** PARAMETERS TESTING: '+str(params))
# Intialize results file
# Setup cross-validation
if tasktype == "classification":
cv_results = pd.DataFrame(columns=['Train Loss', 'Validation Loss', 'Test Loss', 'Train AUC', 'Validation AUC', 'Test AUC'])
stratk = StratifiedKFold(n_splits=5, random_state=7)
splits = stratk.split(X, y)
elif tasktype == "regression":
cv_results = pd.DataFrame(columns=['Train Loss', 'Validation Loss', 'Test Loss', 'Train RMSE', 'Validation RMSE', 'Test RMSE'])
stratk = KFold(n_splits=5, random_state=7)
splits = stratk.split(X, y)
# Do cross-validation
for i, (train_index, valid_index) in enumerate(splits):
if prototype == True:
if i > 0:
break
print("\nOn CV iteration: "+str(i))
# Do standard k-fold splitting
X_train, y_train = X[train_index], y[train_index]
X_valid, y_valid = X[valid_index], y[valid_index]
print("BEFORE Sampling: "+str(i)+" Train: "+str(X_train.shape)+" Valid: "+str(X_valid.shape))
print("BEFORE Sampling: "+str(i)+" Train: "+str(y_train.shape)+" Valid: "+str(y_valid.shape))
if tasktype == "classification":
# Do class-balancing
balanced_indices = cs_data_balance(y_train)
X_train = X_train[balanced_indices]
y_train = y_train[balanced_indices]
balanced_indices = cs_data_balance(y_valid)
X_valid = X_valid[balanced_indices]
y_valid = y_valid[balanced_indices]
# One-hot encoding
y_train, y_class = cs_prep_data_y(y_train, tasktype=tasktype) #ONLY DO THIS AFTER SPLITTING
y_valid, y_class = cs_prep_data_y(y_valid, tasktype=tasktype)
print("AFTER Sampling: "+str(i)+" Train: "+str(X_train.shape)+" Valid: "+str(X_valid.shape))
print("AFTER Sampling: "+str(i)+" Train: "+str(y_train.shape)+" Valid: "+str(y_valid.shape))
elif tasktype == "regression":
y_class = 1
# Setup network
if datarep == "tabular":
model, submodel = cs_setup_mlp(params, inshape=X_train.shape[1], classes=y_class)
elif datarep == "text":
model, submodel = cs_setup_rnn(params, inshape=X_train.shape[1], classes=y_class, char=characters)
elif datarep == "image":
model, submodel = cs_setup_cnn(params, inshape=(pixel, pixel, num_channel), classes=y_class)
if i == 0:
# Print architecture
print(model.summary())
# Save model
model_json = submodel.to_json()
            filemodel=jobname+"_"+network_name+"_"+taskname+"_architecture_"+str(run_counter)+".json"
with open(filemodel, "w") as json_file:
json_file.write(model_json)
# Setup callbacks
filecp = jobname+"_"+network_name+"_"+taskname+"_bestweights_trial_"+str(run_counter)+"_"+str(i)+".hdf5"
filecsv = jobname+"_"+network_name+"_"+taskname+"_loss_curve_"+str(run_counter)+"_"+str(i)+".csv"
callbacks = [TerminateOnNaN(),
LambdaCallback(on_epoch_end=lambda epoch,logs: sys.stdout.flush()),
EarlyStopping(monitor='val_loss', patience=25, verbose=1, mode='auto'),
ModelCheckpoint(filecp, monitor="val_loss", verbose=1, save_best_only=True, mode="auto"),
CSVLogger(filecsv)]
# Train model
if datarep == "image":
datagen = ImageDataGenerator(rotation_range=180, fill_mode='constant', cval=0.)
hist = model.fit_generator(datagen.flow(X_train, y_train, batch_size=batch_size),
epochs=nb_epoch, steps_per_epoch=X_train.shape[0]/batch_size,
verbose=verbose,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
else:
hist = model.fit(x=X_train, y=y_train,
batch_size=batch_size,
epochs=nb_epoch,
verbose=verbose,
validation_data=(X_valid, y_valid),
callbacks=callbacks)
# Visualize loss curve
hist_df = cs_keras_to_seaborn(hist)
cs_make_plots(hist_df)
# Reload best model & compute results
model.load_weights(filecp)
y_preds_result = cs_compute_results(model, classes=y_class, df_out=cv_results,
train_data=(X_train,y_train),
valid_data=(X_valid,y_valid),
test_data=(X_test,y_test))
# Calculate results for entire CV
final_mean = cv_results.mean(axis=0)
final_std = cv_results.std(axis=0)
cv_results.to_csv('results_'+jobname+'_'+network_name+'_'+taskname+'.csv', index=False)
# ouput prediction of testset
with open("predictions_"+jobname+"_"+network_name+"_"+taskname+".csv", "w") as output:
writer = csv.writer(output, lineterminator='\n')
writer.writerows(y_preds_result)
# Print final results
print('*** TRIAL RESULTS: '+str(run_counter))
print('*** PARAMETERS TESTED: '+str(params))
if tasktype == "regression":
print(('train_loss: %.3f +/- %.3f, train_rmse: %.3f +/- %.3f, val_loss: %.3f +/- %.3f, val_rmse: %.3f +/- %.3f, test_loss: %.3f +/- %.3f, test_rmse: %.3f +/- %.3f')
%(final_mean[0], final_std[0], final_mean[3], final_std[3],
final_mean[1], final_std[1], final_mean[4], final_std[4],
final_mean[2], final_std[2], final_mean[5], final_std[5]))
elif tasktype == "classification":
print(('train_loss: %.3f +/- %.3f, train_auc: %.3f +/- %.3f, val_loss: %.3f +/- %.3f, val_auc: %.3f +/- %.3f, test_loss: %.3f +/- %.3f, test_auc: %.3f +/- %.3f')
%(final_mean[0], final_std[0], final_mean[3], final_std[3],
final_mean[1], final_std[1], final_mean[4], final_std[4],
final_mean[2], final_std[2], final_mean[5], final_std[5]))
# Network hyperparameters
# Hyperparams:
# Dropout: 0 to 0.5 (float)
# num_layer: 2 to 6 (int)
# relu_type: relu, elu, leakyrelu, prelu
# layerN_units: 16, 32, 64, 128, 256 (int)
# reg_flag: l1, l2, l1_l2, none
# reg_val: 1 to 6 (float)
if datarep == "tabular":
params = {"dropval":0.5, "num_layer":2, "relu_type":"prelu",
"layer1_units":128, "layer2_units":128, "layer3_units":128,
"layer4_units":128, "layer5_units":128, "layer6_units":128,
"reg_type": "l2", "reg_val": 2.5 }
# Hyperparams:
# Dropout: 0 to 0.5 (float)
# em_dim: 1 to 10 (int)
# num_layer: 1 to 3 (int)
# relu_type: relu, elu, leakyrelu, prelu
# conv units: 16, 32, 64, 128, 256 (int)
# layerN_units: 16, 32, 64, 128, 256 (int)
elif datarep == "text":
params = {"em_dim":5, "conv_units":6, "dropval":0.5, "num_layer":2,
"celltype":"GRU", "relu_type":"prelu",
"layer1_units":12, "layer2_units":12, "layer3_units":12,
"reg_type": "l2", "reg_val": 2}
# Hyperparams:
# Dropout: 0 to 0.5 (float)
# num_blockN: 1 to 5 (int)
# convN units: 16, 32, 64, 128, 256 (int)
elif datarep == "image":
params = {"conv1_units":32, "conv2_units":32, "conv3_units":32,
"conv4_units":32, "conv5_units":32, "conv6_units":32,
"num_block1":3, "num_block2":3, "num_block3":3, "dropval":0}
# Run settings
run_counter = 0
batch_size = 128
nb_epoch = 5
verbose = 1
prototype = True
# if train_default is true, models will be trained.
# otherwise, the model will read saved weight from file.
f_nn(True)
###Output
*** TRIAL: 1
*** PARAMETERS TESTING: {'conv1_units': 32, 'conv2_units': 32, 'conv3_units': 32, 'conv4_units': 32, 'conv5_units': 32, 'conv6_units': 32, 'num_block1': 3, 'num_block2': 3, 'num_block3': 3, 'dropval': 0}
On CV iteration: 0
BEFORE Sampling: 0 Train: (5996, 80, 80, 4) Valid: (1500, 80, 80, 4)
BEFORE Sampling: 0 Train: (5996,) Valid: (1500,)
y dim: (10984, 2)
y no. class: 2
y dim: (2748, 2)
y no. class: 2
AFTER Sampling: 0 Train: (10984, 80, 80, 4) Valid: (2748, 80, 80, 4)
AFTER Sampling: 0 Train: (10984, 2) Valid: (2748, 2)
Channel axis is -1
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 80, 80, 4) 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 40, 40, 32) 2080 input_1[0][0]
__________________________________________________________________________________________________
activation_1 (Activation) (None, 40, 40, 32) 0 conv2d_1[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 40, 40, 32) 1056 activation_1[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 40, 40, 32) 1056 activation_1[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 40, 40, 48) 13872 conv2d_5[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 40, 40, 32) 1056 activation_1[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 40, 40, 32) 9248 conv2d_3[0][0]
__________________________________________________________________________________________________
conv2d_7 (Conv2D) (None, 40, 40, 64) 27712 conv2d_6[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate) (None, 40, 40, 128) 0 conv2d_2[0][0]
conv2d_4[0][0]
conv2d_7[0][0]
__________________________________________________________________________________________________
conv2d_8 (Conv2D) (None, 40, 40, 32) 4128 concatenate_1[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 40, 40, 32) 0 activation_1[0][0]
conv2d_8[0][0]
__________________________________________________________________________________________________
activation_2 (Activation) (None, 40, 40, 32) 0 add_1[0][0]
__________________________________________________________________________________________________
conv2d_12 (Conv2D) (None, 40, 40, 32) 1056 activation_2[0][0]
__________________________________________________________________________________________________
conv2d_10 (Conv2D) (None, 40, 40, 32) 1056 activation_2[0][0]
__________________________________________________________________________________________________
conv2d_13 (Conv2D) (None, 40, 40, 48) 13872 conv2d_12[0][0]
__________________________________________________________________________________________________
conv2d_9 (Conv2D) (None, 40, 40, 32) 1056 activation_2[0][0]
__________________________________________________________________________________________________
conv2d_11 (Conv2D) (None, 40, 40, 32) 9248 conv2d_10[0][0]
__________________________________________________________________________________________________
conv2d_14 (Conv2D) (None, 40, 40, 64) 27712 conv2d_13[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate) (None, 40, 40, 128) 0 conv2d_9[0][0]
conv2d_11[0][0]
conv2d_14[0][0]
__________________________________________________________________________________________________
conv2d_15 (Conv2D) (None, 40, 40, 32) 4128 concatenate_2[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 40, 40, 32) 0 activation_2[0][0]
conv2d_15[0][0]
__________________________________________________________________________________________________
activation_3 (Activation) (None, 40, 40, 32) 0 add_2[0][0]
__________________________________________________________________________________________________
conv2d_19 (Conv2D) (None, 40, 40, 32) 1056 activation_3[0][0]
__________________________________________________________________________________________________
conv2d_17 (Conv2D) (None, 40, 40, 32) 1056 activation_3[0][0]
__________________________________________________________________________________________________
conv2d_20 (Conv2D) (None, 40, 40, 48) 13872 conv2d_19[0][0]
__________________________________________________________________________________________________
conv2d_16 (Conv2D) (None, 40, 40, 32) 1056 activation_3[0][0]
__________________________________________________________________________________________________
conv2d_18 (Conv2D) (None, 40, 40, 32) 9248 conv2d_17[0][0]
__________________________________________________________________________________________________
conv2d_21 (Conv2D) (None, 40, 40, 64) 27712 conv2d_20[0][0]
__________________________________________________________________________________________________
concatenate_3 (Concatenate) (None, 40, 40, 128) 0 conv2d_16[0][0]
conv2d_18[0][0]
conv2d_21[0][0]
__________________________________________________________________________________________________
conv2d_22 (Conv2D) (None, 40, 40, 32) 4128 concatenate_3[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 40, 40, 32) 0 activation_3[0][0]
conv2d_22[0][0]
__________________________________________________________________________________________________
activation_4 (Activation) (None, 40, 40, 32) 0 add_3[0][0]
__________________________________________________________________________________________________
conv2d_24 (Conv2D) (None, 40, 40, 32) 1056 activation_4[0][0]
__________________________________________________________________________________________________
conv2d_25 (Conv2D) (None, 40, 40, 32) 9248 conv2d_24[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 19, 19, 32) 0 activation_4[0][0]
__________________________________________________________________________________________________
conv2d_23 (Conv2D) (None, 19, 19, 48) 13872 activation_4[0][0]
__________________________________________________________________________________________________
conv2d_26 (Conv2D) (None, 19, 19, 48) 13872 conv2d_25[0][0]
__________________________________________________________________________________________________
concatenate_4 (Concatenate) (None, 19, 19, 128) 0 max_pooling2d_1[0][0]
conv2d_23[0][0]
conv2d_26[0][0]
__________________________________________________________________________________________________
activation_5 (Activation) (None, 19, 19, 128) 0 concatenate_4[0][0]
__________________________________________________________________________________________________
conv2d_28 (Conv2D) (None, 19, 19, 32) 4128 activation_5[0][0]
__________________________________________________________________________________________________
conv2d_29 (Conv2D) (None, 19, 19, 40) 9000 conv2d_28[0][0]
__________________________________________________________________________________________________
conv2d_27 (Conv2D) (None, 19, 19, 32) 4128 activation_5[0][0]
__________________________________________________________________________________________________
conv2d_30 (Conv2D) (None, 19, 19, 48) 13488 conv2d_29[0][0]
__________________________________________________________________________________________________
concatenate_5 (Concatenate) (None, 19, 19, 80) 0 conv2d_27[0][0]
conv2d_30[0][0]
__________________________________________________________________________________________________
conv2d_31 (Conv2D) (None, 19, 19, 128) 10368 concatenate_5[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 19, 19, 128) 0 activation_5[0][0]
conv2d_31[0][0]
__________________________________________________________________________________________________
activation_6 (Activation) (None, 19, 19, 128) 0 add_4[0][0]
__________________________________________________________________________________________________
conv2d_33 (Conv2D) (None, 19, 19, 32) 4128 activation_6[0][0]
__________________________________________________________________________________________________
conv2d_34 (Conv2D) (None, 19, 19, 40) 9000 conv2d_33[0][0]
__________________________________________________________________________________________________
conv2d_32 (Conv2D) (None, 19, 19, 32) 4128 activation_6[0][0]
__________________________________________________________________________________________________
conv2d_35 (Conv2D) (None, 19, 19, 48) 13488 conv2d_34[0][0]
__________________________________________________________________________________________________
concatenate_6 (Concatenate) (None, 19, 19, 80) 0 conv2d_32[0][0]
conv2d_35[0][0]
__________________________________________________________________________________________________
conv2d_36 (Conv2D) (None, 19, 19, 128) 10368 concatenate_6[0][0]
__________________________________________________________________________________________________
add_5 (Add) (None, 19, 19, 128) 0 activation_6[0][0]
conv2d_36[0][0]
__________________________________________________________________________________________________
activation_7 (Activation) (None, 19, 19, 128) 0 add_5[0][0]
__________________________________________________________________________________________________
conv2d_38 (Conv2D) (None, 19, 19, 32) 4128 activation_7[0][0]
__________________________________________________________________________________________________
conv2d_39 (Conv2D) (None, 19, 19, 40) 9000 conv2d_38[0][0]
__________________________________________________________________________________________________
conv2d_37 (Conv2D) (None, 19, 19, 32) 4128 activation_7[0][0]
__________________________________________________________________________________________________
conv2d_40 (Conv2D) (None, 19, 19, 48) 13488 conv2d_39[0][0]
__________________________________________________________________________________________________
concatenate_7 (Concatenate) (None, 19, 19, 80) 0 conv2d_37[0][0]
conv2d_40[0][0]
__________________________________________________________________________________________________
conv2d_41 (Conv2D) (None, 19, 19, 128) 10368 concatenate_7[0][0]
__________________________________________________________________________________________________
add_6 (Add) (None, 19, 19, 128) 0 activation_7[0][0]
conv2d_41[0][0]
__________________________________________________________________________________________________
activation_8 (Activation) (None, 19, 19, 128) 0 add_6[0][0]
__________________________________________________________________________________________________
conv2d_46 (Conv2D) (None, 19, 19, 32) 4128 activation_8[0][0]
__________________________________________________________________________________________________
conv2d_42 (Conv2D) (None, 19, 19, 32) 4128 activation_8[0][0]
__________________________________________________________________________________________________
conv2d_44 (Conv2D) (None, 19, 19, 32) 4128 activation_8[0][0]
__________________________________________________________________________________________________
conv2d_47 (Conv2D) (None, 19, 19, 36) 10404 conv2d_46[0][0]
__________________________________________________________________________________________________
max_pooling2d_2 (MaxPooling2D) (None, 9, 9, 128) 0 activation_8[0][0]
__________________________________________________________________________________________________
conv2d_43 (Conv2D) (None, 9, 9, 48) 13872 conv2d_42[0][0]
__________________________________________________________________________________________________
conv2d_45 (Conv2D) (None, 9, 9, 36) 10404 conv2d_44[0][0]
__________________________________________________________________________________________________
conv2d_48 (Conv2D) (None, 9, 9, 40) 13000 conv2d_47[0][0]
__________________________________________________________________________________________________
concatenate_8 (Concatenate) (None, 9, 9, 252) 0 max_pooling2d_2[0][0]
conv2d_43[0][0]
conv2d_45[0][0]
conv2d_48[0][0]
__________________________________________________________________________________________________
activation_9 (Activation) (None, 9, 9, 252) 0 concatenate_8[0][0]
__________________________________________________________________________________________________
conv2d_50 (Conv2D) (None, 9, 9, 32) 8096 activation_9[0][0]
__________________________________________________________________________________________________
conv2d_51 (Conv2D) (None, 9, 9, 37) 3589 conv2d_50[0][0]
__________________________________________________________________________________________________
conv2d_49 (Conv2D) (None, 9, 9, 32) 8096 activation_9[0][0]
__________________________________________________________________________________________________
conv2d_52 (Conv2D) (None, 9, 9, 42) 4704 conv2d_51[0][0]
__________________________________________________________________________________________________
concatenate_9 (Concatenate) (None, 9, 9, 74) 0 conv2d_49[0][0]
conv2d_52[0][0]
__________________________________________________________________________________________________
conv2d_53 (Conv2D) (None, 9, 9, 252) 18900 concatenate_9[0][0]
__________________________________________________________________________________________________
add_7 (Add) (None, 9, 9, 252) 0 activation_9[0][0]
conv2d_53[0][0]
__________________________________________________________________________________________________
activation_10 (Activation) (None, 9, 9, 252) 0 add_7[0][0]
__________________________________________________________________________________________________
conv2d_55 (Conv2D) (None, 9, 9, 32) 8096 activation_10[0][0]
__________________________________________________________________________________________________
conv2d_56 (Conv2D) (None, 9, 9, 37) 3589 conv2d_55[0][0]
__________________________________________________________________________________________________
conv2d_54 (Conv2D) (None, 9, 9, 32) 8096 activation_10[0][0]
__________________________________________________________________________________________________
conv2d_57 (Conv2D) (None, 9, 9, 42) 4704 conv2d_56[0][0]
__________________________________________________________________________________________________
concatenate_10 (Concatenate) (None, 9, 9, 74) 0 conv2d_54[0][0]
conv2d_57[0][0]
__________________________________________________________________________________________________
conv2d_58 (Conv2D) (None, 9, 9, 252) 18900 concatenate_10[0][0]
__________________________________________________________________________________________________
add_8 (Add) (None, 9, 9, 252) 0 activation_10[0][0]
conv2d_58[0][0]
__________________________________________________________________________________________________
activation_11 (Activation) (None, 9, 9, 252) 0 add_8[0][0]
__________________________________________________________________________________________________
conv2d_60 (Conv2D) (None, 9, 9, 32) 8096 activation_11[0][0]
__________________________________________________________________________________________________
conv2d_61 (Conv2D) (None, 9, 9, 37) 3589 conv2d_60[0][0]
__________________________________________________________________________________________________
conv2d_59 (Conv2D) (None, 9, 9, 32) 8096 activation_11[0][0]
__________________________________________________________________________________________________
conv2d_62 (Conv2D) (None, 9, 9, 42) 4704 conv2d_61[0][0]
__________________________________________________________________________________________________
concatenate_11 (Concatenate) (None, 9, 9, 74) 0 conv2d_59[0][0]
conv2d_62[0][0]
__________________________________________________________________________________________________
conv2d_63 (Conv2D) (None, 9, 9, 252) 18900 concatenate_11[0][0]
__________________________________________________________________________________________________
add_9 (Add) (None, 9, 9, 252) 0 activation_11[0][0]
conv2d_63[0][0]
__________________________________________________________________________________________________
activation_12 (Activation) (None, 9, 9, 252) 0 add_9[0][0]
__________________________________________________________________________________________________
final_pool (GlobalAveragePoolin (None, 252) 0 activation_12[0][0]
__________________________________________________________________________________________________
dropout_end (Dropout) (None, 252) 0 final_pool[0][0]
__________________________________________________________________________________________________
predictions (Dense) (None, 2) 506 dropout_end[0][0]
==================================================================================================
Total params: 528,573
Trainable params: 528,573
Non-trainable params: 0
__________________________________________________________________________________________________
None
Epoch 1/5
86/85 [==============================] - 841s 10s/step - loss: 0.6847 - val_loss: 0.6791
Epoch 00001: val_loss improved from inf to 0.67911, saving model to tox_niehs_verytoxic_bestweights_trial_1_0.hdf5
Epoch 2/5
86/85 [==============================] - 836s 10s/step - loss: 0.6545 - val_loss: 0.7460
Epoch 00002: val_loss did not improve
Epoch 3/5
86/85 [==============================] - 832s 10s/step - loss: 0.6369 - val_loss: 0.7089
Epoch 00003: val_loss did not improve
Epoch 4/5
86/85 [==============================] - 807s 9s/step - loss: 0.6248 - val_loss: 0.6814
Epoch 00004: val_loss did not improve
Epoch 5/5
86/85 [==============================] - 862s 10s/step - loss: 0.6076 - val_loss: 0.6793
Epoch 00005: val_loss did not improve
|
1. Quantum_Computing_Using_Qiskit/Day_01.ipynb | ###Markdown
Qiskit> `Qiskit [quiss-kit] is an open source SDK for working with quantum computers at the level of pulses, circuits and application modules` Open-Source Quantum Development
###Code
import qiskit.quantum_info as qi
from qiskit.circuit.library import FourierChecking
from qiskit.visualization import plot_histogram
f=[1,-1,-1,-1]
g=[1,1,-1,-1]
circ = FourierChecking(f=f,g=g)
circ.draw()
zero = qi.Statevector.from_label('00')
sv = zero.evolve(circ)
probs = sv.probabilities_dict()
plot_histogram(probs)
###Output
_____no_output_____ |
Intro to Deep Learning/Week 2/intro_to_tensorflow.ipynb | ###Markdown
Contents1 Intro to TensorFlow2 TensorBoard3 Warming up4 Tensorflow teaser5 How does it work?6 Summary7 Loss function: Mean Squared Error8 Variables9 tf.gradients - why graphs matter10 Why that rocks11 Almost done - optimizers Intro to TensorFlowThis notebook covers the basics of TF and shows you an animation with gradient descent trajectory. TensorBoard **Please note that if you are running on the Coursera platform, you won't be able to access the tensorboard instance due to the network setup there.**Run `tensorboard --logdir=./tensorboard_logs --port=7007` in bash.If you run the notebook locally, you should be able to access TensorBoard on http://127.0.0.1:7007/
###Code
import tensorflow as tf
import sys
sys.path.append("..") # keras_utils script is in parent folder
from keras_utils import reset_tf_session
s = reset_tf_session()
print("We're using TF", tf.__version__)
###Output
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:526: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint8 = np.dtype([("qint8", np.int8, 1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:527: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint8 = np.dtype([("quint8", np.uint8, 1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:528: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint16 = np.dtype([("qint16", np.int16, 1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:529: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_quint16 = np.dtype([("quint16", np.uint16, 1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:530: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
_np_qint32 = np.dtype([("qint32", np.int32, 1)])
C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\dtypes.py:535: FutureWarning: Passing (type, 1) or '1type' as a synonym of type is deprecated; in a future version of numpy, it will be understood as (type, (1,)) / '(1,)type'.
np_resource = np.dtype([("resource", np.ubyte, 1)])
Using TensorFlow backend.
###Markdown
Warming upFor starters, let's implement a python function that computes the sum of squares of numbers from 0 to N-1.
###Code
import numpy as np
def sum_python(N):
return np.sum(np.arange(N)**2)
%%time
sum_python(10**5)
###Output
Wall time: 0 ns
###Markdown
Tensorflow teaserDoing the very same thing
###Code
# An integer parameter
N = tf.placeholder('int64', name="input_to_your_function")
# A recipe on how to produce the same result
result = tf.reduce_sum(tf.range(N)**2)
# just a graph definition
result
%%time
# actually executing
result.eval({N: 10**5})
# logger for tensorboard
writer = tf.summary.FileWriter("tensorboard_logs", graph=s.graph)
###Output
_____no_output_____
###Markdown
How does it work?1. Define placeholders where you'll send inputs2. Make a symbolic graph: a recipe for mathematical transformation of those placeholders3. Compute outputs of your graph with particular values for each placeholder * `output.eval({placeholder: value})` * `s.run(output, {placeholder: value})`So far there are two main entities: "placeholder" and "transformation" (operation output)* Both can be numbers, vectors, matrices, tensors, etc.* Both can be int32/64, floats, booleans (uint8) of various size.* You can define new transformations as an arbitrary operation on placeholders and other transformations * `tf.reduce_sum(tf.range(N)**2)` are 3 sequential transformations of placeholder `N` * There's a tensorflow symbolic version for every numpy function * `a+b, a/b, a**b, ...` behave just like in numpy * `np.mean` -> `tf.reduce_mean` * `np.arange` -> `tf.range` * `np.cumsum` -> `tf.cumsum` * If you can't find the operation you need, see the [docs](https://www.tensorflow.org/versions/r1.3/api_docs/python). `tf.contrib` has many high-level features, may be worth a look.
###Code
with tf.name_scope("Placeholders_examples"):
# Default placeholder that can be arbitrary float32
# scalar, vertor, matrix, etc.
arbitrary_input = tf.placeholder('float32')
# Input vector of arbitrary length
input_vector = tf.placeholder('float32', shape=(None,))
# Input vector that _must_ have 10 elements and integer type
fixed_vector = tf.placeholder('int32', shape=(10,))
# Matrix of arbitrary n_rows and 15 columns
# (e.g. a minibatch of your data table)
input_matrix = tf.placeholder('float32', shape=(None, 15))
# You can generally use None whenever you don't need a specific shape
input1 = tf.placeholder('float64', shape=(None, 100, None))
input2 = tf.placeholder('int32', shape=(None, None, 3, 224, 224))
# elementwise multiplication
double_the_vector = input_vector*2
# elementwise cosine
elementwise_cosine = tf.cos(input_vector)
# difference between squared vector and vector itself plus one
vector_squares = input_vector**2 - input_vector + 1
my_vector = tf.placeholder('float32', shape=(None,), name="VECTOR_1")
my_vector2 = tf.placeholder('float32', shape=(None,))
my_transformation = my_vector * my_vector2 / (tf.sin(my_vector) + 1)
print(my_transformation)
dummy = np.arange(5).astype('float32')
print(dummy)
my_transformation.eval({my_vector: dummy, my_vector2: dummy[::-1]})
writer.add_graph(my_transformation.graph)
writer.flush()
###Output
_____no_output_____
###Markdown
TensorBoard allows writing scalars, images, audio, histogram. You can read more on tensorboard usage [here](https://www.tensorflow.org/get_started/graph_viz). Summary* Tensorflow is based on computation graphs* A graph consists of placeholders and transformations Loss function: Mean Squared ErrorLoss function must be a part of the graph as well, so that we can do backpropagation.
###Code
with tf.name_scope("MSE"):
y_true = tf.placeholder("float32", shape=(None,), name="y_true")
y_predicted = tf.placeholder("float32", shape=(None,), name="y_predicted")
# Implement MSE(y_true, y_predicted), use tf.reduce_mean(...)
mse = tf.reduce_mean((y_true - y_predicted)**2)
def compute_mse(vector1, vector2):
return mse.eval({y_true: vector1, y_predicted: vector2})
writer.add_graph(mse.graph)
writer.flush()
# Rigorous local testing of MSE implementation
import sklearn.metrics
for n in [1, 5, 10, 10**3]:
elems = [np.arange(n), np.arange(n, 0, -1), np.zeros(n),
np.ones(n), np.random.random(n), np.random.randint(100, size=n)]
for el in elems:
for el_2 in elems:
true_mse = np.array(sklearn.metrics.mean_squared_error(el, el_2))
my_mse = compute_mse(el, el_2)
if not np.allclose(true_mse, my_mse):
print('mse(%s,%s)' % (el, el_2))
print("should be: %f, but your function returned %f" % (true_mse, my_mse))
raise ValueError('Wrong result')
###Output
_____no_output_____
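###Markdown
Tying back to the TensorBoard note above — a minimal sketch, assuming the TF 1.x graph mode used in this notebook — a scalar such as the MSE can also be logged so it shows up under TensorBoard's Scalars tab (the feed values below are arbitrary illustrative numbers):
###Code
mse_summary = tf.summary.scalar('mse', mse)
summary_value = s.run(mse_summary, {y_true: [1., 2., 3.], y_predicted: [1., 2., 5.]})
writer.add_summary(summary_value, global_step=0)
writer.flush()
###Output
_____no_output_____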
###Markdown
VariablesPlaceholder and transformation values are not stored in the graph once the execution is finished. This isn't too comfortable if you want your model to have parameters (e.g. network weights) that are always present, but can change their value over time.Tensorflow solves this with `tf.Variable` objects.* You can assign variable a value at any time in your graph* Unlike placeholders, there's no need to explicitly pass values to variables when `s.run(...)`-ing* You can use variables the same way you use transformations
###Code
# Creating a shared variable
shared_vector_1 = tf.Variable(initial_value=np.ones(5), name="example_variable")
# Initialize variable(s) with initial values
s.run(tf.global_variables_initializer())
# Evaluating the shared variable
print("Initial value", s.run(shared_vector_1))
# Setting a new value
s.run(shared_vector_1.assign(np.arange(5)))
# Getting that new value
print("New value", s.run(shared_vector_1))
###Output
New value [0. 1. 2. 3. 4.]
###Markdown
tf.gradients - why graphs matter* Tensorflow can compute derivatives and gradients automatically using the computation graph* True to its name it can manage matrix derivatives* Gradients are computed as a product of elementary derivatives via the chain rule:$$ {\partial f(g(x)) \over \partial x} = {\partial f(g(x)) \over \partial g(x)}\cdot {\partial g(x) \over \partial x} $$It can get you the derivative of any graph as long as it knows how to differentiate elementary operations
###Code
my_scalar = tf.placeholder('float32')
scalar_squared = my_scalar**2
# A derivative of scalar_squared by my_scalar
derivative = tf.gradients(scalar_squared, [my_scalar, ])
derivative
import matplotlib.pyplot as plt
%matplotlib inline
x = np.linspace(-3, 3)
x_squared, x_squared_der = s.run([scalar_squared, derivative[0]],
{my_scalar:x})
plt.plot(x, x_squared,label="$x^2$")
plt.plot(x, x_squared_der, label=r"$\frac{dx^2}{dx}$")
plt.legend();
###Output
_____no_output_____
###Markdown
Why that rocks
###Code
my_vector = tf.placeholder('float32', [None])
# Compute the gradient of the next weird function over my_scalar and my_vector
# Warning! Trying to understand the meaning of that function may result in permanent brain damage
weird_psychotic_function = tf.reduce_mean(
(my_vector+my_scalar)**(1+tf.nn.moments(my_vector,[0])[1]) +
1./ tf.atan(my_scalar))/(my_scalar**2 + 1) + 0.01*tf.sin(
2*my_scalar**1.5)*(tf.reduce_sum(my_vector)* my_scalar**2
)*tf.exp((my_scalar-4)**2)/(
1+tf.exp((my_scalar-4)**2))*(1.-(tf.exp(-(my_scalar-4)**2)
)/(1+tf.exp(-(my_scalar-4)**2)))**2
der_by_scalar = tf.gradients(weird_psychotic_function, my_scalar)
der_by_vector = tf.gradients(weird_psychotic_function, my_vector)
# Plotting the derivative
scalar_space = np.linspace(1, 7, 100)
y = [s.run(weird_psychotic_function, {my_scalar:x, my_vector:[1, 2, 3]})
for x in scalar_space]
plt.plot(scalar_space, y, label='function')
y_der_by_scalar = [s.run(der_by_scalar,
{my_scalar:x, my_vector:[1, 2, 3]})
for x in scalar_space]
plt.plot(scalar_space, y_der_by_scalar, label='derivative')
plt.grid()
plt.legend();
###Output
_____no_output_____
###Markdown
Almost done - optimizersWhile you can perform gradient descent by hand with automatic gradients from above, tensorflow also has some optimization methods implemented for you. Recall momentum & rmsprop?
###Code
y_guess = tf.Variable(np.zeros(2, dtype='float32'))
y_true = tf.range(1, 3, dtype='float32')
loss = tf.reduce_mean((y_guess - y_true + 0.5*tf.random_normal([2]))**2)
step = tf.train.MomentumOptimizer(0.03, 0.5).minimize(loss, var_list=y_guess)
###Output
_____no_output_____
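###Markdown
The momentum optimizer above can be swapped for other built-in optimizers with a one-line change — a sketch using the same loss and variable; the `step_rmsprop` and `step_adam` names are introduced here for illustration and are not used by the animation below:
###Code
step_rmsprop = tf.train.RMSPropOptimizer(learning_rate=0.03).minimize(loss, var_list=[y_guess])
step_adam = tf.train.AdamOptimizer(learning_rate=0.03).minimize(loss, var_list=[y_guess])
###Output
_____no_output_____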
###Markdown
Let's draw a trajectory of a gradient descent in 2D
###Code
from matplotlib import animation, rc
import matplotlib_utils
from IPython.display import HTML, display_html
# nice figure settings
fig, ax = plt.subplots()
y_true_value = s.run(y_true)
level_x = np.arange(0, 2, 0.02)
level_y = np.arange(0, 3, 0.02)
X, Y = np.meshgrid(level_x, level_y)
Z = (X - y_true_value[0])**2 + (Y - y_true_value[1])**2
ax.set_xlim(-0.02, 2)
ax.set_ylim(-0.02, 3)
s.run(tf.global_variables_initializer())
ax.scatter(*s.run(y_true), c='red')
contour = ax.contour(X, Y, Z, 10)
ax.clabel(contour, inline=1, fontsize=10)
line, = ax.plot([], [], lw=2)
# start animation with empty trajectory
def init():
line.set_data([], [])
return (line,)
trajectory = [s.run(y_guess)]
# one animation step (make one GD step)
def animate(i):
s.run(step)
trajectory.append(s.run(y_guess))
line.set_data(*zip(*trajectory))
return (line,)
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=100, interval=20, blit=True)
try:
display_html(HTML(anim.to_html5_video()))
except (RuntimeError, KeyError):
# In case the build-in renderers are unaviable, fall back to
# a custom one, that doesn't require external libraries
anim.save(None, writer=matplotlib_utils.SimpleMovieWriter(0.001))
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/Interest-Rate-Regression-Model-KERAS-checkpoint.ipynb | ###Markdown
Interest Rate Regression Model using KERAS (Neural Networks)- Five models were created. Three Interest Rate Regression Models and two Loan Default Classification Models- This notebook is a deep dive into the KERAS Regressor Model created to predict Loan Interest Rates.- We will begin by exploring the top features that affect Loan Interest Rates - We will then split the data into train/test sets before training the model- The evaluation metrics will be the R2 and Mean Absolute Error (MAE) Import Packages and Data
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.options.display.float_format = '{:.2f}'.format
data = pd.read_csv("lending-club-subset.csv")
data.head()
data.shape
###Output
_____no_output_____
###Markdown
Choose meaningful features needed to run a correlation matrix. This will be used to observe the top features that affect Interest Rates
###Code
data = data[[
'loan_amnt'
, 'funded_amnt'
, 'funded_amnt_inv'
, 'term'
, 'int_rate'
, 'installment'
, 'grade'
, 'sub_grade'
, 'emp_title'
, 'emp_length'
, 'home_ownership'
, 'annual_inc'
, 'verification_status'
, 'issue_d'
, 'loan_status'
, 'purpose'
, 'addr_state'
, 'dti'
, 'delinq_2yrs'
, 'fico_range_low'
, 'fico_range_high'
, 'inq_last_6mths'
, 'mths_since_last_delinq'
, 'mths_since_last_record'
, 'open_acc'
, 'pub_rec'
, 'revol_bal'
, 'revol_util'
, 'total_acc'
, 'initial_list_status'
, 'acc_open_past_24mths'
, 'mort_acc'
, 'pub_rec_bankruptcies'
, 'tax_liens'
, 'earliest_cr_line'
]]
# remove % sign and set to float
data['int_rate'] = data['int_rate'].str.replace('%', '')
data['int_rate'] = data['int_rate'].astype(float)
data['revol_util'] = data['revol_util'].str.replace('%', '')
data['revol_util'] = data['revol_util'].astype(float)
data.head()
data.isnull().sum()
###Output
_____no_output_____
###Markdown
Feature ExplorationPlotting a correlation matrix is a way to understand which features are correlated with each other and with the target variable, which in this case is the Loan Interest Rate
###Code
# Create Corelation Matrix
corr = data.corr()
plt.figure(figsize = (10, 8))
sns.heatmap(corr)
plt.show()
# Print the correlation values for all features with respect to interest rates
corr_int_rate = corr[['int_rate']]
corr_int_rate
# Top 10 positive features
top_10_pos = corr_int_rate[corr_int_rate['int_rate'] > 0].sort_values(by=['int_rate'],ascending=False)
top_10_pos
# Top 10 negative features
top_10_neg = corr_int_rate[corr_int_rate['int_rate']< 0].sort_values(by=['int_rate'],ascending=False)
top_10_neg
###Output
_____no_output_____
###Markdown
KERAS
###Code
# Split Data
from sklearn.model_selection import train_test_split
train, test = train_test_split(data, test_size=0.30, random_state=42)
train.shape, test.shape
#Create Train/Test sets
target = 'int_rate'
features = train.columns.drop(['int_rate'
,'revol_bal'
,'loan_status'
,'funded_amnt'
,'grade'
,'sub_grade'
,'issue_d'
,'installment'
, 'fico_range_high'
, 'funded_amnt_inv']) # These feature must be removed, as they are not feature known prior to loan application
X_train = train[features]
y_train = train[target]
X_test = test[features]
y_test = test[target]
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import GridSearchCV
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
# Encode categorical features
encoder = ce.OrdinalEncoder()
X_train_encoded = encoder.fit_transform(X_train)
X_test_encoded = encoder.transform(X_test)
# Impute NaN values
imputer = SimpleImputer()
X_train_imputed = imputer.fit_transform(X_train_encoded)
X_test_imputed = imputer.transform(X_test_encoded)
# Scale features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train_imputed)
X_test_scaled = scaler.transform(X_test_imputed)
inputs = X_train_scaled.shape[1]
# Create Model Function
def create_model(optimizer='adam', loss='mse',
                 metrics=['mse', 'mae']):
    model = Sequential()
    model.add(Dense(64, activation='relu', input_shape=(inputs,)))
    model.add(Dense(64, activation='relu'))
    model.add(Dense(1))
    # compile with the arguments passed to create_model
    model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
    return model
# Model Wrapper
model = KerasRegressor(build_fn=create_model,verbose=0)
# Grid Search Params
param_grid = {'batch_size': [10, 20, 40, 60, 80, 100],
'epochs': [20]}
# Create Grid Search
grid = GridSearchCV(estimator=model, param_grid=param_grid, n_jobs=1)
grid_result = grid.fit(X_train_scaled, np.array(y_train))
# Report Results
print(f"Best: {grid_result.best_score_} using {grid_result.best_params_}")
means = grid_result.cv_results_['mean_test_score']
stds = grid_result.cv_results_['std_test_score']
params = grid_result.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print(f"Means: {mean}, Stdev: {stdev} with: {param}")
# Calculate R2 and MAE
from sklearn.metrics import mean_absolute_error,r2_score
# Get y_preds
y_pred = grid_result.predict(X_test_scaled)
# Calc MAE
mae = mean_absolute_error(y_test, y_pred)
# Calc R2
r2 = r2_score(y_test, y_pred)
# Results
print(f'Test MAE: {mae:,.2f}% \n')
print(f'Test R2: {r2:,.2f} \n')
###Output
Test MAE: 2.24%
Test R2: 0.39
|
Previous Methods/CAIDA_s1_ToR_Classification_NDToR_x_training_core.ipynb | ###Markdown
Import python packages
###Code
from collections import defaultdict
import pickle
import numpy as np
from sklearn.model_selection import train_test_split
from collections import Counter
import random
np.random.seed(7)
###Output
_____no_output_____
###Markdown
Define parameters and load bgp_routes and ToR datasets
###Code
ToR_MODEL_NAME = "CAIDA_s1_ToR_Classification_NDToR_x_training_core"
TEST_SIZE = 0.2
TOR_LABELS_DICT = {'P2P':0, 'C2P': 1,'P2C': 2}
class_names = ['P2P', 'C2P', 'P2C']
DATA_PATH = '../../Data/'
MODELS_PATH = '../../Models/'
RESULTS_PATH = '../../Results/'
bgp_routes = np.load(DATA_PATH + "bgp_routes_dataset.npy")
bgp_routes_labels = np.load(DATA_PATH + "bgp_routes_labels.npy")
print(bgp_routes.shape, bgp_routes_labels.shape)
DATA = "caida_s1_tor"
tor_dataset = np.load(DATA_PATH + DATA + "_dataset.npy")
tor_labels = np.load(DATA_PATH + DATA + "_labels.npy")
print(tor_dataset.shape, tor_labels.shape)
###Output
(3669655,) (3669655,)
(580762, 2) (580762,)
###Markdown
Generate training and test sets. Shuffle the dataset
###Code
from sklearn.utils import shuffle
dataset, labels = shuffle(tor_dataset, tor_labels, random_state=7)
###Output
_____no_output_____
###Markdown
Generate a balanced dataset
###Code
# def generate_balanced_dataset(dataset, labels, labels_set):
# sets_dict = dict()
# for label in labels_set:
# sets_dict[label] = np.asarray([np.asarray(dataset[i]) for i in range(len(dataset)) if labels[i] == label])
# min_set_len = min([len(label_set) for label_set in sets_dict.values()])
# for label, label_set in sets_dict.items():
# sets_dict[label] = label_set[np.random.choice(label_set.shape[0], min_set_len, replace=False)]
# dataset = np.concatenate((sets_dict.values()))
# labels = []
# for label, label_set in sets_dict.items():
# labels += [label]*len(label_set)
# print label, len(label_set)
# labels = np.asarray(labels)
# return shuffle(dataset, labels, random_state=7)
# dataset, labels = generate_balanced_dataset(dataset, labels, (0,1,3))
# print dataset.shape, labels.shape
###Output
_____no_output_____
###Markdown
Train Test Split
###Code
x_training, x_test, y_training, y_test = train_test_split(dataset, labels, test_size=TEST_SIZE)
del dataset, labels
print(x_training.shape, y_training.shape)
print(x_test.shape, y_test.shape)
print(1.0*len(x_training)/(len(x_test)+len(x_training)))
from collections import Counter
training_c = Counter(y_training)
test_c = Counter(y_test)
print(training_c, test_c)
for k,v in training_c.items():
print(k, 100.0*v/len(x_training))
print()
for k,v in test_c.items():
print(k, 100.0*v/len(x_test))
###Output
1 20.386174180870366
0 59.27177476114324
2 20.342051057986392
0 59.03420488493625
2 20.571143233493753
1 20.39465188157
###Markdown
Run ND-ToR Algo. Load k_shell
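As a side note, a k-shell map of this form (vertex -> shell index) could be computed directly with networkx's core-number routine. The sketch below is illustrative only and assumes an undirected graph built from AS-link pairs; the pickle loaded in the next cell was produced elsewhere:

```python
import networkx as nx

def compute_k_shell(edge_list):
    """Return {vertex: k}, where k is the vertex's core (k-shell) number."""
    g = nx.Graph()
    g.add_edges_from(edge_list)
    g.remove_edges_from(list(nx.selfloop_edges(g)))  # core_number requires no self-loops
    return nx.core_number(g)
```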
###Code
with open(MODELS_PATH + 's1_k_shell.pickle', 'rb') as handle:
k_shell = pickle.load(handle)
print(len(k_shell))
###Output
62523
###Markdown
Load CAIDA results and create CP-Core with NetworkX
###Code
caida_p2p = [tor for i, tor in enumerate(tor_dataset) if tor_labels[i] == 0]
TIER1 = ["174", "209", "286", "701", "1239", "1299", "2828", "2914", "3257", "3320", "3356",
"3491", "5511", "6453", "6461", "6762", "6830", "7018", "12956"]
# with open(caida_path, "r") as f:
# for line in f:
# as0, as1, label = [int(part) for part in line.split()[0].split('|')]
# if label == 0 and as0 in ASN_index_map and as1 in ASN_index_map:
# caida_p2p.append((ASN_index_map[as0], ASN_index_map[as1]))
caida_p2p = [tor for i, tor in enumerate(tor_dataset) if tor_labels[i] == 0]
print(len(caida_p2p))
import networkx as nx
verteces = set()
for pair in caida_p2p:
verteces.add(pair[0])
verteces.add(pair[1])
print(len(verteces))
vertex2ind = dict()
ind2vertex = dict()
for i, vertex in enumerate(verteces):
vertex2ind[vertex] = i
ind2vertex[i] = vertex
print(len(vertex2ind), len(ind2vertex))
g = nx.DiGraph()
g.add_edges_from([(vertex2ind[pair[0]], vertex2ind[pair[1]]) for pair in caida_p2p])
g.add_edges_from([(vertex2ind[pair[1]], vertex2ind[pair[0]]) for pair in caida_p2p])
SCCs = [c for c in sorted(nx.strongly_connected_components(g),key=len, reverse=True)]
print(len(SCCs))
for i, scc in enumerate(SCCs):
for as_tier1 in TIER1:
if vertex2ind[as_tier1] not in scc:
break
print(i)
print(len(SCCs[0]))
scc = SCCs[0]
scc = set([ind2vertex[ind] for ind in scc])
print(len(scc))
cp_core = set()
for pair in caida_p2p:
if pair[0] in scc and pair[1] in scc:
cp_core.add(tuple(pair))
cp_core.add(tuple((pair[1], pair[0])))
print(len(cp_core))
print("number of edges " + str(len(cp_core)/2))
###Output
343446
number of edges 171723.0
###Markdown
Load CAIDA results and create CP-Core
###Code
caida_path = 'CAIDA_20181001.as-rel_cleaned.txt'
caida_p2p = list()
TIER1 = [7018, 209, 3356, 3549, 4323, 3320, 3257, 286, 6830, 2914, 5511, 3491, 1239, 6453, 6762, 12956, 701, 702, 703, 2828, 6461]
TIER1 = [ASN_index_map[asn] for asn in TIER1]
with open(caida_path, "r") as f:
for line in f:
as0, as1, label = [int(part) for part in line.split()[0].split('|')]
if label == 0 and as0 in ASN_index_map and as1 in ASN_index_map:
caida_p2p.append((ASN_index_map[as0], ASN_index_map[as1]))
print(len(caida_p2p))
#This class represents a directed graph using adjacency list representation
class Graph:
def __init__(self,vertices):
self.V = vertices #No. of vertices
self.graph = defaultdict(list) # default dictionary to store graph
self.scc_list = []
# function to add an edge to graph
def addEdge(self,u,v):
self.graph[u].append(v)
# A function used by DFS
def DFSUtil(self,v,visited):
# Mark the current node as visited and print it
visited[v]= True
# print v,
self.scc_list[-1].append(v)
#Recur for all the vertices adjacent to this vertex
for i in self.graph[v]:
if visited[i]==False:
self.DFSUtil(i,visited)
def fillOrder(self,v,visited, stack):
# Mark the current node as visited
visited[v]= True
#Recur for all the vertices adjacent to this vertex
for i in self.graph[v]:
if visited[i]==False:
self.fillOrder(i, visited, stack)
stack = stack.append(v)
# Function that returns reverse (or transpose) of this graph
def getTranspose(self):
g = Graph(self.V)
# Recur for all the vertices adjacent to this vertex
for i in self.graph:
for j in self.graph[i]:
g.addEdge(j,i)
return g
# The main function that finds and prints all strongly
# connected components
def printSCCs(self):
stack = []
# Mark all the vertices as not visited (For first DFS)
visited =[False]*(self.V)
# Fill vertices in stack according to their finishing
# times
for i in range(self.V):
if visited[i]==False:
self.fillOrder(i, visited, stack)
# Create a reversed graph
gr = self.getTranspose()
# Mark all the vertices as not visited (For second DFS)
visited =[False]*(self.V)
# Now process all vertices in order defined by Stack
while stack:
i = stack.pop()
if visited[i]==False:
gr.scc_list.append([])
gr.DFSUtil(i, visited)
# print""
scc = gr.scc_list
return scc
verteces = set()
for pair in caida_p2p:
verteces.add(pair[0])
verteces.add(pair[1])
print(len(verteces))
vertex2ind = dict()
ind2vertex = dict()
for i, vertex in enumerate(verteces):
vertex2ind[vertex] = i
ind2vertex[i] = vertex
print(len(vertex2ind), len(ind2vertex))
g = Graph(len(verteces))
for pair in caida_p2p:
g.addEdge(vertex2ind[pair[0]], vertex2ind[pair[1]])
g.addEdge(vertex2ind[pair[1]], vertex2ind[pair[0]])
SCCs = g.printSCCs()
print(len(SCCs))
for i, scc in enumerate(SCCs):
for as_tier1 in TIER1:
if vertex2ind[as_tier1] not in scc:
break
print(i)
print(len(SCCs[134]))
scc = SCCs[134]
scc = set([ind2vertex[ind] for ind in scc])
print(len(scc))
cp_core = set()
for pair in caida_p2p:
if pair[0] in scc and pair[1] in scc:
cp_core.add(pair)
cp_core.add((pair[1], pair[0]))
print(len(cp_core))
print("number of edges " + str(len(cp_core)/2))
###Output
383512
number of edges 191756
###Markdown
Create k_max-core
###Code
k_max = max(k_shell.values())
k_max_core = [vertex for vertex, k in k_shell.items() if k == k_max]
print(len(k_max_core))
k_max_edges = set()
for i in range(len(k_max_core)):
for j in range(i):
if (k_max_core[i], k_max_core[j]) in x_training or (k_max_core[i], k_max_core[j]) in x_test:
k_max_edges.add((k_max_core[i], k_max_core[j]))
if (k_max_core[j], k_max_core[i]) in x_training or (k_max_core[j], k_max_core[i]) in x_test:
k_max_edges.add((k_max_core[j], k_max_core[i]))
print(len(k_max_edges))
###Output
420
###Markdown
Create x_training_core
###Code
x_training_edges = set()
x_training_vertecs = set()
for pair in x_training:
x_training_edges.add((pair[0], pair[1]))
x_training_vertecs.add(pair[0])
x_training_vertecs.add(pair[1])
print(len(x_training_edges), len(x_training_vertecs))
###Output
464609 57888
###Markdown
Run NDTOR_CP
###Code
class NDTOR_CP:
def __init__(self, core_verteces, core_edges, routing_tables, core_labels=None, is_core=True, threshold=0.85, k_shell=None):
if is_core:
self.tor_dict = self.__intialize_tor_dict_with_core(core_edges)
print('Finished __intialize_tor_dict_with_core')
self.pahse2_routes = self.__split_path_through_core(routing_tables, core_verteces)
else:
self.tor_dict = self.__intialize_tor_dict_with_training_set(core_edges, core_labels)
print('Finished __intialize_tor_dict_with_training_set')
self.pahse2_routes = routing_tables
print('Finished __split_path_through_core with ' + str(len(self.pahse2_routes)) + ' remaining for phase 2')
self.unclassified_pairs = self.__pahse2(threshold)
print('Finished __pahse2 with ' + str(len(self.unclassified_pairs)) + " unclassified pairs")
self.__pahse3()
print('Finished __pahse3 with ' + str(len(self.unclassified_pairs)) + " unclassified pairs")
if len(self.unclassified_pairs) > 0:
if k_shell is None:
self.k_shell = self.__compute_k_shells(routing_tables)
else:
self.k_shell = k_shell
print("Finished __compute_k_shells")
self.__compare_k_shells()
print("Finished __compare_k_shells")
def __intialize_tor_dict_with_training_set(self, core_edges, core_labels):
tor_dict = dict()
for i, edge in enumerate(core_edges):
tor_dict[edge] = core_labels[i]
return tor_dict
def __intialize_tor_dict_with_core(self, core_edges):
tor_dict = dict()
for edge in core_edges:
tor_dict[edge] = 0 # 0 - p2p
return tor_dict
def __split_path_through_core(self, routes, core_verteces):
pahse2_routes = list()
for path in routes:
core_inds = list()
for j, vertex in enumerate(path):
if vertex in core_verteces:
core_inds.append(j)
if len(core_inds) == 0:
pahse2_routes.append(path)
else:
for i in range(core_inds[0]):
if (path[i], path[i+1]) not in self.tor_dict:
self.tor_dict[(path[i], path[i+1])] = 1 # 1 - c2p
self.tor_dict[(path[i+1], path[i])] = 3
for i in range(core_inds[0], core_inds[-1]):
if (path[i], path[i+1]) not in self.tor_dict:
self.tor_dict[(path[i], path[i+1])] = 0 # 0 - p2p
self.tor_dict[(path[i+1], path[i])] = 0
for i in range(core_inds[-1], len(path)-1):
if (path[i], path[i+1]) not in self.tor_dict:
self.tor_dict[(path[i], path[i+1])] = 3 # 3 - p2c
self.tor_dict[(path[i+1], path[i])] = 1
return pahse2_routes
def __pahse2(self, threshold):
votings_p2c = defaultdict(lambda:0)
votings_c2p = defaultdict(lambda:0)
voting_pairs = set()
for path in self.pahse2_routes:
pairs = list(zip(path[:-1], path[1:]))
pairs_tor = []
for i, pair in enumerate(pairs):
if pair in self.tor_dict:
pairs_tor.append(self.tor_dict[pair])
else:
pairs_tor.append(-1)
voting_pairs.add(pair)
for i in range(len(pairs)):
if pairs_tor[i] == -1:
if i > 1 and pairs_tor[i-1] == 3: #p2c
pairs_tor[i] = 3
votings_p2c[pairs[i]] += 1
if i + 1 < len(pairs) and pairs_tor[i+1] == 1: #c2p
pairs_tor[i] = 1
votings_c2p[pairs[i]] += 1
unclassified_pairs = set()
for pair in voting_pairs:
if (votings_p2c[pair] + votings_c2p[pair]) > 0:
rank = (votings_p2c[pair]*1.0)/(votings_p2c[pair] + votings_c2p[pair])
if rank >= threshold:
self.tor_dict[pair] = 3
elif rank <= (1 - threshold):
self.tor_dict[pair] = 1
else:
unclassified_pairs.add(pair)
else:
unclassified_pairs.add(pair)
return unclassified_pairs
def __pahse3(self):
for path in self.pahse2_routes:
pairs = list(zip(path[:-1], path[1:]))
if len(pairs) > 2:
for i in range(1,len(pairs)-1):
if pairs[i] not in self.tor_dict and pairs[i-1] in self.tor_dict and pairs[i+1] in self.tor_dict:
if self.tor_dict[pairs[i-1]] == 1 and self.tor_dict[pairs[i+1]] == 3:
self.tor_dict[pairs[i]] = 0
self.unclassified_pairs.remove(pairs[i])
elif self.tor_dict[pairs[i-1]] == 3 and self.tor_dict[pairs[i+1]] == 1:
self.tor_dict[pairs[i]] = 0
self.unclassified_pairs.remove(pairs[i])
def __get_k_shell(self, k, edges, k_shell):
neighbors = defaultdict(set)
k_shell_verteces = set()
for edge in edges:
neighbors[edge[0]].add(edge)
neighbors[edge[1]].add(edge)
for asn, asn_edges in neighbors.items():
if len(asn_edges) <= k:
k_shell[asn] = k
k_shell_verteces.add(asn)
return neighbors, k_shell_verteces
def __get_graph_for_routes(self, P):
edges = []
for route in P:
for edge in zip(route[:-1], route[1:]):
edges.append(edge)
return set(edges)
def __remove_k_shell_edges(self, edges, k_shell_verteces, neighbors):
for vertex in k_shell_verteces:
edges = edges - neighbors[vertex]
return edges
def __compute_k_shells(self, routes):
k_shell = dict()
k = 1
edges = self.__get_graph_for_routes(routes)
while len(edges) > 0:
print("K: " + str(k) + " Start Iteration on " + str(len(edges)) + " edges")
neighbors, k_shell_verteces = self.__get_k_shell(k, edges, k_shell)
k += 1
edges = self.__remove_k_shell_edges(edges, k_shell_verteces, neighbors)
print("Number of remaining edges: " + str(len(edges)))
print()
return k_shell
def __compare_k_shells(self):
for pair in self.unclassified_pairs:
try:
as0_k = self.k_shell[pair[0]]
except:
as0_k = 0
try:
as1_k = self.k_shell[pair[1]]
except:
as1_k = 0
if as0_k == as1_k:
self.tor_dict[pair] = 0 # p2p
elif as0_k > as1_k:
self.tor_dict[pair] = 3 # p2c
else:
self.tor_dict[pair] = 1 # c2p
def tor_dict2dataset(self):
dataset = []
labels = []
for pair, label in self.tor_dict.items():
dataset.append(np.asarray(pair))
labels.append(label)
print("Finished __tor_dict2dataset")
return np.asarray(dataset), np.asarray(labels)
def generate_labels_for_set(self, pairs):
labels = []
for pair in pairs:
if (pair[0], pair[1]) in self.tor_dict:
labels.append(self.tor_dict[(pair[0], pair[1])])
elif (pair[1], pair[0]) in self.tor_dict:
if self.tor_dict[(pair[1], pair[0])] == 0 or self.tor_dict[(pair[1], pair[0])] == 2:
labels.append(self.tor_dict[(pair[1], pair[0])])
else:
labels.append((self.tor_dict[(pair[1], pair[0])] + 2)%4)
else:
labels.append(-1)
return np.asarray(labels)
ndtor_cp = NDTOR_CP(scc, cp_core, bgp_routes,k_shell=k_shell) # CP - Core
ndtor_k_max_core = NDTOR_CP(k_max_core, k_max_edges, bgp_routes, k_shell=k_shell) # k_max_core - Core
### Reverse to original labels
core_labels = list()
for label in y_training:
if label == 2:
core_labels.append(3)
else:
core_labels.append(label)
ndtor_x_training_core = NDTOR_CP(x_training_vertecs, x_training_edges, bgp_routes, core_labels=core_labels, is_core=False, k_shell=k_shell)
# ToR_MODEL_NAME = "Cleaned_Orig_3_ToR_Classification_NDToR_x_training_core"
# with open(MODELS_PATH + ToR_MODEL_NAME + '_tor_dict.pickle', 'wb') as handle:
# pickle.dump(ndtor_k_max_core.tor_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)
# # with open(MODELS_PATH + 'k_shell.pickle', 'wb') as handle:
# # pickle.dump(ndtor_cp.k_shell, handle, protocol=pickle.HIGHEST_PROTOCOL)
# k_max = max(ndtor_cp.k_shell.values())
# k_max_core = [vertex for vertex, k in ndtor_cp.k_shell.items() if k == k_max]
# print(len(k_max_core))
# index_ASN_map = {index: ASN for ASN, index in ASN_index_map.items()}
# for ind in k_max_core:
# print(index_ASN_map[ind],)
###Output
(3356,)
(6939,)
(1299,)
(174,)
(3257,)
(2914,)
(6453,)
(209,)
(1239,)
(6762,)
(9002,)
(701,)
(6461,)
(4637,)
(3491,)
(286,)
(37100,)
(2497,)
(3303,)
(2516,)
(1273,)
###Markdown
Save the ToR dict (the k-shell export below is commented out)
###Code
with open(MODELS_PATH + ToR_MODEL_NAME + '_tor_dict.pickle', 'wb') as handle:
pickle.dump(ndtor_x_training_core.tor_dict, handle, protocol=pickle.HIGHEST_PROTOCOL)
# with open(MODELS_PATH + ToR_MODEL_NAME + 's1_k_shell.pickle', 'wb') as handle:
# pickle.dump(ndtor_cp.k_shell, handle, protocol=pickle.HIGHEST_PROTOCOL)
# with open(MODELS_PATH + 's1_k_shell.pickle', 'rb') as handle:
# k_shell = pickle.load(handle)
# print(len(k_shell))
###Output
_____no_output_____
###Markdown
Final evaluation of the model. Evaluate accuracy over the test set
###Code
y_test_prediction = ndtor_x_training_core.generate_labels_for_set(x_test)
print(set(y_test_prediction))
print(len(y_test_prediction))
y_test_prediction_new = []
for i in range(len(y_test_prediction)):
if y_test_prediction[i] %2 == 0:
y_test_prediction_new.append(0)
elif y_test_prediction[i] == 3:
y_test_prediction_new.append(2)
elif y_test_prediction[i] == 1:
y_test_prediction_new.append(1)
else:
y_test_prediction_new.append(-1)
y_test_prediction_new = np.asarray(y_test_prediction_new)
print(len(y_test_prediction_new))
y_test_prediction = y_test_prediction_new
print(set(y_test_prediction))
y_test = [y_test[i] for i, label in enumerate(y_test_prediction) if label!=-1]
y_test_prediction = [label for i, label in enumerate(y_test_prediction) if label!=-1]
print(set(y_test_prediction))
print(len(y_test), len(y_test_prediction))
from sklearn.metrics import accuracy_score
test_scores = accuracy_score(y_test, y_test_prediction)
print("Accuracy: %.2f%%" % (test_scores*100))
# x_test_cleaned = np.asarray([np.asarray(x_test[i]) for i in range(len(x_test)) if y_test_prediction[i] != -1])
# y_test_cleaned = np.asarray([y_test[i] for i in range(len(y_test)) if y_test_prediction[i] != -1])
# y_test_prediction_cleaned = np.asarray([y_test_prediction[i] for i in range(len(y_test_prediction)) if y_test_prediction[i] != -1])
# print(len(x_test_cleaned), len(y_test_cleaned), len(y_test_prediction_cleaned))
# from sklearn.metrics import accuracy_score
# test_scores = accuracy_score(y_test_cleaned, y_test_prediction_cleaned)
# print("Accuracy: %.2f%%" % (test_scores*100))
###Output
_____no_output_____
###Markdown
Test whether learning (asn1, asn2) -> p2c implies (asn2, asn1) -> c2p, and vice versa
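For reference, the reversed-pair convention implemented in `NDTOR_CP.generate_labels_for_set` above keeps the symmetric labels (0 and 2) unchanged and swaps C2P (1) with P2C (3). A tiny illustrative check (not part of the original notebook):

```python
def flip_label(label):
    # Same rule as in generate_labels_for_set: 0 and 2 are symmetric, 1 <-> 3.
    return label if label in (0, 2) else (label + 2) % 4

assert flip_label(0) == 0  # p2p stays p2p when the pair is reversed
assert flip_label(1) == 3  # c2p becomes p2c
assert flip_label(3) == 1  # p2c becomes c2p
```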
###Code
p2c = TOR_ORIG_LABELS_DICT['P2C']
c2p = TOR_ORIG_LABELS_DICT['C2P']
p2c_training = np.asarray([np.asarray(x_training[i]) for i in range(len(x_training)) if y_training[i] == p2c])
p2c_training_oposite = np.asarray([np.asarray([pair[1], pair[0]]) for pair in p2c_training])
p2c_training_labels = [p2c]*len(p2c_training)
p2c_training_oposite_labels = [c2p]*len(p2c_training_oposite)
print(p2c_training.shape, p2c_training_oposite.shape)
p2c_training_labels_prediction = generate_labels_for_set(caida_tor_dict, p2c_training)
p2c_training_scores = accuracy_score(p2c_training_labels, p2c_training_labels_prediction)
print("Accuracy: %.2f%%" % (p2c_training_scores*100))
p2c_training_oposite_labels_prediction = generate_labels_for_set(caida_tor_dict, p2c_training_oposite)
p2c_training_oposite_scores = accuracy_score(p2c_training_oposite_labels, p2c_training_oposite_labels_prediction)
print("Accuracy: %.2f%%" % (p2c_training_oposite_scores*100))
###Output
_____no_output_____
###Markdown
Plot and save a confusion matrix for results over the test set. Define a function
###Code
%matplotlib inline
import matplotlib
import pylab as pl
from sklearn.metrics import confusion_matrix
import itertools
import matplotlib.pyplot as plt
import matplotlib.cm as cm
def plot_confusion_matrix(cm, classes,
normalize=False,
title='Confusion matrix',
fname='Confusion matrix',
cmap=plt.cm.Blues):
"""
This function prints and plots the confusion matrix.
Normalization can be applied by setting `normalize=True`.
"""
if normalize:
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
print("Normalized confusion matrix")
else:
print('Confusion matrix, without normalization')
# print(cm)
plt.imshow(cm, interpolation='nearest', cmap=cmap)
# plt.title(title)
plt.colorbar()
tick_marks = np.arange(len(classes))
plt.xticks(tick_marks, classes, rotation=45)
plt.yticks(tick_marks, classes)
fmt = '.1f' if normalize else 'd'
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
if normalize:
plt.text(j, i, format(cm[i, j]*100, fmt) + '%',
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
else:
plt.text(j, i, format(cm[i, j], fmt),
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
plt.tight_layout()
plt.ylabel('True Label')
plt.xlabel('Predicted Label')
plt.savefig(fname, bbox_inches='tight')
# Compute confusion matrix
cnf_matrix = confusion_matrix(y_test, y_test_prediction)
np.set_printoptions(precision=2)
# Plot non-normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names,
title='Confusion matrix, without normalization',
fname=RESULTS_PATH + ToR_MODEL_NAME + "_" + 'Confusion_matrix_without_normalization')
# Plot normalized confusion matrix
plt.figure()
plot_confusion_matrix(cnf_matrix, classes=class_names, normalize=True,
title='Normalized Confusion Matrix',
fname=RESULTS_PATH +ToR_MODEL_NAME + "_" + 'Normalized_confusion_matrix')
plt.show()
###Output
Confusion matrix, without normalization
Normalized confusion matrix
###Markdown
Export the model to a file
###Code
model_json = pairs_model.to_json()
with open(MODELS_PATH + ToR_MODEL_NAME + '.json', "w") as json_file:
json_file.write(model_json)
pairs_model.save_weights(MODELS_PATH + ToR_MODEL_NAME + '.h5')
print("Save Model")
###Output
_____no_output_____
###Markdown
Export results to a csv file (with original ASNs). Define functions
###Code
def index2ASN(dataset_indexed, ASN_index_map):
dataset = []
index_ASN_map = {index: ASN for ASN, index in ASN_index_map.items()}
for row_indexed in dataset_indexed:
row = []
for index in row_indexed:
if index != 0:
row += [index_ASN_map[index]]
dataset.append(row)
return dataset
def index2ASN_labeled(dataset_indexed, labels_indexed, ASN_index_map):
dataset = []
index_ASN_map = {index: ASN for ASN, index in ASN_index_map.items()}
labels_colors_map = {0:'GREEN', 1:'RED'}
for i, row_indexed in enumerate(dataset_indexed):
row = []
for index in row_indexed:
if index != 0:
row += [index_ASN_map[index]]
row += [labels_colors_map[labels_indexed[i]]]
dataset.append(row)
return dataset
import csv
def export_csv(dataset, csv_name):
with open(csv_name + '.csv', 'w', newline='') as csv_file:
csv_writer = csv.writer(csv_file)
for row in dataset:
csv_writer.writerow(row)
###Output
_____no_output_____
###Markdown
Load a relevant dataset {all, misclassified, decided, undecided} and get model predictions
###Code
### misclassified from the entire dataset ###
dataset = np.load(DATA_PATH + "bgp_routes_indexed_dataset.npy")
labels = np.load(DATA_PATH + "bgp_routes_labels.npy")
# remove UNDECIDED
dataset = np.asarray([np.asarray(dataset[i]) for i in range(len(dataset)) if labels[i] != 2])
labels = np.asarray([labels[i] for i in range(len(labels)) if labels[i] != 2])
# pad sequences
dataset = sequence.pad_sequences(dataset, maxlen=max_len)
# Get Model Predictions
predictions = model.predict_classes(dataset, verbose=1)
# Create misclassified dataset
x_misclassified = np.asarray([route for i,route in enumerate(dataset) if labels[i] != predictions[i]])
y_misclassified_prediction = np.asarray([label for i,label in enumerate(predictions) if labels[i] != predictions[i]])
print(len(x_misclassified), len(y_misclassified_prediction))
###Output
_____no_output_____
###Markdown
Export Results
###Code
dataset_misclassified = index2ASN_labeled(x_misclassified, y_misclassified_prediction, ASN_index_map)
export_csv(dataset_misclassified, RESULTS_PATH + MODEL_NAME + "_misclassified")
###Output
_____no_output_____ |
lingam.ipynb | ###Markdown
VAR-LiNGAM
###Code
import numpy as np
import matplotlib.pyplot as plt
import lingam

t = np.arange(1, 101)
x = t/10 + np.random.normal(size=len(t))
y = x + np.random.normal(size=len(t))
plt.plot(t, x)
plt.plot(t, y)
plt.show()
data = np.array([x, y]).T  # stack the two series as columns: shape (100, 2)
model = lingam.VARLiNGAM()
model.fit(data)
print(model.causal_order_)
print(model.adjacency_matrices_)
from lingam.utils import make_dot
labels = ['x(t)', 'y(t)', 'x(t-1)', 'y(t-1)']
make_dot(np.hstack(model.adjacency_matrices_), ignore_shape=True, lower_limit=0.05, labels=labels)
###Output
_____no_output_____
###Markdown
LiNGAM
###Code
t = np.arange(1, 101)
x = t/10 + np.random.normal(size=len(t))
y = x + np.random.normal(size=len(t))
plt.scatter(x, y)
plt.show()
data = np.array([x, y]).T  # stack the two series as columns: shape (100, 2)
###Output
_____no_output_____
###Markdown
DirectLiNGAM
###Code
model = lingam.DirectLiNGAM()
model.fit(data)
print(model.causal_order_)
print(model.adjacency_matrix_)
from lingam.utils import make_dot
labels = ['x', 'y']
make_dot(model.adjacency_matrix_, labels=labels)
# Total Effect
# x0 --> x1
te = model.estimate_total_effect(data, 0, 1)
print(f'total effect: {te:.3f}')
te = model.estimate_total_effect(data, 1, 0)
print(f'total effect: {te:.3f}')
###Output
total effect: 0.942
###Markdown
ICALiNGAM
###Code
model = lingam.ICALiNGAM()
model.fit(data)
print(model.causal_order_)
print(model.adjacency_matrix_)
labels = ['x', 'y']
make_dot(model.adjacency_matrix_, labels=labels)
###Output
_____no_output_____
###Markdown
Adding prior knowledge: use the prior_knowledge argument. Prior knowledge can only be passed to DirectLiNGAM. Ref: https://lingam.readthedocs.io/en/latest/tutorial/prior_knowledge.html
###Code
from lingam.utils import make_prior_knowledge
prior_knowledge = make_prior_knowledge(
n_variables=4,
no_paths=[[0, 1], [1, 2], [1, 0], [2, 1]])
print(prior_knowledge)
t = np.arange(1, 101)
x0 = t/10 + np.random.normal(size=len(t))
x1 = t/10 + np.random.normal(size=len(t))
x2 = t/10 + np.random.normal(size=len(t))
x3 = t/10 + np.random.normal(size=len(t))
data = np.array([x0, x1, x2, x3]).T  # stack the four series as columns: shape (100, 4)
model = lingam.DirectLiNGAM(prior_knowledge=prior_knowledge)
model.fit(data)
print(model.causal_order_)
print(model.adjacency_matrix_)
labels = ['x0', 'x1', 'x2', 'x3']
make_dot(model.adjacency_matrix_, labels=labels)
model._prior_knowledge
###Output
_____no_output_____ |
RR_Lyrae_classifications.ipynb | ###Markdown
RR-Lyrae classification
We use the RR-Lyrae training dataset from astroML.datasets.
###Code
# Importing Libraries
import numpy as np
from matplotlib import pyplot as plt
from matplotlib import colors
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import mean_absolute_error
from astroML.datasets import fetch_rrlyrae_combined
from astroML.utils import split_samples
from astroML.utils import completeness_contamination
% matplotlib inline
# Downloading RR-Lyrae data
X, y = fetch_rrlyrae_combined()
# Observed colors
print(X)
color_names = ["$u-g$", "$g-r$", "$r-i$", "$i-z$"]
# make a boolean array denoting classification as RR Lyrae
isRR = (y == 1)
noRR = (y == 0) # NOTE: that (~noRR) is simply isRR
# Plotting the 1D color-distributions
plt.figure(figsize=(12, 8))
for i in range(4):
color = X[:, i]
bins = np.linspace(np.nanmin(color), np.nanmax(color), 20) # this is to have a consistent no of bins in both histograms
plt.subplot(221 + i)
plt.hist(color[isRR], bins=bins, log=True, color="r", histtype="step", label="RR lyrae")
plt.hist(color[noRR], bins=bins, log=True, color="k", histtype="step", label="stars")
plt.xlabel(color_names[i])
plt.legend(loc="upper left")
plt.tight_layout()
plt.show()
# Plotting 2D colors
# in scatter plots (not histograms), show 5000 non-RR Lyrae stars
N_plot = 5000 + int(sum(y))
noRR[:-N_plot] = False
plt.figure(figsize=(12, 8))
k = 1
for i in range(4):
c1 = X[:, i]
for j in range(i + 1, 4):
c2 = X[:, j]
plt.subplot(320 + k)
plt.plot(c1[noRR], c2[noRR], "k.", label="stars")
plt.plot(c1[isRR], c2[isRR], "r.", label="RR lyrae")
plt.xlabel(color_names[i])
plt.ylabel(color_names[j])
plt.legend(loc="upper right", framealpha=0.7, mode="expand", ncol=2)
k += 1
plt.tight_layout()
plt.show()
# Plotting 3D colors
from mpl_toolkits.mplot3d import Axes3D
combinations = [(1, 0, 2), (1, 0, 3), (1, 2, 3), (0, 2, 3)]
fig = plt.figure(figsize=(12, 12))
for index, combination in enumerate(combinations):
i, j, k = combination
ax = fig.add_subplot(221 + index, projection='3d')
ax.view_init(60, -130) # set camera position for better visualization
ax.scatter(X[:, i][noRR], X[:, j][noRR], X[:, k][noRR], c=[0.5,0.7,0.7], marker="o", alpha=0.5,
edgecolors="k", label="stars")
ax.scatter(X[:, i][isRR], X[:, j][isRR], X[:, k][isRR], c="r", edgecolors="k", label="RR Lyrae")
ax.set_xlabel(color_names[i])
ax.set_ylabel(color_names[j])
ax.set_zlabel(color_names[k])
ax.legend()
plt.show()
# RUNNING THE kNN CLASSIFIER
# split the sample in a training and test subset
(X_train, X_test), (y_train, y_test) = split_samples(X, y, [0.75, 0.25], random_state=0)
N_tot = len(y) # number of stars
N_st = np.sum(y == 0) # number of non-RR Lyrae stars
N_rr = N_tot - N_st # number of RR Lyrae
N_train = len(y_train) # size of training sample
N_test = len(y_test) # size of test sample
N_plot = 5000 + N_rr # number of stars plotted (for better visualization)
Ncolors = np.arange(1, X.shape[1] + 1) # number of available colors
print(np.sqrt(N_rr)) # because the best selection of k ~ sqrt(N)
###Output
_____no_output_____
###Markdown
1) k-Nearest Neighbour
###Code
# PERFORM CLASSIFICATION FOR VARIOUS VALUES OF k
# for each 'k', store the classifier and predictions on test sample
classifiers = []
predictions = []
kvals = [1,10,20,50,100] # k values to be used
for k in kvals:
classifiers.append([])
predictions.append([])
for nc in Ncolors:
clf = KNeighborsClassifier(n_neighbors=k) # prepare the classifiers
clf.fit(X_train[:, :nc], y_train) # supply training data
y_pred = clf.predict(X_test[:, :nc]) # predict class of test data
classifiers[-1].append(clf)
predictions[-1].append(y_pred)
# use astroML
completeness, contamination = completeness_contamination(predictions, y_test)
print('Completeness and contamination per color (col) and per k (line)')
print('Colors: ',color_names)
print("completeness", completeness)
print("contamination", contamination)
# COMPUTE DECISION BOUNDARY
clf = classifiers[1][1]
xlim = (0.7, 1.35)
ylim = (-0.15, 0.4)
xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 71), np.linspace(ylim[0], ylim[1], 81))
Z = clf.predict(np.c_[yy.ravel(), xx.ravel()])
Z = Z.reshape(xx.shape)
# PLOT THE RESULTS
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0, left=0.1, right=0.95, wspace=0.2)
# # left plot: data and decision boundary
# ax = fig.add_subplot(121)
# im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:], s=4, lw=0, cmap=plt.cm.binary, zorder=2)
# im.set_clim(-0.5, 1)
# im = ax.imshow(Z, origin='lower', aspect='auto', cmap=plt.cm.binary, zorder=1, extent=xlim + ylim)
# im.set_clim(0, 2)
# ax.contour(xx, yy, Z, [0.5], colors='k')
# ax.set_xlim(xlim)
# ax.set_ylim(ylim)
# ax.set_xlabel('$u-g$')
# ax.set_ylabel('$g-r$')
# ax.text(0.02, 0.02, "k = %i" % kvals[1], transform=ax.transAxes)
# plot completeness vs Ncolors
ax = fig.add_subplot(222)
ax.plot(Ncolors, completeness[0], 'o-k', ms=6, label='k=%i' % kvals[0])
ax.plot(Ncolors, completeness[1], '^--k', ms=6, label='k=%i' % kvals[1])
ax.plot(Ncolors, completeness[2], 'v:k', ms=6, label='k=%i' % kvals[2])
ax.plot(Ncolors, completeness[3], 'o--k', ms=6, label='k=%i' % kvals[3])
ax.plot(Ncolors, completeness[4], '^:k', ms=6, label='k=%i' % kvals[4])
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylabel('completeness')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
# plot contamination vs Ncolors
ax = fig.add_subplot(224)
ax.plot(Ncolors, contamination[0], 'o-k', label='k=%i' % kvals[0])
ax.plot(Ncolors, contamination[1], '^--k', label='k=%i' % kvals[1])
ax.plot(Ncolors, contamination[2], 'v:k', label='k=%i' % kvals[2])
ax.plot(Ncolors, contamination[3], 'o--k', label='k=%i' % kvals[3])
ax.plot(Ncolors, contamination[4], '^:k', label='k=%i' % kvals[4])
ax.legend(loc='lower right', bbox_to_anchor=(1.0, 0.79))
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i'))
ax.set_xlabel('N colors')
ax.set_ylabel('contamination')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
2) Support Vector Machines
###Code
def apply_SVM(linear):
if linear:
kernel_to_use = "linear" # if 1D use linear boundary
gamma_to_use = "auto"
else:
kernel_to_use = "rbf" # if n-D use hyperplane boundary
gamma_to_use = 20.0
def compute_SVM(Ncolors):
classifiers = []
predictions = []
for nc in Ncolors:
print(" Computing for", nc, "color(s)...")
# perform support vector classification
clf = SVC(kernel=kernel_to_use, gamma=gamma_to_use, class_weight='balanced')
clf.fit(X_train[:, :nc], y_train)
y_pred = clf.predict(X_test[:, :nc])
classifiers.append(clf)
predictions.append(y_pred)
return classifiers, predictions
print("Performing SVM classification...")
classifiers, predictions = compute_SVM(Ncolors)
completeness, contamination = completeness_contamination(predictions, y_test)
print("completeness", completeness)
print("contamination", contamination)
# COMPUTE THE DECISION BOUNDARY
clf = classifiers[1]
if linear:
w = clf.coef_[0]
a = -w[0] / w[1]
yy = np.linspace(-0.1, 0.4)
xx = a * yy - clf.intercept_[0] / w[1]
else:
xlim = (0.7, 1.35)
ylim = (-0.15, 0.4)
xx, yy = np.meshgrid(np.linspace(xlim[0], xlim[1], 101), np.linspace(ylim[0], ylim[1], 101))
Z = clf.predict(np.c_[yy.ravel(), xx.ravel()])
Z = Z.reshape(xx.shape)
# smooth the boundary
from scipy.ndimage import gaussian_filter
Z = gaussian_filter(Z, 2)
# PLOT THE RESULTS
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0, left=0.1, right=0.95, wspace=0.2)
# # left plot: data and decision boundary
# ax = fig.add_subplot(121)
# im = ax.scatter(X[-N_plot:, 1], X[-N_plot:, 0], c=y[-N_plot:], s=4, lw=0, cmap=plt.cm.binary, zorder=2)
# if linear:
# ax.plot(xx, yy, '-k')
# else:
# ax.contour(xx, yy, Z, [0.5], colors='k')
# im.set_clim(-0.5, 1)
# ax.set_xlim(0.7, 1.35)
# ax.set_ylim(-0.15, 0.4)
# ax.set_xlabel('$u-g$')
# ax.set_ylabel('$g-r$')
# plot completeness vs Ncolors
ax = fig.add_subplot(222)
ax.plot(Ncolors, completeness, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylabel('completeness')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
# plot contamination vs Ncolors
ax = fig.add_subplot(224)
ax.plot(Ncolors, contamination, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i'))
ax.set_xlabel('N colors')
ax.set_ylabel('contamination')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
plt.show()
apply_SVM(linear=True)
###Output
Performing SVM classification...
Computing for 1 color(s)...
Computing for 2 color(s)...
Computing for 3 color(s)...
Computing for 4 color(s)...
completeness [0.94890511 1. 1. 1. ]
contamination [0.96057022 0.85347594 0.85347594 0.85471898]
###Markdown
However, if the boundary is not a simple linear one but a non-linear (RBF-kernel) boundary, then:
###Code
apply_SVM(linear=False)
###Output
Performing SVM classification...
Computing for 1 color(s)...
Computing for 2 color(s)...
Computing for 3 color(s)...
Computing for 4 color(s)...
completeness [0.94890511 1. 1. 1. ]
contamination [0.95967742 0.83901293 0.83573141 0.81561238]
###Markdown
3) Random-Forests Classifier
###Code
from sklearn.ensemble import RandomForestClassifier
def apply_RF():
def compute_RF(Ncolors):
classifiers = []
predictions = []
for nc in Ncolors:
print(" Computing for", nc, "color(s)...")
# perform support vector classification
clf = RandomForestClassifier()
clf.fit(X_train[:, :nc], y_train)
y_pred = clf.predict(X_test[:, :nc])
classifiers.append(clf)
predictions.append(y_pred)
return classifiers, predictions
print("Performing RF classification...")
classifiers, predictions = compute_RF(Ncolors)
completeness, contamination = completeness_contamination(predictions, y_test)
print("completeness", completeness)
print("contamination", contamination)
# PLOT THE RESULTS
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0, left=0.1, right=0.95, wspace=0.2)
# plot completeness vs Ncolors
ax = fig.add_subplot(222)
ax.plot(Ncolors, completeness, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylabel('completeness')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
# plot contamination vs Ncolors
ax = fig.add_subplot(224)
ax.plot(Ncolors, contamination, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i'))
ax.set_xlabel('N colors')
ax.set_ylabel('contamination')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
plt.show()
apply_RF()
###Output
Performing RF classification...
Computing for 1 color(s)...
Computing for 2 color(s)...
Computing for 3 color(s)...
Computing for 4 color(s)...
completeness [0. 0.24087591 0.37226277 0.46715328]
contamination [1. 0.45901639 0.17741935 0.15789474]
###Markdown
4) AdaBoost
###Code
from sklearn.ensemble import AdaBoostClassifier
def apply_ada():
def compute_ada(Ncolors):
classifiers = []
predictions = []
for nc in Ncolors:
print(" Computing for", nc, "color(s)...")
# perform support vector classification
clf = AdaBoostClassifier()
clf.fit(X_train[:, :nc], y_train)
y_pred = clf.predict(X_test[:, :nc])
classifiers.append(clf)
predictions.append(y_pred)
return classifiers, predictions
print("Performing AdaBoost classification...")
classifiers, predictions = compute_ada(Ncolors)
completeness, contamination = completeness_contamination(predictions, y_test)
print("completeness", completeness)
print("contamination", contamination)
# PLOT THE RESULTS
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(bottom=0.15, top=0.95, hspace=0.0, left=0.1, right=0.95, wspace=0.2)
# plot completeness vs Ncolors
ax = fig.add_subplot(222)
ax.plot(Ncolors, completeness, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.NullFormatter())
ax.set_ylabel('completeness')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
# plot contamination vs Ncolors
ax = fig.add_subplot(224)
ax.plot(Ncolors, contamination, 'o-k', ms=6)
ax.xaxis.set_major_locator(plt.MultipleLocator(1))
ax.yaxis.set_major_locator(plt.MultipleLocator(0.2))
ax.xaxis.set_major_formatter(plt.FormatStrFormatter('%i'))
ax.set_xlabel('N colors')
ax.set_ylabel('contamination')
ax.set_xlim(0.5, 4.5)
ax.set_ylim(-0.1, 1.1)
ax.grid(True)
plt.show()
apply_ada()
###Output
Performing AdaBoost classification...
Computing for 1 color(s)...
Computing for 2 color(s)...
Computing for 3 color(s)...
Computing for 4 color(s)...
completeness [0. 0.31386861 0.40875912 0.45985401]
contamination [1. 0.36764706 0.24324324 0.23170732]
|
analysis_notebooks/Analyze Experiment 0 - All.ipynb | ###Markdown
--- HS ID by batch size
###Code
hs_ids_0 = all_0['hs_id'].unique()
hs_ids_1 = all_1['hs_id'].unique()
hs_ids_2 = all_2['hs_id'].unique()
overlap_hs_ids = np.intersect1d(np.intersect1d(hs_ids_0, hs_ids_1), hs_ids_2)
unioned_hs_ids = np.union1d(np.union1d(hs_ids_0, hs_ids_1), hs_ids_2)
print('Successful HS batch size 96: {}.'.format(hs_ids_0.shape[0]))
print('Successful HS batch size 384: {}.'.format(hs_ids_1.shape[0]))
print('Successful HS batch size 1536: {}.'.format(hs_ids_2.shape[0]))
print('Successful HS intersection batch sizes: {}.'.format(overlap_hs_ids.shape[0]))
print('Successful HS union batch sizes: {}.'.format(unioned_hs_ids.shape[0]))
###Output
Successful HS batch size 96: 764.
Successful HS batch size 384: 763.
Successful HS batch size 1536: 301.
Successful HS intersection batch sizes: 298.
Successful HS union batch sizes: 775.
###Markdown
--- Plots
###Code
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_context("paper")
sns.set(font_scale=1.5)
%load_ext autoreload
%autoreload 2
all_df['batch_size'] = all_df['total_batch_size']
tmp_df = all_df[all_df['hs_id'].isin(overlap_hs_ids)]
plt.figure(figsize=(14, 8))
sns.pointplot(x="iteration", y="hits_to_batch_size_ratio", hue="batch_size", data=tmp_df.drop('total'))
plt.title('CBWS Hyperparameter Sweep - Mean of per-iteration hits grouped by batch_size');
plt.ylabel('hits / batch_size ratio')
plt.show()
tmp_df = all_df[all_df['hs_id'].isin(overlap_hs_ids)]
running_sum = tmp_df.drop('total').groupby('hyperparameter_id').cumsum(axis=0)
running_sum['cumsum_hits_to_budget_ratio'] = running_sum['total_hits'] / running_sum['total_batch_size']
running_sum['batch_size'] = tmp_df.drop('total')['total_batch_size']
running_sum['iteration'] = tmp_df.drop('total')['iteration']
plt.figure(figsize=(14, 8))
sns.pointplot(x="iteration", y="cumsum_hits_to_budget_ratio", hue="batch_size", data=running_sum)
plt.title('CBWS Hyperparameter Sweep - Mean of cumulative hits grouped by batch_size')
plt.ylabel('cumulative_hits / batch_size ratio')
plt.show()
###Output
_____no_output_____
###Markdown
--- In-Depth
###Code
top_hs_0 = all_0[all_0['iteration'] == 9999].sort_values('total_hits')
top_hs_0 = top_hs_0.reset_index(drop=True)
top_hs_1 = all_1[all_1['iteration'] == 9999].sort_values('total_hits')
top_hs_1 = top_hs_1.reset_index(drop=True)
top_hs_2 = all_2[all_2['iteration'] == 9999].sort_values('total_hits')
top_hs_2 = top_hs_2.reset_index(drop=True)
tmp_df0 = top_hs_0.iloc[-15:,:][['hs_id', 'exploitation_batch_size', 'exploration_batch_size',
'exploitation_hits', 'exploration_hits',
'total_hits']]
tmp_df0
#print(tmp_df0.to_latex(index=False))
tmp_df1 = top_hs_1.iloc[-15:,:][['hs_id', 'exploitation_batch_size', 'exploration_batch_size',
'exploitation_hits', 'exploration_hits',
'total_hits']]
tmp_df1
#print(tmp_df1.to_latex(index=False))
tmp_df2 = top_hs_2.iloc[-15:,:][['hs_id', 'exploitation_batch_size', 'exploration_batch_size',
'exploitation_hits', 'exploration_hits',
'total_hits']]
tmp_df2
#print(tmp_df2.to_latex(index=False))
overlap = np.hstack([np.intersect1d(tmp_df0['hs_id'].unique(), tmp_df1['hs_id'].unique()), np.intersect1d(tmp_df0['hs_id'].unique(), tmp_df2['hs_id'].unique()), np.intersect1d(tmp_df1['hs_id'].unique(), tmp_df2['hs_id'].unique())])
print('Overlapping top 15 hyperparameters: {}.'.format(overlap))
###Output
Overlapping top 15 hyperparameters: ['CBWS_678' 'CBWS_28' 'CBWS_219' 'CBWS_411'].
###Markdown
--- DTK tests for hyperparameters
Null hypothesis: the groups have the same mean.
If the confidence interval does not contain 0, then we REJECT the null hypothesis; i.e. the groups do not have the same mean.
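For intuition, the same reject-when-the-CI-excludes-zero logic can be illustrated with statsmodels' Tukey HSD test (imported in the next cell but not otherwise used there); the DTK (Dunnett-Tukey-Kramer) test called below via rpy2 additionally allows unequal variances and sample sizes. The two synthetic groups here are purely illustrative:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(10, 2, size=50), rng.normal(12, 2, size=50)])
groups = ['A'] * 50 + ['B'] * 50
result = pairwise_tukeyhsd(values, groups, alpha=0.05)
print(result)  # 'reject' is True exactly when the CI for the mean difference excludes 0
```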
###Code
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from statsmodels.stats.multicomp import MultiComparison
import rpy2.robjects as robjects
import rpy2.robjects.packages as rpackages
# Null Hypothesis: Groups have same mean
# If confidence interval does not contain 0, then we REJECT null hypothesis
dtk_lib = rpackages.importr('DTK')
alpha=0.05
def get_important_hs(top_hs):
config_df = pd.concat([pd.read_csv(x) for x in top_hs['config_file']])
config_df = config_df.reset_index(drop=True)
req_cols = config_df.columns[2:-4]
dtk_dict = {}
for c in req_cols:
df = pd.concat([top_hs['total_hits'], config_df[c]], axis=1)
group_names = list(np.sort(df[c].unique()))
group_means = df.groupby(c).mean()
index_names_1 = []
index_names_2 = []
mean1 = []
mean2 = []
for i in range(len(group_names)):
for j in range(i+1, len(group_names)):
index_names_1.append(group_names[j])
index_names_2.append(group_names[i])
mean1.append(group_means.iloc[j,0])
mean2.append(group_means.iloc[i,0])
m_df_mat = np.around(df['total_hits'].as_matrix(), decimals=4)
dtk_results_init = dtk_lib.DTK_test(robjects.FloatVector(m_df_mat), robjects.FactorVector(df[c].tolist()), alpha)
dtk_results = np.array(dtk_results_init[1])
dtk_pd = pd.DataFrame(data=[index_names_1, index_names_2,
list(mean1), list(mean2),
list(dtk_results[:,0]),list(dtk_results[:,1]),
list(dtk_results[:,2]), [False for _ in range(len(index_names_1))]]).T
dtk_pd.columns = ['group1', 'group2', 'mean1', 'mean2', 'meandiff', 'Lower CI', 'Upper CI', 'reject']
for j in range(dtk_pd.shape[0]):
if dtk_pd.loc[j,'Lower CI'] > 0 or dtk_pd.loc[j,'Upper CI'] < 0:
dtk_pd.loc[j,'reject'] = True
if True in list(dtk_pd['reject']):
dtk_dict[c] = dtk_pd
important_hs = {}
for k in dtk_dict:
dtk_pd = dtk_dict[k]
g1_max = dtk_pd[dtk_pd['mean1'] == dtk_pd['mean1'].max()].iloc[0,:]
g2_max = dtk_pd[dtk_pd['mean2'] == dtk_pd['mean2'].max()].iloc[0,:]
if g1_max['mean1'] > g2_max['mean2']:
important_hs[k] = g1_max['group1']
else:
important_hs[k] = g1_max['group2']
return important_hs
top_hs_0 = all_0[all_0['iteration'] == 9999].sort_values('total_hits')
top_hs_0 = top_hs_0.reset_index(drop=True)
important_hs_0 = get_important_hs(top_hs_0)
top_hs_1 = all_1[all_1['iteration'] == 9999].sort_values('total_hits')
top_hs_1 = top_hs_1.reset_index(drop=True)
important_hs_1 = get_important_hs(top_hs_1)
top_hs_2 = all_2[all_2['iteration'] == 9999].sort_values('total_hits')
top_hs_2 = top_hs_2.reset_index(drop=True)
important_hs_2 = get_important_hs(top_hs_2)
###Output
C:\Users\Moeman\Anaconda3\lib\site-packages\ipykernel_launcher.py:34: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
C:\Users\Moeman\Anaconda3\lib\site-packages\ipykernel_launcher.py:34: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
C:\Users\Moeman\Anaconda3\lib\site-packages\ipykernel_launcher.py:34: FutureWarning: Method .as_matrix will be removed in a future version. Use .values instead.
###Markdown
--- Prepare config files for Experiment 1
Run the top-15, middle-5, and bottom-5 hyperparameters (based on total_hits) from each batch size with a different starting initial 96-compound plate.
Number of different initial starts: 10.
Total of (45+15+15) (hyperparams) \* 3 (batch_sizes) \* 10 (initial plates) = 2250 jobs; a quick arithmetic check follows below.
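A quick arithmetic check of that job count (illustrative only):

```python
selected_per_sweep = 15 + 5 + 5        # top + middle + worst hyperparameter sets
hyperparams = selected_per_sweep * 3   # selected from each of the three sweeps: 45 + 15 + 15
jobs = hyperparams * 3 * 10            # x 3 batch sizes x 10 initial plates
assert jobs == 2250
```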
###Code
import pathlib
import json
top_hs_0 = all_0[all_0['iteration'] == 9999].sort_values('total_hits')
top_hs_0 = top_hs_0.reset_index(drop=True)
top_hs_1 = all_1[all_1['iteration'] == 9999].sort_values('total_hits')
top_hs_1 = top_hs_1.reset_index(drop=True)
top_hs_2 = all_2[all_2['iteration'] == 9999].sort_values('total_hits')
top_hs_2 = top_hs_2.reset_index(drop=True)
config_dir = '../param_configs/first_pass_hyperparams/top/batch_size_{}/'
for bsize, top_hs in zip([96, 384, 1536],
[top_hs_0.iloc[-15:,:], top_hs_1.iloc[-15:,:], top_hs_2.iloc[-15:,:]]):
cf_dir = config_dir.format(bsize)
pathlib.Path(cf_dir).mkdir(parents=True, exist_ok=True)
config_files = top_hs['config_file']
hs_ids = top_hs['hyperparameter_id'].apply((lambda x: '_'.join(x.split('_')[2:4])))
for hid, cf in zip(hs_ids, config_files):
cdf = pd.read_csv(cf)
cdf = cdf.iloc[0].to_dict()
cdf['batch_size'] = [96, 384, 1536]
cdf['hyperparameter_id'] = hid
cdf['hyperparameter_group'] = 'top_{}'.format(bsize)
for k in cdf:
if type(cdf[k]) == np.bool_:
cdf[k] = bool(cdf[k])
elif type(cdf[k]) == np.int64:
cdf[k] = int(cdf[k])
elif type(cdf[k]) == np.float64:
cdf[k] = float(cdf[k])
with open(cf_dir + hid+'.json', 'w') as f:
json.dump(cdf, f)
config_dir = '../param_configs/first_pass_hyperparams/middle/batch_size_{}/'
for bsize, top_hs in zip([96, 384, 1536],
[top_hs_0.iloc[-150:-150+5], top_hs_1.iloc[-150:-150+5], top_hs_2.iloc[-100:-100+5]]):
cf_dir = config_dir.format(bsize)
pathlib.Path(cf_dir).mkdir(parents=True, exist_ok=True)
config_files = top_hs['config_file']
hs_ids = top_hs['hyperparameter_id'].apply((lambda x: '_'.join(x.split('_')[2:4])))
for hid, cf in zip(hs_ids, config_files):
cdf = pd.read_csv(cf)
cdf = cdf.iloc[0].to_dict()
cdf['batch_size'] = [96, 384, 1536]
cdf['hyperparameter_id'] = hid
cdf['hyperparameter_group'] = 'middle_{}'.format(bsize)
for k in cdf:
if type(cdf[k]) == np.bool_:
cdf[k] = bool(cdf[k])
elif type(cdf[k]) == np.int64:
cdf[k] = int(cdf[k])
elif type(cdf[k]) == np.float64:
cdf[k] = float(cdf[k])
with open(cf_dir + hid+'.json', 'w') as f:
json.dump(cdf, f)
config_dir = '../param_configs/first_pass_hyperparams/worst/batch_size_{}/'
for bsize, top_hs in zip([96, 384, 1536],
[top_hs_0.iloc[:5,:], top_hs_1.iloc[:5,:], top_hs_2.iloc[:5,:]]):
cf_dir = config_dir.format(bsize)
pathlib.Path(cf_dir).mkdir(parents=True, exist_ok=True)
config_files = top_hs['config_file']
hs_ids = top_hs['hyperparameter_id'].apply((lambda x: '_'.join(x.split('_')[2:4])))
for hid, cf in zip(hs_ids, config_files):
cdf = pd.read_csv(cf)
cdf = cdf.iloc[0].to_dict()
cdf['batch_size'] = [96, 384, 1536]
cdf['hyperparameter_id'] = hid
cdf['hyperparameter_group'] = 'worst_{}'.format(bsize)
for k in cdf:
if type(cdf[k]) == np.bool_:
cdf[k] = bool(cdf[k])
elif type(cdf[k]) == np.int64:
cdf[k] = int(cdf[k])
elif type(cdf[k]) == np.float64:
cdf[k] = float(cdf[k])
with open(cf_dir + hid+'.json', 'w') as f:
json.dump(cdf, f)
###Output
_____no_output_____
###Markdown
--- Setup 10 random initial 96-compound plates with exactly 1 active
###Code
csv_files = glob.glob('../datasets/aid624173_cv_96/*.csv')
files_with_actives = []
for c in csv_files:
df = pd.read_csv(c)
if df['pcba-aid624173'].sum() > 0:
files_with_actives.append(c)
files_with_actives = [f for f in files_with_actives if 'unlabeled_10.csv' not in f]
random_active_files = list(np.random.choice(files_with_actives, size=10, replace=False))
import pathlib, os, shutil, glob
import json
import pandas as pd
import numpy as np
random_active_files =['../datasets/aid624173_cv_96/unlabeled_1338.csv',
'../datasets/aid624173_cv_96/unlabeled_424.csv',
'../datasets/aid624173_cv_96/unlabeled_3845.csv',
'../datasets/aid624173_cv_96/unlabeled_1179.csv',
'../datasets/aid624173_cv_96/unlabeled_2233.csv',
'../datasets/aid624173_cv_96/unlabeled_1069.csv',
'../datasets/aid624173_cv_96/unlabeled_2053.csv',
'../datasets/aid624173_cv_96/unlabeled_3303.csv',
'../datasets/aid624173_cv_96/unlabeled_1017.csv',
'../datasets/aid624173_cv_96/unlabeled_150.csv']
hparams_files = glob.glob('../param_configs/first_pass_hyperparams/*/*.json')
###Output
_____no_output_____ |
Code/Community_assets/Community_Assets.ipynb | ###Markdown
Toronto Police Service Community Asset Portal
---
The website of interest is dynamic, meaning it needs to be interacted with in order for the data of interest to be fetched from the back-end server and then displayed on the webpage. For example, the table HTML changes as you scroll through it. As such, Selenium, with its more nuanced tools for navigating webpages, is a better choice for web scraping than Splinter.
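Because the grid is populated asynchronously, an explicit wait can be more robust than fixed `time.sleep` calls. The sketch below is illustrative rather than part of the original workflow; it reuses the `dgrid-row-table` class that the parsing code further down targets:

```python
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By

def wait_for_rows(driver, timeout=15):
    # Block until at least one data-grid row table has been rendered.
    return WebDriverWait(driver, timeout).until(
        EC.presence_of_all_elements_located((By.CSS_SELECTOR, "table.dgrid-row-table"))
    )
```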
###Code
#import libraries
from bs4 import BeautifulSoup as BS
from selenium import webdriver
import pandas as pd
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import Select
import time
import numpy as np
###Output
_____no_output_____
###Markdown
Web Scrape
===
The webpage has a drop-down menu to select tables of services by category. The categories are as follows:
- Health Services
- Community Services
- Food & Housing
- Law & Government
- Education & Employment
- Financial Services
- Transportation
- Other
- ReceivedData
###Code
#open url
DRIVER_PATH = "driver/chromedriver.exe"
driver = webdriver.Chrome(executable_path=DRIVER_PATH)
driver.get("https://torontops.maps.arcgis.com/home/item.html?id=077c19d8628b44c7ab9f0fff75a55211&view=list&sortOrder=true&sortField=defaultFSOrder#data")
###Output
_____no_output_____
###Markdown
Scraping process is as follows (a reusable sketch of this loop is given after the list):
1. select and open webpage version (table category)
2. scrape html tables (each row is an element)
3. append output to list
4. loop:
   > locate and click on last element in static html (triggering html to load new elements)
   > append output
5. concat into a dataframe
6. drop duplicates
7. add headers
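Since every category below repeats these steps verbatim, they could be wrapped in a single helper. This is only a sketch, assuming `driver`, `BS`, `pd`, `Select`, and `time` are the objects imported above and reusing the XPath from the cells below:

```python
def scrape_category(driver, category, scrolls=35, pause=5):
    """Select a category in the portal dropdown and collect its grid rows into a DataFrame."""
    Select(driver.find_element_by_xpath(
        "//select[@data-dojo-attach-point = 'layerSelect']")).select_by_visible_text(category)
    time.sleep(pause)
    rows = []
    for _ in range(scrolls):
        soup = BS(driver.page_source, 'html5lib')
        tables = soup.find_all("table", class_="dgrid-row-table")
        for table in tables[1:]:  # tables[0] holds the header row
            rows.append(pd.concat(pd.read_html(str(table), flavor='bs4')))
        # Click the last rendered row so the grid loads the next chunk of rows.
        driver.find_element_by_xpath(
            "/html/body/div[2]/div/div[2]/div/div[1]/main/div[3]/div[2]/div[2]/div/div[1]"
            f"/div/div/div[2]/div/div[2]/div/div[{len(tables)}]/table").click()
        time.sleep(pause)
    return pd.concat(rows).drop_duplicates(keep='first')
```

For example, `health_services_df = scrape_category(driver, "Health Services")` would replace the first block below; headers can still be attached afterwards as in the original cells.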
###Code
# health services
# select dropdown menu with data table option
select = Select(driver.find_element_by_xpath("//select[@data-dojo-attach-point = 'layerSelect']"))
# select data table
select.select_by_visible_text("Health Services")
time.sleep(5)
# parse initial web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#append first set of rows to list
health_services = []
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
health_services.append(row)
tracker += 1
for x in range(35):
#scroll down the table by clicking the last row in the static html
selector = driver.find_element_by_xpath(f"/html/body/div[2]/div/div[2]/div/div[1]/main/div[3]/div[2]/div[2]/div/div[1]/div/div/div[2]/div/div[2]/div/div[{tracker}]/table")
selector.click()
# parse next web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
health_services.append(row)
tracker += 1
time.sleep(5)
health_services_df = pd.concat(health_services)
health_services_df.drop_duplicates(keep='first', inplace=True)
len(health_services_df)
# grab headers
list_columns = pd.read_html(str(html_table[0]), flavor='bs4')
row = pd.concat(list_columns)
row = pd.concat(list_columns)
headers =[]
for item in row:
headers.append(item)
headers
#add headers
health_services_df.set_axis(headers, axis=1, inplace=True)
#verify
health_services_df.head(1)
# community services
# select dropdown menu with data table option
select = Select(driver.find_element_by_xpath("//select[@data-dojo-attach-point = 'layerSelect']"))
# select data table
select.select_by_visible_text("Community Services")
time.sleep(5)
# parse initial web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
community_services = []
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
community_services.append(row)
tracker += 1
for x in range(35):
#scroll down the table by clicking the last row in the static html
selector = driver.find_element_by_xpath(f"/html/body/div[2]/div/div[2]/div/div[1]/main/div[3]/div[2]/div[2]/div/div[1]/div/div/div[2]/div/div[2]/div/div[{tracker}]/table")
selector.click()
# parse next web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
community_services.append(row)
tracker += 1
time.sleep(5)
community_services_df = pd.concat(community_services)
community_services_df.drop_duplicates(keep='first', inplace=True)
len(community_services_df)
#add headers
community_services_df.set_axis(headers, axis=1, inplace=True)
#verify
community_services_df.head(1)
# Food & Housing
# select dropdown menu with data table option
select = Select(driver.find_element_by_xpath("//select[@data-dojo-attach-point = 'layerSelect']"))
# select data table
select.select_by_visible_text("Food & Housing")
time.sleep(5)
# parse initial web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
food_housing = []
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
food_housing.append(row)
tracker += 1
for x in range(35):
#scroll down the table by clicking the last row in the static html
selector = driver.find_element_by_xpath(f"/html/body/div[2]/div/div[2]/div/div[1]/main/div[3]/div[2]/div[2]/div/div[1]/div/div/div[2]/div/div[2]/div/div[{tracker}]/table")
selector.click()
# parse next web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
food_housing.append(row)
tracker += 1
time.sleep(5)
food_housing_df = pd.concat(food_housing)
food_housing_df.drop_duplicates(keep='first', inplace=True)
len(food_housing_df)
#add headers
food_housing_df.set_axis(headers, axis=1, inplace=True)
#verify
food_housing_df.head(1)
# Law & Government
# select dropdown menu with data table option
select = Select(driver.find_element_by_xpath("//select[@data-dojo-attach-point = 'layerSelect']"))
# select data table
select.select_by_visible_text("Law & Government")
time.sleep(5)
# parse initial web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
law_government = []
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
law_government.append(row)
tracker += 1
for x in range(20):
#scroll down the table by clicking the last row in the static html
selector = driver.find_element_by_xpath(f"/html/body/div[2]/div/div[2]/div/div[1]/main/div[3]/div[2]/div[2]/div/div[1]/div/div/div[2]/div/div[2]/div/div[{tracker}]/table")
selector.click()
# parse next web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
law_government.append(row)
tracker += 1
time.sleep(5)
law_government_df = pd.concat(law_government)
law_government_df.drop_duplicates(keep='first', inplace=True)
len(law_government_df)
#add headers
law_government_df.set_axis(headers, axis=1, inplace=True)
#verify
law_government_df.head(1)
# Education & Employment
# select dropdown menu with data table option
select = Select(driver.find_element_by_xpath("//select[@data-dojo-attach-point = 'layerSelect']"))
# select data table
select.select_by_visible_text("Education & Employment")
time.sleep(5)
# parse initial web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
education_employment = []
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
education_employment.append(row)
tracker += 1
for x in range(20):
#scroll down the table by clicking the last row in the static html
selector = driver.find_element_by_xpath(f"/html/body/div[2]/div/div[2]/div/div[1]/main/div[3]/div[2]/div[2]/div/div[1]/div/div/div[2]/div/div[2]/div/div[{tracker}]/table")
selector.click()
# parse next web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
education_employment.append(row)
tracker += 1
time.sleep(5)
education_employment_df = pd.concat(education_employment)
education_employment_df.drop_duplicates(keep='first', inplace=True)
len(education_employment_df)
#add headers
education_employment_df.set_axis(headers, axis=1, inplace=True)
#verify
education_employment_df.head(1)
# Financial Services
# select dropdown menu with data table option
select = Select(driver.find_element_by_xpath("//select[@data-dojo-attach-point = 'layerSelect']"))
# select data table
select.select_by_visible_text("Financial Services")
time.sleep(5)
# parse initial web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
financial_services = []
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
financial_services.append(row)
tracker += 1
for x in range(5):
#scroll down the table by clicking the last row in the static html
selector = driver.find_element_by_xpath(f"/html/body/div[2]/div/div[2]/div/div[1]/main/div[3]/div[2]/div[2]/div/div[1]/div/div/div[2]/div/div[2]/div/div[{tracker}]/table")
selector.click()
# parse next web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
financial_services.append(row)
tracker += 1
time.sleep(5)
financial_services_df = pd.concat(financial_services)
financial_services_df.drop_duplicates(keep='first', inplace=True)
len(financial_services_df)
#add headers
financial_services_df.set_axis(headers, axis=1, inplace=True)
#verify
financial_services_df.head(1)
# Other
# select dropdown menu with data table option
select = Select(driver.find_element_by_xpath("//select[@data-dojo-attach-point = 'layerSelect']"))
# select data table
select.select_by_visible_text("Other")
time.sleep(5)
# parse initial web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
other = []
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
other.append(row)
tracker += 1
for x in range(10):
#scroll down the table by clicking the last row in the static html
selector = driver.find_element_by_xpath(f"/html/body/div[2]/div/div[2]/div/div[1]/main/div[3]/div[2]/div[2]/div/div[1]/div/div/div[2]/div/div[2]/div/div[{tracker}]/table")
selector.click()
# parse next web page state
page_source = driver.page_source
soup = BS(page_source, 'html5lib')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
tracker = 1
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
other.append(row)
tracker += 1
time.sleep(5)
other_df = pd.concat(other)
other_df.drop_duplicates(keep='first', inplace=True)
len(other_df)
#add headers
other_df.set_axis(headers, axis=1, inplace=True)
#verify
other_df.head(1)
# Transportation
# select dropdown menu with data table option
select = Select(driver.find_element_by_xpath("//select[@data-dojo-attach-point = 'layerSelect']"))
# select data table
select.select_by_visible_text("Transportation")
time.sleep(5)
# parse web page
page_source = driver.page_source
soup = BS(page_source, 'html.parser')
html_table = soup.find_all("table", class_="dgrid-row-table")
#create dataframe
transportation = []
for table in html_table[1:]:
list_columns = pd.read_html(str(table), flavor='bs4')
row = pd.concat(list_columns)
transportation.append(row)
transportation_df = pd.concat(transportation)
len(transportation_df)
#add headers
transportation_df.set_axis(headers, axis=1, inplace=True)
#verify
transportation_df.head(1)
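# Possible usage of the hypothetical scrape_layer() helper sketched earlier.
# The layer names come from the dropdown; the click counts mirror the range()
# values used above and are assumptions about how many batches each layer needs.
layer_clicks = {"Health Services": 35, "Community Services": 35,
                "Food & Housing": 35, "Law & Government": 20,
                "Education & Employment": 20, "Financial Services": 5,
                "Other": 10, "Transportation": 0}
# frames = {name: scrape_layer(driver, name, clicks)
#           for name, clicks in layer_clicks.items()}  # left commented to avoid re-scraping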
###Output
_____no_output_____
###Markdown
Transformation
===
###Code
#create master dataframe
dataframes = [community_services_df, health_services_df,
financial_services_df, transportation_df,
other_df, law_government_df,
food_housing_df, education_employment_df]
community_assets_df = pd.concat(dataframes)
#verify number of rows
len(community_assets_df)
#create primary key
community_assets_df['unique_id'] = np.arange(community_assets_df.shape[0])
community_assets_df.set_index("unique_id", inplace=True)
#scan columns
community_assets_df["Fees"].value_counts()
#drop columns
community_assets_df.drop([
'Date Modified (Main Record)',
'Date Updated',
'SINV (TO)',
'DD Code.1',
'LAST_UPDATED',
'FULL_NAME',
'OBJECTID'], axis=1, inplace=True)
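# Note (assumption about robustness, not in the original): if any of the columns
# above were absent in a given export, DataFrame.drop(..., errors='ignore') would
# skip it instead of raising a KeyError.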
#address differences in reporting fees
community_assets_df['Fees'] = community_assets_df['Fees'].fillna("Unknown")
community_assets_df.loc[community_assets_df['Fees'] == "Free", 'Fees'] = "None"
community_assets_df.loc[community_assets_df['Fees'] == "None ; free", 'Fees'] = "None"
community_assets_df.loc[community_assets_df['Fees'] == "None - Mental Health Services ; Autism - Call Autism Services Coordinator for information - 416 240 1111", 'Fees'] = "None"
#insert postal code if missing
community_assets_df['Site Postal Code'] = community_assets_df['Site Postal Code'].fillna(community_assets_df['Address'].str.slice(start=-6))
#drop rows without an address
community_assets_df.drop(community_assets_df[community_assets_df.Address == "Toronto, ON"].index, inplace=True)
community_assets_df.dropna(subset=["Address"], inplace=True)
#verify number of rows
len(community_assets_df)
#create an FSA row
community_assets_df['FSA'] = community_assets_df['Site Postal Code'].str.slice(stop=4)
#count how many row do not have an appropriate FSA
len(community_assets_df.loc[community_assets_df['FSA'].str.slice(start=0, stop=1) != "M"])
# drop the 22 rows with inappropriate FSA
community_assets_df.drop(community_assets_df[community_assets_df['FSA'].str.slice(start=0, stop=1) != "M"].index, inplace=True)
#verify number of rows
len(community_assets_df)
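# Stricter sanity check (sketch, not in the original): a valid Toronto FSA starts
# with "M" followed by a digit and a letter (e.g. "M5V"). str.match flags rows
# that merely start with "M" but are otherwise malformed.
bad_fsa = ~community_assets_df['FSA'].str.match(r"M\d[A-Z]", na=False)
bad_fsa.sum()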
#verify
community_assets_df.head()
#export to csv
community_assets_df.to_csv("data/community_assets.csv")
###Output
_____no_output_____
###Markdown
Prepping for addition to database
---
###Code
community_assets_df = pd.read_csv("data/community_assets.csv")
community_assets_df.head()
headers = list(community_assets_df.columns)
#format headers
lc_headers = []
for name in headers:
a = name.lower()
b = a.replace(" ", "_")
lc_headers.append(b)
lc_headers[0] = "id"
lc_headers[4] = "service_name_2"
lc_headers[10] = "service_description"
lc_headers
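# Equivalent one-liner (sketch): the formatting loop above could be written as a
# comprehension. A throwaway name is used so the id/service_name overrides already
# applied to lc_headers are preserved.
lc_headers_alt = [name.lower().replace(" ", "_") for name in headers]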
#update header
community_assets_df.set_axis(lc_headers, axis=1, inplace=True)
community_assets_df.head()
#export to csv
community_assets_df.to_csv("data/community_assets.csv")
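# Note (assumption): passing index=False to to_csv here would keep the new
# RangeIndex from being written out as an unnamed first column the next time
# the file is read back in.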
###Output
_____no_output_____