path | concatenated_notebook
---|---
2. Numpy/1. La base de NumPy - ndarray.ipynb | ###Markdown
The foundation of NumPy - ndarray The whole NumPy library is built around a single data structure: the multidimensional array, or ndarray (N-dimensional array). Basic characteristics of an ndarray An ndarray can contain elements of ANY TYPE, but all elements of a given ndarray must have THE SAME TYPE. The size of an ndarray (its number of elements) is fixed at creation time and cannot be modified, but the organization of those elements across dimensions can be modified (a short sketch after the import cell below illustrates this). Basic usage of any NumPy element Remember that NumPy is not a module of the Python core, so it will ALWAYS have to be imported, either in full or component by component.
###Code
import numpy as np
###Output
_____no_output_____
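###Markdown
A minimal sketch (an assumed addition, not part of the original notebook) of the behavior described above: the number of elements is fixed, but their organization across dimensions can change. It assumes NumPy imported as np, as in the cell above.
###Code
# Hypothetical example: 6 elements stay 6 elements, only the shape changes
a = np.arange(6)         # shape (6,)
b = a.reshape(2, 3)      # shape (2, 3), same 6 elements
a.size == b.size         # True
###Output
_____no_output_____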
###Markdown
Basic creation of ndarrays There are several ways to create an ndarray in NumPy. Let's look at the most relevant ones. Creating an empty ndarray
###Code
# Specifying the dimensions
array_vacio = np.empty((3, 2), dtype= np.unicode)
array_vacio
# Copying dimensions and type from another structure
array_vacio_copia = np.empty_like([1, 2, 3, 4, 5])
array_vacio_copia
###Output
_____no_output_____
###Markdown
Creating an ndarray of ones
###Code
# Specifying the dimensions
array_unos = np.ones((3, 2))
array_unos
# Copying dimensions and type from another structure
array_unos_copia = np.ones_like([1, 2, 3, 4, 5])
array_unos_copia
###Output
_____no_output_____
###Markdown
Creating an ndarray of zeros
###Code
# Specifying the dimensions
array_ceros = np.zeros((3, 2))
array_ceros
# Copying dimensions and type from another structure
array_ceros_copia = np.zeros_like([1, 2, 3, 4, 5])
array_ceros_copia
###Output
_____no_output_____
###Markdown
Creating an ndarray holding the identity matrix
###Code
array_identidad = np.identity(3)
array_identidad
###Output
_____no_output_____
###Markdown
Creating an ndarray with ones on one of its diagonals
###Code
# Square, with ones on the main diagonal
array_identidad = np.eye(4)
array_identidad
# Square, with ones on the specified diagonal
array_con_segunda_diagonal = np.eye(4, k = 1)
array_con_segunda_diagonal
# Non-square, with ones on the specified diagonal
array_no_cuadrado = np.eye(4, 3, k = -1)
array_no_cuadrado
###Output
_____no_output_____
###Markdown
Creating an ndarray whose elements are a numeric sequence
###Code
# One parameter: from 0 (inclusive) up to the given value (exclusive)
array_secuencia_1 = np.arange(10)
array_secuencia_1
# Two parameters: from the first value (inclusive) up to the second value (exclusive)
array_secuencia_2 = np.arange(5, 10)
array_secuencia_2
# Three parameters: from the first value (inclusive) up to the second (exclusive), in steps of the third value
array_secuencia_3 = np.arange(5, 20, 2)
array_secuencia_3
###Output
_____no_output_____
###Markdown
Creating an ndarray from a basic Python sequence
###Code
# One-dimensional
array_basico = np.array([1, 2, 3, 4, 5])
type(array_basico)
# Multidimensional
array_basico_multidimensional = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
array_basico_multidimensional
###Output
_____no_output_____
###Markdown
Data types in NumPy ndarrays Signed integers: np.int8, np.int16, np.int32 and np.int64. Unsigned integers: np.uint8, np.uint16, np.uint32 and np.uint64. Floating-point numbers: np.float16, np.float32, np.float64, np.float128. Booleans: np.bool. Objects: np.object. Character strings: np.string\_, np.unicode\_... Specification/Casting/Conversion of types between ndarrays
###Code
array_inicial_enteros = np.array([1, 2, 3, 4, 5], dtype=np.int32)
array_inicial_enteros
array_float = np.asarray(array_inicial_enteros, dtype=np.float64)
array_float
array_strings = np.asarray(array_float, dtype=np.unicode_)
array_strings
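# An assumed extra example (not in the original cell): ndarray.astype() is the
# most common way to cast, returning a copy with the requested dtype
array_float.astype(np.int32)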
###Output
_____no_output_____
###Markdown
Inspecting the composition of an ndarray dtype: type of the ndarray's contents. ndim: number of dimensions/axes of the ndarray. shape: structure/shape of the ndarray, i.e. the number of elements along each of its axes/dimensions. size: total number of elements in the ndarray.
###Code
array = np.array([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]])
# Data type (single, shared by all elements)
array.dtype
# Number of dimensions
array.ndim
# Shape/dimensions
array.shape
# Total number of elements
array.size
###Output
_____no_output_____
###Markdown
Arithmetic operations between ndarrays and scalars
###Code
array = np.array([1, 2, 3, 4, 5, 6], dtype=np.float64)
# Addition
array + 5
# Subtraction
array - 2
# Multiplication
array * 3
# Division
1 / array
# Integer (floor) division
array // 2
# Power
array ** 2
# In-place (augmented) assignment
array += 1
array
###Output
_____no_output_____
###Markdown
Arithmetic operations between ndarrays IMPORTANT: both operands of the operation have to be ndarrays with the same dimensions and shape. The operation is applied element by element.
###Code
array = np.array([1, 2, 3, 4, 5, 6], dtype=np.float64)
# Addition (element by element)
array + array
# Subtraction (element by element)
array - array
# Multiplication (element by element)
array * array
# Division (element by element)
array / array
# In-place (augmented) assignment
array += array
array
# Adding ndarrays of different sizes (raises an error)
array1 = np.array([1, 2, 3, 4, 5])
array + array1
###Output
_____no_output_____
###Markdown
Basic indexing and slicing In one-dimensional ndarrays the behavior is identical to that of basic Python sequences, i.e. [a:b:c] indexing is used.
###Code
array = np.arange(1, 11)
# Indexing with the first parameter
array[2]
# Indexing with the first and second parameters
array[2:5]
# Indexing with the third parameter
array[::2]
# Indexing with negative values
array[::-1]
###Output
_____no_output_____
###Markdown
In multidimensional ndarrays, there are two possible ways to access elements: via recursive indexing: array[a:b:c for dim_1][a:b:c for dim_2]...[a:b:c for dim_n], or via comma indexing: array[a:b:c for dim_1, a:b:c for dim_2, ... a:b:c for dim_n]
###Code
array = np.array([[[1, 2, 3, 4], [5, 6, 7, 8]], [[9, 10, 11, 12], [13, 14, 15, 16]]])
array
# Shape of the matrix
array.shape
# Recursive indexing, first level
array[1]
# Recursive indexing, second level
array[1][0]
# Recursive indexing, third level
array[1][0][3]
# Comma indexing, second level
array[1, 0]
# Comma indexing, third level
array[1, 0, 3]
# Recursive indexing, third level, with a slice
array[0][0][:2]
# Recursive indexing, third level, with a negative-step slice
array[1][0][::-1]
###Output
_____no_output_____
###Markdown
Just as in plain Python, indexing/slicing can be used to modify sections of an ndarray's contents.
###Code
array = np.array([[1, 2, 3, 4],[5, 6, 7, 8]])
# Modifying a single position
array[0][1] = 50
array
# Modifying a slice
array[0][::2] = 30
array
###Output
_____no_output_____
###Markdown
Boolean indexing and slicing
###Code
personas = np.array(['Miguel', 'Pedro', 'Juan', 'Miguel'])
personas
datos = np.random.randn(4, 4)
datos
# Boolean indexing/slicing on the values themselves
datos[datos < 0]
# Boolean mask
personas == 'Miguel'
# Indexing/slicing with a mask
datos[personas == 'Miguel']
# Indexing/slicing combining a mask with basic slicing
datos[personas == 'Miguel', ::2]
# Negative mask indexing/slicing using the != operator
datos[personas != 'Miguel']
# Negative mask indexing/slicing using the ~ (negation) sign
datos[~(personas == 'Miguel')]
###Output
_____no_output_____
###Markdown
Again, we can use boolean indexing/slicing to modify the contents of an ndarray.
###Code
array = np.random.randn(7, 4)
array
# Removing negative values (setting them to 0) via boolean slicing
array[array < 0] = 0
array
###Output
_____no_output_____
###Markdown
Indexing and slicing based on integer sequences - Fancy indexing
###Code
array = np.empty((8, 4))
for i in range(8):
array[i] = i
array
# Indexing/slicing an (arbitrary) set of elements
array[[2, 5]]
# Indexing/slicing an (arbitrary) set of elements (negative indices)
array[[-2, -5]]
###Output
_____no_output_____
###Markdown
We can also index arbitrarily across multiple dimensions, using one integer sequence per dimension. The result is the combination of those sequences.
###Code
array = np.arange(32).reshape((8, 4))
array
# Indexing/slicing with one sequence per level (element by element)
array[[1, 5, 7, 2], [0, 3, 1, 2]]
# Indexing/slicing with one sequence per level (resulting region)
array[[1, 5, 7, 2]][:, [0, 3, 1, 2]]
###Output
_____no_output_____
###Markdown
Transposition and modification of axes/dimensions
###Code
array = np.arange(15)
array
# Modifying axes/dimensions
array2 = array.reshape(3, 5)
array2
# Transposing axes/dimensions
array2.T
###Output
_____no_output_____
###Markdown
Concatenation of ndarrays NumPy offers two possible ways to combine ndarrays: hstack, column_stack: the elements of the second array are appended to the first one widthwise (as columns). vstack, row_stack: the elements of the second array are appended to the first one lengthwise (as rows).
###Code
array1 = np.arange(15).reshape(3, 5)
array1
array2 = np.arange(15, 30).reshape(3, 5)
array2
# Horizontal concatenation
np.hstack((array1, array2))
# Vertical concatenation
np.vstack((array1, array2))
###Output
_____no_output_____
###Markdown
Splitting ndarrays As with concatenation, NumPy lets you split ndarrays in three different ways: hsplit: splits an array into n "equal" parts by columns. vsplit: splits an array into n "equal" parts by rows. split: splits an array into n non-symmetric parts.
###Code
array = np.arange(16).reshape(4, 4)
array
# Symmetric split by columns
np.hsplit(array, 2)
# Symmetric split by rows
np.vsplit(array, 2)
# Non-symmetric split
np.split(array, [1, 3], axis=1)
###Output
_____no_output_____
###Markdown
**Axis** Value 0: the function is applied down the columns. Value 1: the function is applied across the rows.
###Code
array.sum(axis=0)
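# Assumed complementary example (not in the original cell): axis=1 applies the function across each row
array.sum(axis=1)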
###Output
_____no_output_____ |
01-Python-Crash-Course/.ipynb_checkpoints/Python Crash Course Exercises - Solutions-checkpoint.ipynb | ###Markdown
Python Crash Course Exercises - Solutions This is an optional exercise to test your understanding of Python basics. The questions tend to have a financial theme, but don't look too deeply into these tasks themselves; many of them don't hold any significance and are meaningless. If you find this extremely challenging, then you are probably not ready for the rest of this course yet and don't have enough programming experience to continue. I would suggest you take another course geared more towards complete beginners, such as [Complete Python Bootcamp]() Exercises Answer the questions or complete the tasks outlined in bold below, using the specific method described if applicable. Task 1 Given price = 300, use Python to figure out the square root of the price.
###Code
price = 300
price**0.5
import math
math.sqrt(price)
###Output
_____no_output_____
###Markdown
Task 2 Given the string: stock_index = "SP500", grab '500' from the string using indexing.
###Code
stock_index = "SP500"
stock_index[2:]
###Output
_____no_output_____
###Markdown
Task 3 **Given the variables:** stock_index = "SP500", price = 300. **Use .format() to print the following string:** The SP500 is at 300 today.
###Code
stock_index = "SP500"
price = 300
print("The {} is at {} today.".format(stock_index,price))
###Output
The SP500 is at 300 today.
###Markdown
Task 4 **Given the variable of a nested dictionary with nested lists:** stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]} **Use indexing and key calls to grab the following items:** * Yesterday's SP500 price (250) * The number 365 nested inside a list nested inside the 'info' key.
###Code
stock_info = {'sp500':{'today':300,'yesterday': 250}, 'info':['Time',[24,7,365]]}
stock_info['sp500']['yesterday']
stock_info['info'][1][2]
###Output
_____no_output_____
###Markdown
Task 5 **Given strings with this form, where the last source value is always separated by two dashes --** "PRICE:345.324:SOURCE--QUANDL" **Create a function called source_finder() that returns the source. For example, the above string passed into the function would return "QUANDL".**
###Code
def source_finder(s):
return s.split('--')[-1]
source_finder("PRICE:345.324:SOURCE--QUANDL")
###Output
_____no_output_____
###Markdown
Task 5 **Create a function called price_finder that returns True if the word 'price' is in a string. Your function should work even if 'Price' is capitalized or next to punctuation ('price!').**
###Code
def price_finder(s):
return 'price' in s.lower()
price_finder("What is the price?")
price_finder("DUDE, WHAT IS PRICE!!!")
price_finder("The price is 300")
###Output
_____no_output_____
###Markdown
Task 6 **Create a function called count_price() that counts the number of times the word "price" occurs in a string. Account for capitalization and if the word price is next to punctuation.**
###Code
def count_price(s):
count = 0
for word in s.lower().split():
# Need to use 'in'; using == would miss 'price' when it is attached to punctuation
if 'price' in word:
count += 1
# Note the indentation!
return count
# Simpler Alternative
def count_price(s):
return s.lower().count('price')
s = 'Wow that is a nice price, very nice Price! I said price 3 times.'
count_price(s)
###Output
_____no_output_____
###Markdown
Task 7 **Create a function called avg_price that takes in a list of stock price numbers and calculates the average (sum of the numbers divided by the number of elements in the list). It should return a float.**
###Code
def avg_price(stocks):
return sum(stocks)/len(stocks) # Python 2 users should multiply numerator by 1.0
avg_price([3,4,5])
###Output
_____no_output_____ |
v4_MedFlask_Code_from_v7_Super_Compact_Fast_Basilica_Model_for_Text_Input_M_Cabinet_3_Colab.ipynb | ###Markdown
Version where everything happens in one "predict" function
###Code
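# NOTE (added comment): this version assumes earlier setup cells, shown in full in the
# "one-cell version" further below: basilica, pandas, numpy and scipy.spatial imported,
# df loaded from med1.csv, and unpickled_df_test loaded from medembedv2.pkl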
def predict(user_input):
# Part 1
# a function to calculate_user_text_embedding
# to save the embedding value in session memory
user_input_embedding = 0
def calculate_user_text_embedding(input, user_input_embedding):
# setting a string of two sentences for the algo to compare
sentences = [input]
# calculating embedding for both user_entered_text and for features
with basilica.Connection('36a370e3-becb-99f5-93a0-a92344e78eab') as c:
user_input_embedding = list(c.embed_sentences(sentences))
return user_input_embedding
# run the function to save the embedding value in session memory
user_input_embedding = calculate_user_text_embedding(user_input, user_input_embedding)
# part 2
score = 0
def score_user_input_from_stored_embedding_from_stored_values(input, score, row1, user_input_embedding):
# obtains pre-calculated values from a pickled dataframe of arrays
embedding_stored = unpickled_df_test.loc[row1, 0]
# calculates the similarity of user_text vs. product description
score = 1 - spatial.distance.cosine(embedding_stored, user_input_embedding)
# returns a variable that can be used outside of the function
return score
# Part 3
for i in range(2351):
# calls the function to set the value of 'score'
# which is the score of the user input
score = score_user_input_from_stored_embedding_from_stored_values(user_input, score, i, user_input_embedding)
#stores the score in the dataframe
df.loc[i,'score'] = score
# Part 4
output = df['Strain'].groupby(df['score']).value_counts().nlargest(5, keep='last')
output_string = str(output)
# Part 5: the output
return output_string
predict(user_input)
###Output
_____no_output_____
###Markdown
Super-inclusive function version: 3 cells, two steps. 1. user_input = "user med description" 2. predict(user_input)
###Code
# user input
user_input = "text, Relaxed, Violet, Aroused, Creative, Happy, Energetic, Flowery, Diesel"
def predict(user_input):
# install basilica
!pip install basilica
import basilica
import numpy as np
import pandas as pd
from scipy import spatial
# get data
!wget https://raw.githubusercontent.com/MedCabinet/ML_Machine_Learning_Files/master/med1.csv
# turn data into dataframe
df = pd.read_csv('med1.csv')
# get pickled trained embeddings for med cultivars
!wget https://github.com/lineality/4.4_Build_files/raw/master/medembedv2.pkl
#unpickling file of embedded cultivar descriptions
unpickled_df_test = pd.read_pickle("./medembedv2.pkl")
# Part 1
# maybe make a function to perform the last few steps
# a function to calculate_user_text_embedding
# to save the embedding value in session memory
user_input_embedding = 0
def calculate_user_text_embedding(input, user_input_embedding):
# setting a string of two sentences for the algo to compare
sentences = [input]
# calculating embedding for both user_entered_text and for features
with basilica.Connection('36a370e3-becb-99f5-93a0-a92344e78eab') as c:
user_input_embedding = list(c.embed_sentences(sentences))
return user_input_embedding
# run the function to save the embedding value in session memory
user_input_embedding = calculate_user_text_embedding(user_input, user_input_embedding)
# part 2
score = 0
def score_user_input_from_stored_embedding_from_stored_values(input, score, row1, user_input_embedding):
# obtains pre-calculated values from a pickled dataframe of arrays
embedding_stored = unpickled_df_test.loc[row1, 0]
# calculates the similarity of user_text vs. product description
score = 1 - spatial.distance.cosine(embedding_stored, user_input_embedding)
# returns a variable that can be used outside of the function
return score
# Part 3
for i in range(2351):
# calls the function to set the value of 'score'
# which is the score of the user input
score = score_user_input_from_stored_embedding_from_stored_values(user_input, score, i, user_input_embedding)
#stores the score in the dataframe
df.loc[i,'score'] = score
# Part 4
output = df['Strain'].groupby(df['score']).value_counts().nlargest(5, keep='last')
output_string = str(output)
# Part 5: output
return output_string
predict(user_input)
###Output
Requirement already satisfied: basilica in /usr/local/lib/python3.6/dist-packages (0.2.8)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from basilica) (1.12.0)
Requirement already satisfied: Pillow in /usr/local/lib/python3.6/dist-packages (from basilica) (4.3.0)
Requirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from basilica) (2.21.0)
Requirement already satisfied: olefile in /usr/local/lib/python3.6/dist-packages (from Pillow->basilica) (0.46)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->basilica) (1.24.3)
Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->basilica) (2.8)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->basilica) (2019.11.28)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->basilica) (3.0.4)
--2020-01-05 17:45:04-- https://raw.githubusercontent.com/MedCabinet/ML_Machine_Learning_Files/master/med1.csv
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1267451 (1.2M) [text/plain]
Saving to: ‘med1.csv.1’
med1.csv.1 100%[===================>] 1.21M --.-KB/s in 0.06s
2020-01-05 17:45:04 (19.7 MB/s) - ‘med1.csv.1’ saved [1267451/1267451]
--2020-01-05 17:45:05-- https://github.com/lineality/4.4_Build_files/raw/master/medembedv2.pkl
Resolving github.com (github.com)... 192.30.253.112
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/lineality/4.4_Build_files/master/medembedv2.pkl [following]
--2020-01-05 17:45:05-- https://raw.githubusercontent.com/lineality/4.4_Build_files/master/medembedv2.pkl
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16288303 (16M) [application/octet-stream]
Saving to: ‘medembedv2.pkl.1’
medembedv2.pkl.1 100%[===================>] 15.53M --.-KB/s in 0.1s
2020-01-05 17:45:05 (108 MB/s) - ‘medembedv2.pkl.1’ saved [16288303/16288303]
###Markdown
one-cell version
###Code
# user input
user_input = "text, Relaxed, Violet, Aroused, Creative, Happy, Energetic, Flowery, Diesel"
# ominbus function
def predict(user_input):
# install basilica
!pip install basilica
import basilica
import numpy as np
import pandas as pd
from scipy import spatial
# get data
!wget https://raw.githubusercontent.com/MedCabinet/ML_Machine_Learning_Files/master/med1.csv
# turn data into dataframe
df = pd.read_csv('med1.csv')
# get pickled trained embeddings for med cultivars
!wget https://github.com/lineality/4.4_Build_files/raw/master/medembedv2.pkl
#unpickling file of embedded cultivar descriptions
unpickled_df_test = pd.read_pickle("./medembedv2.pkl")
# Part 1
# maybe make a function to perform the last few steps
# a function to calculate_user_text_embedding
# to save the embedding value in session memory
user_input_embedding = 0
def calculate_user_text_embedding(input, user_input_embedding):
# setting a string of two sentences for the algo to compare
sentences = [input]
# calculating embedding for both user_entered_text and for features
with basilica.Connection('36a370e3-becb-99f5-93a0-a92344e78eab') as c:
user_input_embedding = list(c.embed_sentences(sentences))
return user_input_embedding
# run the function to save the embedding value in session memory
user_input_embedding = calculate_user_text_embedding(user_input, user_input_embedding)
# part 2
score = 0
def score_user_input_from_stored_embedding_from_stored_values(input, score, row1, user_input_embedding):
# obtains pre-calculated values from a pickled dataframe of arrays
embedding_stored = unpickled_df_test.loc[row1, 0]
# calculates the similarity of user_text vs. product description
score = 1 - spatial.distance.cosine(embedding_stored, user_input_embedding)
# returns a variable that can be used outside of the function
return score
# Part 3
for i in range(2351):
# calls the function to set the value of 'score'
# which is the score of the user input
score = score_user_input_from_stored_embedding_from_stored_values(user_input, score, i, user_input_embedding)
#stores the score in the dataframe
df.loc[i,'score'] = score
# Part 4
output = df['Strain'].groupby(df['score']).value_counts().nlargest(5, keep='last')
output_string = str(output)
# Part 5: output
return output_string
predict(user_input)
# user input
user_input = "text, Relaxed, Violet, Aroused, Creative, Happy, Energetic, Flowery, Diesel"
# ominbus function
def predict(user_input):
# install basilica
#!pip install basilica
import basilica
import numpy as np
import pandas as pd
from scipy import spatial
# get data
#!wget https://raw.githubusercontent.com/MedCabinet/ML_Machine_Learning_Files/master/med1.csv
# turn data into dataframe
df = pd.read_csv('med1.csv')
# get pickled trained embeddings for med cultivars
#!wget https://github.com/lineality/4.4_Build_files/raw/master/medembedv2.pkl
#unpickling file of embedded cultivar descriptions
unpickled_df_test = pd.read_pickle("./medembedv2.pkl")
# Part 1
# maybe make a function to perform the last few steps
# a function to calculate_user_text_embedding
# to save the embedding value in session memory
user_input_embedding = 0
def calculate_user_text_embedding(input, user_input_embedding):
# setting a string of two sentences for the algo to compare
sentences = [input]
# calculating embedding for both user_entered_text and for features
with basilica.Connection('36a370e3-becb-99f5-93a0-a92344e78eab') as c:
user_input_embedding = list(c.embed_sentences(sentences))
return user_input_embedding
# run the function to save the embedding value in session memory
user_input_embedding = calculate_user_text_embedding(user_input, user_input_embedding)
# part 2
score = 0
def score_user_input_from_stored_embedding_from_stored_values(input, score, row1, user_input_embedding):
# obtains pre-calculated values from a pickled dataframe of arrays
embedding_stored = unpickled_df_test.loc[row1, 0]
# calculates the similarity of user_text vs. product description
score = 1 - spatial.distance.cosine(embedding_stored, user_input_embedding)
# returns a variable that can be used outside of the function
return score
# Part 3
for i in range(2351):
# calls the function to set the value of 'score'
# which is the score of the user input
score = score_user_input_from_stored_embedding_from_stored_values(user_input, score, i, user_input_embedding)
#stores the score in the dataframe
df.loc[i,'score'] = score
# Part 4
output = df['Strain'].groupby(df['score']).value_counts().nlargest(5, keep='last')
#print(output)
#print(output[1:])
output_string = str(output)
#print(type(output))
#print(output.shape)
#print(output_string)
import re
output_regex = re.sub(r'[^a-zA-Z ]', '', output_string)
print(output_regex)
output_string_clipped = output_regex[26:-27]
print(output_string_clipped)
# Part 5: output
return output_string_clipped
predict(user_input)
# this next version fixes the incorrect first listed prediction (why was it showing that?)
# user input
user_input = "text, Relaxed, Violet, Aroused, Creative, Happy, Energetic, Flowery, Diesel"
# ominbus function
def predict(user_input):
# install basilica
#!pip install basilica
import basilica
import numpy as np
import pandas as pd
from scipy import spatial
# get data
#!wget https://raw.githubusercontent.com/MedCabinet/ML_Machine_Learning_Files/master/med1.csv
# turn data into dataframe
df = pd.read_csv('med1.csv')
# get pickled trained embeddings for med cultivars
#!wget https://github.com/lineality/4.4_Build_files/raw/master/medembedv2.pkl
#unpickling file of embedded cultivar descriptions
unpickled_df_test = pd.read_pickle("./medembedv2.pkl")
# Part 1
# maybe make a function to perform the last few steps
# a function to calculate_user_text_embedding
# to save the embedding value in session memory
user_input_embedding = 0
def calculate_user_text_embedding(input, user_input_embedding):
# setting a string of two sentences for the algo to compare
sentences = [input]
# calculating embedding for both user_entered_text and for features
with basilica.Connection('36a370e3-becb-99f5-93a0-a92344e78eab') as c:
user_input_embedding = list(c.embed_sentences(sentences))
return user_input_embedding
# run the function to save the embedding value in session memory
user_input_embedding = calculate_user_text_embedding(user_input, user_input_embedding)
# part 2
score = 0
def score_user_input_from_stored_embedding_from_stored_values(input, score, row1, user_input_embedding):
# obtains pre-calculated values from a pickled dataframe of arrays
embedding_stored = unpickled_df_test.loc[row1, 0]
# calculates the similarity of user_text vs. product description
score = 1 - spatial.distance.cosine(embedding_stored, user_input_embedding)
# returns a variable that can be used outside of the function
return score
# Part 3
for i in range(2351):
# calls the function to set the value of 'score'
# which is the score of the user input
score = score_user_input_from_stored_embedding_from_stored_values(user_input, score, i, user_input_embedding)
#stores the score in the dataframe
df.loc[i,'score'] = score
# Part 4
output = df['Strain'].groupby(df['score']).value_counts().nlargest(6, keep='last')
#print(output)
output_string = str(output)
#print(output_string)
import re
output_regex = re.sub(r'[^a-zA-Z ]', '', output_string)
output_string_clipped = output_regex[39:-28]
# Part 5: output
return output_string_clipped
predict(user_input)
# Example output fragment (appears to be pasted cell output): score Strain 0.905756 B-Witched
# user input
user_input = "text, Relaxed, Violet, Aroused, Creative, Happy, Energetic, Flowery, Diesel"
# ominbus function
def predict(user_input):
# install basilica
#!pip install basilica
import basilica
import numpy as np
import pandas as pd
from scipy import spatial
# get data
#!wget https://raw.githubusercontent.com/MedCabinet/ML_Machine_Learning_Files/master/med1.csv
# turn data into dataframe
df = pd.read_csv('med1.csv')
# get pickled trained embeddings for med cultivars
#!wget https://github.com/lineality/4.4_Build_files/raw/master/medembedv2.pkl
#unpickling file of embedded cultivar descriptions
unpickled_df_test = pd.read_pickle("./medembedv2.pkl")
# Part 1
# maybe make a function to perform the last few steps
# a function to calculate_user_text_embedding
# to save the embedding value in session memory
user_input_embedding = 0
def calculate_user_text_embedding(input, user_input_embedding):
# setting a string of two sentences for the algo to compare
sentences = [input]
# calculating embedding for both user_entered_text and for features
with basilica.Connection('36a370e3-becb-99f5-93a0-a92344e78eab') as c:
user_input_embedding = list(c.embed_sentences(sentences))
return user_input_embedding
# run the function to save the embedding value in session memory
user_input_embedding = calculate_user_text_embedding(user_input, user_input_embedding)
# part 2
score = 0
def score_user_input_from_stored_embedding_from_stored_values(input, score, row1, user_input_embedding):
# obtains pre-calculated values from a pickled dataframe of arrays
embedding_stored = unpickled_df_test.loc[row1, 0]
# calculates the similarity of user_text vs. product description
score = 1 - spatial.distance.cosine(embedding_stored, user_input_embedding)
# returns a variable that can be used outside of the function
return score
# Part 3
for i in range(2351):
# calls the function to set the value of 'score'
# which is the score of the user input
score = score_user_input_from_stored_embedding_from_stored_values(user_input, score, i, user_input_embedding)
#stores the score in the dataframe
df.loc[i,'score'] = score
# Part 4
output = df['Strain'].groupby(df['score']).value_counts().nlargest(6, keep='last')
#print(output)
#print(output[1:])
output_string = str(output)
#print(type(output))
#print(output.shape)
#print(output_string)
import re
output_regex = re.sub(r'[^a-zA-Z ^0-9 ^.]', '', output_string)
#print(output_regex)
output_string_clipped = output_regex[50:-28]
#print(output_string_clipped)
# Part 5: output
return output_string_clipped
predict(user_input)
# Example output fragment (appears to be pasted cell output): score Strain 0.905756 BWitched
###Output
_____no_output_____ |
Create_Dataset_Devel.ipynb | ###Markdown
Train data
###Code
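# Assumed imports (added comment; the original import cell is not included in this dump):
import os
import numpy as np
import pandas as pd
from scipy import sparse
from sklearn.metrics.pairwise import cosine_similarity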
dataset_name = 'reuters'
data_dir = os.path.join('../VDSH/dataset/', dataset_name)
fn = 'train.NN.pkl'
train_df = pd.read_pickle(os.path.join(data_dir, fn))
num_trains = len(train_df)
bows_mat = sparse.vstack(list(train_df.bow))
if dataset_name in ['ng20']:
# convert the label to a sparse matrix
labels = list(train_df.label)
num_labels = (np.max(labels) - np.min(labels)) + 1
one_hot_mat = np.eye(num_labels, dtype=int)
label_mat = sparse.csr_matrix(one_hot_mat[labels])
else:
label_mat = sparse.vstack(list(train_df.label))
num_labels = label_mat.shape[1]
dist = cosine_similarity(bows_mat, bows_mat)
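# Negating the similarity matrix before argsort ranks, for each document, all training
# documents from most to least similar (column 0 is the document itself)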
indices = np.argsort(-dist, axis=1)
docid2index = {docid: index for index, docid in enumerate(list(train_df.index))}
index2docid = {index: docid for index, docid in enumerate(list(train_df.index))}
import functools
top_nn = list(map(lambda v: index2docid[v], indices.reshape(-1)))
top_nn = np.array(top_nn).reshape(num_trains, num_trains)
assert(np.all([v in train_df.index for v in top_nn[:, 0]])) # makesure all docid does exist in the train_df
data = {'doc_id': list(train_df.index),
'bow': list(train_df.bow),
'label': [arr for arr in label_mat],
'neighbors': [list(arr) for arr in top_nn[:, 1:101]]}
new_df = pd.DataFrame.from_dict(data)
new_df.set_index('doc_id', inplace=True)
new_df.to_pickle('dataset/clean/{}/{}.train.pkl'.format(dataset_name, dataset_name))
###Output
_____no_output_____
###Markdown
Test data
###Code
data_dir = '../VDSH/dataset/{}'.format(dataset_name)
fn = 'test.NN.pkl'
test_df = pd.read_pickle(os.path.join(data_dir, fn))
num_tests = len(test_df)
test_bows_mat = sparse.vstack(list(test_df.bow))
if dataset_name in ['ng20']:
# convert the label to a sparse matrix
labels = list(test_df.label)
label_mat = sparse.csr_matrix(one_hot_mat[labels])
else:
label_mat = sparse.vstack(list(test_df.label))
dist = cosine_similarity(test_bows_mat, bows_mat)
indices = np.argsort(-dist, axis=1)
top_nn = list(map(lambda v: index2docid[v], indices.reshape(-1)))
top_nn = np.array(top_nn).reshape(num_tests, num_trains)
data = {'doc_id': list(test_df.index),
'bow': list(test_df.bow),
'label': [arr for arr in label_mat],
'neighbors': [list(arr) for arr in top_nn[:, :100]]}
new_df = pd.DataFrame.from_dict(data)
new_df.set_index('doc_id', inplace=True)
new_df.to_pickle('dataset/clean/{}/{}.test.pkl'.format(dataset_name, dataset_name))
###Output
_____no_output_____ |
Troubleshooting-Notebooks/Big-Data-Clusters/CU4/Public/content/notebook-runner/run505a-sample-notebook.ipynb | ###Markdown
RUN505a - Sample - Demo expert rules. Description. Common functions: Define helper functions used in this notebook.
###Code
# Define `run` function for transient fault handling, hyperlinked suggestions, and scrolling updates on Windows
import sys
import os
import re
import json
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
first_run = True
rules = None
debug_logging = False
def run(cmd, return_output=False, no_output=False, retry_count=0):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
global first_run
global rules
if first_run:
first_run = False
rules = load_rules()
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportabilty, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
if which_binary == None:
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
# Work around a infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if rules is not None:
apply_expert_rules(line)
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# apply expert rules (to run follow-on notebooks), based on output
#
if rules is not None:
apply_expert_rules(line_decoded)
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
return output
else:
return
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
return output
def load_json(filename):
"""Load a json file from disk and return the contents"""
with open(filename, encoding="utf8") as json_file:
return json.load(json_file)
def load_rules():
"""Load any 'expert rules' from the metadata of this notebook (.ipynb) that should be applied to the stderr of the running executable"""
# Load this notebook as json to get access to the expert rules in the notebook metadata.
#
try:
j = load_json("run505a-sample-notebook.ipynb")
except:
pass # If the user has renamed the book, we can't load ourself. NOTE: Is there a way in Jupyter, to know your own filename?
else:
if "metadata" in j and \
"azdata" in j["metadata"] and \
"expert" in j["metadata"]["azdata"] and \
"expanded_rules" in j["metadata"]["azdata"]["expert"]:
rules = j["metadata"]["azdata"]["expert"]["expanded_rules"]
rules.sort() # Sort rules, so they run in priority order (the [0] element). Lowest value first.
# print (f"EXPERT: There are {len(rules)} rules to evaluate.")
return rules
def apply_expert_rules(line):
"""Determine if the stderr line passed in, matches the regular expressions for any of the 'expert rules', if so
inject a 'HINT' to the follow-on SOP/TSG to run"""
global rules
for rule in rules:
notebook = rule[1]
cell_type = rule[2]
output_type = rule[3] # i.e. stream or error
output_type_name = rule[4] # i.e. ename or name
output_type_value = rule[5] # i.e. SystemExit or stdout
details_name = rule[6] # i.e. evalue or text
expression = rule[7].replace("\\*", "*") # Something escaped *, and put a \ in front of it!
if debug_logging:
print(f"EXPERT: If rule '{expression}' satisfied', run '{notebook}'.")
if re.match(expression, line, re.DOTALL):
if debug_logging:
print("EXPERT: MATCH: name = value: '{0}' = '{1}' matched expression '{2}', therefore HINT '{4}'".format(output_type_name, output_type_value, expression, notebook))
match_found = True
display(Markdown(f'HINT: Use [{notebook}]({notebook}) to resolve this issue.'))
print('Common functions defined successfully.')
# Hints for binary (transient fault) retry, (known) error and install guide
#
retry_hints = {'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use']}
error_hints = {'azdata': [['azdata login', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Error processing command: "ApiError', 'TSG110 - Azdata returns ApiError', '../repair/tsg110-azdata-returns-apierror.ipynb'], ['Error processing command: "ControllerError', 'TSG036 - Controller logs', '../log-analyzers/tsg036-get-controller-logs.ipynb'], ['ERROR: 500', 'TSG046 - Knox gateway logs', '../log-analyzers/tsg046-get-knox-logs.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ["Can't open lib 'ODBC Driver 17 for SQL Server", 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb']]}
install_hint = {'azdata': ['SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb']}
###Output
_____no_output_____
###Markdown
Run test
###Code
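# Demo (added comment): this printed line appears to be the text that one of this
# notebook's expert rules (stored in the notebook metadata) is written to match,
# illustrating how a matching output line triggers a HINT to a follow-on notebook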
print("This is the expression to match")
print('Notebook execution complete.')
###Output
_____no_output_____ |
Recognizing Image with ResNet50 Model.ipynb | ###Markdown
Using ResNet50 Model
###Code
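# Assumed imports (added comment; not shown in this dump). The cells below appear to
# use the standalone Keras package together with matplotlib and numpy:
from keras.applications import resnet50
from keras.preprocessing import image
import numpy as np
import matplotlib.pyplot as plt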
model = resnet50.ResNet50()
###Output
_____no_output_____
###Markdown
Target Image
###Code
# Input as a 224 x 224 pixel image: a requirement for this model
img = image.load_img("C:\\Users\\dell\\Desktop\\b.jpg", target_size = (224, 224))
plt.rcParams["figure.figsize"] = (6, 6)
plt.imshow(img)
plt.axis('off')
###Output
_____no_output_____
###Markdown
Preparing Model
###Code
# Converting Image to numpy array
input_as_array = image.img_to_array(img)
# Add a dimension so the input behaves like a list of images: a Keras requirement
input_as_array = np.expand_dims(input_as_array, axis =0)
input_as_array = resnet50.preprocess_input(input_as_array)
###Output
_____no_output_____
###Markdown
Predicting the Recognised classes
###Code
predictions = model.predict(input_as_array)
# Viewing top 9 predictions of our model
predicted_classes = resnet50.decode_predictions(predictions, top=9)
print (predicted_classes)
###Output
[[('n03461385', 'grocery_store', 0.38105598), ('n04200800', 'shoe_shop', 0.12307566), ('n02927161', 'butcher_shop', 0.087185621), ('n02930766', 'cab', 0.073266014), ('n04507155', 'umbrella', 0.071341269), ('n04462240', 'toyshop', 0.046535105), ('n03769881', 'minibus', 0.020067651), ('n04081281', 'restaurant', 0.018935004), ('n03089624', 'confectionery', 0.018834688)]]
|
03-Step3-PatternConfirmation/.ipynb_checkpoints/PatternConfirmation-checkpoint.ipynb | ###Markdown
Step 3: Pattern Confirmation 1. [Analysis 1: Specificity and Concreteness](a1) 1.1 [Measuring Specificity using WordNet](spec) 1.2 [Measuring Concreteness using a Crowdsourced Weighted Dictionary](concrete) 2. [Analysis 2: Counting People and Organizations Mentioned using Named Entity Recognition](ner) Analysis 1: Specificity and Concreteness
###Code
import pandas
import nltk
from nltk import word_tokenize
from nltk.corpus import wordnet as wn
from nltk.corpus import stopwords
import numpy as np
import scipy
import matplotlib.pyplot as plt
import json
#Define function to count the number of hypernyms for each noun and verb
def specificty(x):
x = x.replace('[\x00-\x1f]'," ")
text = word_tokenize(x)
total_list = []
for w in text:
if not wn.synsets(w):
pass
else:
synset = wn.synsets(w)
#limit to nouns and verbs, as other words are not arranged hierarchically
if ((synset[0].pos() == (wn.NOUN)) or (synset[0].pos() == (wn.VERB))):
#I assume the most popular definition of each word.
paths = synset[0].hypernym_paths()
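# hypernym_paths() returns every path from this synset up to the WordNet root;
# the shortest path length is used below as the word's specificity score (deeper = more specific)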
a_path = []
for num in range(0,len(paths)):
a_path.append(len([synset.name for synset in paths[num]]))
#I am taking the path with the minimum number of hypernyms, but this could be calculated some other way.
path_num = min(a_path)
total_list.append( (w, path_num) )
return total_list
#Function to calculate the concreteness of a word based on the crowdsourced dictionary
def concrete(x, dict):
x = x.replace('[\x00-\x1f]'," ")
x = x.lower()
text = word_tokenize(x)
text = [word for word in text if word not in stopwords.words('english')]
concrete_score = []
for w in text:
if w in dict:
concrete_score.append((w, dict[w]))
return concrete_score
###Output
_____no_output_____
###Markdown
Create Strings
###Code
#first create strings for the two comparison texts,
#Kant's The Metaphysical Elements of Ethics
#and the Wikipedia page on Germany
kant_string = open("../input_data/kant_metaphysics.txt", 'r', encoding='utf-8').read()
wiki_string = open("../input_data/wiki_germany.txt", 'r', encoding='utf-8').read()
#Read in our dataframe to extract text
df = pandas.read_csv("../data/comparativewomensmovement_dataset.csv", sep='\t', index_col=0, encoding='utf-8')
df
#concatenate the documents from each organization together, creaing four strings
redstockings = df[df['org']=='redstockings']
redstockings_string = ' '.join(str(s) for s in redstockings['text_string'].tolist())
cwlu = df[df['org']=='cwlu']
cwlu_string = ' '.join(str(s) for s in cwlu['text_string'].tolist())
heterodoxy = df[df['org']=='heterodoxy']
heterodoxy_string = ' '.join(str(s) for s in heterodoxy['text_string'].tolist())
hullhouse = df[df['org']=='hullhouse']
hullhouse_string = ' '.join(str(s) for s in hullhouse['text_string'].tolist())
###Output
_____no_output_____
###Markdown
Specificity Score
###Code
#Calculate specificity score for each noun and verb in each string
#Creates a list with word and specificity score for each string
kant_specificity = specificty(kant_string)
wiki_specificity = specificty(wiki_string)
redstockings_specificity = specificty(redstockings_string)
cwlu_specificity = specificty(cwlu_string)
heterodoxy_specificity = specificty(heterodoxy_string)
hullhouse_specificity = specificty(hullhouse_string)
#extract just the specificity score from each list
kant_specificity_array = list(int(x[1]) for x in kant_specificity)
wiki_specificity_array = list(int(x[1]) for x in wiki_specificity)
cwlu_specificity_array = list(int(x[1]) for x in cwlu_specificity)
heterodoxy_specificity_array = list(int(x[1]) for x in heterodoxy_specificity)
hh_specificity_array = list(int(x[1]) for x in hullhouse_specificity)
red_specificity_array = list(int(x[1]) for x in redstockings_specificity)
#check for a normal distribution
fig = plt.figure()
ax1 = fig.add_subplot(221) #top left
ax2 = fig.add_subplot(222) #top right
ax3 = fig.add_subplot(223) #bottom left
ax4 = fig.add_subplot(224) #bottom right
ax1.hist(heterodoxy_specificity_array, bins=10)
ax1.set_title("Heterodoxy")
ax2.hist(hh_specificity_array, bins = 10)
ax2.set_title("Hull House")
ax3.hist(red_specificity_array, bins = 10)
ax3.set_title("Redstockings")
ax4.hist(cwlu_specificity_array, bins = 10)
ax4.set_title("CWLU")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Compare the Distributions
###Code
#print descriptive stats
print("Mean Specificity Score for Kant")
print(np.mean(kant_specificity_array))
print("Mean Specificity Score for Wikipedia entry on Germany")
print(np.mean(wiki_specificity_array))
print("Mean Specificity Score for Heterodoxy")
print(np.mean(heterodoxy_specificity_array))
print("Mean Specificity Score for Hull House")
print(np.mean(hh_specificity_array))
print("Mean Specificity Score for Redstockings")
print(np.mean(red_specificity_array))
print("Mean Specificity Score for CWLU")
print(np.mean(cwlu_specificity_array))
#create an array for each city, and an array for each wave, for comparrison
newyork_specificity_array = red_specificity_array + heterodoxy_specificity_array
chicago_specificity_array = cwlu_specificity_array + hh_specificity_array
firstwave_specificity_array = hh_specificity_array + heterodoxy_specificity_array
secondwave_specificity_array = cwlu_specificity_array + red_specificity_array
#compare percent difference on the specificity scale (1:18) for the test arrays
(np.mean(wiki_specificity_array) - np.mean(kant_specificity_array)) / (max(wiki_specificity_array) - min(kant_specificity_array))
#compare percent difference on the specificity scale (1:18) for the city arrays
(np.mean(chicago_specificity_array) - np.mean(newyork_specificity_array)) / (max(chicago_specificity_array) - min(newyork_specificity_array))
#compare percent difference on the specificity scale (1:18) for the wave arrays
#note this difference is much smaller than the city-based difference
(np.mean(firstwave_specificity_array) - np.mean(secondwave_specificity_array)) / (max(firstwave_specificity_array) - min(secondwave_specificity_array))
#calculate ttest statistics on city and wave arrays
#note the statistic is much smaller on the wave-based arrays compared to the city-based arrays
print(scipy.stats.ttest_ind(chicago_specificity_array, newyork_specificity_array))
print(scipy.stats.ttest_ind(firstwave_specificity_array, secondwave_specificity_array))
###Output
Ttest_indResult(statistic=24.568635698492429, pvalue=3.3929628420398261e-133)
Ttest_indResult(statistic=8.858902043752753, pvalue=8.1108293299567897e-19)
###Markdown
Concreteness Score
###Code
#Read in the dictionary created by Brysbaert et al.
dict_df = pandas.read_excel("../input_data/Concreteness_ratings_Brysbaert_et_al_BRM.xlsx",sheetname="Sheet1")
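# 'Bigram' == 0 keeps single-word entries only; 'Conc.M' is the mean concreteness rating on a 1-5 scale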
dict_df = dict_df[dict_df['Bigram']==0]
word_dict = dict_df.set_index("Word")['Conc.M'].to_dict()
#Calculate concreteness score for each noun and verb in each string
#Creates a list of tuples, with word and concreteness score for each string
kant_concrete = concrete(kant_string, word_dict)
wiki_concrete = concrete(wiki_string, word_dict)
redstockings_concrete = concrete(redstockings_string, word_dict)
cwlu_concrete = concrete(cwlu_string, word_dict)
heterodoxy_concrete = concrete(heterodoxy_string, word_dict)
hullhouse_concrete = concrete(hullhouse_string, word_dict)
#extract just the concreteness score from each list
kant_concrete_array = list(int(x[1]) for x in kant_concrete)
wiki_concrete_array = list(int(x[1]) for x in wiki_concrete)
cwlu_concrete_array = list(int(x[1]) for x in cwlu_concrete)
heterodoxy_concrete_array = list(int(x[1]) for x in heterodoxy_concrete)
hh_concrete_array = list(int(x[1]) for x in hullhouse_concrete)
red_concrete_array = list(int(x[1]) for x in redstockings_concrete)
###Output
_____no_output_____
###Markdown
Compare the Distributions
###Code
#check for a normal distribution
fig2 = plt.figure()
ax1 = fig2.add_subplot(221) #top left
ax2 = fig2.add_subplot(222) #top right
ax3 = fig2.add_subplot(223) #bottom left
ax4 = fig2.add_subplot(224) #bottom right
ax1.hist(heterodoxy_concrete_array, bins=5)
ax1.set_title("Heterodoxy")
ax2.hist(hh_concrete_array, bins = 5)
ax2.set_title("Hull House")
ax3.hist(red_concrete_array, bins = 5)
ax3.set_title("Redstockings")
ax4.hist(cwlu_concrete_array, bins = 5)
ax4.set_title("CWLU")
plt.tight_layout()
plt.show()
#print descriptive stats
print("Mean Concreteness Score for Kant")
print(np.mean(kant_concrete_array))
print("Mean Concreteness Score for Wikipedia entry on Germany")
print(np.mean(wiki_concrete_array))
print("Mean Concreteness Score for Heterodoxy")
print(np.mean(heterodoxy_concrete_array))
print("Mean Concreteness Score for Hull House")
print(np.mean(hh_concrete_array))
print("Mean Concreteness Score for Redstockings")
print(np.mean(red_concrete_array))
print("Mean Concreteness Score for CWLU")
print(np.mean(cwlu_concrete_array))
#create one array for each city
newyork_concrete_array = heterodoxy_concrete_array + red_concrete_array
chicago_concrete_array = hh_concrete_array + cwlu_concrete_array
#create one array for each wave
firstwave_concrete_array = heterodoxy_concrete_array + hh_concrete_array
secondwave_concrete_array = red_concrete_array + cwlu_concrete_array
#compare percent difference on the concreteness scale (1:5) for the test arrays
(np.mean(wiki_concrete_array) - np.mean(kant_concrete_array)) / (5-1)
#compare percent difference on the concreteness scale (1:5) for the city-based arrays
(np.mean(chicago_concrete_array) - np.mean(newyork_concrete_array)) / (5-1)
#compare percent difference on the concreteness scale (1:5) for the wave-based arrays
#notice this percent difference is around half as much as the city-based difference
(np.mean(firstwave_concrete_array) - np.mean(secondwave_concrete_array)) / (5-1)
#calculate ttest statistics on the city- and wave-based arrays
#note the statistic is more than twice as large for the New York/Chicago comparison versus the first wave/second wave comparison
print(scipy.stats.ttest_ind(newyork_concrete_array, chicago_concrete_array))
print(scipy.stats.ttest_ind(firstwave_concrete_array, secondwave_concrete_array))
###Output
Ttest_indResult(statistic=-65.289536584554682, pvalue=0.0)
Ttest_indResult(statistic=30.944453326355294, pvalue=7.2414699433184674e-210)
###Markdown
Analysis 2: Count Organizations and People Mentioned using NER The below code counts the number of persons and organizations mentioned, and compares across organizations.**Note:** Because the published data is sorted, and not the full text, the code below will not reproduce the actual named entities counted. Instead, I will read in the saved named entities, count them, and print the output.
###Code
#############################################################################################
##Don't run this code to reproduce the named entities. It will not work on the sorted text.##
#############################################################################################
def extract_entities(text):
#text = text.decode('ascii','ignore') #convert all characters to ascii
text = re.sub('[\x00-\x1f]'," ", text)
org_list = []
person_list = []
for sent in nltk.sent_tokenize(text):
chunked = nltk.ne_chunk(nltk.pos_tag(nltk.word_tokenize(sent)))
for n in chunked:
if isinstance(n, nltk.tree.Tree):
if n.label()=="ORGANIZATION":
org_list.append(' '.join(c[0] for c in n.leaves()))
if n.label()=="PERSON":
person_list.append(' '.join(c[0] for c in n.leaves()))
return org_list, person_list
org_strings = [('hullhouse', hullhouse_string), ('heterodoxy', heterodoxy_string), ('cwlu', cwlu_string), ('redstockings', redstockings_string)]
for org_name, org_text in org_strings:
org_list, person_list = extract_entities(org_text)
myList = [org_list, person_list]
filename = "../input_data/named_entities_%s.json" % org_name
##Uncomment the line below to save the named entities as a JSON file
#json.dump( myList, open( filename, "w", encoding = 'utf-8' ) )
##################################################################
##Count saved named entities to reproduce named entity analysis.##
##################################################################
myList_cwlu = json.load( open ("../input_data/named_entities_cwlu.json", "r",
encoding='utf-8') )
cwlu_orgs = myList_cwlu[0]
cwlu_person = myList_cwlu[1]
myList_hh = json.load( open ("../input_data/named_entities_hullhouse.json", "r",
encoding = 'utf-8') )
hh_orgs = myList_hh[0]
hh_person = myList_hh[1]
myList_red = json.load( open ("../input_data/named_entities_redstockings.json", "r",
encoding='utf-8') )
red_orgs = myList_red[0]
red_person = myList_red[1]
myList_heterodoxy = json.load( open ("../input_data/named_entities_heterodoxy.json", "r",
encoding='utf-8') )
heterodoxy_orgs = myList_heterodoxy[0]
heterodoxy_person = myList_heterodoxy[1]
#plots the number of organizations and persons mentioned by each organization
import matplotlib.pyplot as plt
# data to plot
n_groups = 4
num_persons = (len(hh_person), len(heterodoxy_person), len(cwlu_person), len(red_person))
num_orgs = (len(hh_orgs), len(heterodoxy_orgs), len(cwlu_orgs), len(red_orgs))
# create plot
fig, ax = plt.subplots()
index = np.arange(n_groups)
bar_width = 0.35
opacity = 0.8
counts1 = plt.bar(index, num_persons, bar_width,
alpha=opacity,
color='b',
label='Persons')
counts2 = plt.bar(index + bar_width, num_orgs, bar_width,
alpha=opacity,
color='g',
label='Organizations')
plt.xlabel('Organization')
plt.ylabel('Count of Named Entities')
plt.title('Count of Named Entities by Organization')
plt.xticks(index + bar_width, ('Hull House', 'Heterodoxy', 'CWLU', 'Redstockings'))
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
_downloads/plot_boundaries.ipynb | ###Markdown
Segmentation contours. Visualize segmentation contours on the original grayscale image.
###Code
from skimage import data, segmentation
from skimage import filters
import matplotlib.pyplot as plt
import numpy as np
coins = data.coins()
mask = coins > filters.threshold_otsu(coins)
clean_border = segmentation.clear_border(mask).astype(int)
coins_edges = segmentation.mark_boundaries(coins, clean_border)
plt.figure(figsize=(8, 3.5))
plt.subplot(121)
plt.imshow(clean_border, cmap='gray')
plt.axis('off')
plt.subplot(122)
plt.imshow(coins_edges)
plt.axis('off')
plt.tight_layout()
plt.show()
###Output
_____no_output_____ |
analyses/SERV-1/Ts-5min-HOLTWINTERS.ipynb | ###Markdown
Time series forecasting using Holt-Winters Import necessary libraries
###Code
%matplotlib notebook
import numpy
import pandas
import datetime
import sys
import time
import matplotlib.pyplot as ma
import statsmodels.tsa.holtwinters as hw
###Output
/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
return f(*args, **kwds)
###Markdown
Load necessary CSV file
###Code
try:
ts = pandas.read_csv('../../datasets/srv-1-ts-5m.csv')
except:
print("I am unable to connect to read .csv file", sep=',', header=1)
ts.index = pandas.to_datetime(ts['ts'])
# delete unnecessary columns
del ts['id']
del ts['ts']
del ts['min']
del ts['max']
del ts['avg']
del ts['cnt']
# print table info
ts.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 23727 entries, 2018-04-11 20:00:00 to 2018-07-16 13:50:00
Data columns (total 1 columns):
sum 23727 non-null int64
dtypes: int64(1)
memory usage: 370.7 KB
###Markdown
Get values from specified range
###Code
ts = ts['2018-06-16':'2018-07-15']
###Output
_____no_output_____
###Markdown
Remove possible NA values (by interpolation). NA values are explicitly filled in by linear interpolation.
###Code
def print_values_stats():
print("Zero Values:\n",sum([(1 if x == 0 else 0) for x in ts.values]),"\n\nMissing Values:\n",ts.isnull().sum(),"\n\nFilled in Values:\n",ts.notnull().sum(), "\n")
idx = pandas.date_range(ts.index.min(), ts.index.max(), freq="5min")
ts = ts.reindex(idx, fill_value=None)
print("Before interpolation:\n")
print_values_stats()
ts = ts.replace(0, numpy.nan)
ts = ts.interpolate(limit_direction="both")
print("After interpolation:\n")
print_values_stats()
###Output
Before interpolation:
Zero Values:
0
Missing Values:
sum 99
dtype: int64
Filled in Values:
sum 8541
dtype: int64
After interpolation:
Zero Values:
0
Missing Values:
sum 0
dtype: int64
Filled in Values:
sum 8640
dtype: int64
###Markdown
Plot values
###Code
# Idea: Plot figure now and do not wait on ma.show() at the end of the notebook
ma.ion()
ma.show()
fig1 = ma.figure(1)
ma.plot(ts, color="blue")
ma.draw()
try:
ma.pause(0.001) # throws NotImplementedError, ignore it
except:
pass
###Output
_____no_output_____
###Markdown
Split the time series into train and test series. We use the first week of data for training and the remainder for testing.
###Code
train_data_length = 12*24*7
ts_train = ts[:train_data_length]
ts_test = ts[train_data_length+1:]
###Output
_____no_output_____
###Markdown
Fit and predict the time series
###Code
def print_hw_parameters(model):
alpha, beta, gamma = model.params['smoothing_level'], model.params['smoothing_slope'], model.params['smoothing_seasonal']
print("Holt-Winters parameters:")
print("Alpha: ", alpha)
print("Beta: ", beta)
print("Gamma: ", gamma)
print("Forecasting started...")
start_time = time.time()
try:
model = hw.ExponentialSmoothing(ts_train, seasonal='additive', seasonal_periods=train_data_length-1).fit()
predictions = model.predict(start=ts_test.index[0], end=ts_test.index[-1])
except Exception as e:
print("Error during forecast: ", str(e))
print("Forecasting finished")
print("Time elapsed: ", time.time() - start_time)
print_hw_parameters(model)
###Output
Forecasting started...
Forecasting finished
Time elapsed: 70.20740914344788
Holt-Winters parameters:
Alpha: 1.0
Beta: 0.0
Gamma: 0.0
###Markdown
Compute the mean absolute percentage error. We use MAPE (https://www.forecastpro.com/Trends/forecasting101August2011.html) instead of MSE because MAPE does not depend on the scale of the values.
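For reference, the quantity computed in the next cell is $\mathrm{MAPE} = \frac{100}{n}\sum_{t=1}^{n}\left|\frac{A_t - F_t}{A_t}\right|$, where $A_t$ are the actual test values and $F_t$ the forecasted values.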
###Code
values_sum = 0
for value in zip(ts_test.values, predictions.values):
actual = value[0]
predicted = value[1]
values_sum += abs((actual - predicted) / actual)
values_sum *= 100/len(predictions)
print("MAPE: ", values_sum, "%\n")
###Output
MAPE: [184.91285763] %
###Markdown
Plot forecasted values
###Code
ma.figure(2)
ma.plot(ts_train.index, ts_train, label='Train')
ma.plot(ts_test.index, ts_test, label='Test')
ma.plot(predictions.index, predictions, label='Holt-Winters')
ma.legend(loc='best')
ma.draw()
###Output
_____no_output_____ |
week_01_images/homework/homework_01.ipynb | ###Markdown
Homework 1
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
def plot_one_image(image: np.ndarray) -> None:
"""
Display an image with matplotlib.
Helper function.
:param image: image to display
:return: None
"""
fig, axs = plt.subplots(1, 1, figsize=(8, 7))
axs.imshow(image)
axs.axis('off')
plt.plot()
###Output
_____no_output_____
###Markdown
Task 1 - Maze. Implement an algorithm that finds the way out of a maze from a raster image. You need to write code that finds the path (pixel coordinates) from the given entrance at the top to the exit at the bottom. Draw the resulting route on the map with the function ```plot_maze_path(img, coords)``` or use any graphics tool you are familiar with.__Input:__ A maze image encoded as $RGB$. All maps are available on [Yandex Disk](https://yadi.sk/d/qEWVZk2picDdZw)__Output:__ An array of path coordinates through the maze in the form ```(np.array(x), np.array(y))```. __Each__ successfully solved maze is graded. Example of a solved task.
###Code
from task_1 import find_way_from_maze
def plot_maze_path(image: np.ndarray, coords: tuple) -> np.ndarray:
"""
Draw the path through the maze on the image.
Helper function.
:param image: maze image
:param coords: coordinates of the path through the maze as (x, y), where x and y are arrays of point coordinates
:return img_wpath: the source image with the drawn path
"""
if image.ndim != 3:
image = cv2.cvtColor(image, cv2.COLOR_GRAY2BGR)
img_wpath = image.copy()
if coords:
x, y = coords
img_wpath[x, y, :] = [0, 0, 255]
return img_wpath
###Output
_____no_output_____
###Markdown
Load the test image and display it.
###Code
test_image = cv2.imread('task_1/20 by 20 orthogonal maze.png') # load the test image
plot_one_image(test_image)
###Output
_____no_output_____
###Markdown
Now your task is to implement the function ```find_way_from_maze``` in ```task_1.py``` so that it finds the coordinates of the path through the maze.
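One possible approach (shown only as a sketch here, not the graded solution) is a breadth-first search over the passable pixels; it assumes walls are black, passages are white, and the entrance is the white gap in the top row. The helper name `find_way_from_maze_sketch` is introduced here for illustration.
###Code
# Illustrative BFS sketch; assumptions: black walls, white passages, entrance in the top row
from collections import deque
def find_way_from_maze_sketch(image: np.ndarray) -> tuple:
    free = image[..., 0] > 0                          # passable (white) pixels
    h, w = free.shape
    start = (0, int(np.flatnonzero(free[0]).mean()))  # assumed entrance on the top row
    prev = {start: None}
    queue = deque([start])
    while queue:
        y, x = queue.popleft()
        if y == h - 1:                                # bottom row reached -> exit found
            path = []
            node = (y, x)
            while node is not None:
                path.append(node)
                node = prev[node]
            ys, xs = zip(*reversed(path))
            return np.array(ys), np.array(xs)
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and free[ny, nx] and (ny, nx) not in prev:
                prev[(ny, nx)] = (y, x)
                queue.append((ny, nx))
    return None
###Output
_____no_output_____
###Markdown
The cell below runs the graded implementation from `task_1.py` and draws the result.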
###Code
way_coords = find_way_from_maze(test_image) # compute the coordinates of the path through the maze
image_with_way = plot_maze_path(test_image, way_coords)
plot_one_image(image_with_way)
###Output
_____no_output_____
###Markdown
Task 2 - Traffic in the city. Write a program that takes as input an image schematically showing a car on a road with $N$ lanes and obstacles on the lanes. The corresponding objects are marked with colors that are kept the same across all images. The result of the program is the number of the lane to change to, or a message that no lane change is needed.**Note: lanes are numbered from left to right, starting from zero.**Example images:
###Code
from task_2 import find_road_number
test_image = cv2.imread('task_2/image_00.jpg')
test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2RGB)
plot_one_image(test_image)
road_number = find_road_number(test_image)
print(f'You need to change to lane number {road_number}')
###Output
_____no_output_____
###Markdown
Task 3 - Affine transformations Task 3.1 - Rotate the image. Implement a function that rotates an image around a given point by a given angle ($0^\circ-360^\circ$) and resizes the image so that it is not cropped after the rotation.
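One way to avoid cropping (a sketch of the idea, not the graded `rotate` from `task_3.py`; the name `rotate_sketch` is introduced here) is to build the matrix with `cv2.getRotationMatrix2D`, map the four image corners through it to find the new bounding box, and shift the translation part of the matrix accordingly.
###Code
# Illustrative sketch: rotation around a point with an expanded canvas so nothing is cropped
def rotate_sketch(image: np.ndarray, point: tuple, angle: float) -> np.ndarray:
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D(point, angle, 1.0)
    corners = np.array([[0, 0, 1], [w, 0, 1], [0, h, 1], [w, h, 1]], dtype=np.float64).T
    mapped = M @ corners                  # corner positions after the rotation
    x_min, y_min = mapped.min(axis=1)
    x_max, y_max = mapped.max(axis=1)
    M[0, 2] -= x_min                      # shift so the rotated image starts at (0, 0)
    M[1, 2] -= y_min
    new_size = (int(np.ceil(x_max - x_min)), int(np.ceil(y_max - y_min)))
    return cv2.warpAffine(image, M, new_size)
###Output
_____no_output_____
###Markdown
The cells below use the graded implementation from `task_3.py`.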
###Code
from task_3 import rotate
test_image = cv2.imread('task_3/lk.jpg')
test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2RGB)
plot_one_image(test_image)
test_point = (200, 200)
test_angle = 15
transformed_image = rotate(test_image, test_point, test_angle)
plot_one_image(transformed_image)
###Output
_____no_output_____
###Markdown
Check what the result should look like
###Code
result_image = cv2.imread('task_3/lk_rotate.jpg')
result_image = cv2.cvtColor(result_image, cv2.COLOR_BGR2RGB)
plot_one_image(result_image)
###Output
_____no_output_____
###Markdown
Task 3.2 - Affine transformations. Implement a function that applies an affine transformation defined by the given points on the source image and resizes the resulting image so that it is not cropped.
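The same expand-the-canvas trick applies here (a sketch only, not the graded `apply_warpAffine`; the name `apply_warpAffine_sketch` is introduced for illustration): compute the matrix with `cv2.getAffineTransform`, map the corners, and shift by the minimum coordinates.
###Code
# Illustrative sketch: affine warp between point triplets with an expanded canvas
def apply_warpAffine_sketch(image: np.ndarray, src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    h, w = image.shape[:2]
    M = cv2.getAffineTransform(src_pts, dst_pts)
    corners = np.array([[0, 0, 1], [w, 0, 1], [0, h, 1], [w, h, 1]], dtype=np.float64).T
    mapped = M @ corners
    x_min, y_min = mapped.min(axis=1)
    x_max, y_max = mapped.max(axis=1)
    M[0, 2] -= x_min
    M[1, 2] -= y_min
    new_size = (int(np.ceil(x_max - x_min)), int(np.ceil(y_max - y_min)))
    return cv2.warpAffine(image, M, new_size)
###Output
_____no_output_____
###Markdown
The cells below use the graded implementation from `task_3.py`.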
###Code
from task_3 import apply_warpAffine
test_image = cv2.imread('task_3/lk.jpg')
test_image = cv2.cvtColor(test_image, cv2.COLOR_BGR2RGB)
plot_one_image(test_image)
test_point_1 = np.float32([[50, 50], [400, 50], [50, 200]])
test_point_2 = np.float32([[100, 100], [200, 20], [100, 250]])
transformed_image = apply_warpAffine(test_image, test_point_1, test_point_2)
plot_one_image(transformed_image)
###Output
_____no_output_____
###Markdown
Check what the result should look like
###Code
result_image = cv2.imread('task_3/lk_affine.jpg')
result_image = cv2.cvtColor(result_image, cv2.COLOR_BGR2RGB)
plot_one_image(result_image)
###Output
_____no_output_____ |
03. Employee_Retention.ipynb | ###Markdown
Employee Retention
We got employee data from a few companies. We have data about all employees who joined from 2011-01-24 to 2015-12-13. For each employee, we also know if they are still at the company as of 2015-12-13 or they have quit. Beside that, we have general info about the employee, such as avg salary during her tenure, dept, and yrs of experience.
As said above, the goal is to predict employee retention and understand its main drivers. Specifically, you should:
1. Assume, for each company, that the headcount starts from zero on 2011-01-23. Estimate employee headcount, for each company on each day, from 2011-01-24 to 2015-12-13. That is, if by 2012-03-02 2000 people have joined company 1 and 1000 of them have already quit, then company headcount on 2012-03-02 for company 1 would be 1000. You should create a table with 3 columns: day, employee_headcount, company_id
2. What are the main factors that drive employee churn? Do they make sense? Explain your findings
3. If you could add to this data set just one variable that could help explain employee churn, what would that be?
Data Description
- **employee_id:** id of the employee. Unique by employee per company
- **company_id:** company id. It is unique by company
- **dept:** employee dept
- **seniority:** number of yrs of work experience when hired
- **salary:** avg yearly salary of the employee during her tenure within the company
- **join_date:** when the employee joined the company, it can only be between 2011/01/24 and 2015/12/13
- **quit_date:** when the employee left her job (if she is still employed as of 2015/12/13, this field is NA) Navigation
1. [Challenge Description](Challenge)
2. [Data Description](Data)
3. [Initial Exploration](Exploration)
4. Challenge Questions
1. [Employee Headcounts](Headcounts)
2. [Employee Churn](Churn)
- [Employment Length](Tenure)
- [Salary](Salary)
- [Hire Date](Join)
- [Seniority](Senority)
- [Modeling](Modeling)
3. [Adding to the Dataset](Adding)
5. [Conclusions](Conclusions)
Initial Exploration
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import graphviz
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_graphviz
from graphviz import Source
df = pd.read_csv('PRIVATE CSV')
print(df.shape)
df.head()
df.info()
df['join_date'] = pd.to_datetime(df['join_date'])
df['quit_date'] = pd.to_datetime(df['quit_date'])
df.describe()
###Output
_____no_output_____
###Markdown
Finding Company Headcounts Finding the employee headcounts is just a quick and dirty nested loop to get the counts at any given date.
###Code
dates = pd.date_range(start='2011/01/24', end='2015/12/13')
employee_counts = []
company_list = []
date_list = []
for company in df['company_id'].unique():
for i, date in enumerate(dates):
total_joined = len(df[(df['join_date'] <= date) &
(df['company_id'] == company)])
total_quit = len(df[(df['quit_date'] <= date) &
(df['company_id'] == company)])
employee_counts.append(total_joined - total_quit)
company_list.append(company)
date_list.append(date)
headcount_table = pd.DataFrame({'date': date_list, 'company_id': company_list,
'count': employee_counts})
headcount_table[headcount_table['company_id'] == 1].head()
###Output
_____no_output_____
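###Markdown
As a sketch of a faster alternative (not part of the original answer; the names `fast_tables` and `headcount_fast` are introduced here), the same table can be built from cumulative counts of joins and quits per company instead of the nested loop:
###Code
# Hypothetical vectorized variant of the headcount table above
fast_tables = []
for company, grp in df.groupby('company_id'):
    joins = grp.groupby('join_date').size().reindex(dates, fill_value=0).cumsum()
    quits = grp.groupby('quit_date').size().reindex(dates, fill_value=0).cumsum()
    fast_tables.append(pd.DataFrame({'date': dates, 'company_id': company,
                                     'count': (joins - quits).values}))
headcount_fast = pd.concat(fast_tables, ignore_index=True)
###Output
_____no_output_____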
###Markdown
Exploring Employee Churn
We are asked to explore employee churn, so we first need to find the employment length for each employee to see when they are leaving.
###Code
df['employment_length'] = (df['quit_date'] - df['join_date']).astype('timedelta64[D]')
df.head()
###Output
_____no_output_____
###Markdown
We see that employees are more likely to quit near their hire anniversary and the average employee stays less than 2 years (730 days).
###Code
df.groupby('company_id')['employment_length'].mean()
df.groupby('company_id')['employment_length'].median()
plt.hist(df['employment_length'].dropna(), bins=50)
plt.show()
###Output
_____no_output_____
###Markdown
Next we will define employees that quit early. There is a spike at the 1 year mark skewing the mean left so we'll say anyone that lasts more than a year and one month was a worthwhile hire. This year and one month time frame also lines up fairly closely with the median employment length. This will allow us to better understand the individuals leaving these companies early.
###Code
def early_quit(employment_length, join_date, max_date):
if (join_date + pd.Timedelta(days=395)) > max_date:
return np.NaN
elif (employment_length > 395):
return 0.0
return 1.0
df['early_quit'] = np.vectorize(early_quit)(df['employment_length'],
df['join_date'],
max(df['join_date']))
df.head()
###Output
_____no_output_____
###Markdown
Companies `11` and `12` have the lowest pay and the highest `early_quit` rates. We know salary is not the only consideration for quitting early thanks to a mix of pay rates and the variation in `early_quit` but it's clear there is a lower limit for pay and that limit is dependent on company and position.
###Code
df.groupby('company_id')[['salary', 'early_quit']].mean()
salary_ranges = pd.cut(df['salary'], 30)
salary_table = pd.crosstab(salary_ranges, df['early_quit'],
margins=True, dropna=False)
salary_table[0.0] = salary_table[0.0] / salary_table['All']
salary_table[1.0] = salary_table[1.0] / salary_table['All']
salary_table
print('Average Salary by Dept. (All Companies)')
print(df.groupby('dept')[['salary', 'early_quit']].mean().reset_index())
for i in range(1,13):
print('\nCompany:', i)
print(df[df['company_id'] == i].groupby('dept')[['salary', 'early_quit']].mean().reset_index())
###Output
Average Salary by Dept. (All Companies)
dept salary early_quit
0 customer_service 82245.424837 0.596674
1 data_science 206885.893417 0.588562
2 design 137460.869565 0.581670
3 engineer 205544.548016 0.597781
4 marketing 135598.042311 0.583129
5 sales 135912.358134 0.612645
Company: 1
dept salary early_quit
0 customer_service 90554.006969 0.594947
1 data_science 230938.832252 0.581882
2 design 150434.869739 0.557841
3 engineer 224193.877551 0.624797
4 marketing 151084.792627 0.582072
5 sales 150912.568306 0.613396
Company: 2
dept salary early_quit
0 customer_service 92073.643411 0.612091
1 data_science 234919.014085 0.601382
2 design 154556.053812 0.532609
3 engineer 227469.240048 0.597064
4 marketing 148792.975970 0.554187
5 sales 152017.543860 0.625935
Company: 3
dept salary early_quit
0 customer_service 72229.702970 0.571429
1 data_science 176616.714697 0.586957
2 design 121404.255319 0.645455
3 engineer 185306.201550 0.584158
4 marketing 122250.000000 0.577855
5 sales 119154.269972 0.614035
Company: 4
dept salary early_quit
0 customer_service 72875.160875 0.613377
1 data_science 181749.103943 0.588496
2 design 114700.934579 0.609195
3 engineer 187154.255319 0.565217
4 marketing 118126.394052 0.582915
5 sales 123228.346457 0.648241
Company: 5
dept salary early_quit
0 customer_service 73796.850394 0.598778
1 data_science 186902.777778 0.578313
2 design 128972.222222 0.604167
3 engineer 183530.158730 0.536000
4 marketing 117008.849558 0.595238
5 sales 121803.921569 0.558824
Company: 6
dept salary early_quit
0 customer_service 72282.306163 0.604712
1 data_science 178084.967320 0.624000
2 design 128857.142857 0.587302
3 engineer 183126.696833 0.613260
4 marketing 117737.142857 0.553957
5 sales 124827.160494 0.655738
Company: 7
dept salary early_quit
0 customer_service 74705.756930 0.584000
1 data_science 177411.764706 0.541667
2 design 118114.285714 0.611111
3 engineer 181123.348018 0.607527
4 marketing 122737.588652 0.610619
5 sales 121628.048780 0.607407
Company: 8
dept salary early_quit
0 customer_service 73574.025974 0.642857
1 data_science 183561.643836 0.605263
2 design 120264.150943 0.694444
3 engineer 188821.989529 0.580000
4 marketing 116096.296296 0.657407
5 sales 107985.401460 0.550847
Company: 9
dept salary early_quit
0 customer_service 72225.146199 0.545802
1 data_science 181328.358209 0.600000
2 design 126300.000000 0.560000
3 engineer 179851.063830 0.553957
4 marketing 120959.677419 0.633333
5 sales 121106.194690 0.597826
Company: 10
dept salary early_quit
0 customer_service 74336.309524 0.560784
1 data_science 171220.183486 0.617284
2 design 112658.536585 0.625000
3 engineer 183406.976744 0.589928
4 marketing 128854.166667 0.602941
5 sales 116837.837838 0.633333
Company: 11
dept salary early_quit
0 customer_service 42833.333333 0.833333
1 data_science 153500.000000 0.000000
2 engineer 156666.666667 0.833333
3 marketing 124500.000000 0.000000
Company: 12
dept salary early_quit
0 customer_service 42583.333333 0.900000
1 data_science 131250.000000 0.333333
2 design 82000.000000 1.000000
3 engineer 80000.000000 1.000000
4 marketing 117000.000000 1.000000
5 sales 98500.000000 0.500000
###Markdown
Looking by date we see an increase in the turnover rate year over year and that the trend holds when we look at each company individually. We'll make a new feature, `hire_year`, for later modeling.
We also see a trend for quarter, month, week, and day of week so we'll create those features as well. With quarter, month, and week we see a clear trend of starting in the third or fourth quarter being worse for employee retention. It is likely the cause is that most companies are hectic at the end of the year and employees starting in the middle of that are less likely to get the attention they need for a smooth onboarding. With day of week we see that starting on Friday is worse for retention. Again, an indicator that having a smooth uninterrupted onboarding is likely a key factor for retention.
###Code
print('Early Quit (All Companies)')
join_table = pd.crosstab(df['join_date'].dt.year,
df['early_quit'], margins=True)
join_table[0.0] = join_table[0.0] / join_table['All']
join_table[1.0] = join_table[1.0] / join_table['All']
print(join_table)
for i in range(1, 13):
print('\nCompany:', i)
join_table = pd.crosstab(df['join_date'].dt.year,
df[df['company_id'] == i]['early_quit'],
margins=True)
join_table[0.0] = join_table[0.0] / join_table['All']
join_table[1.0] = join_table[1.0] / join_table['All']
print(join_table)
join_table.drop('All').drop(
'All', axis='columns').plot.bar(
title='Early Quit Rate by Year Joined(All Companies)');
join_table = pd.crosstab(df['join_date'].dt.quarter,
df['early_quit'], margins=True)
join_table[0.0] = join_table[0.0] / join_table['All']
join_table[1.0] = join_table[1.0] / join_table['All']
join_table.drop('All').drop(
'All', axis='columns').plot.bar(
title='Early Quit Rate by Month Joined(All Companies)');
join_table = pd.crosstab(df['join_date'].dt.week,
df['early_quit'], margins=True)
join_table[0.0] = join_table[0.0] / join_table['All']
join_table[1.0] = join_table[1.0] / join_table['All']
join_table.drop('All').drop(
'All', axis='columns').plot.bar(
title='Early Quit Rate by Week Joined(All Companies)');
join_table = pd.crosstab(df['join_date'].dt.dayofweek,
df['early_quit'], margins=True)
join_table[0.0] = join_table[0.0] / join_table['All']
join_table[1.0] = join_table[1.0] / join_table['All']
join_table.drop('All').drop(
'All', axis='columns').plot.bar(
title='Early Quit Rate by Day of Week Joined(All Companies)');
df['hire_year'] = df['join_date'].dt.year
df['hire_quarter'] = df['join_date'].dt.quarter
df['hire_month'] = df['join_date'].dt.month
df['hire_week'] = df['join_date'].dt.week
df['hire_dayofweek'] = df['join_date'].dt.dayofweek
###Output
/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:4: FutureWarning: Series.dt.weekofyear and Series.dt.week have been deprecated. Please use Series.dt.isocalendar().week instead.
after removing the cwd from sys.path.
###Markdown
Seniority doesn't seem to have a strong correlation to quitting early.
###Code
seniority_table = pd.crosstab(df['seniority'], df['early_quit'], margins='all')
seniority_table[0.0] = seniority_table[0.0] / seniority_table['All']
seniority_table[1.0] = seniority_table[1.0] / seniority_table['All']
seniority_table
seniority_bins = pd.cut(df['seniority'], 20)
seniority_table = pd.crosstab(seniority_bins, df['early_quit'], margins='all')
seniority_table[0.0] = seniority_table[0.0] / seniority_table['All']
seniority_table[1.0] = seniority_table[1.0] / seniority_table['All']
seniority_table
###Output
_____no_output_____
###Markdown
Using a decision tree and a visualization we can see what a model is picking up on. The model confirms what we saw above, when you were hired has the largest impact on your retention followed by salary.
###Code
data_dummy = pd.get_dummies(df[['company_id', 'dept', 'seniority', 'hire_year',
'hire_quarter', 'hire_week', 'hire_dayofweek',
'salary', 'early_quit']], drop_first=True)
data_dummy = data_dummy.dropna()
model = DecisionTreeClassifier(max_depth=4, min_samples_leaf=10,
class_weight="balanced",
min_impurity_decrease = 0.0001)
model.fit(data_dummy.drop('early_quit', axis=1), data_dummy['early_quit'])
# Visualize It
export_graphviz(model, out_file="tree_employee.dot", feature_names=data_dummy.drop('early_quit', axis=1).columns, proportion=True, rotate=True)
with open("tree_employee.dot") as f:
dot_graph = f.read()
tree_source = Source.from_file("tree_employee.dot")
tree_source
data_dummy
###Output
_____no_output_____ |
notebookcode/.ipynb_checkpoints/stat-checkpoint.ipynb | ###Markdown
Drop the variables with severe missingness. Here we choose to drop the variables whose missing ratio is greater than 0.6, namely sodium_max, sodium_min, chloride_max, chloride_min, be_max, be_min, crp_max, crp_min, amylase_max, amylase_min, lipase_max, lipase_min, urine_ph_max, urine_ph_min, urine_wbc_max, urine_wbc_min, urine_protein_max, urine_protein_min, urine_glucose_max, urine_glucose_min, urine_bilirubin_max, urine_bilirubin_min, urine_ketone_max, urine_ketone_min, urine_rbc_max, urine_rbc_min, specificgravity_max, specificgravity_min, urobilinogen_max, urobilinogen_min, d_dimer_max, d_dimer_min, fib_max, fib_min.
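A threshold-based selection could replace the explicit list; the sketch below is an assumption about intent, and `high_missing_cols` is a name introduced here.
###Code
# Hypothetical shortcut: drop every column whose missing ratio exceeds 0.6
high_missing_cols = data.columns[data.isnull().mean() > 0.6]
data1 = data.drop(columns=high_missing_cols)
###Output
_____no_output_____
###Markdown
The original cell drops the columns by name: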
###Code
# drop [sodium_max,sodium_min,chloride_max,chloride_min,be_max,be_min,crp_max,crp_min,amylase_max,amylase_min,lipase_max,lipase_min,
# urine_ph_max,urine_ph_min,urine_wbc_max,urine_wbc_min,urine_protein_max,urine_protein_min,urine_glucose_max,urine_glucose_min,
# urine_bilirubin_max,urine_bilirubin_min,urine_ketone_max,urine_ketone_min,urine_rbc_max,urine_rbc_min ,
# specificgravity_max,specificgravity_min,urobilinogen_max,urobilinogen_min,d_dimer_max,d_dimer_min,fib_max,fib_min]
data1=data.drop(['sodium_max','sodium_min','chloride_max','chloride_min','be_max','be_min','crp_max','crp_min','amylase_max','amylase_min','lipase_max','lipase_min',
'urine_ph_max','urine_ph_min','urine_wbc_max','urine_wbc_min','urine_protein_max','urine_protein_min','urine_glucose_max','urine_glucose_min',
'urine_bilirubin_max','urine_bilirubin_min','urine_ketone_max','urine_ketone_min','urine_rbc_max','urine_rbc_min' ,
'specificgravity_max','specificgravity_min','urobilinogen_max','urobilinogen_min','d_dimer_max','d_dimer_min','fib_max','fib_min'],axis=1)
data1.isnull().sum()/len(data1)
corrmat = data1.corr()
plt.subplots(figsize=(24,9))
sns.heatmap(corrmat,vmax=0.9,square=True)
data1.columns
data1.to_csv('../data/feature1.csv')
import sklearn as sk
###Output
_____no_output_____ |
swap_Keys_and_Values_in_Dictionary.ipynb | ###Markdown
###Code
myDictionary = {'color':'blue', 'speed':'fast','number':1, 5:'number'}
#print(myDictionary)
#Swap keys for values
#swapDictionary = {}
new_dictionary = dict([(val, key) for key, val in myDictionary.items()])
#swapDictionary[val] = key
print ("Original dictionary is : ")
print(myDictionary)
print()
# Printing new dictionary after swapping keys and values
print ("Dictionary after swapping is : ")
print("keys: values")
for i in new_dictionary:
print(i, " : ", new_dictionary[i])
# Python3 code to demonstrate
# swap of key and value
# initializing dictionary
old_dict = {'A': 67, 'B': 23, 'C': 45, 'D': 56, 'E': 12, 'F': 69, 'G': 67, 'H': 23}
new_dict = dict([(value, key) for key, value in old_dict.items()])
# Printing original dictionary
print ("Original dictionary is : ")
print(old_dict)
print()
# Printing new dictionary after swapping keys and values
print ("Dictionary after swapping is : ")
print("keys: values")
for i in new_dict:
print(i, " : ", new_dict[i])
###Output
_____no_output_____ |
pca_knn_desafio/Desafio/MouseBehavior KNN - I2A2.ipynb | ###Markdown
Reading, cleaning and splitting datasets
###Code
# reading csv files and creating dataframes
df_evandro = pd.read_csv('Evandro.csv', sep=';', encoding='latin-1')
df_celso = pd.read_csv('Celso.csv', sep=';', encoding='latin-1')
df_eliezer = pd.read_csv('Eliezer.csv', sep=';', encoding='latin-1')
# drop NaN values (if any)
df_evandro.dropna(inplace=True)
df_celso.dropna(inplace=True)
df_eliezer.dropna(inplace=True)
# check maximum row numbers
maxRows = [df_evandro.shape[0], df_eliezer.shape[0], df_celso.shape[0]]
#maxRows.sort()
maxRows[0] = 10287
# slice dataframes in order to equalize the length
df_evandro = df_evandro.loc[:maxRows[0]-1,:]
df_celso = df_celso.loc[:maxRows[0]-1,:]
df_eliezer = df_eliezer.loc[:maxRows[0]-1,:]
# converting Event Types into binary classification
#df_evandro['Event Type'] = df_evandro['Event Type'].apply(lambda s: 0 if s=='mouseMove' else 1)
#df_celso['Event Type'] = df_celso['Event Type'].apply(lambda s: 0 if s=='mouseMove' else 1)
#df_eliezer['Event Type'] = df_eliezer['Event Type'].apply(lambda s: 0 if s=='mouseMove' else 1)
# drop useless data
df_evandro.drop(['Date', 'Time', 'Event Type'], axis=1, inplace=True)
df_celso.drop(['Date', 'Time', 'Event Type'], axis=1, inplace=True)
df_eliezer.drop(['Date', 'Time', 'Event Type'], axis=1, inplace=True)
# splitting into training data
df_evandro_train = df_evandro.loc[:maxRows[0]*0.75,:]
df_celso_train = df_celso.loc[:maxRows[0]*0.75,:]
df_eliezer_train = df_eliezer.loc[:maxRows[0]*0.75,:]
# splitting into testing data
df_evandro_test = df_evandro.loc[maxRows[0]*0.75:,:].reset_index(drop=True)
df_celso_test = df_celso.loc[maxRows[0]*0.75:,:].reset_index(drop=True)
df_eliezer_test = df_eliezer.loc[maxRows[0]*0.75:,:].reset_index(drop=True)
df_evandro_train.head()
###Output
_____no_output_____
###Markdown
Adding new variables to the training dataset
###Code
def createFeatures(df):
offset_list, xm_list, ym_list, xstd_list, ystd_list, distm_list, diststd_list, arct_list = ([] for i in range(8))
# deleting rows with coordinate X being 0
df = df[df['Coordinate X'] != 0]
# filtering unique id == 1
ulist = df['EventId'].unique()
for u in ulist:
df_unique = df[df['EventId'] == u]
if df_unique.shape[0] == 1: # original is "== 1"
df = df[df['EventId'] != u]
# list of unique id with occurrence > 1
ulist = df['EventId'].unique()
for u in ulist:
df_unique = df[df['EventId'] == u]
# adding mean
x_mean = df_unique['Coordinate X'].mean()
y_mean = df_unique['Coordinate Y'].mean()
xm_list.append(x_mean)
ym_list.append(y_mean)
# adding std
xstd_list.append(df_unique['Coordinate X'].std())
ystd_list.append(df_unique['Coordinate Y'].std())
# calculating euclidean distances
arr = np.array([(x, y) for x, y in zip(df_unique['Coordinate X'], df_unique['Coordinate Y'])])
dist = [np.linalg.norm(arr[i+1]-arr[i]) for i in range(arr.shape[0]-1)]
ideal_dist = np.linalg.norm(arr[arr.shape[0]-1]-arr[0])
# adding offset
offset_list.append(sum(dist)-ideal_dist)
# adding distance mean
distm_list.append(np.asarray(dist).mean())
# adding distance std deviation
diststd_list.append(np.asarray(dist).std())
# adding slope angle of the tangent (arctan(Ym/Xm))
arct_list.append(np.arctan(y_mean/x_mean))
# create df subset with the new features
df_subset = pd.DataFrame(ulist, columns=['EventId'])
#df_subset['X Mean'] = xm_list
#df_subset['Y Mean'] = ym_list
#df_subset['X Std Dev'] = xstd_list
#df_subset['Y Std Dev'] = ystd_list
df_subset['Dist Mean'] = distm_list
df_subset['Dist Std Dev'] = diststd_list
df_subset['Offset'] = offset_list
df_subset['Slope Mean'] = arct_list
# drop EventId
df_subset.drop(['EventId'], axis=1, inplace=True)
return df_subset
df_evandro_train = createFeatures(df_evandro_train)
df_celso_train = createFeatures(df_celso_train)
df_eliezer_train = createFeatures(df_eliezer_train)
# get the minimum number of rows
maxRows = [df_evandro_train.shape[0], df_celso_train.shape[0], df_eliezer_train.shape[0]]
maxRows.sort()
# slice dataframes in order to equalize the length
df_evandro_train = df_evandro_train.loc[:maxRows[0]-1,:]
df_celso_train = df_celso_train.loc[:maxRows[0]-1,:]
df_eliezer_train = df_eliezer_train.loc[:maxRows[0]-1,:]
df_evandro_train.head()
###Output
_____no_output_____
###Markdown
Standardizing the data for training datasets
###Code
def standardize(df):
# instanciate StandardScaler object
scaler = StandardScaler()
# compute the mean and std to be used for later scaling
scaler.fit(df)
# perform standardization by centering and scaling
scaled_features = scaler.transform(df)
return pd.DataFrame(scaled_features)
# standardizing training datasets
df_evandro_train = standardize(df_evandro_train)
df_celso_train = standardize(df_celso_train)
df_eliezer_train = standardize(df_eliezer_train)
df_evandro_train.head()
###Output
_____no_output_____
###Markdown
Running PCA on training datasets
###Code
# applying PCA and concat on train datasets
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
principalComponents = pca.fit_transform(df_evandro_train)
df_evandro_train_pca = pd.DataFrame(data = principalComponents)
# labeling observations
df_evandro_train_pca['Label'] = [1 for s in range(df_evandro_train_pca.shape[0])]
principalComponents = pca.fit_transform(df_celso_train)
df_celso_train_pca = pd.DataFrame(data = principalComponents)
# labeling observations
df_celso_train_pca['Label'] = [0 for s in range(df_celso_train_pca.shape[0])]
principalComponents = pca.fit_transform(df_eliezer_train)
df_eliezer_train_pca = pd.DataFrame(data = principalComponents)
# labeling observations
df_eliezer_train_pca['Label'] = [0 for s in range(df_eliezer_train_pca.shape[0])]
df_shuffle_train = pd.concat([df_evandro_train_pca, df_celso_train_pca, df_eliezer_train_pca])
df_shuffle_train = df_shuffle_train.sample(frac=1).reset_index(drop=True)
df_shuffle_train.head()
df_evandro_train_pca.head()
###Output
_____no_output_____
###Markdown
Creating validation data
###Code
df_evandro_test = createFeatures(df_evandro_test)
#df_celso_test = createFeatures(df_celso_test)
#df_eliezer_test = createFeatures(df_eliezer_test)
df_evandro_test = standardize(df_evandro_test)
#df_celso_test = standardize(df_celso_test)
#df_eliezer_test = standardize(df_eliezer_test)
df_evandro_test.head()
# running PCA on test data
pca = PCA(n_components=3)
principalComponents = pca.fit_transform(df_evandro_test)
df_evandro_test_pca = pd.DataFrame(data = principalComponents)
# labeling observations
df_evandro_test_pca['Label'] = [1 for s in range(df_evandro_test_pca.shape[0])]
#principalComponents = pca.fit_transform(df_celso_test)
#df_celso_test_pca = pd.DataFrame(data = principalComponents)
# labeling observations
#df_celso_test_pca['Label'] = [0 for s in range(df_celso_test_pca.shape[0])]
#principalComponents = pca.fit_transform(df_eliezer_test)
#df_eliezer_test_pca = pd.DataFrame(data = principalComponents)
# labeling observations
#df_eliezer_test_pca['Label'] = [0 for s in range(df_eliezer_test_pca.shape[0])]
#df_shuffle_test = pd.concat([df_evandro_test_pca, df_celso_test_pca, df_eliezer_test_pca])
#df_shuffle_test = df_shuffle_test.sample(frac=1).reset_index(drop=True)
df_shuffle_test = df_evandro_test_pca.sample(frac=1).reset_index(drop=True)
print(df_shuffle_test.shape)
df_shuffle_test.head()
X_train = df_shuffle_train.drop('Label', axis=1)
Y_train = df_shuffle_train['Label']
X_test = df_shuffle_test.drop('Label', axis=1)
Y_test = df_shuffle_test['Label']
# looking for the K value which has optimal error rate
error_rate = []
for i in range(1,40):
knn = KNeighborsClassifier(n_neighbors=i)
knn.fit(X_train, Y_train)
Y_pred = knn.predict(X_test)
error_rate.append(np.mean(Y_pred != Y_test))
plt.figure(figsize=(10,6))
plt.plot(range(1,40), error_rate, color='blue', lw=1, ls='dashed', marker='o', markerfacecolor='red')
plt.title('Error Rate vs. K Value')
plt.xlabel('K')
plt.ylabel('Error Rate')
knn = KNeighborsClassifier(n_neighbors=9)
knn.fit(X_train, Y_train)
pred = knn.predict(X_test)
print("Accuracy: {}%".format(round(accuracy_score(Y_test, pred)*100,2)))
###Output
Accuracy: 88.94%
|
results/notebooks/wind_ppas/state_to_respondent_id_association_midwest_ppa.ipynb | ###Markdown
Connect the EIA923 plant_state with the FERC1 respondent_id & Determine Midwest Purchase Power Prices
###Code
import sys
import os
import numpy as np
import pandas as pd
sys.path.append(os.path.abspath(os.path.join('..','..')))
from pudl import pudl, ferc1, eia923
from pudl import models, models_ferc1, models_eia923
from pudl import settings, constants
import matplotlib.pyplot as plt
%matplotlib inline
pudl_engine = pudl.connect_db()
###Output
_____no_output_____
###Markdown
Connecting the EIA923 plant_state with the FERC1 respondent_id Pull in all the tables required to associate the EIA923 plant_state with the FERC1 respondent_id.
###Code
plants_eia = pd.read_sql('''SELECT * FROM plants_eia''', pudl_engine)
plant_state_eia923 = pd.read_sql('''SELECT plant_id, plant_state \
FROM plants_eia923''', pudl_engine)
utility_ids_ferc1 = pd.read_sql('''SELECT * FROM utilities_ferc''', pudl_engine)
utility_ids_ferc1.rename(columns={'util_id_pudl': 'utility_id_pudl'},
inplace=True)
util_plant_assn_pudl = pd.read_sql('''SELECT utility_id, plant_id \
FROM util_plant_assn''', pudl_engine)
util_plant_assn_pudl.rename(columns={'plant_id': 'plant_id_pudl', 'utility_id': 'utility_id_pudl'},
inplace=True)
###Output
_____no_output_____
###Markdown
Merge these tables from plant_state/plant_id to plant_state/operator_id to plant_state/operator_id/utility_id_pudl to plant_state/utility_id_pudl/respondent_id
###Code
plants_eia_compiled = plants_eia.merge(plant_state_eia923, on='plant_id',
how='left')
utility_state_assn = plants_eia_compiled.merge(util_plant_assn_pudl, on='plant_id_pudl',
how='left')
utility_state_assn.drop_duplicates(['utility_id_pudl', 'plant_state'], inplace=True)
utility_state_assn = utility_state_assn[['utility_id_pudl', 'plant_state']]
utility_state_assn_ferc1 = utility_state_assn.merge(utility_ids_ferc1[['utility_id_pudl', 'respondent_id']],
on='utility_id_pudl', how='left')
###Output
_____no_output_____
###Markdown
Purchased Power tablePull in the purchased power table. Then merge the plant_state on the respondent_id.
###Code
purchased_power_ferc1 = pd.read_sql('''SELECT *\
FROM purchased_power_ferc1''', pudl_engine)
purchased_power_ferc1_states = purchased_power_ferc1.merge(utility_state_assn_ferc1, on='respondent_id', how='left')
purchased_power_ferc1_states = purchased_power_ferc1_states[purchased_power_ferc1_states.mwh_purchased != 0]
purchased_power_ferc1_states['calculated_cost_per_mwh'] = \
((purchased_power_ferc1_states['settlement_total'])/(purchased_power_ferc1_states['mwh_purchased']))
###Output
_____no_output_____
###Markdown
Determine which statistical_classification contains wind
###Code
mask = purchased_power_ferc1_states['authority_company_name'].str.contains('Wind|Renew|Solar')
zc = purchased_power_ferc1_states[mask]
stat_class = zc.statistical_classification.drop_duplicates()
for stat in stat_class:
print(stat, len(zc[zc.statistical_classification == stat]))
for stat in stat_class:
print(stat, len(purchased_power_ferc1_states[purchased_power_ferc1_states.statistical_classification == stat]))
###Output
LF 2057
IU 1265
OS 24014
LU 18048
SF 16935
AD 2261
###Markdown
Missouri Select only the purchased power associated with Missouri, then keep only the sellers whose names suggest renewable generation.
###Code
ppa_mo = purchased_power_ferc1_states[purchased_power_ferc1_states.plant_state == 'IA']
mask = ppa_mo['authority_company_name'].str.contains('Wind|Renew|Nextera')
renew = ppa_mo[mask]
###Output
_____no_output_____
###Markdown
Graph Missouri Purchased Power
###Code
f, (ax1) = plt.subplots(1)
ax1.hist(renew.calculated_cost_per_mwh,bins=100, range=(1,50), weights=renew.mwh_purchased)
ax1.set_xlabel('purchased price ($ per mWh)', size=18)
ax1.yaxis.set_tick_params(labelsize=15)
ax1.xaxis.set_tick_params(labelsize=15)
ax1.text(.6, .55, 'Mean = ${:.2f} per MWh'.format(renew.calculated_cost_per_mwh.mean()), transform=ax1.transAxes, size=15)
plt.text(-0.1, 1.2,'Missouri PPA Price', ha='center',
va='top', transform=ax1.transAxes, fontsize=24)
plt.tick_params(axis='both', which='major', labelsize=15)
f.subplots_adjust(left=None, bottom=None, right=1.9, top=None, wspace=None, hspace=None)
plt.show()
###Output
_____no_output_____
###Markdown
Midwest States
###Code
midwest_states = ['MO','NE','IA','IL','AR','OK','KS']
ppa_midwest = pd.DataFrame()
for state in midwest_states:
ppa_state = purchased_power_ferc1_states[(purchased_power_ferc1_states.plant_state == state)]
ppa_midwest = ppa_midwest.append(ppa_state)
mask = ppa_midwest['authority_company_name'].str.contains('Wind|Renew|Nextera')
renew_midwest = ppa_midwest[mask]
renew_midwest.to_csv('./midwest_purchased_power.csv', index=False)
###Output
_____no_output_____
###Markdown
Graph the total region purchased power
###Code
f, (ax1) = plt.subplots(1, dpi=100)
f.set_figwidth(10)
f.set_figheight(4)
ax1.hist(renew_midwest.calculated_cost_per_mwh,bins=100, range=(15,90), weights=renew_midwest.mwh_purchased)
ax1.set_xlabel('purchased price ($ per mWh)')
ax1.set_ylabel('MWh')
ax1.set_title('Midwest Renewable PPA Prices', size=18)
plt.show()
###Output
_____no_output_____
###Markdown
Graph the yearly purchases for this region
###Code
years = ppa_midwest.report_year.unique()
f, axarr = plt.subplots(len(years), dpi=100)
f.set_figwidth(10)
f.set_figheight(4*len(years))
for year, ax in zip(years, axarr):
yearly_cost = renew_midwest.calculated_cost_per_mwh[renew_midwest.report_year == year]
yearly_mwh = renew_midwest.mwh_purchased[renew_midwest.report_year == year]
ax.text(.6, .55, 'Mean = ${:.2f} per MWh'.format(yearly_cost.mean()), transform=ax.transAxes, size=15)
ax.text(.65, .45, 'Total {:.0f} GWh'.format(yearly_mwh.sum()/1000), transform=ax.transAxes, size=15)
ax.hist(yearly_cost,
bins=100, range=(15,90), weights=yearly_mwh)
ax.set_xlabel('purchased price ($ per mWh)')
ax.set_ylabel('MWh')
ax.set_title('Midwest Renewable Purchased Power Prices {}'.format(year))
plt.tight_layout()
###Output
_____no_output_____ |
examples/two_step_hashin_shtrikman_interpolated.ipynb | ###Markdown
Define isotropic constituents
###Code
inclusion = mechkit.material.Isotropic(E=inp["E_f"], nu=inp["nu_f"])
matrix = mechkit.material.Isotropic(E=inp["E_m"], nu=inp["nu_m"])
###Output
_____no_output_____
###Markdown
Define orientation averager and polarization
###Code
averager = mechmean.orientation_averager.AdvaniTucker(N4=inp["N4"])
P_func = mechmean.hill_polarization.Factory().needle
###Output
_____no_output_____
###Markdown
Homogenize
###Code
input_dict = {
"phases": {
"inclusion": {
"material": inclusion,
"volume_fraction": inp["c_f"],
},
"matrix": {
"material": matrix,
"volume_fraction": 1.0 - inp["c_f"],
},
},
"k": 1.0 / 2.0,
"averaging_func": averager.average,
}
hashin = mechmean.approximation.Kehrer2019(**input_dict)
C_eff = hashin.calc_C_eff()
print("Effective stiffness two step Hashin Shtrikman")
print(C_eff)
###Output
Effective stiffness two step Hashin Shtrikman
Effective_stiffness(upper=array([[ 1.79306622e+01, 6.79183199e+00, 6.44049100e+00, -7.92491438e-03, -1.22458610e-01, 2.79693636e-01],
[ 6.79183199e+00, 1.66571351e+01, 6.50198561e+00, -1.44257800e-02, -9.10987462e-03, 2.56586798e-01],
[ 6.44049100e+00, 6.50198561e+00, 1.43808190e+01, -2.85337268e-03, -4.82239343e-02, -3.03802346e-02],
[-7.92491438e-03, -1.44257800e-02, -2.85337268e-03, 8.56565464e+00, 1.06716825e-01, -7.33643806e-02],
[-1.22458610e-01, -9.10987462e-03, -4.82239343e-02, 1.06716825e-01, 8.95472773e+00, -1.95087657e-02],
[ 2.79693636e-01, 2.56586798e-01, -3.03802346e-02, -7.33643806e-02, -1.95087657e-02, 1.09913269e+01]]), lower=array([[ 1.31958542e+01, 6.53652351e+00, 5.06275498e+00, -2.56586400e-02, -2.11023481e-01, 4.35412759e-01],
[ 6.53652351e+00, 1.13419501e+01, 5.07695238e+00, -2.30964777e-02, -5.61794756e-02, 3.73083046e-01],
[ 5.06275498e+00, 5.07695238e+00, 8.76711885e+00, 8.96494982e-03, -8.82286689e-03, -1.95238689e-02],
[-2.56586400e-02, -2.30964777e-02, 8.96494982e-03, 3.73696055e+00, 1.40581209e-02, -9.26497992e-02],
[-2.11023481e-01, -5.61794756e-02, -8.82286689e-03, 1.40581209e-02, 3.84729163e+00, -3.93273718e-02],
[ 4.35412759e-01, 3.73083046e-01, -1.95238689e-02, -9.26497992e-02, -3.93273718e-02, 7.12215158e+00]]))
|
Calculate_AUC.ipynb | ###Markdown
Jupyter notebook. This notebook illustrates the code used to generate the single and accumulated AUC for selected proteins as shown in Fig. 1b of the paper **"Data independent acquisition mass spectrometry in severe Rheumatic Heart Disease (RHD) identifies a proteomic signature showing ongoing inflammation and effectively classifying RHD cases"**. Author: **Jing Yang**. Date: **17/11/2021**. Contact: [email protected]
###Code
library(caret)
library(data.table)
library(tidyverse)
library(Boruta)
library(DescTools)
library(broom)
sessionInfo()
###Output
_____no_output_____
###Markdown
Read log2 scaled protein expression data, log2 fold change data and the mapping between UniProtID and protein names
###Code
### data is log2 scaled protein expression data
### fold change is generated from the notebook "get_foldchange.ipynb"
data <- read.csv(file='Data/RHD_data_filtered.csv', stringsAsFactors = FALSE)
foldchange_data <- read.csv('Data/Protein_withfoldchange.csv', header=TRUE)
protein_withname <- read.table('Data/protein_withname.txt', header=TRUE)
head(foldchange_data)
head(data)
data[is.na(data)] <- 0
###Output
_____no_output_____
###Markdown
Read baseline data for all the samples
###Code
baseline_data <- fread('Data/Demographic_info.csv')
head(baseline_data)
###Output
_____no_output_____
###Markdown
Proteins selected from Boruta algorithm are imported directly
###Code
load(file='Data/Boruta_results_2108.RData')
result_allsample <- attStats(Boruta.allsample) %>% filter(decision %in% 'Confirmed') %>% mutate(UniProtID=rownames(.)) %>% arrange(desc(medianImp))
proteins_confirmed <- result_allsample$UniProtID
###Output
_____no_output_____
###Markdown
Start calculating the ROC for each protein
###Code
ROC_single <- data.table()
coef_single <- data.table()
coef_single_oddsratio <- data.table()
ci_lower <- list()
ci_upper <- list()
pvalue_single <- list()
for (ii in 1:length(proteins_confirmed))
{
proteins_forlogistic <- proteins_confirmed[ii]
tmp <- data %>% select(StollerID, any_of(proteins_forlogistic), Group)
tmp[is.na(tmp)] <- 0
joined_data <- inner_join(tmp, baseline_data) %>% select(Group, Age, BMI, Gender, any_of(proteins_forlogistic))
joined_data$Group <- factor(joined_data$Group, levels=c('Case','Control'))
levels(joined_data$Group) <- c(1,0)
joined_data_clean <- joined_data[complete.cases(joined_data),] %>% mutate(BMIAge = BMI * Age)
#joined_data1_clean$BMIAge <- joined_data1_clean$BMI * joined_data_clean$Age
mf1 <- glm(Group~., data=joined_data_clean, family=binomial, na.action = na.pass)
coef_single <- rbind(coef_single, (tidy(mf1) %>% filter(term %in% proteins_forlogistic)))
#print(coef_single)
tmp_oddsratio <- as.data.frame(exp(cbind(coef(mf1), confint(mf1)))) %>% filter(rownames(.) %in% proteins_forlogistic)
coef_single_oddsratio <- rbind(coef_single_oddsratio, tmp_oddsratio)
ROC_single <- rbind(ROC_single, Cstat(mf1))
print(proteins_forlogistic)
print(Cstat(mf1))
}
coef_single_oddsratio$UniProtID <- proteins_confirmed
coef_single_oddsratio$Pvalue <- coef_single$p.value
coef_single_oddsratio$AUC <- ROC_single$x
head(coef_single_oddsratio)
###Output
_____no_output_____
###Markdown
Generate Table 2 illustrated in the paper
###Code
names(coef_single_oddsratio)[1] <- c('OddsRatio')
ROC_single <- left_join(left_join(coef_single_oddsratio, foldchange_data %>% select(-c(p_value, t_value))), result_allsample %>%
select(UniProtID, meanImp)) %>% select(UniProtID, ProteinName, mean_Case, mean_Control, log2foldchange, meanImp,
OddsRatio, everything()) %>% mutate_if(is.numeric, format, digits=4)
ROC_single
write.table(file='Data/ROC_for_single_protein.csv',ROC_single, quote=F, row.names=F, sep=',')
###Output
_____no_output_____
###Markdown
Calculate accumulated ROC for selected proteins
###Code
ROC_accummulate <- data.table()
for (ii in 1:length(proteins_confirmed)){
proteins_forlogistic <- proteins_confirmed[1:ii]
tmp <- data %>% select(StollerID, any_of(proteins_forlogistic), Group)
tmp[is.na(tmp)] <- 0
joined_data <- inner_join(tmp, baseline_data) %>% select(Group, Age, BMI, Gender, any_of(proteins_forlogistic))
joined_data$Group <- factor(joined_data$Group, levels=c('Case','Control'))
levels(joined_data$Group) <- c(1,0)
joined_data_clean <- joined_data[complete.cases(joined_data),] %>% mutate(BMIAge = BMI * Age)
mf1 <- glm(Group~., data=joined_data_clean, family=binomial, na.action = na.pass)
print(proteins_forlogistic)
print(Cstat(mf1))
ROC_accummulate <- rbind(ROC_accummulate, round(Cstat(mf1),3))
}
ROC_accummulate$UniProtID <- proteins_confirmed
names(ROC_accummulate) <- c('AUC','UniProtID')
ROC_accummulate <- left_join(ROC_accummulate, protein_withname) %>% select(UniProtID, ProteinName, AUC)
head(ROC_accummulate)
ROC_accummulate$ProteinName <- factor(ROC_accummulate$ProteinName, levels=ROC_accummulate$ProteinName)
###Output
_____no_output_____
###Markdown
Generate Fig 1b
###Code
ggplot(ROC_accummulate, aes(x=ProteinName, y=AUC)) + geom_point(col='red', size=3) +
annotate("text", x = 2.5, y = 1, label = "b", size=8) +
xlab('Proteins') + ylab('Cumulative AUC') + theme(panel.background=element_blank(),#panel.background=element_rect(fill='white',color='black',linetype=1),
#panel.grid.major=element_line(color='grey', size=0.1),
axis.line=element_line(size=1),text=element_text(size=14, face="bold"),
axis.text.x=element_text(size=9, angle=90, face='bold', hjust=0.95, vjust=0.2), axis.title=element_text(size=14, face="bold"),
strip.text.x=element_blank(),strip.text.y=element_blank(), strip.background=element_blank())
###Output
_____no_output_____ |
Anomaly_Detection_RealTime/Anomaly_Detection_RealTime.ipynb | ###Markdown
POINT ANOMALIES: RANDOM WALKS
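The cells below rely on an import cell that is not shown in this excerpt; a minimal version is sketched here as an assumption (tsmoothie provides the simulators and smoothers, celluloid the GIF camera). `plot_history` is a plotting helper defined in the original notebook and is not reproduced.
###Code
# Assumed imports for the cells below (minimal sketch)
import numpy as np
import matplotlib.pyplot as plt
from collections import defaultdict
from functools import partial
from tqdm import tqdm
from celluloid import Camera
from tsmoothie.utils_func import sim_randomwalk, sim_seasonal_data
from tsmoothie.smoother import ConvolutionSmoother, ExponentialSmoother, DecomposeSmoother
###Output
_____no_output_____
###Markdown
Generate the random-walk series and flag points that fall outside the sliding-window confidence band.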
###Code
### GENERATE DATA ###
np.random.seed(42)
n_series, timesteps = 3, 200
data = sim_randomwalk(n_series=n_series, timesteps=timesteps,
process_noise=10, measure_noise=30)
data.shape
plt.plot(data.T)
np.set_printoptions(False)
### SLIDING WINDOW PARAMETER ###
window_len = 20
### SIMULATE PROCESS REAL-TIME AND CREATE GIF ###
fig = plt.figure(figsize=(18,10))
camera = Camera(fig)
axes = [plt.subplot(n_series,1,ax+1) for ax in range(n_series)]
series = defaultdict(partial(np.ndarray, shape=(n_series,1), dtype='float32'))
for i in tqdm(range(timesteps+1), total=(timesteps+1)):
if i>window_len:
smoother = ConvolutionSmoother(window_len=window_len, window_type='ones')
smoother.smooth(series['original'][:,-window_len:])
series['smooth'] = np.hstack([series['smooth'], smoother.smooth_data[:,[-1]]])
_low, _up = smoother.get_intervals('sigma_interval', n_sigma=2)
series['low'] = np.hstack([series['low'], _low[:,[-1]]])
series['up'] = np.hstack([series['up'], _up[:,[-1]]])
is_anomaly = np.logical_or(
series['original'][:,-1] > series['up'][:,-1],
series['original'][:,-1] < series['low'][:,-1]
).reshape(-1,1)
if is_anomaly.any():
series['ano_id'] = np.hstack([series['ano_id'], is_anomaly*i]).astype(int)
for s in range(n_series):
pltargs = {k:v[s,:] for k,v in series.items()}
plot_history(axes[s], i, is_anomaly[s], window_len,
**pltargs)
camera.snap()
if i>=timesteps:
continue
series['original'] = np.hstack([series['original'], data[:,[i]]])
print('CREATING GIF...') # it may take a few seconds
camera._photos = [camera._photos[-1]] + camera._photos
animation = camera.animate()
animation.save('animation1.gif')
plt.close(fig)
print('DONE')
### PLOT FINAL RESULT ###
fig = plt.figure(figsize=(18,10))
axes = [plt.subplot(n_series,1,ax+1) for ax in range(n_series)]
for i,ax in enumerate(axes):
posrange = np.arange(window_len,timesteps)
ax.plot(series['original'][i,1:], '.k')
ax.plot(posrange, series['smooth'][i,1:], c='blue', linewidth=3)
ax.fill_between(posrange,
series['low'][i,1:], series['up'][i,1:],
color='blue', alpha=0.2)
ano_id = series['ano_id'][i][series['ano_id'][i] != 0] -1
if len(ano_id)>0:
ax.scatter(ano_id, series['original'][i,1:][ano_id],
c='red', alpha=1.)
###Output
_____no_output_____
###Markdown
POINT ANOMALIES: SEASONAL DATA WITHOUT TREND
###Code
### GENERATE DATA ###
np.random.seed(42)
n_series, timesteps = 3, 200
data = sim_seasonal_data(n_series=n_series, timesteps=timesteps,
freq=24, measure_noise=20, amp=[30,40,50])
data.shape
plt.plot(data.T)
np.set_printoptions(False)
### SLIDING WINDOW PARAMETER ###
window_len = 20
### SIMULATE PROCESS REAL-TIME AND CREATE GIF ###
fig = plt.figure(figsize=(18,10))
camera = Camera(fig)
axes = [plt.subplot(n_series,1,ax+1) for ax in range(n_series)]
series = defaultdict(partial(np.ndarray, shape=(n_series,1), dtype='float32'))
for i in tqdm(range(timesteps+1), total=(timesteps+1)):
if i>window_len:
smoother = ExponentialSmoother(window_len=window_len//2, alpha=0.4)
smoother.smooth(series['original'][:,-window_len:])
series['smooth'] = np.hstack([series['smooth'], smoother.smooth_data[:,[-1]]])
_low, _up = smoother.get_intervals('sigma_interval', n_sigma=2)
series['low'] = np.hstack([series['low'], _low[:,[-1]]])
series['up'] = np.hstack([series['up'], _up[:,[-1]]])
is_anomaly = np.logical_or(
series['original'][:,-1] > series['up'][:,-1],
series['original'][:,-1] < series['low'][:,-1]
).reshape(-1,1)
if is_anomaly.any():
series['ano_id'] = np.hstack([series['ano_id'], is_anomaly*i]).astype(int)
for s in range(n_series):
pltargs = {k:v[s,:] for k,v in series.items()}
plot_history(axes[s], i, is_anomaly[s], window_len,
**pltargs)
camera.snap()
if i>=timesteps:
continue
series['original'] = np.hstack([series['original'], data[:,[i]]])
print('CREATING GIF...') # it may take a few seconds
camera._photos = [camera._photos[-1]] + camera._photos
animation = camera.animate()
animation.save('animation2.gif')
plt.close(fig)
print('DONE')
### PLOT FINAL RESULT ###
fig = plt.figure(figsize=(18,10))
axes = [plt.subplot(n_series,1,ax+1) for ax in range(n_series)]
for i,ax in enumerate(axes):
posrange = np.arange(window_len,timesteps)
ax.plot(series['original'][i,1:], '.k')
ax.plot(posrange, series['smooth'][i,1:], c='blue', linewidth=3)
ax.fill_between(posrange,
series['low'][i,1:], series['up'][i,1:],
color='blue', alpha=0.2)
ano_id = series['ano_id'][i][series['ano_id'][i] != 0] -1
if len(ano_id)>0:
ax.scatter(ano_id, series['original'][i,1:][ano_id],
c='red', alpha=1.)
###Output
_____no_output_____
###Markdown
PATTERN ANOMALIES: SEASONAL DATA WITH TREND
###Code
### GENERATE DATA ###
np.random.seed(42)
n_series, timesteps = 3, 600
data = sim_randomwalk(n_series=n_series, timesteps=timesteps,
process_noise=1, measure_noise=0)
seasons = sim_seasonal_data(n_series=n_series, timesteps=timesteps,
freq=24, measure_noise=4, level=0, amp=10)
data = data + seasons
plt.plot(data.T)
np.set_printoptions(False)
### SLIDING WINDOW PARAMETER ###
window_len = 24*5
### SIMULATE PROCESS REAL-TIME AND CREATE GIF ###
fig = plt.figure(figsize=(18,10))
camera = Camera(fig)
axes = [plt.subplot(n_series,1,ax+1) for ax in range(n_series)]
series = defaultdict(partial(np.ndarray, shape=(n_series,1), dtype='float32'))
for i in tqdm(range(timesteps+1), total=(timesteps+1)):
if i>window_len:
smoother = DecomposeSmoother(smooth_type='convolution', periods=24,
window_len=window_len//3, window_type='ones')
smoother.smooth(series['original'][:,-window_len:])
series['smooth'] = np.hstack([series['smooth'], smoother.smooth_data[:,[-1]]])
_low, _up = smoother.get_intervals('sigma_interval', n_sigma=2.5)
series['low'] = np.hstack([series['low'], _low[:,[-1]]])
series['up'] = np.hstack([series['up'], _up[:,[-1]]])
is_anomaly = np.logical_or(
series['original'][:,-1] > series['up'][:,-1],
series['original'][:,-1] < series['low'][:,-1]
).reshape(-1,1)
if is_anomaly.any():
series['ano_id'] = np.hstack([series['ano_id'], is_anomaly*i]).astype(int)
for s in range(n_series):
pltargs = {k:v[s,:] for k,v in series.items()}
plot_history(axes[s], i, is_anomaly[s], window_len,
**pltargs)
camera.snap()
if i>=timesteps:
continue
series['original'] = np.hstack([series['original'], data[:,[i]]])
print('CREATING GIF...') # it may take a few seconds
camera._photos = [camera._photos[-1]] + camera._photos
animation = camera.animate()
animation.save('animation3.gif')
plt.close(fig)
print('DONE')
### PLOT FINAL RESULT ###
fig = plt.figure(figsize=(18,10))
axes = [plt.subplot(n_series,1,ax+1) for ax in range(n_series)]
for i,ax in enumerate(axes):
posrange = np.arange(window_len,timesteps)
ax.plot(series['original'][i,1:], '.k')
ax.plot(posrange, series['smooth'][i,1:], c='blue', linewidth=3)
ax.fill_between(posrange,
series['low'][i,1:], series['up'][i,1:],
color='blue', alpha=0.2)
ano_id = series['ano_id'][i][series['ano_id'][i] != 0] -1
if len(ano_id)>0:
ax.scatter(ano_id, series['original'][i,1:][ano_id],
c='red', alpha=1.)
###Output
_____no_output_____
###Markdown
PATTERN ANOMALIES: SEASONAL DATA WITH TREND AND SHIFT
###Code
### GENERATE DATA ###
data[:,300:380] = data[:,300:380] + 40
plt.plot(data.T)
np.set_printoptions(False)
### SIMULATE PROCESS REAL-TIME AND CREATE GIF ###
fig = plt.figure(figsize=(18,9))
camera = Camera(fig)
axes = [plt.subplot(n_series,1,ax+1) for ax in range(n_series)]
series = defaultdict(partial(np.ndarray, shape=(n_series,1), dtype='float32'))
recovered = np.copy(data)
for i in tqdm(range(timesteps+1), total=(timesteps+1)):
if i>window_len:
smoother = DecomposeSmoother(smooth_type='convolution', periods=24,
window_len=window_len, window_type='ones')
smoother.smooth(series['recovered'][:,-window_len:])
series['smooth'] = np.hstack([series['smooth'], smoother.smooth_data[:,[-1]]])
_low, _up = smoother.get_intervals('sigma_interval', n_sigma=4)
series['low'] = np.hstack([series['low'], _low[:,[-1]]])
series['up'] = np.hstack([series['up'], _up[:,[-1]]])
is_anomaly = np.logical_or(
series['original'][:,-1] > series['up'][:,-1],
series['original'][:,-1] < series['low'][:,-1]
).reshape(-1,1)
if is_anomaly.any():
ano_series = np.where(is_anomaly)[0]
series['ano_id'] = np.hstack([series['ano_id'], is_anomaly*i]).astype(int)
recovered[ano_series,i] = smoother.smooth_data[ano_series,[-1]]
for s in range(n_series):
pltargs = {k:v[s,:] for k,v in series.items()}
plot_history(axes[s], i, is_anomaly[s], window_len,
**pltargs)
camera.snap()
if i>=timesteps:
continue
series['original'] = np.hstack([series['original'], data[:,[i]]])
series['recovered'] = np.hstack([series['recovered'], recovered[:,[i]]])
print('CREATING GIF...') # it may take a few seconds
camera._photos = [camera._photos[-1]] + camera._photos
animation = camera.animate()
animation.save('animation4.gif')
plt.close(fig)
print('DONE')
### PLOT FINAL RESULT ###
fig = plt.figure(figsize=(18,10))
axes = [plt.subplot(n_series,1,ax+1) for ax in range(n_series)]
for i,ax in enumerate(axes):
posrange = np.arange(window_len,timesteps)
ax.plot(series['original'][i,1:], '.k')
ax.plot(posrange, series['smooth'][i,1:], c='blue', linewidth=3)
ax.fill_between(posrange,
series['low'][i,1:], series['up'][i,1:],
color='blue', alpha=0.2)
ano_id = series['ano_id'][i][series['ano_id'][i] != 0] -1
if len(ano_id)>0:
ax.scatter(ano_id, series['original'][i,1:][ano_id],
c='red', alpha=1.)
###Output
_____no_output_____ |
50_ppdd.ipynb | ###Markdown
pico_pi_controller.ppdd> System daemon
###Code
# hide
# from nbdev.showdoc import *
# export
from sys import byteorder
from os import getloadavg
from platform import node
from uptime import uptime
import threading, _thread
import argparse, logging
import pigpio, atexit
import datetime, time
import uuid, socketserver
import queue
from CircuitPython_pico_pi_common.codes import *
logger = logging.getLogger()
logging.basicConfig(level = logging.DEBUG)
# export
tcp_address = ('127.0.0.1', 16164)
hostname = bytearray(node(), "utf-8")
def log_txn(fname, message, msg=None, i2c_addr=None):
"""Wrapper for logger."""
id_str = ID_CODE.decode()
hex_addr = ''
if i2c_addr:
hex_addr = str(hex(i2c_addr))
i2c_str = '|'
logger.info('%-4s %-47s %s' % (id_str, fname+': '+hex_addr+message+str(msg or ''), i2c_str))
class PwrLedFlicker(threading.Thread):
def __init__(self, duration):
threading.Thread.__init__(self)
self.duration = duration
def run(self):
fname='run'
try:
with open("/sys/class/leds/led1/brightness", "w") as sys_pwr_led:
for x in range(self.duration * 2):
sys_pwr_led.write("0")
sys_pwr_led.flush()
time.sleep(0.1)
sys_pwr_led.write("1")
sys_pwr_led.flush()
time.sleep(0.4)
sys_pwr_led.close()
except PermissionError:
log_txn(fname,"Must run as root.")
_thread.interrupt_main()
class PPCcHandler(socketserver.BaseRequestHandler):
def handle(self):
"""Handle commands from ppcc; some processed iternally, some sent to PPC"""
fname='handle'
global cmd_queue, awt_queue, cfm_queue
#log_txn(fname,"recvd request from {}".format(self.client_address[0]),
# " on {}".format(threading.current_thread().name))
cmd_len = bytearray(1)
cmd_len = int.from_bytes( self.request.recv(1), byteorder=byteorder)
#log_txn(fname,"recvd command from {}, len: ".format(self.client_address[0]),str(cmd_len)+" bytes:")
command = bytearray(cmd_len)
command = self.request.recv(cmd_len)
cmd_code, i2c_addr, cmdargs, cmd_uid, valid_status = parse_cmd(command)
if valid_status:
log_txn(fname,'',
str(hex(i2c_addr))+' '+CMD_NAME[cmd_code]+' 0'+str(cmdargs)[3:-1]+" UID "+str(hex(int.from_bytes(cmd_uid, byteorder)))[2:])
cmd_queue.put(command)
log_txn(fname,"command queue size:", cmd_queue.qsize() )
# blocking, wait for confirmation of i2c_event_handler acting on command
# todo: implement a timeout if confirm never appears in awt_queue
confirm = bytearray(0)
while confirm != command:
log_txn(fname,"awaitng confirmation...")
confirm = awt_queue.get(block=True,timeout=2)
# put it back if it's not the confirm we're looking for, or if it's only a
# partial confirm, then log it & delete it.
if confirm != command:
# todo: handle partial confirms (only one 2c_addr from 0xFF)
awt_queue.put(confirm)
log_txn(fname,"awaitng queue size:", awt_queue.qsize() )
if confirm == command:
cfm_queue.put(command)
num_bytes_sent = self.request.send(command)
log_txn(fname,"confirm queue size:", cfm_queue.qsize() )
else:
pass #handle error
self.request.close()
@atexit.register
def goodbye():
"""Cancel pigpio event handler, close I2C peripheral & connection to pigpio"""
global e, pi, ppdd_server
fname='goodbye'
if 'e' in globals():
try:
e.cancel()
log_txn(fname,"bsc_i2c event handler cleanup completed.")
except:
log_txn(fname,"bsc_i2c error on event handler cleanup.")
if 'pi' in globals():
try:
pi.stop()
log_txn(fname,"pigpio cleanup completed.")
except:
log_txn(fname,"pigpio error on cleanup.")
if 'ppdd_server' in globals():
try:
ppdd_server.server_close()
log_txn(fname,"TCP serversocket cleanup completed.")
except:
log_txn(fname,"TCP serversocket error on cleanup.")
log_txn(fname,"Exiting.")
def i2c_event_handler(id, tick):
"""Handle register probes."""
fname='i2c_event_handler'
global pi, i2c_addr, bosmang, flicker, cmd_queue, cfm_queue
def reg_idf():
fname='reg_idf'
try:
mcu_uid = d[1:8]
except:
mcu_uid = bytearray(8)
log_txn(fname, "recvd identity: ", mcu_uid.decode())
log_txn(fname, "sendn identity: ", ID_CODE.decode())
s, b, d = pi.bsc_i2c(i2c_addr, ID_CODE)
def reg_bos():
fname='reg_bos'
log_txn(fname, "sending bosmang status: ", str(bool(int.from_bytes(bosmang, byteorder=byteorder))))
s, b, d = pi.bsc_i2c(i2c_addr, bosmang)
def reg_tim():
fname='reg_tim'
dt = datetime.datetime.now()
# send local time converted from UTC of system time
dt = dt.replace(
tzinfo=datetime.timezone(datetime.timedelta(0), "UTC") )
ts = dt.timestamp()
log_txn(fname, "sending time.time(): ", int(ts) )
s, b, d = pi.bsc_i2c(i2c_addr, bytearray(int(ts).to_bytes(4, byteorder)))
def reg_cmd():
fname='reg_cmd'
global cmd_queue, awt_queue
#log_txn(fname,"checking command queue")
if cmd_queue.qsize() > 0:
command=cmd_queue.get()
if command[0] in CMD_NAME.keys():
#log_txn(fname,"sending queud command: ", command )
s, b, d = pi.bsc_i2c(i2c_addr, command)
# place the command in the queue for awaiting confirmation
awt_queue.put(command)
else:
log_txn(fname,"invalid command found in queue: ", command )
else:
s, b, d = pi.bsc_i2c(i2c_addr, bytearray([0]))
#log_txn(fname,"no command")
def reg_cfm():
fname='cfm_cmd'
global awt_queue
i2c_addr = d[0] # address of the ppd acted upon by the command
if d[1] in CMD_INT:
# add PPDevice CFM received to awt_queue (awaiting ppdd confirm); it may be
# only 1 of many i2c_addrs. logic in handle.
            awt_queue.put(d[1:]) # CFM includes echo of entire command
log_txn(fname,str(hex(i2c_addr))+" recvd confirm: ", CMD_NAME[d[0]])
def reg_hos():
fname='reg_hos'
log_txn(fname,
"sending hostname: ", hostname.decode())
s, b, d = pi.bsc_i2c(i2c_addr, bytes([len(hostname)]) + hostname)
def reg_lod():
fname='reg_lod'
load = bytearray("{:04.2f}".format(getloadavg()[0]), "utf-8")
log_txn(fname, "sending loadavg: ", load.decode())
s, b, d = pi.bsc_i2c(i2c_addr, load)
def reg_upt():
fname='reg_upt'
# so the PPC can pause and wait for the long-ish uptime fetch
s, b, d = pi.bsc_i2c(i2c_addr, bytearray(1) )
uptimeba = bytearray(int(uptime()).to_bytes(4, byteorder))
log_txn(fname, "sending uptime: ", int.from_bytes(bytes(uptimeba), byteorder)
)
s, b, d = pi.bsc_i2c(i2c_addr, uptimeba)
def reg_tzn():
fname='reg_tzn'
log_txn(fname, "sending time.timezone(): ", time.timezone, )
s, b, d = pi.bsc_i2c(i2c_addr, bytearray(time.timezone.to_bytes(3, byteorder)))
def reg_flk():
fname='reg_flk'
nonlocal d
log_txn(fname, "Flickering PWR LED for "+str(d[1])+" seconds.")
flicker = PwrLedFlicker(duration=d[1])
flicker.start()
# if FLK was from a command, d will hold that command. echo it back to confirm execution.
if len(d) > REG_VAL_LEN['FLK']:
if d[REG_VAL_LEN['FLK']+1] in CMD_INT:
s, b, d = pi.bsc_i2c(i2c_addr, d[REG_VAL_LEN['FLK']:] )
def reg_clr():
fname='reg_clr'
#log_txn(fname, "CLR recieved, FIFO buffer to be emptied.")
s, b, d = pi.bsc_i2c(i2c_addr, bytearray(0) )
s, b, d = pi.bsc_i2c(i2c_addr)
if b:
#log_txn(fname, "Register probe recvd, status:",s)
if d[0] in REG_NAME.keys():
reg_hndlr = locals()['reg_'+REG_NAME[d[0]].lower()]
#log_txn(fname, ": reg_"+REG_NAME[d[0]].lower())
reg_hndlr()
else:
log_txn(fname, ": unrecognized REG probed: ",d[0])
def main():
fname='main'
global cmd_queue, awt_queue, cfm_queue, i2c_addr, id_str, e, pi, ppdd_server, flicker, bosmang
cmd_queue = queue.Queue() #command queue (commands recvd from ppcc, outgoing to PPC)
awt_queue = queue.Queue() #awaiting confirmation of execution of command queue
cfm_queue = queue.Queue() #confirmed execution of command queue
# flicker power LED on startup, checking for root access
flicker = PwrLedFlicker(duration=2)
flicker.start()
time.sleep(0.1)
# setup pigpio, checking that it's running
pi = pigpio.pi()
if not pi.connected:
log_txn(fname,"pigpiod not running.")
exit()
# handle command-line arguments
parser = argparse.ArgumentParser(description='ppdd argument parser')
parser.add_argument("-a", default='0x13', type=str, help="Default I2C address: 0x13")
parser.add_argument("-b", action="store_true", help="Set bosmang status to True")
args, unknown = parser.parse_known_args()
if unknown:
log_txn(fname,"unrecognized command line arguments: ",unknown)
    # args.a defaults to '0x13'; int(x, 0) infers the base from the '0x' prefix
i2c_addr = int(args.a,0)
if args.b:
bosmang = bytearray(int(1).to_bytes(1, byteorder))
else:
bosmang = bytearray(int(0).to_bytes(1, byteorder))
# setup socket server for incoming commands from ppcc
try:
ppdd_server = socketserver.ThreadingTCPServer(tcp_address, PPCcHandler)
log_txn(fname,"Listening for ppcc commands on TCP port ", tcp_address[1])
except OSError:
log_txn(fname,"TCP Error. Socket in use?")
exit()
threading.Thread(target=ppdd_server.serve_forever).start()
log_txn(fname,"Initializing I2C peripheral on address: ", hex(i2c_addr))
log_txn(fname,"--> hostname: ", hostname.decode())
log_txn(fname,"--> bosmang: ", str(bool(int.from_bytes(bosmang, byteorder=byteorder))))
# setup event handler for incoming I2C messages from PPController
e = pi.event_callback(pigpio.EVENT_BSC, i2c_event_handler)
pi.bsc_i2c(i2c_addr)
while True:
time.sleep(120)
# heartbeat
log_txn(fname,"koradewu ",datetime.datetime.now().isoformat())
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt as e:
# handled by atexit
#exit(str('Exiting.'))
pass
%tb
# hide
try:
from IPython.display import display, Javascript
display(Javascript('IPython.notebook.save_checkpoint();'))
from time import sleep
sleep(0.3)
from nbdev.export import notebook2script
notebook2script()
except ModuleNotFoundError:
pass
"""CircuitPython kernel has no nbdev"""
###Output
_____no_output_____ |
site/ko/federated/tutorials/simulations.ipynb | ###Markdown
 High-performance simulations with TFF This tutorial will describe how to set up high-performance simulations with TFF in a variety of common scenarios. TODO(b/134543154): Populate the content, some of the things to cover here: - using GPUs in a single-machine setup, - multi-machine setup on GCP/GKE, with and without TPUs, - interfacing MapReduce-like backends, - current limitations and when/how they will be relaxed. Before we begin First, make sure your notebook is connected to a backend that has the relevant components (including gRPC dependencies for multi-machine scenarios) compiled. We will now start by loading the MNIST example from the TFF website and declaring a Python function that runs a small experiment loop over a group of 10 clients.
###Code
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow_federated_nightly
!pip install --quiet --upgrade nest_asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import time
import tensorflow as tf
import tensorflow_federated as tff
source, _ = tff.simulation.datasets.emnist.load_data()
def map_fn(example):
return collections.OrderedDict(
x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])
def client_data(n):
ds = source.create_tf_dataset_for_client(source.client_ids[n])
return ds.repeat(10).shuffle(500).batch(20).map(map_fn)
train_data = [client_data(n) for n in range(10)]
element_spec = train_data[0].element_spec
def model_fn():
model = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(784,)),
tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
return tff.learning.from_keras_model(
model,
input_spec=element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
trainer = tff.learning.build_federated_averaging_process(
model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))
def evaluate(num_rounds=10):
state = trainer.initialize()
for _ in range(num_rounds):
t1 = time.time()
state, metrics = trainer.next(state, train_data)
t2 = time.time()
print('metrics {m}, round time {t:.2f} seconds'.format(
m=metrics, t=t2 - t1))
###Output
_____no_output_____
###Markdown
 Single-machine simulations The following is the basic setup.
###Code
evaluate()
###Output
metrics <sparse_categorical_accuracy=0.13858024775981903,loss=3.0073554515838623>, round time 3.59 seconds
metrics <sparse_categorical_accuracy=0.1796296238899231,loss=2.749046802520752>, round time 2.29 seconds
metrics <sparse_categorical_accuracy=0.21656379103660583,loss=2.514779567718506>, round time 2.33 seconds
metrics <sparse_categorical_accuracy=0.2637860178947449,loss=2.312587261199951>, round time 2.06 seconds
metrics <sparse_categorical_accuracy=0.3334362208843231,loss=2.068122386932373>, round time 2.00 seconds
metrics <sparse_categorical_accuracy=0.3737654387950897,loss=1.9268712997436523>, round time 2.42 seconds
metrics <sparse_categorical_accuracy=0.4296296238899231,loss=1.7216310501098633>, round time 2.20 seconds
metrics <sparse_categorical_accuracy=0.4655349850654602,loss=1.6489890813827515>, round time 2.18 seconds
metrics <sparse_categorical_accuracy=0.5048353672027588,loss=1.5485210418701172>, round time 2.16 seconds
metrics <sparse_categorical_accuracy=0.5564814805984497,loss=1.4140453338623047>, round time 2.41 seconds
|
02-syntax.ipynb | ###Markdown
 Syntax Now that we've seen what regular expressions are and what they're good for, let's get down to business. Learning to use regular expressions is mostly about learning regular expression syntax, the special ways we can combine characters to make regular expressions. This notebook will be the bulk of our workshop. Regular expression syntax All regular expressions are composed of two types of characters: * Literals (normal characters) * Metacharacters (special characters) Matching characters exactly Literals match exactly what they are, they mean what they say. For example, the regular expression `Berkeley` will match the string "Berkeley". (It won't match "berkeley", "berkeeley" or "berkely"). Most characters are literals. In the example below, the regular expression `regular` will match the string "regular" exactly.
###Code
import re
pattern = 'regular'
test_string = 'we are practising our regular expressions'
re.findall(pattern, test_string)
###Output
_____no_output_____
###Markdown
 Matching special patterns Metacharacters don't match themselves. Instead, they signal that some out-of-the-ordinary thing should be matched, or they affect other portions of the RE by repeating them or changing their meaning. For example, you might want to find all mentions of "dogs" in a text, but you also want to include "dog". That is, you want to match "dogs" but you don't care if the "s" is or isn't there. Or you might want to find the word "the" but only at the beginning of a sentence, not in the middle. For these out-of-the-ordinary patterns, we use metacharacters. In this workshop, we'll discuss the following metacharacters: . ^ $ * + ? { } [ ] \ | ( )
###Code
pattern = 'dogs?'
test_string = "I like dogs but my dog doesn't like me."
re.findall(pattern, test_string)
pattern = '^the'
test_string = "the best thing about the theatre is the atmosphere"
re.findall(pattern, test_string)
###Output
_____no_output_____
###Markdown
 Our first metacharacters: [ and ] The first metacharacters we’ll look at are [ and ]. They’re used for specifying a character class, which is a set of characters that you wish to match.
###Code
vowel_pattern = '[ab]'
test_string = 'abracadabra'
re.findall(vowel_pattern, test_string)
###Output
_____no_output_____
###Markdown
 Challenge 2 Find all the p's and q's in the test string below.
###Code
test_string = "Quick, there's a large goat filled with pizzaz. Is there a path to the queen of Zanzabar?"
###Output
_____no_output_____
###Markdown
 Challenge 3 Find all the vowels in the test sentence below.
###Code
test_string = 'the quick brown fox jumped over the lazy dog'
###Output
_____no_output_____
###Markdown
 Ranges Characters can be listed individually, or a range of characters can be indicated by giving two characters and separating them by a '-'. For example, `[abc]` will match any of the characters a, b, or c; this is the same as `[a-c]`. Challenge 4 Find all the capital letters in the following string.
###Code
test_string = 'The 44th pPresident of the United States of America was Barack Obama'
###Output
_____no_output_____
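###Markdown
 One possible solution sketch for Challenge 4, using a range inside a character class:
###Code
re.findall(r'[A-Z]', test_string)
###Output
_____no_output_____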
###Markdown
 Complements You can match the characters not listed within the class by complementing the set. This is indicated by including a `^` as the first character of the class; `^` outside a character class will simply match the `^` character. For example, `[^5]` will match any character except `5`.
###Code
everything_but_t = '[^t]'
test_string = 'the quick brown fox jumped over the lazy dog'
re.findall(everything_but_t, test_string)[:5]
###Output
_____no_output_____
###Markdown
 Challenge 5 Find all the consonants in the test sentence below.
###Code
test_string = 'the quick brown fox jumped over the lazy dog'
###Output
_____no_output_____
###Markdown
 Challenge 6 Find all the `^` characters in the following test sentence.
###Code
test_string = """You can match the characters not listed within the class by complementing the set.
This is indicated by including a ^ as the first character of the class;
^ outside a character class will simply match the ^ character.
For example, [^5] will match any character except 5."""
###Output
_____no_output_____
###Markdown
 Matching metacharacters literally Challenge 6 is a bit of a trick. The problem is that we want to match the `^` character, but it's interpreted as a metacharacter, a character which has a special meaning. If we want to literally match the `^`, we have to "escape" its special meaning. For this, we use the `\`. Challenge 7 Find all the square brackets `[` and `]` in the following test string.
###Code
test_string = "The first metacharacters we'll look at are [ and ]."
###Output
_____no_output_____
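###Markdown
 A possible solution sketch for Challenge 7: escape each bracket with a backslash so it is matched literally, even inside a character class.
###Code
re.findall(r'[\[\]]', test_string)
###Output
_____no_output_____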
###Markdown
 Character classes The backslash `\` has another use in regexes, in addition to escaping metacharacters. It's used as the first character in special two-character combinations that have special meanings. These special two-character combinations are really shorthand for sets of characters.

| Character | Meaning | Shorthand for |
|:------------------:|:------------------:|:----------:|
| `\d` | any digit | `[0-9]` |
| `\D` | any non-digit | `[^0-9]` |
| `\s` | any whitespace | `[ \t\n\r\f\v]` |
| `\S` | any non-whitespace | `[^ \t\n\r\f\v]` |
| `\w` | any word | `[a-zA-Z0-9_]` |
| what do you think? | any non-word | `?` |

Now here's a quick tip. When writing regular expressions in Python, use raw strings instead of normal strings. Raw strings are preceded by an `r` in Python code. If we don't, the Python interpreter will try to convert backslashed characters before passing them to the regular expression engine. This will end in tears. You can read more about this [here](https://docs.python.org/3/library/re.html#module-re). Challenge 8 Find all three-digit prices in the following test sentence. Remember the `$` is a metacharacter so needs to be escaped.
###Code
test_string = 'The iPhone X costs over $999, while the Android competitor comes in at around $550.'
###Output
_____no_output_____
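###Markdown
 A possible solution sketch for Challenge 8, combining the escaped `$` with the `\d` shorthand (repetition syntax comes in the next section):
###Code
re.findall(r'\$\d\d\d', test_string)
###Output
_____no_output_____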
###Markdown
 Repeating things Being able to match varying sets of characters is the first thing regular expressions can do that isn’t already possible with the methods available on strings. However, if that was the only additional capability of regexes, they wouldn’t be much of an advance. Another capability is that you can specify that portions of the RE must be repeated a certain number of times.

| Character | Meaning | Example | Matches |
|:---------:|:---------------------:|:-------------:|:------------------------------------:|
| `{n}` | exactly n times | `a{3}` | 'aaa' |
| `{n,m}` | between n and m times | `[1-9]{2,4}` | '12', '123', '1234' |
| `?` | 0 or 1 times | `colou?r` | 'color', 'colour' |
| `*` | 0 or more times | `data!*` | 'data', 'data!', 'data!!', 'data!!!' |
| `+` | 1 or more times | `lo+l` | 'lol', 'lool', 'loool' |

Challenge 9 Find all prices in the following test sentence.
###Code
test_string = """The iPhone X costs over $999, while the Android competitor comes in at around $550.
Apple's MacBook Pro costs $1200, while just a few years ago it was $1700.
A new charger for the MacBook costs over $80.
"""
###Output
_____no_output_____
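###Markdown
 A possible solution sketch for Challenge 9, using `+` so that a price can have any number of digits:
###Code
re.findall(r'\$\d+', test_string)
###Output
_____no_output_____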
###Markdown
 The `re` module in Python The regular expression syntax that we've seen so far covers most of the common use cases. Let's take a break from the syntax, and focus on Python's re module. It has some quirks that we should talk about, after which we'll get back to the syntax. Up until now we've only used `re.findall`. This function takes two arguments, a `pattern` and a `text` to search through. It returns a list of all the substrings in `text` that match `pattern`. Two other common functions are `re.match` and `re.search`. These take the same two arguments as `re.findall`. `re.search` looks through `text` for the **first** occurrence of `pattern`. `re.match` only looks at the start of `text`. Rather than returning a list, these two functions return a `match` object, which contains information about the substring in `text` that matches `pattern`. For example, it gives you the starting and ending index of the substring. If no such matching substring is found, they return `None`.
###Code
price_pattern = r'\$\d+'
test_string = """The iPhone X costs over $999, while the Android competitor comes in at around $550.
Apple's MacBook Pro costs $1200, while just a few years ago it was $1700.
A new charger for the MacBook costs over $80.
"""
m = re.search(price_pattern, test_string)
m
###Output
_____no_output_____
###Markdown
 The `match` object has several methods and attributes; the most important ones are `group()`, `start()`, `end()` and `span()`. `group()` returns the string that matched the regex, `start()` and `end()` return the relevant indices, and `span()` returns the indices as a tuple.
###Code
print(m.group())
print(m.start())
print(m.end())
print(m.span())
###Output
$999
24
28
(24, 28)
###Markdown
 In general, I prefer just using `re.findall`, because I rarely need the information that `match` object instances give. Challenge 10 Write a function called `first_vowel` that takes in a single word, and returns the first vowel. If there is no vowel in the word, it should return the string `"Hey, no vowel!"`.
###Code
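# One possible way to write first_vowel (a sketch, not the only solution):
# re.search returns None when no match is found, so we can test for that.
def first_vowel(word):
    m = re.search(r'[aeiou]', word)
    if m:
        return m.group()
    return "Hey, no vowel!"
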
print(first_vowel('hello'))
print(first_vowel('sky'))
###Output
e
Hey, no vowel!
###Markdown
 Replacing things So far we've just been finding, but I promised you advanced "find and replace"! That's what `re.sub` is for. `re.sub` takes three arguments: a `pattern` to look for, a `replacement` string to replace it with, and a `text` to look for `pattern` in. Challenge 11 Replace all the prices in the test string below with `"one million dollars"`.
###Code
test_string = """The iPhone X costs over $999, while the Android competitor comes in at around $550.
Apple's MacBook Pro costs $1200, while just a few years ago it was $1700.
A new charger for the MacBook costs over $80.
"""
###Output
_____no_output_____
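###Markdown
 A possible solution sketch for Challenge 11, reusing the price pattern from Challenge 9 as the first argument of `re.sub`:
###Code
print(re.sub(r'\$\d+', 'one million dollars', test_string))
###Output
_____no_output_____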
###Markdown
 So far we've used the module-level functions `re.findall` and friends. We can also `compile` a regex into a `pattern` object. The `pattern` object has methods with identical names to the module-level functions. The main benefit shows up when you're searching over huge texts or reusing the same pattern many times, since the pattern only has to be parsed once. It's entirely the same as what we've been doing so far so no need to complicate things. But you'll see it around so it's good to know about.
###Code
vowel_pattern = re.compile(r'[aeiou]')
test_string = 'abracadabra'
vowel_pattern.findall(test_string)
###Output
_____no_output_____
###Markdown
 You might also want to experiment with `re.split`. Challenge 12 You've received a problematic dataset from a fellow researcher, with some data entry errors/discrepancies. How would you use regular expressions to correct these errors? 1. Replace all instances of "district" or "District" with "County". 2. Replace all instances of "Not available" or "[Name] looking up" with numeric codes.
###Code
import os
DATA_DIR = '../data'
fname = os.path.join(DATA_DIR, 'usecase1/problem_dataset.csv')
with open(fname) as f:
text = f.read()
# DO SOME REGEX MAGIC
# cleaned_text = ...
# with open("data/usecase1/cleaned_dataset.csv", "w") as f:
# f.write(cleaned_text)
###Output
_____no_output_____
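###Markdown
 A possible sketch for Challenge 12. The exact patterns depend on how the entries actually appear in `problem_dataset.csv`, and the numeric code `-999` is just an assumed placeholder for illustration.
###Code
# 1. Harmonise "district"/"District" to "County"
cleaned_text = re.sub(r'[dD]istrict', 'County', text)
# 2. Replace "Not available" and "<Name> looking up" entries with an assumed numeric code
cleaned_text = re.sub(r'Not available|\w+ looking up', '-999', cleaned_text)
# with open("data/usecase1/cleaned_dataset.csv", "w") as f:
#     f.write(cleaned_text)
###Output
_____no_output_____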
###Markdown
 Challenge 13 Find all the robot-related words in the following string.
###Code
robot_string = '''Robots are branching out. A new prototype soft robot takes inspiration from plants by growing to explore its environment.
Vines and some fungi extend from their tips to explore their surroundings.
Elliot Hawkes of the University of California in Santa Barbara
and his colleagues designed a bot that works
on similar principles. Its mechanical body
sits inside a plastic tube reel that extends
through pressurized inflation, a method that some
invertebrates like peanut worms (Sipunculus nudus)
also use to extend their appendages. The plastic
tubing has two compartments, and inflating one
side or the other changes the extension direction.
A camera sensor at the tip alerts the bot when it’s
about to run into something.
In the lab, Hawkes and his colleagues
programmed the robot to form 3-D structures such
as a radio antenna, turn off a valve, navigate a maze,
swim through glue, act as a fire extinguisher, squeeze
through tight gaps, shimmy through fly paper and slither
across a bed of nails. The soft bot can extend up to
72 meters, and unlike plants, it can grow at a speed of
10 meters per second, the team reports July 19 in Science Robotics.
The design could serve as a model for building robots
that can traverse constrained environments
This isn’t the first robot to take
inspiration from plants. One plantlike
predecessor was a robot modeled on roots.'''
###Output
_____no_output_____
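###Markdown
 A possible solution sketch for Challenge 13: match "robot"/"Robot" with an optional plural "s". As discussed further below, a pattern like this misses 'Robotics'.
###Code
re.findall(r'\b[Rr]obots?\b', robot_string)
###Output
_____no_output_____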
###Markdown
 Challenge 14 We can use parentheses to capture certain parts of the match of a regular expression.
###Code
price_pattern = pattern = r'\$(\d+)\.(\d{2})'
test_string = "The iPhone X costs over $999.99, while the Android competitor comes in at around $550.50."
m = re.search(price_pattern, test_string)
dollars, cents = m.group(1), m.group(2)
print(dollars)
print(cents)
###Output
999
99
###Markdown
 Use parentheses to group together the area code of a US phone number. Write a function called `area_code` that takes in a string, and if it is a valid US phone number, returns the area code. If not, it should return the string `"Hey, not a phone number!"`. Challenge 15 Parentheses can also be used to group together characters in a regular expression so that metacharacters can apply to the entire group, not just a single character.
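###Markdown
 A possible sketch for the `area_code` challenge. It assumes `XXX-XXX-XXXX` or `(XXX) XXX-XXXX` formats; real phone numbers come in many more shapes.
###Code
def area_code(string):
    m = re.match(r'\(?(\d{3})\)?[- ]\d{3}-\d{4}$', string)
    if m:
        return m.group(1)
    return "Hey, not a phone number!"

print(area_code('415-555-0134'))
print(area_code('Berkeley'))
###Output
_____no_output_____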
###Code
bat_pattern = r'Bat(wo)?man'
test_string = 'Batwoman, Batman and Robin are good friends.'
re.findall(bat_pattern, test_string)
###Output
_____no_output_____
###Markdown
 What went wrong? Well, parentheses have a double life in regular expression syntax. They are used to signal groups like in Challenge 14, but also to let metacharacters apply to those groups. Those two uses interfere with each other. If we want the `?` to apply to the whole `wo` sequence but still get back the whole substring that matched, we have to use a non-capturing group.
###Code
bat_pattern = r'Bat(?:wo)?man'
test_string = 'Batwoman, Batman and Robin are good friends.'
re.findall(bat_pattern, test_string)
###Output
_____no_output_____
###Markdown
 Look back at challenge 13, where we looked for words to do with robots. We missed 'Robotics'. Using your newfound non-capturing group skills, correct this (a possible correction follows below). Challenging challenges Jane Eyre I've downloaded the entire text of Charlotte Brontë's _Jane Eyre_ from [Project Gutenberg](https://www.gutenberg.org/). Imagine you're a literary scholar studying various aspects of Brontë's work. You might begin by extracting out various pieces of information from this book, and comparing them with other works. Here are some tasks you might need to do (a sketch for a couple of them follows the loading cell below).

- Find all years (e.g. 1847).
- Find all direct quotes (text between quotation marks).
- Find all Mr.'s, Mrs.'s and Misses (including the name that comes after it).
- Find all lines that use the same word at least twice.
- Write a function that takes in a plural noun and returns the singular version.
- Write a function that takes in a past tense verb and returns the base form.
- Find the relative frequencies of I, you, she, he, we and they.
- Find all URLs (before and after the actual text, there's some legal information from Project Gutenberg).
- Find all email addresses (see above)
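###Markdown
 A possible correction (a sketch): a non-capturing group makes the "ic" optional without losing the full match, so 'Robotics' is caught as well.
###Code
re.findall(r'\b[Rr]obot(?:ic)?s?\b', robot_string)
###Output
_____no_output_____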
###Code
fname = os.path.join(DATA_DIR, 'usecase3/jane_eyre.txt')
with open(fname) as f:
text = f.read()
###Output
_____no_output_____
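###Markdown
 As a starting point for the _Jane Eyre_ tasks, here is one possible sketch for two of them: years, and Mr./Mrs./Miss followed by a capitalised name. The other tasks can be built up in the same way.
###Code
years = re.findall(r'\b1[0-9]{3}\b', text)
titles = re.findall(r'(?:Mr\.|Mrs\.|Miss) [A-Z]\w+', text)
years[:10], titles[:10]
###Output
_____no_output_____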
###Markdown
 Reddit I've also included a dataset (in csv format) from [Reddit](https://www.reddit.com/). Regular expressions are really useful for working with text data from the web. In the variable `questions`, you'll find all sorts of questions that people ask on the Internet. Find out (one possible sketch follows the loading cell below):

- How many of them are "serious" (these include the word "serious" in some spelling variant)
- What words do people use before "of Reddit"?
###Code
import csv
fname = os.path.join(DATA_DIR, 'askreddit_2015.csv')
with open(fname) as f:
reader = csv.reader(f)
posts_with_header = list(reader)
posts = posts_with_header[1:]
questions = [p[0] for p in posts]
###Output
_____no_output_____ |
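###Markdown
 A possible sketch for the Reddit questions. The patterns are assumptions about how "serious" and "of Reddit" appear in the titles, so treat the resulting numbers as rough counts.
###Code
serious = [q for q in questions if re.search(r'serious', q, re.IGNORECASE)]
print(len(serious))
before_of_reddit = []
for q in questions:
    before_of_reddit.extend(re.findall(r'(\w+) of [Rr]eddit', q))
print(before_of_reddit[:10])
###Output
_____no_output_____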
03_qda-generation.ipynb | ###Markdown
 Exercise 3 Group Members: Luis Pazos Clemens, Robert Freund, Eugen Dizer Deadline: 15.12.2020, 16:00.
###Code
#Load standard libraries
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
3 Data Preparation
###Code
#load library and data
from sklearn import datasets
digits = datasets.load_digits()
print ( digits.keys () )
data = digits["data"]
images = digits["images"]
target = digits["target"]
target_names = digits["target_names"]
print ( data.dtype )
#size of total dataset
print(np.shape(data))
print(np.shape(images))
print(np.shape(target))
#split into training and test set
from sklearn import model_selection
X_train, X_test, Y_train, Y_test = model_selection.train_test_split(
    digits.data, digits.target, test_size=0.4, random_state=0)
#choose numbers "1" and "3" only
X_train_1_3 = X_train[np.where((Y_train == 1) | (Y_train == 3))]
Y_train_1_3 = Y_train[np.where((Y_train == 1) | (Y_train == 3))]
X_test_1_3 = X_test[np.where((Y_test == 1) | (Y_test == 3))]
Y_test_1_3 = Y_test[np.where((Y_test == 1) | (Y_test == 3))]
#size of datasets with number "1" and "3" only
print(np.shape(X_train_1_3))
print(np.shape(Y_train_1_3))
print(np.shape(X_test_1_3))
print(np.shape(Y_test_1_3))
#plotting a test image
img = X_train_1_3[np.random.randint(0,np.shape(X_train_1_3)[0])].reshape(8,8)
assert 2 == len( img.shape )
plt.figure ()
plt.gray ()
f, axarr = plt.subplots(1,2)
axarr[0].imshow(img, interpolation ="nearest" )
axarr[1].imshow(img, interpolation ="bicubic")
plt.show ()
# fit_qda function from exercise 2:
def fit_qda(training_features, training_labels):
"""
QDA fit function for arbitrary training size N and feature dimension D.
params
------
training_features : np.array shape=(N, D)
training set for the classifier.
rows: features x_i.
training_labels : np.array shape=(N)
Training labels for the classifier with "1" and "3".
returns
-------
mu: np.array shape=(2, D)
the two class means.
covmat: np.array shape=(2, D, D)
the two covariance matrices.
p: np.array shape=(2)
the two priors.
"""
N = len(training_labels)
#Sets that only contain class 1 and 3
X0 = training_features[training_labels == 1]
X1 = training_features[training_labels == 3]
N0 = len(X0)
N1 = len(X1)
#Calculate the means
mu_0 = np.mean(X0, axis=0)
mu_1 = np.mean(X1, axis=0)
#Calculate covariance matrices
covmat_0 = np.matmul((X0 - mu_0).T, (X0 - mu_0)) / N0
covmat_1 = np.matmul((X1 - mu_1).T, (X1 - mu_1)) / N1
#Calculate the priors
p_0 = N0 / N
p_1 = N1 / N
return np.array([mu_0, mu_1]), np.array([covmat_0, covmat_1]), np.array([p_0, p_1])
#Apply this function to our training data:
training_features = X_train_1_3
training_labels = Y_train_1_3
mu, covmat, p = fit_qda(training_features, training_labels)
#Generate 8 new instances of each class using the mu and covmat obtained above:
gen_1 = np.random.multivariate_normal(mu[0],covmat[0],(8))
gen_3 = np.random.multivariate_normal(mu[1],covmat[1],(8))
#Plot the generated instances:
#1s
fig=plt.figure(figsize=(18, 6))
axes=[]
for i in range(8):
img = gen_1.reshape(8,8,8)[i]
axes.append( fig.add_subplot(2, 8, i+1) )
plt.imshow(img, interpolation ="nearest")
axes.append( fig.add_subplot(2, 8, i+9) )
plt.imshow(img, interpolation ="bicubic")
fig.tight_layout()
plt.show()
#3s
fig=plt.figure(figsize=(18, 6))
axes=[]
for i in range(8):
img = gen_3.reshape(8,8,8)[i]
axes.append( fig.add_subplot(2, 8, i+1) )
plt.imshow(img, interpolation ="nearest")
axes.append( fig.add_subplot(2, 8, i+9) )
plt.imshow(img, interpolation ="bicubic")
fig.tight_layout()
plt.show()
###Output
_____no_output_____ |
2_Mod_Statistique/Mod_Statistique.ipynb | ###Markdown
 PROJECT: SPEECH RECOGNITION - TRANSLATION SYSTEM FOR SMART GLASSES *** Part II: Modeling - Iteration 1 In this iteration, we carried out our first modeling step, following the visualization stage. The basic hypothesis of this iteration is the following: "H0: in a given sentence, the words to be translated are in almost the same position in English and in French. Building on this logic, a word-for-word translation of a set of sentences is possible." This naive hypothesis, although interesting, does not seem very plausible to us. Indeed, during the visualization stage, we observed that the number of words and the sentence structure differ considerably between French and English. We will nevertheless try to find ways to make such a translation work as well as possible, and compare the results obtained with the reference dictionary provided.
###Code
# Import des données
import numpy as np
import pandas as pd
data = pd.read_csv("small_vocab_fr-eng.csv", sep = ';')
# Suppression des doublons
data.drop_duplicates(inplace = True)
data.reset_index(inplace = True, drop = True)
data.head()
###Output
_____no_output_____
###Markdown
 1. Creating an English - French dictionary The first modeling step consists in building a correspondence dictionary between an English word and a French word. We chose this direction for the dictionary because it is the one that will be useful for our smart glasses project (the written transcription of the spoken sentences being produced in English). To build this dictionary, we proceeded as follows: - creation of the list of English words whose translation we are looking for; - for each English word, depending on its position in the sentence, creation of a candidate window of three words (around the position of the English word) in the French version; - construction of a four-column array providing, for each English word, the three possible French words; - sorting of the array by English word in order to assign, to each unique English word, all of its possible French translations; - computation of the frequency of occurrence of each possibility; - selection of the highest occurrence for each unique word, which allows us to establish the desired correspondence between an English word and a French word.
###Code
# A partir du dataframe, on crée une liste de phrases (par ligne de tableau) en anglais et en français
# Création de la liste de phrases en anglais
liste_eng = []
for i in range(len(data.index)):
text_eng = ""
for carac in data.English[i]:
if carac in ",.?":
text_eng += " "
elif carac in "'":
text_eng += " '"
else:
text_eng += carac
liste_eng.append(text_eng)
# Création de la liste de phrases en français
liste_fr = []
for i in range(len(data.index)):
text_fr = ""
for carac in data.Français[i]:
if carac in ",.?-":
text_fr += " "
elif carac in "'":
text_fr += "' "
else:
text_fr += carac
liste_fr.append(text_fr)
# Création de la liste de mots en anglais
liste_eng1 = []
for i in range(len(liste_eng)):
for j in range(len(liste_eng[i].split())):
liste_eng1.append(liste_eng[i].split()[j])
# Correction des éléments en forme contractée
for index, value in enumerate(liste_eng1):
if value == "didn":
liste_eng1[index] = 'did'
elif value == "isn":
liste_eng1[index] = 'is'
elif value == "aren":
liste_eng1[index] = 'are'
elif value == "'s":
liste_eng1[index] = 'is'
elif value == "'t":
liste_eng1[index] = 'not'
#liste_eng1
# Création de la liste des 3 mots en français en fonction de la position du mot anglais à traduire :
liste_fr1 = []
for i in range(len(liste_eng)):
k = len(liste_fr[i].split()) - 1
l = len(liste_eng[i].split()) - 1
# Création de la liste des mots par phrase anglaise
for j in range(len(liste_eng[i].split())):
# Liste de mots pour le premier mot de la phrase anglaise
if j == 0:
liste_fr1.append([liste_fr[i].split()[0], liste_fr[i].split()[1], liste_fr[i].split()[2]])
# Liste des mots pour le dernier mot de la phrase anglaise
elif j == l:
liste_fr1.append([liste_fr[i].split()[-3], liste_fr[i].split()[-2], liste_fr[i].split()[-1]])
# Liste des mots pour les mots centraux de la phrase anglaise
else:
# Phrases anglaises et françaises avec le même nombre de mots
if k == l:
liste_fr1.append([liste_fr[i].split()[j - 1], liste_fr[i].split()[j], liste_fr[i].split()[j + 1]])
# Phrases anglaises avec plus de mots que les françaises
elif k < l:
if 1 <= j < k:
liste_fr1.append([liste_fr[i].split()[j - 1], liste_fr[i].split()[j], liste_fr[i].split()[j + 1]])
else:
liste_fr1.append([liste_fr[i].split()[-3], liste_fr[i].split()[-2], liste_fr[i].split()[-1]])
# Phrases françaises avec plus de mots que phrases anglaises
else:
liste_fr1.append([liste_fr[i].split()[j - 1], liste_fr[i].split()[j], liste_fr[i].split()[j + 1]])
#liste_fr1
# Vérification de la taille des listes obtenues
print("Taille de la liste anglaise liste_eng1 :",len(liste_eng1))
print("Taille de la liste française liste_fr1 :",len(liste_fr1))
# Création d'un array reprenant les mots anglais et les trios de mots français
arr_eng = np.array(liste_eng1).reshape(1501574,1)
arr_fr = np.array(liste_fr1)
arr = np.hstack((arr_eng, arr_fr))
arr.shape
# Réorganisation de l'array en fonction des mots anglais
arr1 = arr[arr[:, 0].argsort()]
arr1.shape
# Création d'un array des mots anglais uniques avec leurs occurences
uniq_eng = np.asarray(list(zip(*np.unique(arr1[:,0], return_counts= True))))
print("Nombre de mots anglais uniques :", uniq_eng.shape[0])
# Création du dictionnaire de traduction
# Dictionnaire de traduction anglais - français
dico_trad = {}
# Dictionnaire des possibles traductions suivant l'occurrence
dico_occ_nbr = {}
# Dictionnaire des possibles traductions suivant l'occurrence en pourcentage
dico_occ_pctg = {}
k1 = 0
for i in range(uniq_eng.shape[0]):
k2 = uniq_eng[i, 1].astype('int')
arr2 = arr1[k1 : k1 + k2, 1:4]
values, counts = np.unique(arr2, return_counts = True)
dico_trad.update({uniq_eng[i, 0] : values[np.argmax(counts)]})
dico_occ_nbr.update({uniq_eng[i, 0] : dict(zip(*np.unique(arr2, return_counts= True)))})
dico_count = dict(zip(*np.unique(arr2, return_counts= True)))
for key, value in dico_count.items():
dico_count[key] = round(value / (k2 * 3) * 100, 2)
dico_occ_pctg.update({uniq_eng[i, 0] : dico_count})
k1 += k2
# Création du dictionnaire de traduction avec les différentes possibilités de traductions
dico_occ = {}
for i in dico_trad.keys():
dico_occ.update({i : list(dico_occ_nbr[i].keys())})
# Création du dictionnaire de traduction en précisant le pourcentage d'occurrence du mot choisi
dico_trad_pctg = {}
for i in dico_trad.keys():
for j in dico_occ_pctg[i].keys():
if j == dico_trad[i]:
dico_trad_pctg.update({i : [dico_trad[i], dico_occ_pctg[i][j]]})
dico_trad_pctg
###Output
_____no_output_____
###Markdown
 In the visualization stage, we had counted 196 unique English words; we find the same number in our dictionary, which gives us some confidence in its consistency. Looking through the list of 196 translated words, we notice several points: - the occurrence rates are not very high (never above 50%); - some translations are clearly wrong. As a first conclusion, and even without comparing against the reference dictionary, we can already anticipate that the base hypothesis will be rejected. In the rest of the study, we will refer to this dictionary as "the 1st dictionary". We will nevertheless carry out the comparison and try to identify possible improvements. 2. Study of the dictionary we built: comparison with a reference dictionary In this step, we build the reference dictionary. But first, we would like to comment on the input data. The dictionary provided as a reference is very rich (113,286 words). However, we notice errors of varying importance: - The first bias is that a word is sometimes translated by a term from the same lexical field (e.g. cat translated as félin); - The second is that a gerund is sometimes translated by a noun (e.g. canoying translated as canoe); - Many words are unintelligible (e.g. livi-livi or heiliger-heiliger); these words are generally translated identically in French and English, which will not cause problems for our comparison but casts doubt on the value of this dictionary; - Less anecdotally, some words are translated once by the French word (e.g. bird as oiseau) and another time by the English word (e.g. bird as bird), or once by a noun and once by an adjective (religion - religion and religion - religieux), which, on the other hand, could be problematic for our comparison. In the code below, we build not a one-to-one dictionary, i.e. one that offers a single translation for each word, but a one-to-many dictionary (using a list), which provides several translation choices for the same word. This approach allows us to work around the biases of the input data.
###Code
# Création du dictionnaire de référence à partir du 2ème dataset
# Import des données du dataset 2
ds2 = pd.read_csv("en-fr.txt", sep = " ", names = ["Anglais", "Français"])
ds2_ord = ds2.groupby(['Anglais']).agg(lambda x : list(x)).reset_index()
# Création du dictionnaire complet
dico_cplt = {ds2_ord['Anglais'][i] : ds2_ord['Français'][i] for i in range(ds2_ord.shape[0])}
# Nombre de mots anglais différents dans ce dataset
print("Ce dataset constituant le dictionnaire de référence contient les traductions françaises de", len(dico_cplt),
"mots anglais différents.")
# Création du dictionnaire en reprenant les mots anglais en commun dans les 2 datasets
dico_part = {}
for i in dico_trad.keys():
if i in dico_cplt.keys():
dico_part.update({i : dico_cplt[i]})
print("Le dictionnaire créé dans la partie précédente et le dictionnaire de référence ont", len(dico_part),
"mots en commun.")
#dico_part
liste_nc = []
for i in dico_trad.keys():
if i not in dico_part:
liste_nc.append(i)
print("Liste des mots qui n'apparaissent pas dans le dictionnaire de référence contenant", len(liste_nc), "mots:\n",
liste_nc)
###Output
Ce dataset constituant le dictionnaire de référence contient les traductions françaises de 94679 mots anglais différents.
Le dictionnaire créé dans la partie précédente et le dictionnaire de référence ont 174 mots en commun.
Liste des mots qui n'apparaissent pas dans le dictionnaire de référence contenant 22 mots:
['a', 'am', 'been', 'chilly', 'disliked', 'dislikes', 'do', 'does', 'drives', 'go', 'has', 'he', 'i', 'in', 'is', 'it', 'least', 'my', 'snowy', 'thinks', 'to', 'we']
###Markdown
 Analysis of the comparison between the dico_trad dictionary obtained and the reference dictionary
###Code
# Comparaison de la traduction proposée dans le 1er dictionnaire et des traductions du dictionnaire de référence
dico_com1 = {}
liste_nt = []
for i in dico_part.keys():
if dico_trad[i] in dico_part[i]:
dico_com1.update({i : [dico_trad[i], dico_trad_pctg[i][1]]})
else:
liste_nt.append(i)
# Analyse du pourcentage d'occurrence des mots choisis pour la traduction
liste_pctg = [dico_com1[i][1] for i in dico_com1.keys()]
print("Les mots français les plus fréquents choisis pour la traduction des mots anglais du 1er dictionnaire ont une \
occurrence comprise entre", min(liste_pctg), "% et", max(liste_pctg), "%.")
# Analyse du score du 1er dictionnaire
print("Les traductions proposées dans le 1er dictionnaire correspondent à celles proposées par le dictionnaire de \
référence pour", len(dico_com1), "mots, soit dans", round(len(dico_com1)/len(dico_part) * 100, 2), "% des cas.\n")
print("Liste des mots mal traduits, de", len(liste_nt), "mots :\n", liste_nt)
###Output
Les mots français les plus fréquents choisis pour la traduction des mots anglais du 1er dictionnaire ont une occurrence comprise entre 13.48 % et 33.33 %.
Les traductions proposées dans le 1er dictionnaire correspondent à celles proposées par le dictionnaire de référence pour 82 mots, soit dans 47.13 % des cas.
Liste des mots mal traduits, de 92 mots :
['animal', 'animals', 'apple', 'apples', 'april', 'automobile', 'banana', 'bananas', 'bears', 'beautiful', 'black', 'blue', 'california', 'december', 'did', 'dislike', 'driving', 'drove', 'during', 'eiffel', 'english', 'fall', 'feared', 'february', 'field', 'football', 'freezing', 'fruit', 'going', 'grape', 'grapefruit', 'grapes', 'green', 'grocery', 'have', 'her', 'january', 'july', 'june', 'last', 'lemon', 'lemons', 'like', 'liked', 'lime', 'limes', 'loved', 'mango', 'mangoes', 'march', 'may', 'might', 'most', 'new', 'next', 'nice', 'november', 'october', 'orange', 'oranges', 'peach', 'peaches', 'pear', 'pears', 'plan', 'plans', 'rusty', 'saw', 'school', 'september', 'shiny', 'sometimes', 'spring', 'states', 'store', 'strawberries', 'strawberry', 'that', 'the', 'think', 'tower', 'translate', 'translating', 'usually', 'visit', 'want', 'wanted', 'weather', 'went', 'where', 'white', 'would']
###Markdown
 From these figures, we draw the following observations: - most of the words in the 1st dictionary are indeed present in the reference dictionary, but some are missing from it (22 words); - our performance score is rather low (below 50%), which confirms that we will most likely reject hypothesis H0. In what follows, we push the study further in order to find avenues for improvement nonetheless.
###Code
# Comparaison entre les différentes possibilités de traductions du 1er dictionnaire et celles du dictionnaire de référence
dico_lt = {}
for i in dico_part.keys():
for j in range(len(dico_occ[i])):
if dico_occ[i][j] in dico_part[i]:
dico_lt.update({i : dico_occ[i]})
break
liste_lnt = []
for i in dico_part.keys():
if i not in dico_lt.keys():
liste_lnt.append(i)
print("Les possibles traductions proposées dans le 1er dictionnaire trouvent une correspondance dans les traductions \
proposées dans le dictionnaire de référence pour", len(dico_lt), "mots, soit dans",
round(len(dico_lt)/len(dico_part) * 100, 2), "% des cas.\n")
print("Liste des mots pour lesquels aucune occurrence ne correspond à une traduction, de",
len(liste_lnt), "mots :\n", liste_lnt)
###Output
Les possibles traductions proposées dans le 1er dictionnaire trouvent une correspondance dans les traductions proposées dans le dictionnaire de référence pour 160 mots, soit dans 91.95 % des cas.
Liste des mots pour lesquels aucune occurrence ne correspond à une traduction, de 14 mots :
['did', 'dislike', 'drove', 'feared', 'field', 'have', 'her', 'most', 'nice', 'plan', 'plans', 'saw', 'store', 'would']
###Markdown
 From these analyses, we draw the following observations: - in the vast majority of cases, the correct translation of each English word does lie within the 3-word window selected in the French version; - only 14 words fall outside this 3-word window. In this case, we did not consider it useful to modify the translation window, regarding this avenue for improving our performance as negligible. Indeed, these words are mistranslated for other reasons (verb conjugation, different ways of building a sentence in English and in French). Analysis of a second dictionary built without certain overly common French words In pursuit of the goal of improving performance, we tested the model after removing the common words listed in "ban", which were found too frequently as erroneous translations of English words. The translation model that follows is identical to what has already been presented and carried out.
###Code
# Création d'un nouveau dictionnaire basé sur le 1er dataset et ignorant certains mots français
# Le code qui suit reprend le code de la première partie qui a été utilisé pour créer le 1er dictionnaire
ban = ['le', 'la', 'les', 'une', 'est', 'en', "n'", "l'", 'de', 'ce']
liste_fr1_m = []
for i in range(len(liste_eng)):
k = len(liste_fr[i].split()) - 1
l = len(liste_eng[i].split()) - 1
# Création de la liste des mots par phrase anglaise
for j in range(len(liste_eng[i].split())):
# Liste de mots pour le premier mot de la phrase anglaise
if j == 0:
liste_fr1_m.append([liste_fr[i].split()[0], liste_fr[i].split()[1], liste_fr[i].split()[2]])
# Liste des mots pour le dernier mot de la phrase anglaise
elif j == l:
liste_fr1_m.append([liste_fr[i].split()[-3], liste_fr[i].split()[-2], liste_fr[i].split()[-1]])
# Liste des mots pour les mots centraux de la phrase anglaise
else:
# Phrases anglaises et françaises avec le même nombre de mots
if k == l:
liste_fr1_m.append([liste_fr[i].split()[j - 1], liste_fr[i].split()[j], liste_fr[i].split()[j + 1]])
# Phrases anglaises avec plus de mots que les françaises
elif k < l:
if 1 <= j < k:
liste_fr1_m.append([liste_fr[i].split()[j - 1], liste_fr[i].split()[j], liste_fr[i].split()[j + 1]])
else:
liste_fr1_m.append([liste_fr[i].split()[-3], liste_fr[i].split()[-2], liste_fr[i].split()[-1]])
# Phrases françaises avec plus de mots que phrases anglaises
else:
v = liste_fr[i].split()[j - 1]
w = liste_fr[i].split()[j + 1]
if (v and w not in ban) or (j == 1) or (j == l-1):
liste_fr1_m.append([liste_fr[i].split()[j - 1], liste_fr[i].split()[j], liste_fr[i].split()[j + 1]])
elif v in ban and w not in ban:
liste_fr1_m.append([liste_fr[i].split()[j - 2], liste_fr[i].split()[j], liste_fr[i].split()[j + 1]])
elif w in ban and v not in ban:
liste_fr1_m.append([liste_fr[i].split()[j - 1], liste_fr[i].split()[j], liste_fr[i].split()[j + 2]])
elif v and w in ban:
liste_fr1_m.append([liste_fr[i].split()[j - 2], liste_fr[i].split()[j], liste_fr[i].split()[j + 2]])
print("Taille de la liste anglaise liste_eng1 :",len(liste_eng1))
print("Taille de la liste française liste_fr1_m :",len(liste_fr1_m))
arr_eng = np.array(liste_eng1).reshape(1501574,1)
arr_fr_m = np.array(liste_fr1_m)
arr_m = np.hstack((arr_eng, arr_fr_m))
arr1_m = arr_m[arr_m[:, 0].argsort()]
dico_trad_m = {}
dico_occ_nbr_m = {}
dico_occ_pctg_m = {}
k1 = 0
for i in range(uniq_eng.shape[0]):
k2 = uniq_eng[i, 1].astype('int')
arr2_m = arr1_m[k1 : k1 + k2, 1:4]
values, counts = np.unique(arr2_m, return_counts = True)
dico_trad_m.update({uniq_eng[i, 0] : values[np.argmax(counts)]})
dico_occ_nbr_m.update({uniq_eng[i, 0] : dict(zip(*np.unique(arr2_m, return_counts= True)))})
dico_count_m = dict(zip(*np.unique(arr2_m, return_counts= True)))
for key, value in dico_count_m.items():
dico_count_m[key] = round(value / (k2 * 3) * 100, 2)
dico_occ_pctg_m.update({uniq_eng[i, 0] : dico_count_m})
k1 += k2
dico_occ_m = {}
for i in dico_trad_m.keys():
dico_occ_m.update({i : list(dico_occ_nbr_m[i].keys())})
dico_trad_pctg_m = {}
for i in dico_trad_m.keys():
for j in dico_occ_pctg_m[i].keys():
if j == dico_trad_m[i]:
dico_trad_pctg_m.update({i : [dico_trad_m[i], dico_occ_pctg_m[i][j]]})
dico_com1_m = {}
liste_nt_m = []
for i in dico_part.keys():
if dico_trad_m[i] in dico_part[i]:
dico_com1_m.update({i : [dico_trad_m[i], dico_trad_pctg_m[i][1]]})
else:
liste_nt_m.append(i)
# Analyse du pourcentage d'occurrence des mots choisis pour la traduction
liste_pctg_m = [dico_com1_m[i][1] for i in dico_com1_m.keys()]
print("Les mots les plus fréquents choisis pour la traduction des mots anglais dans le 2eme dictionnaire ont une \
occurrence comprise entre", min(liste_pctg_m), "% et", max(liste_pctg_m), "%.\n")
# Analyse du score du 2eme dictionnaire
print("Les traductions proposées dans le 2eme dictionnaire correspondent à celles proposées par le dictionnaire de \
référence pour", len(dico_com1_m), "mots, soit dans", round(len(dico_com1_m)/len(dico_part) * 100, 2), "% des cas.\n")
print("Liste des mots mal traduits, de", len(liste_nt_m), "mots :\n", liste_nt_m)
# Comparaison entre les différentes possibilités de traductions du 2e dictionnaire et celles du dictionnaire de référence
dico_lt_m = {}
for i in dico_part.keys():
for j in range(len(dico_occ_m[i])):
if dico_occ_m[i][j] in dico_part[i]:
dico_lt_m.update({i : dico_occ_m[i]})
break
liste_lnt_m = []
for i in dico_part.keys():
if i not in dico_lt_m.keys():
liste_lnt_m.append(i)
print("Les possibles traductions proposées dans le 2eme dictionnaire trouvent une correspondance dans les traductions \
proposées dans le dictionnaire de référence pour", len(dico_lt_m), "mots, soit dans",
round(len(dico_lt_m)/len(dico_part) * 100, 2), "% des cas.\n")
print("Liste des mots pour lesquels aucune occurrence ne correspond à une traduction, de",
len(liste_lnt_m), "mots :\n", liste_lnt_m)
###Output
Les possibles traductions proposées dans le 2eme dictionnaire trouvent une correspondance dans les traductions proposées dans le dictionnaire de référence pour 160 mots, soit dans 91.95 % des cas.
Liste des mots pour lesquels aucune occurrence ne correspond à une traduction, de 14 mots :
['did', 'dislike', 'drove', 'feared', 'field', 'have', 'her', 'most', 'nice', 'plan', 'plans', 'saw', 'store', 'would']
###Markdown
By removing the words contained in the "ban" variable, we observe an increase in the rate of correct answers from 47% to 68%, which validates this avenue of improvement. Analysis of the occurrence threshold Finally, we will work on setting an occurrence threshold aimed at translating the English words better. 1) Method 1: keep only the translations whose occurrence is above a given threshold and check that they are correctly translated against the full set of common words (174 words). 2) Method 2: keep only the translations whose occurrence is above a given threshold and check that they are correctly translated against a new reference dictionary that contains only the retained words (those above the aforementioned threshold). We will apply both methods to the dictionary obtained.
###Code
# Seuil d'occurrence pris en compte dans la création du dictionnaire de traduction commune
# Le seuil indiqué est en pourcentage
seuil = 25
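# Note on the two methods described in the cell above: method 1 keeps dividing the score
# by len(dico_part) (all common words), whereas method 2 rebuilds the reference dictionary
# from the retained words only (dico_part_s1 / dico_part_s2 further below), so its score
# is computed over that smaller set.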
# Dictionnaire 1
dico_com1_s = {}
for i in dico_com1.keys():
if dico_com1[i][1]>=seuil:
dico_com1_s.update({i : dico_com1[i][0]})
print("Pour un seuil de", seuil, "% d'occurrence minimum, les traductions proposées dans le 1er dictionnaire \
correspondent à celles de référence pour", len(dico_com1_s), "mots, soit dans",
round(len(dico_com1_s)/len(dico_part) * 100, 2), "% des cas.\n")
dico_lt_s = {}
for i in dico_part.keys():
for j in range(len(dico_occ[i])):
if dico_occ[i][j] in dico_part[i]:
for k in dico_occ_pctg[i].keys():
if dico_occ_pctg[i][k] >= seuil:
                    dico_lt_s.update({i : dico_occ[i]})
break
liste_lnt_s = []
for i in dico_part.keys():
if i not in dico_lt_s.keys():
liste_lnt_s.append(i)
print("Les possibles traductions proposées dans le 1er dictionnaire trouvent une correspondance dans les traductions \
proposées dans le dictionnaire de référence pour", len(dico_lt_s), "mots, soit dans",
round(len(dico_lt_s)/len(dico_part) * 100, 2), "% des cas.\n")
print("Liste des mots pour lesquels aucune occurrence ne correspond à une traduction, de",
len(liste_lnt_s), "mots :\n", liste_lnt_s)
# Dictionnaire 2
dico_com1_m_s = {}
for i in dico_com1_m.keys():
if dico_com1_m[i][1]>=seuil:
dico_com1_m_s.update({i : dico_com1_m[i][0]})
print("Pour un seuil de", seuil, "% d'occurrence minimum, les traductions proposées dans le 1er dictionnaire \
correspondent à celles de référence pour", len(dico_com1_m_s), "mots, soit dans",
round(len(dico_com1_m_s)/len(dico_part) * 100, 2), "% des cas.\n")
dico_lt_m_s = {}
for i in dico_part.keys():
for j in range(len(dico_occ_m[i])):
if dico_occ_m[i][j] in dico_part[i]:
for k in dico_occ_pctg_m[i].keys():
if dico_occ_pctg_m[i][k] >= seuil:
dico_lt_m_s.update({i : dico_occ_m[i]})
break
liste_lnt_m_s = []
for i in dico_part.keys():
if i not in dico_lt_m_s.keys():
liste_lnt_m_s.append(i)
print("Les possibles traductions proposées dans le 2eme dictionnaire trouvent une correspondance dans les traductions \
proposées dans le dictionnaire de référence pour", len(dico_lt_m_s), "mots, soit dans",
round(len(dico_lt_m_s)/len(dico_part) * 100, 2), "% des cas.\n")
print("Liste des mots pour lesquels aucune occurrence ne correspond à une traduction, de",
len(liste_lnt_m_s), "mots :\n", liste_lnt_m_s)
###Output
Les possibles traductions proposées dans le 2eme dictionnaire trouvent une correspondance dans les traductions proposées dans le dictionnaire de référence pour 123 mots, soit dans 70.69 % des cas.
Liste des mots pour lesquels aucune occurrence ne correspond à une traduction, de 51 mots :
['and', 'animal', 'animals', 'are', 'august', 'beautiful', 'between', 'big', 'black', 'blue', 'chinese', 'did', 'dislike', 'driving', 'drove', 'dry', 'during', 'english', 'feared', 'field', 'freezing', 'french', 'fruit', 'going', 'green', 'have', 'her', 'little', 'most', 'next', 'nice', 'not', 'old', 'plan', 'plans', 'portuguese', 'quiet', 'red', 'rusty', 'saw', 'shiny', 'store', 'that', 'the', 'this', 'visit', 'was', 'went', 'white', 'would', 'yellow']
###Markdown
We note that the rate drops, which is expected since the comparison is made against the full set of common words, but the result is more precise.
###Code
# Seuil d'occurrence pris en compte dans la création du dictionnaire de référence
# Dictionnaire 1
dico_trad_s1 = {}
for i in dico_trad.keys():
if dico_trad_pctg[i][1]>=seuil:
dico_trad_s1.update({i : dico_trad[i]})
dico_part_s1 = {}
for i in dico_trad_s1.keys():
if i in dico_part.keys():
dico_part_s1.update({i : dico_cplt[i]})
print("Avec l'application d'un seuil minimum d'occurrence à", seuil, "%, on obtient un dictionnaire de",len(dico_trad_s1),
"mots traduits.")
print("Ce dictionnaire a", len(dico_part_s1), "mots en commun avec le dictionnaire de référence.\n")
dico_com1_s1 = {}
liste_nt_s1 = []
for i in dico_part_s1.keys():
if dico_trad_s1[i] in dico_part_s1[i]:
dico_com1_s1.update({i : [dico_trad_s1[i], dico_trad_pctg[i][1]]})
else:
liste_nt_s1.append(i)
# Analyse du pourcentage d'occurrence des mots choisis pour la traduction
liste_pctg_s1 = [dico_com1_s1[i][1] for i in dico_com1_s1.keys()]
print("Les mots les plus fréquents choisis pour la traduction des mots anglais dans le 1er dictionnaire ont une \
occurrence comprise entre", min(liste_pctg_s1), "% et", max(liste_pctg_s1), "%.\n")
# Analyse du score du 1er dictionnaire
print("Les traductions proposées dans le 1er dictionnaire correspondent à celles proposées par le dictionnaire de \
référence pour", len(dico_com1_s1), "mots, soit dans", round(len(dico_com1_s1)/len(dico_part_s1) * 100, 2),"% des cas.\n")
print("Liste des mots mal traduits, de", len(liste_nt_s1), "mots :\n", liste_nt_s1)
dico_lt_s1 = {}
for i in dico_part_s1.keys():
for j in range(len(dico_occ[i])):
if dico_occ[i][j] in dico_part_s1[i]:
for k in dico_occ_pctg[i].keys():
if dico_occ_pctg[i][k] >= seuil:
dico_lt_s1.update({i : dico_occ[i]})
break
liste_lnt_s1 = []
for i in dico_part_s1.keys():
if i not in dico_lt_s1.keys():
liste_lnt_s1.append(i)
print("Les possibles traductions proposées dans le 1er dictionnaire trouvent une correspondance dans les traductions \
proposées dans le dictionnaire de référence pour", len(dico_lt_s1), "mots, soit dans",
round(len(dico_lt_s1)/len(dico_part_s1) * 100, 2), "% des cas.\n")
print("Liste des mots pour lesquels aucune occurrence ne correspond à une traduction, de",
len(liste_lnt_s1), "mots :\n", liste_lnt_s1)
# Dictionnaire 2
dico_trad_s2 = {}
for i in dico_trad_m.keys():
if dico_trad_pctg_m[i][1]>=seuil:
dico_trad_s2.update({i : dico_trad_m[i]})
dico_part_s2 = {}
for i in dico_trad_s2.keys():
if i in dico_part.keys():
dico_part_s2.update({i : dico_cplt[i]})
print("Avec l'application d'un seuil minimum d'occurrence à", seuil, "%, on obtient un dictionnaire de",len(dico_trad_s2),
"mots traduits.")
print("Ce dictionnaire a", len(dico_part_s2), "mots en commun avec le dictionnaire de référence.\n")
dico_com1_s2 = {}
liste_nt_s2 = []
for i in dico_part_s2.keys():
if dico_trad_s2[i] in dico_part_s2[i]:
dico_com1_s2.update({i : [dico_trad_s2[i], dico_trad_pctg_m[i][1]]})
else:
liste_nt_s2.append(i)
# Analyse du pourcentage d'occurrence des mots choisis pour la traduction
liste_pctg_s2 = [dico_com1_s2[i][1] for i in dico_com1_s2.keys()]
print("Les mots les plus fréquents choisis pour la traduction des mots anglais dans le 1er dictionnaire ont une \
occurrence comprise entre", min(liste_pctg_s2), "% et", max(liste_pctg_s2), "%.\n")
# Analyse du score du 2eme dictionnaire
print("Les traductions proposées dans le 2eme dictionnaire correspondent à celles proposées par le dictionnaire de \
référence pour", len(dico_com1_s2), "mots, soit dans", round(len(dico_com1_s2)/len(dico_part_s2) * 100, 2),"% des cas.\n")
print("Liste des mots mal traduits, de", len(liste_nt_s2), "mots :\n", liste_nt_s2)
dico_lt_m_s2 = {}
for i in dico_part_s2.keys():
for j in range(len(dico_occ_m[i])):
if dico_occ_m[i][j] in dico_part_s2[i]:
for k in dico_occ_pctg_m[i].keys():
if dico_occ_pctg_m[i][k] >= seuil:
dico_lt_m_s2.update({i : dico_occ_m[i]})
break
liste_lnt_m_s2 = []
for i in dico_part_s2.keys():
if i not in dico_lt_m_s2.keys():
liste_lnt_m_s2.append(i)
print("Les possibles traductions proposées dans le 2eme dictionnaire trouvent une correspondance dans les traductions \
proposées dans le dictionnaire de référence pour", len(dico_lt_m_s2), "mots, soit dans",
round(len(dico_lt_m_s2)/len(dico_part_s2) * 100, 2), "% des cas.\n")
print("Liste des mots pour lesquels aucune occurrence ne correspond à une traduction, de",
len(liste_lnt_m_s2), "mots :\n", liste_lnt_m_s2)
###Output
Les possibles traductions proposées dans le 2eme dictionnaire trouvent une correspondance dans les traductions proposées dans le dictionnaire de référence pour 123 mots, soit dans 91.79 % des cas.
Liste des mots pour lesquels aucune occurrence ne correspond à une traduction, de 11 mots :
['dislike', 'drove', 'feared', 'field', 'have', 'her', 'nice', 'plans', 'saw', 'store', 'would']
|
notebooks/breathing_notebooks/.ipynb_checkpoints/1.1_deformation_experiment_scattering_ETH-03-checkpoint.ipynb | ###Markdown
Experiment 02: Deformations Experiments. In this notebook, we are using the CLUST Dataset. The sequence used for this notebook is ETH-01.zip
###Code
import sys
import random
import os
sys.path.append('../src')
import warnings
warnings.filterwarnings("ignore")
from PIL import Image
from sklearn.manifold import Isomap
from utils.compute_metrics import get_metrics, get_majority_vote,log_test_metrics
from utils.split import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC
from sklearn.svm import SVC
from sklearn.model_selection import GroupKFold
from tqdm import tqdm
from pprint import pprint
import torch
from itertools import product
import pickle
import pandas as pd
import numpy as np
import mlflow
import matplotlib.pyplot as plt
#from kymatio.numpy import Scattering2D
import torch
from tqdm import tqdm
from kymatio.torch import Scattering2D
###Output
_____no_output_____
###Markdown
1. Visualize Sequence of US. We are visualizing the first images from the sequence ETH-01-1, which contains 3652 US images.
###Code
directory=os.listdir('../data/02_interim/Data1')
directory.sort()
# settings
h, w = 15, 10 # for raster image
nrows, ncols = 3, 4 # array of sub-plots
figsize = [15, 8] # figure size, inches
# prep (x,y) for extra plotting on selected sub-plots
xs = np.linspace(0, 2*np.pi, 60) # from 0 to 2pi
ys = np.abs(np.sin(xs)) # absolute of sine
# create figure (fig), and array of axes (ax)
fig, ax = plt.subplots(nrows=nrows, ncols=ncols, figsize=figsize)
imgs= directory[0:100]
count = 0
# plot simple raster image on each sub-plot
for i, axi in enumerate(ax.flat):
# i runs from 0 to (nrows*ncols-1)
# axi is equivalent with ax[rowid][colid]
img =plt.imread("../data/02_interim/Data/" + imgs[i])
axi.imshow(img, cmap='gray')
axi.axis('off')
# get indices of row/column
# write row/col indices as axes' title for identification
#axi.set_title(df_labels['Finding Labels'][row[count]], size=20)
count = count +1
plt.tight_layout()
plt.savefig('samples_xray')
plt.show()
###Output
_____no_output_____
###Markdown
2. Create Dataset
###Code
%%time
ll_imgstemp = [plt.imread("../data/02_interim/Data/" + dir) for dir in directory[:5]]
%%time
ll_imgs = [np.array(Image.open("../data/02_interim/Data/" + dir).resize(size=(98, 114)), dtype='float32') for dir in directory]
%%time
ll_imgs2 = [img.reshape(1,img.shape[0],img.shape[1]) for img in ll_imgs]
# dataset = pd.DataFrame([torch.tensor(ll_imgs).view(1,M,N).type(torch.float32)], columns='img')
dataset = pd.DataFrame({'img':ll_imgs2}).reset_index().rename(columns={'index':'order'})
# dataset = pd.DataFrame({'img':ll_imgs}).reset_index().rename(columns={'index':'order'})
dataset
###Output
_____no_output_____
###Markdown
3. Extract Scattering Features
###Code
M,N = dataset['img'].iloc[0].shape[1], dataset['img'].iloc[0].shape[2]
print(M,N)
# Set the parameters of the scattering transform.
J = 3
# Generate a sample signal.
scattering = Scattering2D(J, (M, N))
data = np.concatenate(dataset['img'],axis=0)
data = torch.from_numpy(data)
use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")
if use_cuda:
scattering = scattering.cuda()
data = data.to(device)
init = 0
final =0
count =0
first_loop = True
for i in tqdm(range (0,len(data), 11)):
init= i
final = i + 11
if first_loop:
scattering_features = scattering(data[init: final])
first_loop=False
torch.cuda.empty_cache()
else:
scattering_features = torch.cat((scattering_features,scattering(data[init: final]) ))
torch.cuda.empty_cache()
# break
# save scattering features
# with open('../data/03_features/scattering_features_deformation.pickle', 'wb') as handle:
# pickle.dump(scattering_features, handle, protocol=pickle.HIGHEST_PROTOCOL)
# save scattering features
with open('../data/03_features/scattering_features_deformation.pickle', 'wb') as handle:
pickle.dump(scattering_features, handle, protocol=pickle.HIGHEST_PROTOCOL)
# save scattering features
with open('../data/03_features/dataset_deformation.pickle', 'wb') as handle:
pickle.dump(dataset, handle, protocol=pickle.HIGHEST_PROTOCOL)
###Output
_____no_output_____
###Markdown
4. Extract PCA Components
###Code
with open('../data/03_features/scattering_features_deformation.pickle', 'rb') as handle:
scattering_features = pickle.load(handle)
with open('../data/03_features/dataset_deformation.pickle', 'rb') as handle:
dataset = pickle.load(handle)
sc_features = scattering_features.view(scattering_features.shape[0], scattering_features.shape[1] * scattering_features.shape[2] * scattering_features.shape[3])
X = sc_features.cpu().numpy()
#standardize
scaler = StandardScaler()
X = scaler.fit_transform(X)
pca = PCA(n_components=50)
X = pca.fit_transform(X)
plt.plot(np.insert(pca.explained_variance_ratio_.cumsum(),0,0),marker='o')
plt.xlabel('Number of components')
plt.ylabel('Cumulative explained variance')
plt.show()
print(pca.explained_variance_ratio_.cumsum())
df = pd.DataFrame(X)
df['order'] = dataset['order']
#df.corr()
import seaborn as sns; sns.set_theme()
plt.figure(figsize=(10, 10))
vec1 = df.corr()['order'].values
vec2 = vec1.reshape(vec1.shape[0], 1)
sns.heatmap(vec2)
plt.show()
def visualize_corr_pca_order(pca_c, df):
plt.figure(figsize=(16,8))
x= df['order']
y= df[pca_c]
plt.scatter(x,y)
m, b = np.polyfit(x, y, 1)
plt.plot(x, m*x + b, color='red')
plt.ylabel('PCA Component '+ str(pca_c+1))
plt.xlabel('Frame Order')
plt.show()
visualize_corr_pca_order(1, df)
print('Correlation between order and Pca component 2:', df.corr()['order'][1])
visualize_corr_pca_order(3, df)
print('Correlation between order and Pca component 4:', df.corr()['order'][3])
def visualize_sub_plot(pca_c, df, x_num= 3, y_num =3):
fig, axs = plt.subplots(x_num, y_num, figsize=(15,10))
size = len(df)
plot_num = x_num * y_num
frame = int(size/plot_num)
start = 0
for i in range (x_num):
for j in range (y_num):
final = start + frame
x= df['order'].iloc[start:final]
y= df[pca_c].iloc[start:final]
m, b = np.polyfit(x, y, 1)
axs[i, j].set_ylabel('PCA Component '+ str(pca_c+1))
axs[i, j].set_xlabel('Frame Order')
axs[i, j].plot(x, m*x + b, color='red')
axs[i, j].scatter(x,y)
start = start + frame
plt.show()
visualize_sub_plot(1, df, x_num= 3, y_num =3)
visualize_sub_plot(3, df, x_num= 3, y_num =3)
###Output
_____no_output_____
###Markdown
5. Isometric Mapping Correlation with Order
###Code
with open('../data/03_features/scattering_features_deformation.pickle', 'rb') as handle:
scattering_features = pickle.load(handle)
with open('../data/03_features/dataset_deformation.pickle', 'rb') as handle:
dataset = pickle.load(handle)
sc_features = scattering_features.view(scattering_features.shape[0], scattering_features.shape[1] * scattering_features.shape[2] * scattering_features.shape[3])
X = sc_features.cpu().numpy()
#standardize
scaler = StandardScaler()
X = scaler.fit_transform(X)
from sklearn.manifold import Isomap
embedding = Isomap(n_components=2)
X_transformed = embedding.fit_transform(X)
df = pd.DataFrame(X_transformed)
df['order'] = dataset['order']
df.corr()
from sklearn.manifold import Isomap
def visualize_sub_plot_iso(pca_c, x_num= 3, y_num =3):
fig, axs = plt.subplots(x_num, y_num, figsize=(15,13))
size =len(sc_features )
plot_num = x_num * y_num
frame = int(size/plot_num)
start = 0
x_total = []
first = True
for i in tqdm(range (x_num)):
for j in tqdm(range (y_num)):
final = start + frame
if first:
embedding = Isomap(n_components=2)
X_transformed = embedding.fit_transform(X[start:final])
first = False
else:
X_transformed = embedding.transform(X[start:final])
x_total.extend(X_transformed)
df = pd.DataFrame(X_transformed)
df['order'] = dataset['order'].iloc[start:final].values
x= df['order']
y= df[pca_c]
start = start + frame
#m, b = np.polyfit(x, y, 1)
axs[i, j].set_ylabel('Iso Map Dimension '+ str(pca_c+1))
axs[i, j].set_xlabel('Frame Order')
#axs[i, j].plot(x, m*x + b, color='red')
axs[i, j].scatter(x,y)
plt.show()
return x_total
x_total = visualize_sub_plot_iso(0, x_num= 3, y_num =3)
#print('Correlation between order and Pca component 2:', df.corr()['order'][1])
%%time
embedding = Isomap(n_components=1)
embedding.fit(X[:500])
X_transformed = embedding.transform(X)
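# Note: the Isomap embedding is fitted on the first 500 frames only (presumably to keep the
# pairwise-distance computation tractable) and the remaining frames are projected onto that
# fixed embedding with transform().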
pca_c =0
plt.figure(figsize=(16,8))
x= dataset['order']#.iloc[:3645]
y= X_transformed#[total[pca_c] for total in x_total]
plt.scatter(x,y)
m, b = np.polyfit(x, y, 1)
#plt.plot(x, m*x + b, color='red')
plt.ylabel('PCA Component '+ str(pca_c+1))
plt.xlabel('Frame Order')
plt.show()
###Output
_____no_output_____ |
KNN/K-Nearest-neighbors.ipynb | ###Markdown
Apply CrossValidation For Finding K
###Code
X=[]
Y=[]
for i in range(1,26,2):
clf1=KNeighborsClassifier(n_neighbors=i)
score=cross_val_score(clf1,X_train,Y_train)
X.append(i)
Y.append(score.mean())
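# Sketch (not in the original notebook): the best K is simply the one with the highest
# mean cross-validation score collected above.
best_k = X[Y.index(max(Y))]  # inspect or print best_k to pick n_neighbors for the final model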
import matplotlib.pyplot as plt
plt.plot(X,Y)
plt.scatter(X,Y)
plt.show()
###Output
_____no_output_____ |
preprocessing_thetis_non_seq.ipynb | ###Markdown
Preprocessing Thetis. Notebook to preprocess the Thetis dataset to train the CNN on non-sequential images. Mounting Google Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
# Navigate to code directory
%cd /content/drive/MyDrive/cnn
# List project directory contents
!ls
###Output
_____no_output_____
###Markdown
Unzipping the dataset
###Code
# import os
# import zipfile
# local_zip = '/content/drive/MyDrive/cnn/VIDEO_RGB.zip'
# zip_ref = zipfile.ZipFile(local_zip, 'r')
# zip_ref.extractall('/content/drive/MyDrive/cnn')
# zip_ref.close()
###Output
_____no_output_____
###Markdown
Preparing the directories
###Code
import os
os.mkdir(f"FRAMES_COMPLETE_VIDEO/p8_fslice_s2")
os.mkdir(f"FRAMES_COMPLETE_VIDEO/p8_fslice_s2/backhand")
os.mkdir(f"FRAMES_COMPLETE_VIDEO/p8_fslice_s2/forehand")
os.mkdir(f"FRAMES_COMPLETE_VIDEO/p8_fslice_s2/idle")
os.mkdir(f"FRAMES_BIG_VALIDATION")
os.mkdir(f"FRAMES_BIG_VALIDATION/bhnd")
os.mkdir(f"FRAMES_BIG_VALIDATION/forehand")
os.mkdir(f"FRAMES_BIG_VALIDATION/idle")
###Output
_____no_output_____
###Markdown
Dividing the videos into frames
###Code
import cv2
import glob
import time
count = 12790
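# Frame-index labelling used below: frames 0-4 and 81+ of each clip are stored as 'idle',
# frames 11-74 as the stroke class (backhand here), and frames 5-10 / 75-80 are skipped as
# transition frames.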
for filepath in glob.iglob('VIDEO_RGB/backhand_slice/*.avi', recursive = True):
print(filepath)
cap = cv2.VideoCapture(filepath)
success,image = cap.read()
count2 = 0
while success:
if count2 < 5:
cv2.imwrite(f"/content/drive/MyDrive/cnn/FRAMES_RGB/idle/frame{count}.jpg", image) # save frame as JPEG file
elif count2 > 80:
cv2.imwrite(f"/content/drive/MyDrive/cnn/FRAMES_RGB/idle/frame{count}.jpg", image) # save frame as JPEG file
elif count2 > 10 and count2 < 75:
cv2.imwrite(f"/content/drive/MyDrive/cnn/FRAMES_RGB/bhnd/frame{count}.jpg", image) # save frame as JPEG file
count2+=1
success,image = cap.read()
print('Read a new frame: ', success)
count += 1
###Output
_____no_output_____
###Markdown
Preparing a folder to classify a complete video
###Code
import cv2
import glob
import time
filepath = "/content/drive/MyDrive/cnn/VIDEO_RGB/forehand_slice/p8_fslice_s2.avi"
cap = cv2.VideoCapture(filepath)
success,image = cap.read()
count2 = 0
while success:
if count2 < 5:
cv2.imwrite(f"/content/drive/MyDrive/cnn/FRAMES_COMPLETE_VIDEO/p8_fslice_s2/idle/frame{count2}.jpg", image) # save frame as JPEG file
elif count2 > 80:
cv2.imwrite(f"/content/drive/MyDrive/cnn/FRAMES_COMPLETE_VIDEO/p8_fslice_s2/idle/frame{count2}.jpg", image) # save frame as JPEG file
elif count2 > 10 and count2 < 75:
cv2.imwrite(f"/content/drive/MyDrive/cnn/FRAMES_COMPLETE_VIDEO/p8_fslice_s2/forehand/frame{count2}.jpg", image) # save frame as JPEG file
count2+=1
success,image = cap.read()
print('Read a new frame: ', success)
# count = 0
# for filepath in glob.iglob('VIDEO_RGB/backhand/*.avi', recursive = True):
# count = count + 1
# print(filepath)
# print(count)
import cv2
import glob
import time
count = 246667
for filepath in glob.iglob('VIDEO_RGB/forehand_slice/*.avi', recursive = True):
print(filepath)
cap = cv2.VideoCapture(filepath)
success,image = cap.read()
count2 = 0
while success:
if count2 < 5:
cv2.imwrite(f"/content/drive/MyDrive/cnn/FRAMES_RGB/idle/frame{count}.jpg", image) # save frame as JPEG file
elif count2 > 80:
cv2.imwrite(f"/content/drive/MyDrive/cnn/FRAMES_RGB/idle/frame{count}.jpg", image) # save frame as JPEG file
elif count2 > 10 and count2 < 75:
cv2.imwrite(f"/content/drive/MyDrive/cnn/FRAMES_RGB/forehand/frame{count}.jpg", image) # save frame as JPEG file
count2+=1
success,image = cap.read()
print('Read a new frame: ', success)
count += 1
###Output
_____no_output_____
###Markdown
Creating the validation set
###Code
import glob
import random
import numpy as np
import shutil
# Select a random 10% of the dataset to be the validation set
forehand_files = glob.glob('FRAMES_RGB/forehand/*')
n_files = len(forehand_files)
print(n_files)
# random_sample = random.sample(forehand_files, np.int(np.floor(n_files*0.1)))
random_sample = random.sample(forehand_files, 6000)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/forehand')
shutil.move(filepath, 'FRAMES_MEDIUM/forehand')
random_sample = random.sample(forehand_files, 600)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/forehand')
shutil.move(filepath, 'FRAMES_MEDIUM_VALIDATION/forehand')
# Select a random 10% of the dataset to be the validation set
bhnd_files = glob.glob('FRAMES_RGB/bhnd/*')
n_files = len(bhnd_files)
print(n_files)
random_sample = random.sample(bhnd_files, 6000)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/bhnd')
shutil.move(filepath, 'FRAMES_MEDIUM/bhnd')
random_sample = random.sample(bhnd_files, 600)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/bhnd')
shutil.move(filepath, 'FRAMES_MEDIUM_VALIDATION/bhnd')
# Select a random 10% of the dataset to be the validation set
idle_files = glob.glob('FRAMES_RGB/idle/*')
n_files = len(idle_files)
print(n_files)
random_sample = random.sample(idle_files, 5000)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/idle')
shutil.move(filepath, 'FRAMES_MEDIUM/idle')
random_sample = random.sample(idle_files, 500)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/idle')
shutil.move(filepath, 'FRAMES_MEDIUM_VALIDATION/idle')
###Output
_____no_output_____
###Markdown
Creating human accuracy test set
###Code
import glob
import random
import numpy as np
import shutil
# Select a random 10% of the dataset to be the validation set
forehand_files = glob.glob('FRAMES_RGB/forehand/*')
n_files = len(forehand_files)
print(n_files)
# random_sample = random.sample(forehand_files, np.int(np.floor(n_files*0.1)))
random_sample = random.sample(forehand_files, 20)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/forehand')
shutil.move(filepath, 'FRAMES_HUMAN_ACCURACY/forehand')
# Select a random 10% of the dataset to be the validation set
bhnd_files = glob.glob('FRAMES_RGB/bhnd/*')
n_files = len(bhnd_files)
print(n_files)
random_sample = random.sample(bhnd_files, 20)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/bhnd')
shutil.move(filepath, 'FRAMES_HUMAN_ACCURACY/bhnd')
# Select a random 10% of the dataset to be the validation set
idle_files = glob.glob('FRAMES_RGB/idle/*')
n_files = len(idle_files)
print(n_files)
random_sample = random.sample(idle_files, 20)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/idle')
shutil.move(filepath, 'FRAMES_HUMAN_ACCURACY/idle')
import os
os.mkdir(f"FRAMES_HUMAN_ACCURACY_MIXED")
import os
# Select a random 10% of the dataset to be the validation set
bhnd_files = glob.glob('FRAMES_HUMAN_ACCURACY/**/*', recursive = 'true')
bhnd_files = [f for f in bhnd_files if os.path.isfile(f)]
n_files = len(bhnd_files)
print(n_files)
random_sample = random.sample(bhnd_files, 60)
for filepath in random_sample:
print(filepath)
shutil.copy(filepath, 'FRAMES_HUMAN_ACCURACY_MIXED')
import glob
import random
import numpy as np
import shutil
idle_files = glob.glob('FRAMES_RGB/bhnd/bhnd/')
n_files = len(idle_files)
print(n_files)
# Select a random 10% of the dataset to be the validation set
bhnd_files = glob.glob('FRAMES_RGB/bhnd/*')
n_files = len(bhnd_files)
print(n_files)
random_sample = random.sample(bhnd_files, 300)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/bhnd')
shutil.move(filepath, 'FRAMES_SMALL/bhnd')
random_sample = random.sample(bhnd_files, 100)
for filepath in random_sample:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/bhnd')
shutil.move(filepath, 'FRAMES_SMALL_VALIDATION/bhnd')
!cp -a FRAMES_RGB/bhnd/ FRAMES_RGB/bhnd/
import glob
import random
import numpy as np
import shutil
# Select a random 10% of the dataset to be the validation set
idle_files = glob.glob('FRAMES_RGB/forehand/*')
n_files = len(idle_files)
print(n_files)
random_sample = random.sample(idle_files, 3600)
count = 0
for filepath in random_sample:
if count < 3000:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/idle')
shutil.move(filepath, 'FRAMES_MEDIUM/forehand')
else:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/idle')
shutil.move(filepath, 'FRAMES_MEDIUM_VALIDATION/forehand')
count = count + 1
import glob
import random
import numpy as np
import shutil
# Select a random 10% of the dataset to be the validation set
idle_files = glob.glob('FRAMES_RGB/bhnd/*')
n_files = len(idle_files)
print(n_files)
random_sample = random.sample(idle_files, 3600)
count = 0
for filepath in random_sample:
if count < 3000:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/idle')
shutil.move(filepath, 'FRAMES_MEDIUM/backhand')
else:
print(filepath)
# shutil.move(filepath, 'FRAMES_RGB_VALIDATION/idle')
shutil.move(filepath, 'FRAMES_MEDIUM_VALIDATION/backhand')
count = count + 1
import glob
import random
import numpy as np
import shutil
idle_files = glob.glob('FRAMES_MEDIUM_VALIDATION/backhand/*')
n_files = len(idle_files)
print(n_files)
# for filepath in idle_files:
# print(filepath)
# # shutil.move(filepath, 'FRAMES_RGB_VALIDATION/idle')
# shutil.move(filepath, 'FRAMES_RGB/idle')
!rm FRAMES_RGB_VALIDATION/forehand/*
###Output
_____no_output_____ |
opengrid_dev/notebooks/Analysis/Energy_signature.ipynb | ###Markdown
This script is a port of EnergyID's code that calculates a linear regression on heating data. Imports and setup. General imports
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
OpenGrid-specific imports
###Code
from opengrid_dev.library import houseprint
from opengrid_dev import config
from opengrid_dev.library import linearregression
c = config.Config()
###Output
_____no_output_____
###Markdown
Plotting settings
###Code
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = 16,8
###Output
_____no_output_____
###Markdown
Load Data We are going to use gas consumption data and weather data. Because we don't want to overload the weather API, we will only use 1 location (Ukkel). First, let's define the start and end date of our experiment. Let's take 1 year's worth of data, starting with last month.
###Code
# If we want to get consumption for 12 months, we will need 13 months of data
end = pd.Timestamp.today().replace(day=2).normalize()
start = (end.replace(year=end.year-1) - pd.Timedelta(days=2))
start = start.tz_localize('Europe/Brussels')
end = end.tz_localize('Europe/Brussels')
print(start, end)
###Output
_____no_output_____
###Markdown
Gas Data
###Code
# Load the Houseprint, and sync all data
hp = houseprint.Houseprint()
#hp = houseprint.load_houseprint_from_file('cache_hp.hp')
hp.init_tmpo()
#hp.sync_tmpos()
#hp.save('cache_hp.hp')
def gas_data_generator1():
# original monthly data generator, returns wrong data
for gas_sensor in hp.get_sensors(sensortype='gas'):
df = gas_sensor.get_data(head=start, tail=end, unit='kWh', diff=False)
df = df.tz_convert('Europe/Brussels')
df = df.resample('MS')
df = df.diff().dropna()
df = df[df>0]
if df.empty:
continue
yield df
def gas_data_generator2():
# Simple roughly correct monthly data generator
# Roughly-correct means that the gas consumption between two counter values
# right before and right after a month-transition are attributed to the new month.
# However, it is robust and does not need data beyond the last month
for gas_sensor in hp.get_sensors(sensortype='gas'):
df = gas_sensor.get_data(head=start, tail=end, unit='kWh', diff=False)
df = df.tz_convert('Europe/Brussels')
df = df.resample('MS').last()
df = df.diff().dropna()
df = df[df>0]
if df.empty:
continue
yield df
def gas_data_generator3():
# More complicated but most correct correct monthly data generator
# The difference with the previous is that this generator interpolates
# at month-transitions in order to estimate the exact counter value at 00:00:00
# whereas the previous attributed all gas consumption at month-transitions to the
# new month
# Drawbacks: very slow (due to the two reindex() calls) and if there would be no
# data after the end of the last month or before beginning of first month,
# interpolation can't be made, and the entire last (or first) month has no data
for gas_sensor in hp.get_sensors(sensortype='gas'):
df = gas_sensor.get_data(head=start, tail=end, unit='kWh', diff=False)
df = df.tz_convert('Europe/Brussels')
newindex = df.resample('MS').first().index
df = df.reindex(df.index.union(newindex))
df = df.interpolate(method='time')
df = df.reindex(newindex)
df = df.diff()
df = df.shift(-1).dropna()
df = df[df>0]
if df.empty:
continue
yield df
def gas_data_generator4():
# Preferred method: as accurate as 3, and faster
# Daily approach, obtain fully correct daily data.
# To be aggregated to monthly or weekly or ...
for gas_sensor in hp.get_sensors(sensortype='gas'):
df = gas_sensor.get_data(head=start, tail=end, resample='day', unit='kWh', diff=False, tz='Europe/Brussels')
df = df.diff().shift(-1).dropna()
if df.empty:
continue
yield df
###Output
_____no_output_____
###Markdown
Let's have a peek
###Code
gas_data1 = gas_data_generator1()
gas_data2 = gas_data_generator2()
gas_data3 = gas_data_generator3()
gas_data4 = gas_data_generator4()
peek1 = next(gas_data1)
peek2 = next(gas_data2)
peek3 = next(gas_data3)
peek4 = next(gas_data4)
plt.figure()
plt.plot(peek1, label='1')
plt.plot(peek2, label='2')
plt.plot(peek3, label='3')
plt.plot(peek4.resample('MS').sum(), label='4')
plt.legend()
print(peek3 - peek4.resample('MS').sum())
%timeit(next(gas_data1))
%timeit(next(gas_data2))
%timeit(next(gas_data3))
%timeit(next(gas_data4))
###Output
_____no_output_____
###Markdown
Weather Data Run this block to download the weather data and save it to a pickle. This is a large request, and you can only do 2 or 3 of these per day before your credit with Forecast.io runs out! TODO: Use the caching library for this. To get the data run the cell below
###Code
from opengrid_dev.library import forecastwrapper
weather = forecastwrapper.Weather(location='Ukkel, Belgium', start=start, end=end)
weather_data = weather.days().resample('MS').sum()
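# Possible approach to the caching TODO above (an illustrative sketch, not the opengrid
# caching library): keep a local pickle of the monthly weather frame so re-running the
# notebook does not spend another large Forecast.io request. The cache file name is an
# arbitrary choice.
def load_weather_cached(cache_path='weather_ukkel_monthly.pkl'):
    import os
    if os.path.exists(cache_path):
        return pd.read_pickle(cache_path)
    w = forecastwrapper.Weather(location='Ukkel, Belgium', start=start, end=end)
    monthly = w.days().resample('MS').sum()
    monthly.to_pickle(cache_path)
    return monthly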
weather_data['heatingDegreeDays16.5'].plot()
###Output
_____no_output_____
###Markdown
Put data together We have defined an OpenGrid analysis as a class that takes a single DataFrame as input, so we'll create that dataframe. I wrote a generator that uses our previously defined generator so you can generate while you generate.
###Code
def analysis_data_generator():
    gas_data = gas_data_generator4()  # the 'preferred' daily generator defined above
    for gas_df in gas_data:
        gas_df = gas_df.resample('MS').sum()  # aggregate the exact daily values to monthly
        df = pd.concat([gas_df, weather_data['heatingDegreeDays16.5']], axis=1).dropna()
df.columns = ['gas', 'degreedays']
yield df
###Output
_____no_output_____
###Markdown
Let's have another peek
###Code
analysis_data = analysis_data_generator()
peek = next(analysis_data)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
for axis, column, color in zip([ax1, ax2], peek.columns, ['b', 'r']):
axis.plot_date(peek.index, peek[column], '-', color=color, label=column)
plt.legend()
###Output
_____no_output_____
###Markdown
Run Regression Analysis
###Code
analysis_data = analysis_data_generator()
for data in analysis_data:
try:
analysis = linearregression.LinearRegression(independent=data.degreedays, dependent=data.gas)
except ValueError as e:
print(e)
fig = analysis.plot()
fig.show()
analysis_data = analysis_data_generator()
for data in analysis_data:
try:
analysis = linearregression.LinearRegression3(independent=data.degreedays, dependent=data.gas,
breakpoint=60, percentage=0.5)
except ValueError as e:
print(e)
fig = analysis.plot()
fig.show()
###Output
_____no_output_____ |
algo/.ipynb_checkpoints/PC1-checkpoint.ipynb | ###Markdown
Use the following for testing Decision tree- run 100 iterations- 50/50 training/testing- Decision tree
###Code
res = dict()
X, y= df.iloc[:,:-1].values, df[CLASS_NAME].values
for i in range(100):
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=.5, random_state=0)
clf_tree = DecisionTreeClassifier(random_state=0)
clf_tree.fit(X_train, y_train)
y_pred = clf_tree.predict(X_test)
tmp_res = classification_report(y_test, y_pred, output_dict=True)
res["precision"] = res.get("precision", 0) + tmp_res["1"]["precision"]
res["recall"] = res.get("recall", 0) + tmp_res["1"]["recall"]
res["f1-score"] = res.get("f1-score", 0) + tmp_res["1"]["f1-score"]
res["specificity"] = res.get("specificity", 0) + tmp_res[str(MAJORITY)]["recall"]
res["sensitivity"] = res.get("sensitivity", 0) + tmp_res[str(MINORITY)]["recall"]
res["overall accuracy"] = res.get("overall accuracy", 0) + accuracy_score(y_test, y_pred,)
res["auc"] = res.get("auc", 0) + roc_auc_score(y_test, y_pred)
res["g_mean"] = res.get("g_mean", 0) + geometric_mean_score(y_test, y_pred)
pprint_dict(res)
###Output
precision: 0.25
recall: 0.27
f1-score: 0.26
specificity: 0.94
sensitivity: 0.27
overall accuracy: 0.90
auc: 0.61
g_mean: 0.50
###Markdown
Use the following for testing SMOTE- run 100 iterations- 50/50 training/testing- Decision tree- N = 200
###Code
res = dict()
X, y= df.iloc[:,:-1].values, df[CLASS_NAME].values
for i in range(100):
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=.5, random_state=0)
if i == 0:
print("Shape of X_train before oversampling: " + str(X_train.shape))
print("Outcome distribution of X_train before oversampling: " + str(np.bincount(y_train)))
# Oversample training data
sm = SMOTE(random_state=0)
sm.fit(X_train, y_train)
X_train_r, y_train_r = sm.fit_resample(X_train, y_train)
if i == 0:
print("Shape of X_train after oversampling: " + str(X_train_r.shape))
print("Outcome distribution of X_train after oversampling: " + str(np.bincount(y_train_r)))
# Build classifier on resampled data
clf_tree = DecisionTreeClassifier(random_state=0)
clf_tree.fit(X_train_r, y_train_r)
y_pred = clf_tree.predict(X_test)
tmp_res = classification_report(y_test, y_pred, output_dict=True)
res["precision"] = res.get("precision", 0) + tmp_res["1"]["precision"]
res["recall"] = res.get("recall", 0) + tmp_res["1"]["recall"]
res["f1-score"] = res.get("f1-score", 0) + tmp_res["1"]["f1-score"]
res["specificity"] = res.get("specificity", 0) + tmp_res[str(MAJORITY)]["recall"]
res["sensitivity"] = res.get("sensitivity", 0) + tmp_res[str(MINORITY)]["recall"]
res["overall accuracy"] = res.get("overall accuracy", 0) + accuracy_score(y_test, y_pred,)
res["auc"] = res.get("auc", 0) + roc_auc_score(y_test, y_pred)
res["g_mean"] = res.get("g_mean", 0) + geometric_mean_score(y_test, y_pred)
pprint_dict(res)
###Output
precision: 0.22
recall: 0.38
f1-score: 0.27
specificity: 0.90
sensitivity: 0.38
overall accuracy: 0.87
auc: 0.64
g_mean: 0.58
###Markdown
Use the following for testing ADASYN- run 100 iterations- 50/50 training/testing- Decision tree- A fully balanced dataset after synthesizing- Dth = 0.75 (Dth is a preset threshold for the maximum tolerated degree of class imbalance ratio)
###Code
res = dict()
X, y= df.iloc[:,:-1].values, df[CLASS_NAME].values
for i in range(100):
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=.5, random_state=0)
if i == 0:
print("Shape of X_train before oversampling: " + str(X_train.shape))
print("Outcome distribution of X_train before oversampling: " + str(np.bincount(y_train)))
# Oversample training data
ada = ADASYN(random_state=0)
ada.fit(X_train, y_train)
X_train_r, y_train_r = ada.fit_resample(X_train, y_train)
if i == 0:
print("Shape of X_train after oversampling: " + str(X_train_r.shape))
print("Outcome distribution of X_train after oversampling: " + str(np.bincount(y_train_r)))
# Build classifier on resampled data
clf_tree = DecisionTreeClassifier(random_state=0)
clf_tree.fit(X_train_r, y_train_r)
y_pred = clf_tree.predict(X_test)
tmp_res = classification_report(y_test, y_pred, output_dict=True)
res["precision"] = res.get("precision", 0) + tmp_res["1"]["precision"]
res["recall"] = res.get("recall", 0) + tmp_res["1"]["recall"]
res["f1-score"] = res.get("f1-score", 0) + tmp_res["1"]["f1-score"]
res["specificity"] = res.get("specificity", 0) + tmp_res[str(MAJORITY)]["recall"]
res["sensitivity"] = res.get("sensitivity", 0) + tmp_res[str(MINORITY)]["recall"]
res["overall accuracy"] = res.get("overall accuracy", 0) + accuracy_score(y_test, y_pred,)
res["auc"] = res.get("auc", 0) + roc_auc_score(y_test, y_pred)
res["g_mean"] = res.get("g_mean", 0) + geometric_mean_score(y_test, y_pred)
pprint_dict(res)
###Output
precision: 0.26
recall: 0.51
f1-score: 0.35
specificity: 0.90
sensitivity: 0.51
overall accuracy: 0.87
auc: 0.71
g_mean: 0.68
###Markdown
Use the following for testing SMOTEBoost- run 100 iterations- 50/50 training/testing- Decision tree
###Code
res = dict()
X, y= df.iloc[:,:-1].values, df[CLASS_NAME].values
for i in range(100):
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=.5, random_state=0)
clf1 = SMOTEBoost(random_state=0)
clf1.fit(X_train, y_train)
y_pred = clf1.predict(X_test)
tmp_res = classification_report(y_test, y_pred, output_dict=True)
res["precision"] = res.get("precision", 0) + tmp_res["1"]["precision"]
res["recall"] = res.get("recall", 0) + tmp_res["1"]["recall"]
res["f1-score"] = res.get("f1-score", 0) + tmp_res["1"]["f1-score"]
res["specificity"] = res.get("specificity", 0) + tmp_res[str(MAJORITY)]["recall"]
res["sensitivity"] = res.get("sensitivity", 0) + tmp_res[str(MINORITY)]["recall"]
res["overall accuracy"] = res.get("overall accuracy", 0) + accuracy_score(y_test, y_pred,)
res["auc"] = res.get("auc", 0) + roc_auc_score(y_test, y_pred)
res["g_mean"] = res.get("g_mean", 0) + geometric_mean_score(y_test, y_pred)
pprint_dict(res)
###Output
precision: 0.16
recall: 0.41
f1-score: 0.23
specificity: 0.85
sensitivity: 0.41
overall accuracy: 0.82
auc: 0.63
g_mean: 0.59
###Markdown
Use the following for testing Dev_algo- run 100 iterations- 50/50 training/testing- Decision tree
###Code
res = dict()
X, y= df.iloc[:,:-1].values, df[CLASS_NAME].values
for i in range(100):
X_train, X_test, y_train, y_test = \
train_test_split(X, y, test_size=.5, random_state=0)
unique, counts = np.unique(y_train, return_counts=True)
frequency = dict(zip(unique, counts))
clf1 = DEVALGO(random_state=0, n_samples=frequency[MAJORITY])
clf1.fit(X_train, y_train)
y_pred = clf1.predict(X_test)
tmp_res = classification_report(y_test, y_pred, output_dict=True)
res["precision"] = res.get("precision", 0) + tmp_res["1"]["precision"]
res["recall"] = res.get("recall", 0) + tmp_res["1"]["recall"]
res["f1-score"] = res.get("f1-score", 0) + tmp_res["1"]["f1-score"]
res["specificity"] = res.get("specificity", 0) + tmp_res[str(MAJORITY)]["recall"]
res["sensitivity"] = res.get("sensitivity", 0) + tmp_res[str(MINORITY)]["recall"]
res["overall accuracy"] = res.get("overall accuracy", 0) + accuracy_score(y_test, y_pred,)
res["auc"] = res.get("auc", 0) + roc_auc_score(y_test, y_pred)
res["g_mean"] = res.get("g_mean", 0) + geometric_mean_score(y_test, y_pred)
pprint_dict(res)
###Output
precision: 0.34
recall: 0.30
f1-score: 0.32
specificity: 0.96
sensitivity: 0.30
overall accuracy: 0.91
auc: 0.63
g_mean: 0.53
|
fraud_detection_v3_fixing_time_column.ipynb | ###Markdown
Reading & Parsing Data
###Code
# Imports (aliases inferred from the usage below: ses = seaborn, ms = missingno)
import pandas as pd
import numpy as np
import seaborn as ses
import missingno as ms
data = pd.read_csv('train.csv')
data.head()
data.info()
ms.matrix(data)
###Output
_____no_output_____
###Markdown
Feature Selection & Processing
###Code
data.head()
ses.boxplot(x=data['Class'],y=data['Time'],data=data)
ses.scatterplot(x=data['Time'],y=data['Amount'],data=data)
ses.heatmap(data.corr(),cmap='coolwarm')
ses.scatterplot(x=data['Time'],y=data['V8'],data=data)
ses.scatterplot(x=data['Time'],y=data['V7'],data=data)
#add a new column
data['mins'] = data['Time']//60
data.head()
data['day'] = data['mins']//60
data['week'] = data['day']//7
data['dayOrNight'] = np.where(data['day']>=18.0, 1.0, 0.0)
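# Descriptive note on the derived columns above: assuming 'Time' is in seconds, 'mins' is
# elapsed minutes, 'day' actually holds elapsed hours (mins//60) despite its name, 'week'
# divides that by 7, and 'dayOrNight' flags rows where the elapsed-hours value is >= 18
# rather than the true hour of day.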
data.head()
ses.scatterplot(x=data['day'],y=data['Amount'],data=data)
ses.boxplot(x=data['dayOrNight'],y=data['Amount'],data=data)
###Output
_____no_output_____
###Markdown
Model Selection & Training
###Code
data.drop('ID', axis=1, inplace=True)
data.info()
data.head()
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(data.drop('Class',axis=1),data['Class'],test_size=0.30,random_state=53)
X_train.shape
y_train.shape
X_test.shape
y_test.shape
kmeans = KMeans(n_clusters=2, random_state=0).fit(X_train,y_train)
kmeans.labels_
predict = kmeans.predict(X_test)
predict
from sklearn.metrics import confusion_matrix,accuracy_score,precision_score,recall_score,classification_report
print(accuracy_score(y_test,predict))
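# Caveat: KMeans cluster ids (0/1) are arbitrary and not aligned with the 'Class' labels,
# so comparing them directly with accuracy_score can understate performance; mapping each
# cluster to its majority class first gives a fairer estimate.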
###Output
0.31658544782389514
###Markdown
Another Model Training
###Code
from xgboost.sklearn import XGBClassifier
rf=XGBClassifier(learning_rate=0.01,
n_estimators=90,
max_depth=4,
min_child_weight=1,
gamma=0,
subsample=0.8,
objective='binary:logistic',
reg_alpha=0,
seed=27)
rf.fit(X_train,y_train)
predict=rf.predict(X_test)
print(accuracy_score(y_test,predict))
###Output
0.9994958830448664
###Markdown
Prediction & Saving
###Code
test = pd.read_csv('test.csv')
test.info()
test_id = test['ID']
test['mins'] = test['Time']//60
test['day'] = test['mins']//60
test['week'] = test['day']//7
test['dayOrNight'] = np.where(test['day']>=18.0, 1.0, 0.0)
test.head()
test.drop(['ID'],axis=1,inplace=True)
answer=rf.predict(test)
answer
df1 = pd.DataFrame(test_id,columns=['ID'])
df2 = pd.DataFrame(answer,columns=['Class'])
output = pd.concat([df1,df2],axis=1)
output.to_csv('prediction_v2_time_fix_v1.csv',index=False)
###Output
_____no_output_____
###Markdown
Random Forest Model
###Code
from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(max_depth=4, random_state=53)
rfc.fit(X_train,y_train)
predict_v2 = rfc.predict(X_test)
predict_v2
print(accuracy_score(y_test,predict_v2))
answer_v2 = rfc.predict(test)
answer_v2
df1 = pd.DataFrame(test_id,columns=['ID'])
df2 = pd.DataFrame(answer_v2,columns=['Class'])
output = pd.concat([df1,df2],axis=1)
output.to_csv('prediction_v2_time_fix_v2.csv',index=False)
###Output
_____no_output_____ |
notebooks/eigenvalue_rigidity_graph_backbone_ribose.ipynb | ###Markdown
Part 1: Eigenvalue Plot
###Code
e_agent = EigenPlotBackboneRibose(rootfolder)
e_agent.initailize_six_systems()
figsize = (20, 10)
e_agent.plot_lambda_six_together(figsize)
#plt.savefig('Rigidity_graph_eigenvalue_backbone.png', dpi=200)
plt.show()
figsize = (12, 3)
e_agent.plot_lambda_separate_strand(figsize)
plt.tight_layout()
#plt.savefig('lambda_sep_strands_backbone.png', dpi=200)
plt.show()
###Output
_____no_output_____
###Markdown
Part 2: Eigenvector
###Code
figsize = (24, 12)
hspace = 0.18
wspace = 0.2
groupid = 2 # 0, 1, 2
lw = 2
fig, d_axes = e_agent.plot_eigenvector_separate_strand(figsize, hspace, wspace, groupid, lw)
#plt.savefig(f'group{groupid}_eigvector_backbone.png', dpi=200)
plt.show()
###Output
_____no_output_____ |
wandb/run-20210518_200345-2pudm69v/tmp/code/00-main.ipynb | ###Markdown
testing
###Code
from load_data import *
# load_data()
###Output
_____no_output_____
###Markdown
Loading the data
###Code
from load_data import *
X_train,X_test,y_train,y_test = load_data()
len(X_train),len(y_train)
len(X_test),len(y_test)
###Output
_____no_output_____
###Markdown
Test Modelling
###Code
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
class Test_Model(nn.Module):
def __init__(self) -> None:
super().__init__()
self.c1 = nn.Conv2d(1,32,5)
self.c2 = nn.Conv2d(32,64,5)
self.c3 = nn.Conv2d(64,128,5)
self.fc4 = nn.Linear(128*10*10,256)
self.fc5 = nn.Linear(256,4)
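        # Shape bookkeeping for the flatten in forward(): 112 -> conv5 gives 108 -> pool 54
        # -> conv5 50 -> pool 25 -> conv5 21 -> pool 10, hence 128*10*10 inputs to fc4.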
def forward(self,X):
preds = F.max_pool2d(F.relu(self.c1(X)),(2,2))
preds = F.max_pool2d(F.relu(self.c2(preds)),(2,2))
preds = F.max_pool2d(F.relu(self.c3(preds)),(2,2))
print(preds.shape)
preds = preds.view(-1,128*10*10)
preds = F.relu(self.fc4(preds))
preds = self.fc5(preds)
return preds
device = torch.device('cuda')
BATCH_SIZE = 32
IMG_SIZE = 112
model = Test_Model().to(device)
optimizer = optim.SGD(model.parameters(),lr=0.1)
criterion = nn.CrossEntropyLoss()
EPOCHS = 12
from tqdm import tqdm
PROJECT_NAME = 'Weather-Clf'
import wandb
# start the run counter at 1 on the first execution, increment on re-runs
test_index = globals().get('test_index', 0) + 1
wandb.init(project=PROJECT_NAME,name=f'test-{test_index}')
for _ in tqdm(range(EPOCHS)):
for i in range(0,len(X_train),BATCH_SIZE):
X_batch = X_train[i:i+BATCH_SIZE].view(-1,1,112,112).to(device)
y_batch = y_train[i:i+BATCH_SIZE].to(device)
model.to(device)
preds = model(X_batch.float())
preds.to(device)
        loss = criterion(preds, y_batch.long())
optimizer.zero_grad()
loss.backward()
optimizer.step()
wandb.log({'loss':loss.item()})
wandb.finish()
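# Quick evaluation sketch (not part of the original notebook): batched accuracy on the
# held-out set, reusing the tensors and device defined above.
def evaluate(model, X, y, batch_size=BATCH_SIZE):
    correct = 0
    model.eval()
    with torch.no_grad():
        for i in range(0, len(X), batch_size):
            xb = X[i:i+batch_size].view(-1, 1, IMG_SIZE, IMG_SIZE).float().to(device)
            yb = y[i:i+batch_size].to(device)
            correct += (model(xb).argmax(dim=1) == yb.long()).sum().item()
    model.train()
    return correct / len(X)
# e.g. print('test accuracy:', evaluate(model, X_test, y_test))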
###Output
_____no_output_____ |
ETNAYourOwnJupyterInterpreter.ipynb | ###Markdown
**ETNA: Extensive Tool for Network Analysis** **About:** This notebook enables interactive network analysis using a graphical user interface. It was initially developed to analyse protein-protein interaction networks, but it may be used for any network saved in .csv or .graphml format. The following methods are available:* **Centrality measures** (*Degree, Betweenness Centrality, Closeness Centrality, Eigenvector Centrality*) - histograms and basic statistics are provided* **Clustering coefficient** - histogram and basic statistics are provided* **Power law fitting** for the degree sequence - the fit is plotted, the likelihood is calculated and a p-value may be calculated if needed. The user may define the cutoff parameter and the number of bootstrap simulations when performing the p-value assessment. * **Hubs impact** - two methods are available for evaluating the contribution of hubs and lower-degree nodes. Plots of both are displayed.* **Assortativity** - the degree correlation coefficient is calculated and the Average Nearest Neighbour Degree (ANND) plot is provided.* **Robustness** - robustness can be measured with respect to Degree, Betweenness Centrality, Closeness Centrality and Eigenvector Centrality. The plot is provided.* **Failure cascade** - a histogram of the final sizes of failure cascades is provided. The user may define the simulation parameter. In addition, for the **Centrality measures**, **Clustering coefficient**, **Robustness** and **Failure cascade** the data can be downloaded as a .csv file. All other details regarding the implementation of ETNA's methods can be found in the manuscript and the corresponding Supplementary Materials (Section 2. and 3.). **Requirements:** This is an .ipynb notebook. To run it you must have a suitable interpreter, e.g. Jupyter Notebook within Anaconda. Before running you must install the graph-tool library (follow the instructions on the website: https://git.skewed.de/count0/graph-tool/-/wikis/installation-instructions). All other libraries (numpy, pandas, random, rpy2, ipywidgets, base64, hashlib, typing and seaborn), if not installed, can be installed using the !pip install library_name command. **Availability:** 4 exemplary preprocessed datasets from the IntAct Molecular Interaction Database (special characters in the author names were removed) are provided in the GitHub repository https://github.com/AlicjaNowakowska/ETNA. If you want to use them for the analysis you must download the zip file, unzip it and copy the file path. Then provide it in ETNA's window. **Network files:** To perform the analysis you may use the exemplary datasets or your own data. In either case you must copy the file path of the file of interest - it must then be provided in ETNA's tab 'Provide file name here'. **To display ETNA's window and perform the analysis execute all the following code cells by clicking Shift+Enter.** Code
###Code
# ------------------- Graphical User Interface for Network Analysis ------------------- #
# Libraries
import warnings
warnings.filterwarnings("ignore")
from graph_tool.all import *
import graph_tool.all as gt
import ipywidgets as widgets
import matplotlib.pyplot as plt
import numpy as np
from IPython.display import clear_output
import seaborn as sns
from ipywidgets import *
import rpy2.robjects.packages as rpackages
from rpy2.robjects.packages import importr
import rpy2
import rpy2.robjects as robjects
from rpy2.robjects.vectors import StrVector
import pandas as pd
from IPython.utils import io
import random
import numpy as np
# Libraries for Download button
import base64
import hashlib
from typing import Callable
import ipywidgets
from IPython.display import HTML, display
# Installing R packages
utils = rpackages.importr('utils')
with io.capture_output() as captured:
utils.install_packages('poweRlaw', repos="https://cloud.r-project.org")
x = rpackages.importr('poweRlaw')
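# Illustrative usage of the My_Network class defined below (comment only, not executed;
# in the full notebook these methods are called from the GUI):
#   net = My_Network("network.graphml")          # or a .csv edge list
#   net.prepare_the_network()
#   degree_map = net.create_degree_distribution_map()
#   net.plot_map_histogram(degree_map, "Degree")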
# Creating a My_Network class to hold functions for all network analysis methods
class My_Network:
def __init__(self, file_name):
# Network class is initialized through the file upload
if ".csv" in file_name:
self.G = graph_tool.load_graph_from_csv(file_name)
if ".graphml" in file_name:
self.G = graph_tool.load_graph(file_name)
def prepare_the_network(self):
"""
Network preparation includes:
1) Making it undirected
2) Removal of parallel edges if they are present
3) Extraction of the largest connected component that is treated as the final ready-to-use network (all other components are removed).
"""
self.G.set_directed(False) # 1)
graph_tool.stats.remove_parallel_edges(self.G) # 2)
# 3)
comp, hist = graph_tool.topology.label_components(self.G)
label = gt.label_largest_component(self.G)
to_remove = []
for v in self.G.vertices():
if label[v]==0:
to_remove.append(v)
for v in reversed(sorted(to_remove)):
self.G.remove_vertex(v)
"""
The following functions are responsible for calculation of centrality measures and clustering coefficient.
It is done by generating a corresponding map of the form: node <---> value of the measure.
"""
def create_degree_distribution_map(self):
my_map = self.G.degree_property_map("total")
return my_map
def create_betweenness_distribution_map(self):
v_betweeness_map, e_betweenness_map = graph_tool.centrality.betweenness(self.G)
my_map = v_betweeness_map
return my_map
def create_closeness_distribution_map(self):
my_map = graph_tool.centrality.closeness(self.G)
return my_map
def create_eigenvector_distribution_map(self):
eigen_value, v_eigen_map = graph_tool.centrality.eigenvector(self.G)
my_map = v_eigen_map
return my_map
def create_clustering_map(self):
my_map = graph_tool.clustering.local_clustering(self.G)
return my_map
def create_random_map(self):
# Corresponds to the generation of the random ranking of the nodes. Each number is assesed a random place in the ranking.
# Its position is saved within the vertex property map as it is done for other metrics.
r = self.G.new_vertex_property("double")
indexes = np.arange(self.G.num_vertices())
np.random.shuffle(indexes)
r.a = indexes
return r
def plot_map_histogram(self, my_map, measure_name, block = True):
"""
plot_map_histogram function contains a code for the plot generation
using matplotlib library given the graph-tool map for the measure of interest.
"""
# General settings:
plt.style.use('seaborn-whitegrid')
fig, ax = plt.subplots(constrained_layout=True, figsize=(5, 5))
FONT = 15
# Preparing the data:
my_map = my_map.fa # Extraction of the map's values - now the normal pythonic list is obtained as the representation of the measure's values.
# Calculating basic statistics:
to_calculate_statistics = list(my_map)
avg = round(np.mean(to_calculate_statistics),4)
std = round(np.std(to_calculate_statistics),2)
# Creating the histogram:
n=15
a = ax.hist(my_map, bins=n, facecolor="lightblue",weights=np.zeros_like(my_map) + 1. / len(my_map))
bins_mean = [0.5 * (a[1][j] + a[1][j+1]) for j in range(n)]
sticks_to_mark = ([], [])
for k in range(len(a[0])):
if a[0][k] == 0:
pass
else:
sticks_to_mark[0].append(bins_mean[k])
sticks_to_mark[1].append(a[0][k])
ax.plot(sticks_to_mark[0], sticks_to_mark[1], "b+")
ax.set_xlabel("Value", fontsize = FONT)
ax.set_ylabel("Fraction of nodes", fontsize = FONT)
ax.set_title(measure_name +" histogram \n Mean value: " + str(avg)+ ", Std: "+ str(std), fontsize = FONT)
plt.show(block=block)
return fig, ax
def hubs_impact_check(self):
"""
hubs_impact_check function is used for the evaluation of hubs and low-degree nodes' contribution to the number of links present in the graph.
This is done by extracting all the possible values of the degree (1) and then looping over them (2). Within the loop for each degree number
all nodes with the degree below or equal to it are extracted to form the subnetwork (3). The number of links and nodes in the subnetwork
is divided by the corresponding total numbers in the network (4) to evaluate the contribution of the following degree groups.
"""
largest_N = self.G.num_vertices()
largest_E = self.G.num_edges()
degrees = self.G.get_total_degrees(self.G.get_vertices())
Ns = []
Es = []
degrees_set = list(set(degrees)) # 1)
degrees_set.sort()
degrees_map = self.G.degree_property_map("total")
for degree in degrees_set: # 2)
cut = degree
u = gt.GraphView(self.G, vfilt = lambda v: degrees_map[v]<=cut) # 3)
current_N = u.num_vertices()/largest_N
current_E = u.num_edges()/largest_E # 4)
Ns.append(current_N)
Es.append(current_E)
return Ns, Es, degrees_set
def plot_hubs_impact1(self, degrees_set, Es, block = True): #to use it first need to execute hubs_impact_check
"""
Plot_hubs_impact1 requires data that is generated by hubs_impact_check function.
It generates the plot that represents how the successive degree groups contribute to the number of links present in the whole network.
"""
# Plot settings:
FONT = 15
plt.style.use('seaborn-whitegrid')
plt.figure(figsize=(5,5))
plt.xticks(fontsize=FONT-3)
plt.yticks(fontsize=FONT-3)
plt.xlabel("K", fontsize= FONT)
plt.ylabel("$L_K/L$", fontsize= FONT)
plt.title("Relation between K and subnetworks' links\n sizes; $s_1$", fontsize= FONT)
# Plotting the data
plt.plot(degrees_set, Es, "o", markersize=4, color="royalblue")
plt.show(block = block)
def plot_hubs_impact2(self, degrees_set, Es, Ns, block = True):
"""
Plot_hubs_impact2 requires data that is generated by hubs_impact_check function.
It generates the plot that represents how successive fractions of the total number of nodes contribute to
the total number of links present in the whole network.
"""
# Plot settings:
FONT=15
plt.style.use('seaborn-whitegrid')
plt.figure(figsize=(5,5))
sns.set_context("paper", rc={"font.size":FONT,"axes.titlesize":FONT,"axes.labelsize":FONT, "xtick.labelsize":FONT-3, "ytick.labelsize":FONT-3,
"legend.fontsize":FONT-3, "legend.titlesize":FONT-3})
# Plotting the data
fig = sns.scatterplot(x= Ns, y=Es, hue=np.log(degrees_set), palette="dark:blue_r")
fig.set(xlabel='$N_K/N$', ylabel='$L_K/L$', title="Relation between subnetworks' nodes\nand links sizes; $s_2$")
plt.legend(title="Log(K)", loc ="upper left", title_fontsize=FONT-3)
plt.show(block = block)
def calculate_assortativity_value(self):
# Calculation of the degree correlation coefficient:
return gt.assortativity(self.G, "total")[0]
def plot_ANND(self, normed = False, errorbar = True, block = True):
"""
plot_ANND generates Average Nearest Neighbour Degree plot that represents the mixing patterns between different groups of the nodes.
Each group consists of the nodes of the same degree.
"""
# Plot settings:
FONT = 15
plt.style.use('seaborn-whitegrid')
fig = plt.figure(figsize=(5,5))
plt.xlabel("Source degree (k)", fontsize = FONT)
plt.ylabel("$<k_{nn}(k)>$", fontsize = FONT)
title = "Average degree of\n the nearest neighbours" if normed == False else "Normed average degree of\n the nearest neighbours"
plt.title(title, fontsize = FONT)
# Calculating correlation vectors for ANND plot
h = gt.avg_neighbor_corr(self.G, "total", "total")
x = h[2][:-1]
y = h[0]
error = h[1]# yerr argument
# Taking into account "normed" parameter:
if normed == True:
N = self.G.num_vertices()
x = [i/N for i in x]
y = [i/N for i in y]
error = [i/N for i in error]
# Taking into account "errobar" parameter and plotting
if errorbar == True:
plt.errorbar(x, y, error, fmt="o", color="royalblue", markersize=4)
else:
plt.plot(x, y, "o", color="royalblue", markersize=4)
plt.show(block=block)
def one_node_cascade(self, fraction_to_fail, initial_node):
"""
one_node_cascade executes the failure cascade simulation with the starting failure point equal to the provided initial node (1).
The failure cascade algorithm repeatedly loops over the network and checks the nodes' statuses (2).
The current state of a node is changed to FAILED if the fraction of the node's neighbours with FAILED status exceeds
or is equal to fraction_to_fail (3). Looping over the network finishes when no new FAILED status has been introduced
during an iteration (4). The output of the function is the fraction of nodes with the FAILED status at the end of the simulation (5).
"""
# Initializing a vector that represents statuses:
gprop = self.G.new_vertex_property("bool")
gprop[initial_node] = True #1)
go_on=True
while go_on == True: #2)
go_on=False #4 assume no new FAILED status in the upcoming iteration
for v in self.G.get_vertices(): #2)
if gprop[v] == 0: # check current node status
failures = gprop.a[self.G.get_all_neighbors(v)] # extract statuses of all the node's neighbours
if sum(failures)/len(failures) >= fraction_to_fail:
gprop[v]=1 #3
go_on=True # have had new FAILED status, must continue looping
cascade_size = sum(gprop.a)/len(gprop.a) #5)
return (initial_node, cascade_size)
def cascade_all_nodes(self, fraction_to_fail = 0.25):
"""
cascade_all_nodes runs failure cascade simulation (one_node_cascade) for each of the network's nodes to evaluate distribution
of the final cascade sizes. It returns a dictionary in which each node is assigned a value of the cascade size that it generated.
"""
nodes_numbers = []
cascade_sizes =[]
for v in self.G.get_vertices(): # Take each node
i, c = self.one_node_cascade(fraction_to_fail, v) # Run for it failure cascade
nodes_numbers.append(v)
cascade_sizes.append(c)
zip_iterator = zip(nodes_numbers, cascade_sizes) # Get pairs of elements.
dictionary_names_cascade = dict(zip_iterator) # Return dictionary node_number:cascade_size
return dictionary_names_cascade
def plot_cascade(self, dictionary_names_cascade, fraction_to_fail):
"""
plot_cascade generates a histogram for the results of the cascade_all_nodes function.
It shows the distribution of the failure cascade sizes in the network.
"""
# Plot settings:
FONT = 15
plt.style.use('seaborn-whitegrid')
plt.figure(figsize=(5,5))
plt.title("Cascade size histogram C="+ str(fraction_to_fail), fontsize= FONT)
plt.xlabel("Value", fontsize= FONT)
plt.ylabel("Fraction of nodes", fontsize= FONT)
# Data transformation for the histogram:
cascade_sizes = list(dictionary_names_cascade.values())
unique, counts = np.unique(cascade_sizes, return_counts=True)
cascade_sizes_counts = dict(zip(unique, counts))
possible_cascade_sizes, counts = zip(*cascade_sizes_counts.items())
fractions = [i/sum(counts) for i in counts]
# Plotting:
plt.plot(possible_cascade_sizes, fractions,"*", color="royalblue",markersize=4)
plt.show(block=True)
def robustness_evaluation(self, map_G, step = 1):
"""
robustness_evaluation performs the robustness measurements according to the provided map_G.
Robustness measurements are performed by sorting the nodes according to the map_G values (1).
Then subsequent fractions of the nodes are taken according to the sorted pattern (2) and removed from the network
using the filtering option in graph-tool (3). In this way new subgraphs are generated that contain only the nodes that were not filtered out (removed) and the edges between them (4).
The largest component sizes of such subnetworks are calculated and returned.
"""
largest_N = self.G.num_vertices()
largest_E = self.G.num_edges()
giant_component_size = []
vertices_to_remove = map_G.a.argsort()[::-1] # 1)
f_previous = 0
# settings for a vector that represents whether a node should be taken or not when performing network filtering
gprop = self.G.new_vertex_property("bool")
self.G.vertex_properties["no_removal"] = gprop
for v in self.G.vertices():
self.G.properties[("v","no_removal")][v] = True
for fraction in range(0,100,step):
f = fraction/100
new_to_remove = vertices_to_remove[int(f_previous*largest_N):int(f*largest_N)] # 2) adding new nodes to be filtered
""" In order to reduce computational costs the filtering statuses are added subsequently. In other words in the first iteration
x nodes, equal to f_previous*largest_N, should be filtered (removed), so x nodes have no_removal = False. In new iteration x+y (int(f*largest_N))
nodes should be added the filtered status. However, already x nodes have no_removal = False, therefore only nodes from the range
int(f_previous*largest_N):int(f*largest_N) must change no_removal = False.
"""
for node in new_to_remove:
self.G.properties[("v","no_removal")][node] = False # 3)
f_previous = f
sub = GraphView(self.G, gprop) # 4)
comp, hist = graph_tool.topology.label_components(sub) #5)
giant_component_size.append(max(hist))
return giant_component_size #5)
def robustness_random_evaluation(self, N=10):
"""
Performs robustness assessment in terms of random failures. It generates N random maps, each corresponding to a random
ordering of the nodes. According to the map, in each iteration the removal is performed and the corresponding largest component sizes
are measured.
"""
giant_component_sizes = [self.robustness_evaluation(self.create_random_map()) for i in range(N)]
mean_gcs = np.array(giant_component_sizes).mean(axis=0)
return list(mean_gcs)
def plot_robustness(self, metrics_results, step = 1, block = False):
"""
plot_robustness generates the plots for the data generated by the robustness_evaluation function.
"""
# Plot settings:
FONT = 15
fraction = [i/100 for i in range(0,100,step)]
plt.figure(figsize = (5,5))
plt.style.use('seaborn-whitegrid')
plot_metric_labels = {"Degree": ["--*", "#D81B60"] , "Betweenness centrality": ["--o", "#1E88E5"],
"Closeness centrality" : ["--+","#FFC107"],
"Eigenvector centrality": ["--^", "#004D40"],
"Random failures":["--1", "black"]}
plt.xlabel("Fraction of nodes removed", fontsize = FONT)
plt.ylabel("Largest component size", fontsize = FONT)
plt.title("Robustness of the network", fontsize = FONT)
#Plotting:
for i in metrics_results:
data, metric_name = i
data = [i/max(data) for i in data]
plt.plot(fraction, data, plot_metric_labels[metric_name][0], label= metric_name, color=plot_metric_labels[metric_name][1], linewidth = 1, markersize = 7)
plt.legend()
plt.show(block=False)
def powerlaw(self, cutoff = False):
"""
powerlaw fits a power law distribution to the network's degree sequence using the maximum likelihood method.
The calculations are performed with the poweRlaw R package and the value of the fitted
alpha parameter is returned as part of the output. The fit is performed for all values of the degree sequence that are larger than or equal to
the cutoff value. If cutoff == False then the cutoff is adjusted automatically by optimizing the Kolmogorov distance
between the fitted power law and the data. (A simplified pure-Python estimate of alpha is sketched after this class for reference.)
"""
robjects.r('''
powerlaws <- function(degrees, cutoff = FALSE){
degrees = as.integer(degrees)
#print(degrees)
# Set powerlaw object
my_powerlaw = displ$new(degrees)
# Estimate alpha value
est = estimate_pars(my_powerlaw)
# Estimate cutoff value as the one that minimizes the Kolmogorov distance between the data and the distribution model
if (cutoff == FALSE){
est2 = estimate_xmin(my_powerlaw)
my_powerlaw$setXmin(est2)
est = estimate_pars(my_powerlaw)
my_powerlaw$setPars(est$pars)
}
else{
my_powerlaw$setXmin(cutoff)
est = estimate_pars(my_powerlaw)
my_powerlaw$setPars(est$pars)
}
# Calculate likelihood of the model
likelihood = dist_ll(my_powerlaw)
# Calculate percentage of data covered by the powerlaw
percentage = length(degrees[which(degrees>=my_powerlaw$xmin)])/length(degrees)
#print(degrees[which(degrees>=my_powerlaw$xmin)])
# Data for plotting the results
data = plot(my_powerlaw)
fit = lines(my_powerlaw)
return(list(data, fit, my_powerlaw$xmin, my_powerlaw$pars, percentage, likelihood, my_powerlaw))
#return(c(my_powerlaw$xmin, my_powerlaw$pars))
#statistical_test = bootstrap_p(m, no_of_sims = 1000, threads = 2)
#p_value = statistical_test$p
}''')
# Make the R function available from Python:
powerlaw = robjects.globalenv['powerlaws']
# Prepare the degree sequence:
degree_map = self.create_degree_distribution_map().fa
degree_map = degree_map.tolist()
# Perform calculations:
power_law_result = powerlaw(degree_map, cutoff)
plotting_data = (power_law_result[0][0], power_law_result[0][1], power_law_result[1][0], power_law_result[1][1])
kmin = power_law_result[2][0]
alpha = power_law_result[3][0]
percentage = power_law_result[4][0]
likelihood = power_law_result[5][0]
my_powerlaw = power_law_result[6]
return (kmin, alpha, percentage, likelihood, plotting_data, my_powerlaw)
def bootstrap_powerlaw(self, my_powerlaw, N=100):
"""
bootstrap_powerlaw calculates the p-value for H0: the degree sequence comes from the power law distribution with the estimated alpha and cutoff parameters;
H1: it does not. The test is performed with the bootstrap_p function from the poweRlaw package, which simulates data from the distribution N times
and counts how often the distance between the theoretical and simulated distributions is larger than or equal to the one observed for the degree sequence.
"""
robjects.r('''
assess_p_value <- function(my_powerlaw, N){
statistical_test = bootstrap_p(my_powerlaw, no_of_sims = N, threads = 2)
return(statistical_test$p)
}''')
p_value = robjects.globalenv['assess_p_value']
p = p_value(my_powerlaw, N)[0]
return p
def plot_powerlaw(self, plotting_data, block = False):
"""
plot_powerlaw function visualises the power law fit and the data on the log log scale.
"""
FONT = 15
# Data preparation:
datax = plotting_data[0]
datay = plotting_data[1]
fitx = plotting_data[2]
fity = plotting_data[3]
# Plot settings:
plt.figure(figsize =(5,5))
plt.style.use('seaborn-whitegrid')
plt.xlabel("log k", fontsize = FONT)
plt.ylabel("log P(X<k)", fontsize = FONT)
plt.title("Power law fit", fontsize = FONT)
# Plotting:
plt.plot(np.log(datax), np.log(datay), "o", markersize=4, color="#1E88E5")
plt.plot(np.log(fitx), np.log(fity), linewidth = 3, color = "#FFC107")
plt.show(block = block)
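# --- Illustrative sketch (for reference only, not used by ETNA) ---
# The powerlaw() method above delegates the fit to the poweRlaw R package.
# As a rough pure-Python cross-check, a standard maximum-likelihood
# approximation of the exponent for degrees k >= kmin is
# alpha ~= 1 + n / sum(ln(k / (kmin - 0.5))) (Clauset, Shalizi & Newman, 2009).
# The helper name below is introduced here only for illustration.
def approximate_powerlaw_alpha(degree_sequence, kmin=1):
    """Approximate MLE of the power-law exponent for degrees >= kmin."""
    k = np.asarray([d for d in degree_sequence if d >= kmin], dtype=float)
    return 1.0 + len(k) / np.sum(np.log(k / (kmin - 0.5)))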
# Defining additional ipywidget that will perform data download after button hitting - DownloadButton
class DownloadButton(ipywidgets.Button):
"""
Download button with dynamic content
The content is generated using a callback when the button is clicked. It is defined as an extension of "button" class in ipywidgets (source: https://stackoverflow.com/questions/61708701/how-to-download-a-file-using-ipywidget-button).
"""
def __init__(self, filename: str, contents: Callable[[], str], **kwargs):
super(DownloadButton, self).__init__(**kwargs)
self.filename = filename
self.contents = contents
self.on_click(self.__on_click)
def __on_click(self, b):
contents: bytes = self.contents().encode('utf-8')
b64 = base64.b64encode(contents)
payload = b64.decode()
digest = hashlib.md5(contents).hexdigest() # bypass browser cache
id = f'dl_{digest}'
display(HTML(f"""
<html>
<body>
<a id="{id}" download="{self.filename}" href="data:text/csv;base64,{payload}" download>
</a>
<script>
(function download() {{
document.getElementById('{id}').click();
}})()
</script>
</body>
</html>
"""))
# Graphical User Interface:
class GUI_for_network_analysis:
def __init__(self):
# Initializing the variables and the GUI elements:
self.G = None
self.initial_info = widgets.HTML(value = "<b><font color='#555555';font size =5px;font family='Helvetica'>ETNA: Extensive Tool for Network Analysis</b>")
self.instruction_header = widgets.HTML(value = "<b><font color='#555555';font size =4px;font family='Helvetica'>Instruction:</b>")
self.instruction = widgets.HTML(value = "<b><font color='#555555';font size =2.5px;font family='Helvetica'>1. Provide a file name with the .graphml or .csv extension. <br>2. Hit the 'Prepare the network' button (Parallel links and nodes not from the largest component will be removed. The network is also set as undirected). <br>3. Choose the tab of interest. <br>4. Adjust the method settings if present.<br>5. Run the method by hitting the tab's 'Run' button. The calculations will be performed and the appropriate plot will be displayed on the right.<br>6. If you want to run a new analysis for a new network hit the 'Restart ETNA' button. </b>")
self.file_name_textbox = widgets.Text(value='Provide file name here',
placeholder='Type something',
description='Network:',
disabled=False,
align_items='center',
layout=Layout(width='40%')#, height='10px')
)
self.button_graph_preparation = widgets.Button(value=False,
description='Prepare the network',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check', # (FontAwesome names without the `fa-` prefix)
layout=Layout(width='40%', height='20%'),
style= {'button_color':'#FFAAA7'}
)
self.links_nodes_number_info = widgets.Label(value="")
self.label_centrality = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Histograms of centrality measures</b>")
self.centrality_choice = widgets.Dropdown(
options=['Choose from the list','Degree', 'Betweenness centrality', 'Closeness centrality',
'Eigenvector centrality', "Clustering coefficient"],
description='Measure: ',
disabled=False,
layout=Layout(width='90%')
)
self.button_centrality = widgets.Button(value=False,
description='Run',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check', # (FontAwesome names without the `fa-` prefix)
layout=Layout(width='90%', height='20%'),
style= {'button_color':'#98DDCA'}
)
self.centrality_out = widgets.Output()
self.info_mini = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Minimum: </b>")
self.info_mini_value = widgets.Label(value = "")
self.info_maxi = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Maximum: </b>")
self.info_maxi_value = widgets.Label(value = "")
self.info_avg = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Average: </b>")
self.info_avg_value = widgets.Label(value = "")
self.info_std = widgets.HTML(value="<b><font color='black';font size =2px;font family='Helvetica'>Standard deviation: </b>")
self.info_std_value = widgets.Label(value = "")
self.button_assortativity = widgets.Button(value=False,
description='Run',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check', # (FontAwesome names without the `fa-` prefix)
layout=Layout(width='90%', height='20%'),
style= {'button_color':'#98DDCA'}
) # bold text is possible (works); add "font_weight":"bold" to the style
self.label_corr_value = widgets.Label(value = "") # previously " "
self.label_ANND_plot = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Assortativity examination: Average Nearest Neighbour Degree (ANND) plot and degree correlation coefficient</b>")
self.label_ANND_plot_settings = widgets.Label(value = "ANND plot settings:")
self.ANND_plot_settings_normed = widgets.Checkbox(value=False,
description='Normed ANND',
disabled=False,
indent=False)
self.ANND_plot_settings_errorbar = widgets.Checkbox(value=False,
description='Errorbars',
disabled=False,
indent=False)
self.assortativity_out = widgets.Output()
self.hubs_impact_choice = widgets.Dropdown(
options=['Choose from the list','s1', 's2'],
description='Measure: ',
disabled=False,
layout=Layout(width='90%')
)
self.hubs_impact_button = widgets.Button(value=False,
description='Run',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check', # (FontAwesome names without the `fa-` prefix)
layout=Layout(width='90%', height='20%'),
style= {'button_color':'#98DDCA'}
)
self.label_hubs_impact = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Plots of s1 and s2</b>")
#self.label_hubs_impact_explain = widgets.Label(value = "Hubs impact examination consists of creating subnetworks.. and here insert the nice mathematical notation from the MSc thesis")
self.hubs_impact_out = widgets.Output()
self.button_robustness = widgets.Button(value=False,
description='Run',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check', # (FontAwesome names without the `fa-` prefix)
layout=Layout(width='90%', height='20%'),
style= {'button_color':'#98DDCA'}
)
self.robustness_degree = widgets.Checkbox(value=True,
description='Degree',
disabled=False,
indent=False)
self.robustness_betweenness = widgets.Checkbox(value=False,
description='Betweennness centrality',
disabled=False,
indent=False)
self.robustness_closeness = widgets.Checkbox(value=False,
description='Closeness centrality',
disabled=False,
indent=False)
self.robustness_eigenvector = widgets.Checkbox(value=False,
description='Eigenvector centrality',
disabled=False,
indent=False)
self.robustness_random = widgets.Checkbox(value=False,
description='Random failures',
disabled=False,
indent=False)
self.label_robustness_info = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Examination of the network robustness</b>")
self.label_robustness_settings = widgets.Label(value = "Choose metrics for the network robustness examination:")
self.robustness_out = widgets.Output()
self.robustness_random_label = widgets.Label(value = "Number of Monte Carlo repetitions for random failures")
self.robustness_random_value = widgets.IntSlider(value = 10, min=0, max=1000, step=10,
description='',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True
)
self.cascade_info = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Simulation of failure cascade</b>")
self.button_cascade = widgets.Button(value=False,
description='Run',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check', # (FontAwesome names without the `fa-` prefix)
layout=Layout(width='90%', height='20%'),
style= {'button_color':'#98DDCA'}
)
self.cascade_fraction_to_fail = widgets.FloatSlider(value=0.25, min=0, max=1, step=0.05,
description='',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.2f')
self.cascade_fraction_to_fail_label = widgets.Label(value = "Failure fraction")
self.cascade_out = widgets.Output()
self.label_powerlaw = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Fitting power law to the degree sequence using Maximum Likelihood estimator</b>")
self.powerlaw_settings = widgets.HTML(value = "Settings:")
self.powerlaw_pvalue = widgets.Checkbox(value=False,
description='Calculate p-value',
disabled=False,
indent=False)
self.bootstrap_settings_label = widgets.Label(value = "Number of simulations for bootstrap")
self.bootstrap_settings = widgets.IntSlider(value=100, min=50, max=1000, step=50,
description='',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True
)
self.bootstrap_settings.layout.visibility = 'hidden'
self.bootstrap_settings_label.layout.visibility = 'hidden'
self.cutoff_settings = widgets.Checkbox(value=True,
description='Cutoff value according to Kolmogorov distance',
disabled=False,
indent=False)
self.cutoff_label = widgets.Label(value = "Cutoff value")
self.cutoff_label.layout.visibility = 'hidden'
self.cutoff = widgets.IntSlider(value = 1, min=1, max=100, step=1,
description='',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True
)
self.cutoff.layout.visibility = 'hidden'
self.pvalue_label = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>P-value:</b>")
self.pvalue_value = widgets.Label(value="")
self.pvalue_label.layout.visibility = 'hidden'
self.pvalue_value.layout.visibility = 'hidden'
self.powerlaw_button = widgets.Button(value=False,
description='Run',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check', # (FontAwesome names without the `fa-` prefix)
layout=Layout(width='90%', height='20%'),
style= {'button_color':'#98DDCA'}
)
self.powerlaw_out = widgets.Output()
self.restart_button = widgets.Button(value=False,
description='Restart ETNA',
disabled=False,
button_style='', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
icon='check', # (FontAwesome names without the `fa-` prefix)
layout=Layout(width='40%', height='100%'),
style= {'button_color':'#FFD3B4'}
)
self.error_info = widgets.HTML(value = " ")
self.plot_label = widgets.HTML(value = "Plot and info")
self.download_button = DownloadButton(filename='data.csv', contents=lambda: f'', description='Download data')
self.download_button.layout.visibility = 'hidden'
self.download_button.layout.width = '90%'
self.download_button.style.button_color = '#D5ECC2'
self.dataframe = None
def button_graph_preparation_click(self, button):
"""
Defines what to do when the graph preparation button is clicked.
"""
self.clear()
# Error handling:
if self.file_name_textbox.value == "" or self.file_name_textbox.value == 'Provide file name here':
self.file_name_textbox.value = "No file name provided. Provide file name here."
return None
if ".graphml" not in self.file_name_textbox.value and ".csv" not in self.file_name_textbox.value:
self.file_name_textbox.value = "Incorrect file name. File must have .graphml or .csv extension."
return None
self.button_graph_preparation.description = "Preparing..."
self.error_info.value = " "
# Graph upload from the file:
self.G = My_Network(self.file_name_textbox.value)
# Graph preparation - removal of the parallel edges, non-connected components etc.:
self.G.prepare_the_network()
self.button_graph_preparation.description = "Network is ready! Now choose the tool below."
self.button_graph_preparation.style.button_color = '#D5ECC2'
self.links_nodes_number_info.value = "Number of nodes: "+str(self.G.G.num_vertices())+", Number of links: " + str(self.G.G.num_edges())
def centrality_button_click(self, b):
"""
Binds the centrality measure button from the centrality tab with the appropriate map (1), plot generation (2) and statistics calculations (3).
"""
self.clear()
with self.centrality_out:
if self.centrality_choice.value == "Choose from the list":
pass
else:
# 1):
if self.error() == True:
return None
else:
centrality_choices_functions = {'Degree':self.G.create_degree_distribution_map,
'Betweenness centrality':self.G.create_betweenness_distribution_map,
'Closeness centrality': self.G.create_closeness_distribution_map,
'Eigenvector centrality':self.G.create_eigenvector_distribution_map,
"Clustering coefficient": self.G.create_clustering_map}
my_map = centrality_choices_functions[self.centrality_choice.value]()
fig, ax = self.G.plot_map_histogram(my_map, self.centrality_choice.value) # 2)
self.retrieve_data(my_map, "Centrality and clustering")
my_map = list(my_map.fa)
# 3)
self.info_mini_value.value = str(min(my_map))
self.info_maxi_value.value = str(max(my_map))
self.info_avg_value.value = str(round(np.mean(my_map),4))
self.info_std_value.value = str(round(np.std(my_map),4))
self.info_mini = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Minimum: </b>")
self.info_maxi = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Maximum: </b>")
self.info_avg = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Average: </b>")
self.info_std = widgets.HTML(value="<b><font color='black';font size =2px;font family='Helvetica'>Standard deviation: </b>")
display(VBox(children = [
HBox(children= [self.info_mini, self.info_mini_value]),
HBox(children= [self.info_maxi, self.info_maxi_value]),
HBox(children= [self.info_avg, self.info_avg_value]),
HBox(children= [self.info_std, self.info_std_value])
]))
def assortativity_button_click(self, b):
"""
Binds the assortativity button with the ANND plot generation (1) and degree correlation calculations (2).
"""
self.clear()
if self.error() == True:
return None
else:
corr_value = round(self.G.calculate_assortativity_value(),3)
corr_meaning = "assortative" if corr_value>0 else "disassortative"
self.label_corr_value.value = "Degree correlation coefficient equals " + str(corr_value)+". Graph has "+ corr_meaning +' mixing patterns with regards to the degree.' # 2
with self.assortativity_out:
self.assortativity_out.clear_output()
self.G.plot_ANND(normed = self.ANND_plot_settings_normed.value, errorbar = self.ANND_plot_settings_errorbar.value, block = False) # 1
def hubs_impact_choice_plot(self, b):
"""
Binds the hubs impact button with the hubs impact plot generation. Data is first calculated by calling the hubs_impact_check function (1) and then plotted (2).
"""
self.clear()
with self.hubs_impact_out:
if self.hubs_impact_choice.value == "Choose from the list":
pass
else:
if self.error() == True:
return None
else:
if self.hubs_impact_choice.value == "s1":
Ns, Es, degrees_set = self.G.hubs_impact_check() # 1
self.G.plot_hubs_impact1(degrees_set, Es, block = False) # 2
if self.hubs_impact_choice.value == "s2":
Ns, Es, degrees_set = self.G.hubs_impact_check() # 1
self.G.plot_hubs_impact2(degrees_set, Es, Ns, block = False) # 2
def cascade_button_click(self, b):
"""
Binds the cascade button with the failure cascade simulation run (1), plotting (2) and the statistics calculations (3).
"""
self.clear()
if self.error() == True:
return None
else:
# Button settings:
self.button_cascade.style.button_color = '#FFAAA7'
self.button_cascade.description = "Running..."
# Data generation:
cascade_data = self.G.cascade_all_nodes(fraction_to_fail = self.cascade_fraction_to_fail.value) # 1)
self.retrieve_data(cascade_data, "Cascade")
with self.cascade_out:
self.cascade_out.clear_output()
self.G.plot_cascade(cascade_data, fraction_to_fail = self.cascade_fraction_to_fail.value) # 2)
# 3):
self.info_mini_value.value = str(min(cascade_data.values()))
self.info_maxi_value.value = str(max(cascade_data.values()))
self.info_avg_value.value = str(round(np.mean(list(cascade_data.values())),4))
self.info_std_value.value = str(round(np.std(list(cascade_data.values())),4))
self.info_mini = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Minimum: </b>")
self.info_maxi = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Maximum: </b>")
self.info_avg = widgets.HTML(value = "<b><font color='black';font size =2px;font family='Helvetica'>Average: </b>")
self.info_std = widgets.HTML(value="<b><font color='black';font size =2px;font family='Helvetica'>Standard deviation: </b>")
display(VBox(children = [
HBox(children= [self.info_mini, self.info_mini_value]),
HBox(children= [self.info_maxi, self.info_maxi_value]),
HBox(children= [self.info_avg, self.info_avg_value]),
HBox(children= [self.info_std, self.info_std_value])
]))
self.button_cascade.description = "Run failure cascade simulation"
self.button_cascade.style.button_color = '#98DDCA'
def robustness_button_click(self, b):
"""
Binds the robustness button with the robustness examination.
In the call the data is generated (1) and then plotted (2).
"""
self.clear()
if self.error() == True:
return None
else:
self.button_robustness.style.button_color = '#FFAAA7'
self.button_robustness.description = "Running..."
metrics_to_run = {self.robustness_degree:[self.G.create_degree_distribution_map, "Degree"],
self.robustness_betweenness:[self.G.create_betweenness_distribution_map, "Betweenness centrality"] ,
self.robustness_closeness:[self.G.create_closeness_distribution_map, 'Closeness centrality'],
self.robustness_eigenvector:[self.G.create_eigenvector_distribution_map,'Eigenvector centrality'],
self.robustness_random:[]}
results_to_plot = []
for metric in metrics_to_run.keys():
if metric.value == True:
if metric == self.robustness_random:
results = self.G.robustness_random_evaluation(N=self.robustness_random_value.value)
results_to_plot.append([results, "Random failures"])
else:
[function, metric_name] = metrics_to_run[metric]
map_G = function()
results = self.G.robustness_evaluation(map_G) # 1
results_to_plot.append([results, metric_name])
self.retrieve_data(results_to_plot, "Robustness")
with self.robustness_out:
self.robustness_out.clear_output()
self.G.plot_robustness(results_to_plot, block=True) # 2
self.button_robustness.description = "Run"
self.button_robustness.style.button_color = '#98DDCA'
def robustness_random_true(self, b):
"""
Function for handling the robustness settings for random failures.
It makes the slider for the number of Monte Carlo repetitions visible when the random failures measurement is chosen.
"""
if self.robustness_random.value == True:
self.robustness_random_label.layout.visibility = 'visible'
self.robustness_random_value.layout.visibility = 'visible'
else:
self.robustness_random_label.layout.visibility = 'hidden'
self.robustness_random_value.layout.visibility = 'hidden'
def powerlaw_button_click(self, b):
"""
Binds the powerlaw button with the power law adjustment to the degree sequence. Parameters are calculated (1), the fit is plotted (2) and the statistics are calculated (3).
"""
self.clear()
if self.error() == True:
return None
else:
pvalue = "Not calculated"
self.powerlaw_button.description = "Running..."
self.powerlaw_button.style.button_color = '#FFAAA7'
cutoff = self.cutoff.value if self.cutoff_settings.value == False else False
(kmin, alpha, percentage, likelihood, plotting_data, my_powerlaw) = self.G.powerlaw(cutoff) # 1)
if self.powerlaw_pvalue.value == True:
# calculate also p-value
N = self.bootstrap_settings.value
pvalue = self.G.bootstrap_powerlaw(my_powerlaw, N)
pvalue = str(round(pvalue, 4))
self.pvalue_label.layout.visibility = 'visible'
self.pvalue_value.layout.visibility = 'visible'
with self.powerlaw_out:
self.powerlaw_out.clear_output()
self.G.plot_powerlaw(plotting_data, block = True) # 2)
# 3:
self.info_mini.value = "<b><font color='black';font size =2px;font family='Helvetica'>Cutoff: </b>"
self.info_mini_value.value = str(kmin)
self.info_maxi.value = "<b><font color='black';font size =2px;font family='Helvetica'>Power law parameter alpha: </b>"
self.info_maxi_value.value = str(round(alpha,4))
if alpha>3 or alpha<2:
self.info_maxi_value.value+= ", ANOMALOUS REGIME!, standard: 2<alpha<3"
self.info_avg.value = "<b><font color='black';font size =2px;font family='Helvetica'>Percentage of data covered: </b>"
self.info_avg_value.value = str(round(percentage*100,4))
self.info_std.value = "<b><font color='black';font size =2px;font family='Helvetica'>Likelihood: </b>"
self.info_std_value.value = str(round(likelihood,4))
self.pvalue_value.value = pvalue
display(VBox(children = [
HBox(children= [self.info_mini, self.info_mini_value]),
HBox(children= [self.info_maxi, self.info_maxi_value]),
HBox(children= [self.info_std, self.info_std_value]),
HBox(children= [self.info_avg, self.info_avg_value]),
HBox(children= [self.pvalue_label, self.pvalue_value])
]))
self.powerlaw_button.description = "Run"
self.powerlaw_button.style.button_color = '#98DDCA'
def powerlaw_pvalue_true(self, b):
"""
Function for handling the powerlaw settings. It makes the bootstrap settings visible if the p-value is to be assessed (pvalue checkbox is True).
"""
if self.powerlaw_pvalue.value == True:
self.bootstrap_settings.layout.visibility = 'visible'
self.bootstrap_settings_label.layout.visibility = "visible"
else:
self.bootstrap_settings.layout.visibility = 'hidden'
self.bootstrap_settings_label.layout.visibility = "hidden"
def powerlaw_cutoff(self, b):
"""
Function for handling the powerlaw settings. It makes the cutoff choice bar visible if the default option for cutoff adjustment using the Kolmogorov distance is not chosen.
"""
if self.cutoff_settings.value == False:
self.cutoff_label.layout.visibility = "visible"
self.cutoff.layout.visibility = 'visible'
if self.error(return_message = False) == True:
return None
else:
degree_values = self.G.create_degree_distribution_map().fa
self.cutoff.min = min(degree_values)
self.cutoff.max = max(degree_values)
self.cutoff.value = self.cutoff.min
else:
self.cutoff_label.layout.visibility = "hidden"
self.cutoff.layout.visibility = 'hidden'
def display(self):
"""
Displays all the elements of the GUI in the appropriate order to form the interface.
"""
display(self.initial_info)
display(self.instruction_header)
display(self.instruction)
preparation = VBox(children = [self.file_name_textbox, self.button_graph_preparation, self.links_nodes_number_info], layout = Layout(width = "100%"))
display(preparation)
tabs_preparation = self.tabs
outs = VBox(children = [self.centrality_out, self.hubs_impact_out,
self.assortativity_out, self.label_corr_value,
self.robustness_out, self.cascade_out, self.powerlaw_out,
self.download_button
]) # self.clustering_out
all = HBox(children = [tabs_preparation, outs])
display(all)
display(self.error_info)
display(self.restart_button)
def bind(self):
"""
Binds buttons and other interactivities with the corresponding action functions.
"""
# Bind prepare graph button with the preparation function:
self.button_graph_preparation.on_click(self.button_graph_preparation_click)
# Bind centrality choice button with the centrality examination and centrality tab
self.button_centrality.on_click(self.centrality_button_click)
self.tab_centrality = VBox(children=[self.label_centrality, self.centrality_choice, self.button_centrality])
# Bind hubs_impact button with the plot generation and hubs_impact tab
self.hubs_impact_button.on_click(self.hubs_impact_choice_plot)
self.tab_hubs_impact = VBox(children=[self.label_hubs_impact, self.hubs_impact_choice, self.hubs_impact_button])
# Bind assortativity button with the assortativity examination and assortativity tab
self.button_assortativity.on_click(self.assortativity_button_click)
self.tab_assortativity = VBox(children=[self.label_ANND_plot, self.label_ANND_plot_settings,
self.ANND_plot_settings_errorbar, self.ANND_plot_settings_normed, self.button_assortativity
])
# Bind robustness button with the robustness examination and robustness tab
self.robustness_random_results = interactive_output(self.robustness_random_true, {"b":self.robustness_random}) #interactive_output(self.robustness_random, {"b":self.robustness_random_true})
self.button_robustness.on_click(self.robustness_button_click)
self.robustness = VBox(children=[self.label_robustness_info, self.label_robustness_settings, self.robustness_degree, self.robustness_betweenness,
self.robustness_closeness,
self.robustness_eigenvector,
self.robustness_random,
self.robustness_random_results,
self.robustness_random_label,
self.robustness_random_value,
self.button_robustness])
# Bind cascade button with the failure cascade examination and cascade tab
self.button_cascade.on_click(self.cascade_button_click)
self.tab_cascade = VBox(children=[self.cascade_info, HBox(children = [self.cascade_fraction_to_fail_label, self.cascade_fraction_to_fail]),
self.button_cascade])
# Bind powerlaw button with the powerlaw examination, bind powerlaw settings with the corresponding actions, add all to the powerlaw tab
self.powerlaw_button.on_click(self.powerlaw_button_click)
self.powerlaw_bootstrap = interactive_output(self.powerlaw_pvalue_true, {'b':self.powerlaw_pvalue})
self.powerlaw_cutoff = interactive_output(self.powerlaw_cutoff, {'b':self.cutoff_settings})
self.tab_powerlaw = VBox(children = [self.label_powerlaw, self.powerlaw_settings, self.powerlaw_pvalue,
self.powerlaw_bootstrap,
self.bootstrap_settings_label, self.bootstrap_settings,
self.powerlaw_cutoff, self.cutoff_settings, self.cutoff_label,
self.cutoff,
self.powerlaw_button])
# Joining tabs in the GUI
self.tabs = widgets.Accordion(children = [self.tab_centrality, self.tab_powerlaw,
self.tab_hubs_impact, self.tab_assortativity, self.robustness, self.tab_cascade],
layout=Layout(width='40%', min_width = "300px",
), selected_index = None) # self.tab_clustering used to be here
#layout in_height='500px',max_height='500px', display='flex'align_items='stretch'
# Additional tabs' settings
self.tabs.set_title(0, '> Centrality and clustering ')
self.tabs.set_title(1, '> Power law fitting')
self.tabs.set_title(2, '> Subnetworks: s1 and s2')
self.tabs.set_title(3, '> Assortativity')
self.tabs.set_title(4, '> Robustness')
self.tabs.set_title(5, '> Failure cascade')
# Bind restart button with the restart function
self.restart_button.on_click(self.gui_restart)
def gui_restart(self,b):
"""
Sets everything to the initial settings by cleaning the output widgets, fixing colors, bringing original texts to the labels and buttons.
"""
self.G = None
self.file_name_textbox.value = "Provide file name here"
self.button_graph_preparation.description = "Prepare the network"
self.button_graph_preparation.style.button_color = "#FFAAA7"
self.links_nodes_number_info.value = ""
self.centrality_choice.value = "Choose from the list"
self.centrality_out.clear_output()
#self.clustering_out.clear_output()
self.hubs_impact_choice.value = "Choose from the list"
self.hubs_impact_out.clear_output()
self.label_corr_value.value = ""
self.ANND_plot_settings_normed.value = False
self.ANND_plot_settings_errorbar.value = False
self.assortativity_out.clear_output()
self.cascade_fraction_to_fail.value = 0.25
self.cascade_out.clear_output()
self.robustness_degree.value = False
self.robustness_betweenness.value = False
self.robustness_closeness.value = False
self.robustness_eigenvector.value = False
self.robustness_random.value = False
self.robustness_out.clear_output()
self.powerlaw_pvalue.value = False
self.cutoff_settings.value = True
self.powerlaw_out.clear_output()
#self.data_preview.clear_output()
#self.data_preview_button.layout.visibility = 'hidden'
self.download_button.layout.visibility = 'hidden'
def error(self, return_message = True):
"""
Used for error handling - checks whether a file has been provided and the network prepared. This function is always called before running any of the methods in the GUI.
"""
if self.G == None or self.file_name_textbox.value == "No file name provided. Provide file name here." or self.file_name_textbox.value == "":
if return_message==True:
self.error_info.value = "<b><font color='#FFAAA7';font size =3px;font family='Helvetica'>Cannot use the method. Provide file name and prepare the network first.</b>"
return True
def clear(self):
"""
Clears the outputs. Used to make previous plots and statistics disappear from the GUI when the new method is called.
This function is always called before running any of the methods in the GUI.
"""
self.centrality_out.clear_output()
self.hubs_impact_out.clear_output()
self.assortativity_out.clear_output()
self.robustness_out.clear_output()
#self.clustering_out.clear_output()
self.cascade_out.clear_output()
self.powerlaw_out.clear_output()
self.label_corr_value.value = ""
#self.data_preview.clear_output()
#self.data_preview_button.layout.visibility = 'hidden'
self.download_button.layout.visibility = 'hidden'
def retrieve_data(self, data, method):
"""
Used to gather the data from the method functions so that it is downloadable.
Called in 3 cases - when the robustness, cascade or Centrality and clustering methods are chosen.
"""
if method == "Centrality and clustering":
my_map = data
my_map_values = my_map.a[self.G.G.get_vertices()]
nodes = self.G.G.get_vertices()
self.dataframe = pd.DataFrame({"NodeIndex":nodes, "MeasureValue": my_map_values})
#self.data_preview_button.layout.visibility = 'visible'
self.download_button.layout.visibility = 'visible'
self.dataframe = self.dataframe.to_csv()
self.download_button.contents = lambda: self.dataframe
if method == "Robustness":
results_to_plot = data
dataframe = {}
for row in results_to_plot:
dataframe[row[1]] = row[0]
self.dataframe = pd.DataFrame(dataframe)
self.dataframe["RemovedFraction"] = fractions = [i/100 for i in range(0,100)]
self.dataframe = self.dataframe[['RemovedFraction'] + [col for col in self.dataframe.columns if col != 'RemovedFraction' ]]
self.download_button.layout.visibility = 'visible'
self.dataframe = self.dataframe.to_csv()
self.download_button.contents = lambda: self.dataframe
if method == "Cascade":
nodes = data.keys()
values = data.values()
self.dataframe = pd.DataFrame({"NodeIndex":nodes, "Cascade size": values})
self.download_button.layout.visibility = 'visible'
self.dataframe = self.dataframe.to_csv()
self.download_button.contents = lambda: self.dataframe
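# Summary of the CSV layouts produced by retrieve_data() for the download button:
#   "Centrality and clustering" -> columns: NodeIndex, MeasureValue
#   "Robustness"                -> columns: RemovedFraction plus one column per selected metric
#   "Cascade"                   -> columns: NodeIndex, Cascade size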
###Output
* installing *source* package ‘poweRlaw’ ...
** package ‘poweRlaw’ successfully unpacked and MD5 sums checked
** using staged installation
** R
** data
** demo
** inst
** byte-compile and prepare package for lazy loading
** help
*** installing help indices
*** copying figures
** building package indices
** installing vignettes
** testing if installed package can be loaded from temporary location
** testing if installed package can be loaded from final location
** testing if installed package keeps a record of temporary installation path
* DONE (poweRlaw)
###Markdown
Display ETNA
###Code
G = GUI_for_network_analysis()
G.bind()
G.display()
###Output
_____no_output_____ |
Energy_min.ipynb | ###Markdown
Case I
###Code
M=16*8
N=10000
a=-8
b=8
c=-4
d=4
gx=1
gy=4
saving_time=50
potential=lambda x,y:(gx**2*x**2+gy**2*y**2)/2
psi0= lambda x,y: (gx*gy)**(1/4)*np.exp(-(gx**2*x**2+gy**2*y**2)/2)/np.pi**(1/2)
beta=200.
dt=0.001
eps=1
t, X, Y, psi=td_tssp_2d_pbc_bis(M, N, a, b, c, d, psi0, potential, dt, beta, eps,saving_time)
V = potential(X, Y) / eps
x_spacing=(b-a)/M
y_spacing=(d-c)/M
En = np.empty(len(psi))
for i in range (len(psi)):
En[i]=energy_gpe(psi[i], V, beta, x_spacing, y_spacing)
plt.figure()
plt.plot(En)
plt.show()
mu_g = mu_gpe(psi[-1],V,beta,x_spacing,y_spacing)
x_rms = np.sqrt(mean_value_bis(fx2,psi[-1],a,b,c,d,M))
y_rms = np.sqrt(mean_value_bis(fy2,psi[-1],a,b,c,d,M))
print('x_rms={:.5}'.format(x_rms))
print('y_rms={:.4}'.format(y_rms))
print('E_g={:.6}'.format(En[-1]))
print('mu_g={:.6}'.format(mu_g))
%matplotlib notebook
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
psi2=np.abs(psi)**2
zmax=np.max(psi2[-1])
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_zlim3d(0, zmax)
ax.set_title("Ground state")
surf = ax.plot_surface(X, Y, psi2[-1], cmap=cm.coolwarm,
linewidth=0, antialiased=False)
###Output
_____no_output_____
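###Markdown
For reference: with $\varepsilon = 1$ (the `eps` parameter above), a standard form of the Gross-Pitaevskii energy functional and chemical potential that `energy_gpe` and `mu_gpe` presumably discretize (their definitions lie outside this section) is
$$E[\psi]=\int\Big(\tfrac{1}{2}|\nabla\psi|^2+V(x,y)\,|\psi|^2+\tfrac{\beta}{2}|\psi|^4\Big)\,dx\,dy,\qquad \mu[\psi]=E[\psi]+\tfrac{\beta}{2}\int|\psi|^4\,dx\,dy .$$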
###Markdown
Case II
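Same solver as Case I, but with an isotropic trap ($\gamma_x=\gamma_y=1$) plus a repulsive Gaussian obstacle, so the potential below is $V(x,y)=\tfrac{1}{2}(\gamma_x^2x^2+\gamma_y^2y^2)+w_0\,e^{-\delta\left((x-r_0)^2+y^2\right)}$ with $w_0=4$, $\delta=1$, $r_0=1$.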
###Code
M=16*8
N=10000
a=-8
b=8
c=-8
d=8
gx=1
gy=1
w0, delt, r0 = 4., 1., 1.
potential=lambda x,y:(gx**2*x**2+gy**2*y**2)/2 + w0*np.exp(-delt*((x-r0)**2+y**2))
psi0= lambda x,y: (gx*gy)**(1/4)*np.exp(-(gx**2*x**2+gy**2*y**2)/2)/np.pi**(1/2)
beta=200.
dt=0.001
saving_time=50
eps=1
t, X, Y, psi=td_tssp_2d_pbc_bis(M, N, a, b, c, d, psi0, potential, dt, beta, eps,saving_time)
V = potential(X, Y) / eps
x_spacing=(b-a)/M
y_spacing=(d-c)/M
En = np.empty(len(psi))
for i in range (len(psi)):
En[i]=energy_gpe(psi[i], V, beta, x_spacing, y_spacing)
plt.figure()
plt.plot(En)
plt.show()
mu_g = mu_gpe(psi[-1],V,beta,x_spacing,y_spacing)
x_rms = np.sqrt(mean_value_bis(fx2,psi[-1],a,b,c,d,M))
y_rms = np.sqrt(mean_value_bis(fy2,psi[-1],a,b,c,d,M))
print('x_rms={:.5}'.format(x_rms))
print('y_rms={:.4}'.format(y_rms))
print('E_g={:.6}'.format(En[-1]))
print('mu_g={:.6}'.format(mu_g))
psi2=np.abs(psi)**2
zmax=np.max(psi2[-1])
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.set_zlim3d(0, zmax)
ax.set_title("Ground state")
surf = ax.plot_surface(X, Y, psi2[-1], cmap=cm.coolwarm,
linewidth=0, antialiased=False)
###Output
_____no_output_____ |
Models/Bonsai/IHTBonsai.ipynb | ###Markdown
Imports
###Code
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from IPython.display import display, clear_output
import sys
import time
import os
import time
import pickle
%matplotlib inline
###Output
_____no_output_____
###Markdown
Importing training data
###Code
dir_path = (os.getcwd() + "\\").replace("\\","/") # If this does not work, change it to the path where the data is stored.
print("Working directory is : ", dir_path)
#Loading Pre-processed dataset for Bonsai
dirc = dir_path + '/../../Datasets/'
print(os.listdir(dirc))
Xtrain = np.load(dirc + 'Xtrain2.npy').reshape(-1,28*28)
Ytrain = np.load(dirc + 'Ytrain2.npy')
Xtest = np.load(dirc + 'Xtest.npy').reshape(-1,28*28)
Ytest = np.load(dirc + 'Ytest.npy')
Ytrain.shape
from sklearn.preprocessing import LabelEncoder as LE
from sklearn.preprocessing import OneHotEncoder as OHE
mo1 = LE()
mo2 = OHE()
Ytrain = mo2.fit_transform(mo1.fit_transform((Ytrain.ravel())).reshape(-1,1)).todense()
Ytest = mo2.transform(mo1.transform((Ytest.ravel())).reshape(-1,1)).todense()
Xtrain.shape, Xtest.shape,Ytrain.shape,Ytest.shape
# N, dDims = X_train.shape
N, dDims = Xtrain.shape
# nClasses = len(np.unique(Y_train))
nClasses = Ytrain.shape[1]
print('Training Size:',N,',Data Dims:', dDims,',No. Classes:', nClasses)
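# The Bonsai model defined below scores an input x by first projecting it,
# x_ = Z x / d_hat (d_hat = projection dimension), and then summing over all
# tree nodes j the contribution I_j(x) * (W_j^T x_) * tanh(sigma * V_j^T x_),
# where I_j(x) is the (soft) path indicator controlled by theta (T) and sigmaI.
# This is the score aggregation implemented in the __call__ method.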
class Bonsai():
def __init__(self, nClasses, dDims, pDims, tDepth, sigma, W=None, T=None, V=None, Z=None):
'''
dDims : data Dimensions
pDims : projected Dimesions
nClasses : num Classes
tDepth : tree Depth
Expected Dimensions:
--------------------
Bonsai Params // Optional
W [numClasses*totalNodes, projectionDimension]
V [numClasses*totalNodes, projectionDimension]
Z [projectionDimension, dataDimension + 1]
T [internalNodes, projectionDimension]
internalNodes = 2**treeDepth - 1
totalNodes = 2*internalNodes + 1
sigma - tanh non-linearity
sigmaI - Indicator function for node probabilities
sigmaI - has to be set to infinity(1e9 for practicality)
while doing testing/inference
numClasses will be reset to 1 in binary case
'''
# Initialization of parameter variables
self.dDims = dDims
self.pDims = pDims
# If number of classes is two we dont need to calculate other class probability
# if nClasses == 2:
# self.nClasses = 1
# else:
# self.nClasses = nClasses
self.nClasses = nClasses
self.tDepth = tDepth
self.sigma = sigma
self.iNodes = 2**self.tDepth - 1
self.tNodes = 2*self.iNodes + 1
self.Z = tf.Variable(tf.random_normal([self.pDims, self.dDims]), name='Z', dtype=tf.float32)
self.W = tf.Variable(tf.random_normal([self.nClasses * self.tNodes, self.pDims]), name='W', dtype=tf.float32)
self.V = tf.Variable(tf.random_normal([self.nClasses * self.tNodes, self.pDims]), name='V', dtype=tf.float32)
self.T = tf.Variable(tf.random_normal([self.iNodes, self.pDims]), name='T', dtype=tf.float32)
self.assert_params()
self.score = None
self.X_ = None
self.prediction = None
def __call__(self, X, sigmaI):
'''
Function to build the Bonsai Tree graph
Expected Dimensions
-------------------
X is [_, self.dDims]
X_ is [_, self.pDims]
'''
errmsg = "Dimension Mismatch, X is [_, self.dataDimension]"
assert (len(X.shape) == 2 and int(X.shape[1]) == self.dDims), errmsg
# return score, X_ if exists where X_ is the projected X, i.e X_ = (Z.X)/(D^)
if self.score is not None:
return self.score, self.X_
X_ = tf.divide(tf.matmul(self.Z, X, transpose_b=True),self.pDims) # dimensions are D^x1
# For Root Node score...
self.__nodeProb = [] # node probability list
self.__nodeProb.append(1) # probability of x passing through root is 1.
W_ = self.W[0:(self.nClasses)]# first K trees root W params : KxD^
V_ = self.V[0:(self.nClasses)]# first K trees root V params : KxD^
# All score sums variable initialized to root score... for each tree (Note: can be negative)
score_ = self.__nodeProb[0]*tf.multiply(tf.matmul(W_, X_), tf.tanh(self.sigma * tf.matmul(V_, X_))) # : Kx1
# Adding rest of the nodes scores...
for i in range(1, self.tNodes):
# current node is i
# W, V of K different trees for current node
W_ = self.W[i * self.nClasses:((i + 1) * self.nClasses)]# : KxD^
V_ = self.V[i * self.nClasses:((i + 1) * self.nClasses)]# : KxD^
# i's parent node shared theta param reshaping to 1xD^
T_ = tf.reshape(self.T[int(np.ceil(i / 2.0) - 1.0)],[-1, self.pDims])# : 1xD^
# Calculating probability that x should come to this node next given it is in parent node...
prob = tf.divide((1 + ((-1)**(i + 1))*tf.tanh(tf.multiply(sigmaI, tf.matmul(T_, X_)))),2.0) # : scalar 1x1
# Actual probability that x will come to this node...p(parent)*p(this|parent)...
prob = self.__nodeProb[int(np.ceil(i / 2.0) - 1.0)] * prob # : scalar 1x1
# adding prob to node prob list
self.__nodeProb.append(prob)
# New score addes to sum of scores...
score_ += self.__nodeProb[i]*tf.multiply(tf.matmul(W_, X_), tf.tanh(self.sigma * tf.matmul(V_, X_))) # Kx1
self.score = score_
self.X_ = X_
return self.score, self.X_
def predict(self):
'''
Takes in a score tensor and outputs an integer class for each data point
'''
if self.prediction is not None:
return self.prediction
# If number of classes is two we dont need to calculate other class probability
if self.nClasses > 2:
# Finding argmax over first axis (k axis)
self.prediction = tf.argmax(tf.transpose(self.score), 1) # score is kx1
else:
# Finding argmax over score and 0 score is 1x1
self.prediction = tf.argmax(tf.concat([tf.transpose(self.score),0*tf.transpose(self.score)], 1), 1)
return self.prediction
def assert_params(self):
# Asserting Initializaiton
errRank = "All Parameters must has only two dimensions shape = [a, b]"
assert len(self.W.shape) == len(self.Z.shape), errRank
assert len(self.W.shape) == len(self.T.shape), errRank
assert len(self.W.shape) == 2, errRank
msg = "W and V should be of same Dimensions"
assert self.W.shape == self.V.shape, msg
errW = "W and V are [numClasses*totalNodes, projectionDimension]"
assert self.W.shape[0] == self.nClasses * self.tNodes, errW
assert self.W.shape[1] == self.pDims, errW
errZ = "Z is [projectionDimension, dataDimension]"
assert self.Z.shape[0] == self.pDims, errZ
assert self.Z.shape[1] == self.dDims, errZ
errT = "T is [internalNodes, projectionDimension]"
assert self.T.shape[0] == self.iNodes, errT
assert self.T.shape[1] == self.pDims, errT
assert int(self.nClasses) > 0, "numClasses should be > 1"
msg = "# of features in data should be > 0"
assert int(self.dDims) > 0, msg
msg = "Projection should be > 0 dims"
assert int(self.pDims) > 0, msg
msg = "treeDepth should be >= 0"
assert int(self.tDepth) >= 0, msg
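# --- Illustrative sketch: the hard-thresholding (IHT) step used by the trainer below ---
# BonsaiTrainer.hardThreshold keeps only the largest-magnitude fraction s of a
# parameter tensor and zeroes the rest. The standalone numpy helper below
# mirrors that idea for clarity; it is a sketch, not part of the trainer.
def hard_threshold_sketch(A, s):
    """Zero all but the top s-fraction (by magnitude) of the entries of A."""
    th = np.percentile(np.abs(A).ravel(), (1 - s) * 100.0, interpolation='higher')
    return np.where(np.abs(A) < th, 0.0, A)

# Example: hard_threshold_sketch(np.array([0.1, -0.5, 0.05, 2.0]), s=0.5)
# keeps -0.5 and 2.0 and zeroes the two smaller-magnitude entries.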
class BonsaiTrainer():
def __init__(self, tree, lW, lT, lV, lZ, lr, X, Y, sW, sV, sZ, sT):
'''
tree - initialised Bonsai object (graph)...
lW, lT, lV and lZ are regularisers to Bonsai Params...
sW, sT, sV and sZ are sparsity factors to Bonsai Params...
lr - learning rate for the optimizer...
X is the Data Placeholder - Dims [_, dataDimension]
Y - Label placeholder for loss computation
The loss is chosen automatically: cross-entropy when nClasses > 2, a hinge-style margin loss otherwise.
'''
# Initialization of training parameters
self.tree = tree
# regularization params lambdas(l) (all are scalars)
self.lW = lW
self.lV = lV
self.lT = lT
self.lZ = lZ
# sparsity parameters (scalars all...) will be used to calculate percentiles to make other cells zero
self.sW = sW
self.sV = sV
self.sT = sT
self.sZ = sZ
# placeholders for inputs and labels
self.Y = Y # _ x nClasses
self.X = X # _ x D
# learning rate
self.lr = lr
# Asserting initialization
self.assert_params()
# place holder for path selection parameter sigmaI
self.sigmaI = tf.placeholder(tf.float32, name='sigmaI')
# invoking __call__ of tree getting initial values of score and projected X
self.score, self.X_ = self.tree(self.X, self.sigmaI)
# defining loss function tensorflow graph variables.....
self.loss, self.marginLoss, self.regLoss = self.lossGraph()
# defining single training step graph process ...
self.tree.TrainStep = tf.train.AdamOptimizer(self.lr).minimize(self.loss)
self.trainStep = self.tree.TrainStep
# defining accuracy and prediction graph objects
self.accuracy = self.accuracyGraph()
self.prediction = self.tree.predict()
# set all sparsity parameters above 0.99 if you don't want to use IHT (dense training)
if self.sW > 0.99 and self.sV > 0.99 and self.sZ > 0.99 and self.sT > 0.99:
self.isDenseTraining = True
else:
self.isDenseTraining = False
# setting the hard thresholding graph objects
self.hardThrsd()
def hardThrsd(self):
'''
Set up for hard Thresholding Functionality
'''
# place holders for sparse parameters....
self.__Wth = tf.placeholder(tf.float32, name='Wth')
self.__Vth = tf.placeholder(tf.float32, name='Vth')
self.__Zth = tf.placeholder(tf.float32, name='Zth')
self.__Tth = tf.placeholder(tf.float32, name='Tth')
# assigning the thresholded values to params as a graph object for tensorflow....
self.__Woph = self.tree.W.assign(self.__Wth)
self.__Voph = self.tree.V.assign(self.__Vth)
self.__Toph = self.tree.T.assign(self.__Tth)
self.__Zoph = self.tree.Z.assign(self.__Zth)
# grouping the graph objects as one object....
self.hardThresholdGroup = tf.group(
self.__Woph, self.__Voph, self.__Toph, self.__Zoph)
def hardThreshold(self, A, s):
'''
Hard thresholding function on Tensor A with sparsity s
'''
# copying to avoid errors....
A_ = np.copy(A)
# flattening the tensor...
A_ = A_.ravel()
if len(A_) > 0:
# calculating the threshold value for sparse limit...
th = np.percentile(np.abs(A_), (1 - s) * 100.0, interpolation='higher')
# making sparse.......
A_[np.abs(A_) < th] = 0.0
# reconstructing in actual shape....
A_ = A_.reshape(A.shape)
return A_
def accuracyGraph(self):
'''
Accuracy Graph to evaluate accuracy when needed
'''
if (self.tree.nClasses > 1):
correctPrediction = tf.equal(tf.argmax(tf.transpose(self.score), 1), tf.argmax(self.Y, 1))
self.accuracy = tf.reduce_mean(tf.cast(correctPrediction, tf.float32))
else:
# some accuracy functional analysis for 2 classes could be different from this...
y_ = self.Y * 2 - 1
correctPrediction = tf.multiply(tf.transpose(self.score), y_)
correctPrediction = tf.nn.relu(correctPrediction)
correctPrediction = tf.ceil(tf.tanh(correctPrediction)) # final predictions.... round to(0 or 1)
self.accuracy = tf.reduce_mean(
tf.cast(correctPrediction, tf.float32))
return self.accuracy
def lossGraph(self):
'''
Loss Graph for given tree
'''
# regularization losses.....
self.regLoss = 0.5 * (self.lZ * tf.square(tf.norm(self.tree.Z)) +
self.lW * tf.square(tf.norm(self.tree.W)) +
self.lV * tf.square(tf.norm(self.tree.V)) +
self.lT * tf.square(tf.norm(self.tree.T)))
# empirical loss.....
if (self.tree.nClasses > 2):
'''
Cross Entropy loss for MultiClass case in joint training for
faster convergence
'''
# cross entropy loss....
self.marginLoss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits_v2(logits=tf.transpose(self.score),
labels=tf.stop_gradient(self.Y)))
else:
# sigmoid loss....
self.marginLoss = tf.reduce_mean(tf.nn.relu(1.0 - (2 * self.Y - 1) * tf.transpose(self.score)))
# adding the losses...
self.loss = self.marginLoss + self.regLoss
return self.loss, self.marginLoss, self.regLoss
def assert_params(self):
# asserting the initialization....
err = "sparsity must be between 0 and 1"
assert self.sW >= 0 and self.sW <= 1, "W " + err
assert self.sV >= 0 and self.sV <= 1, "V " + err
assert self.sZ >= 0 and self.sZ <= 1, "Z " + err
assert self.sT >= 0 and self.sT <= 1, "T " + err
errMsg = "Dimension Mismatch, Y has to be [_, " + str(self.tree.nClasses) + "]"
errCont = " numClasses are 1 in case of Binary case by design"
assert (len(self.Y.shape) == 2 and self.Y.shape[1] == self.tree.nClasses), errMsg + errCont
def train(self, batchSize, totalEpochs, sess, Xtrain, Xval, Ytrain, Yval, htc):
iht = 0 # to keep a note if thresholding has been started ...
numIters = Xtrain.shape[0] / batchSize # number of batches at a time...
totalBatches = numIters * totalEpochs # total number of batch operations...
treeSigmaI = 1 # controls the fidelity of the approximation too high can saturate tanh.
maxTestAcc = -10000
itersInPhase = 0
for i in range(totalEpochs):
print("\nEpoch Number: " + str(i))
# defining training acc and loss
trainAcc = 0.0
trainLoss = 0.0
numIters = int(numIters)
for j in range(numIters):
# creating a batch sequentially..... could also be done randomly using the choice function...
mini_batchX = Xtrain[j*batchSize:(j+1)*batchSize,:] # B x D
mini_batchY = Ytrain[j*batchSize:(j+1)*batchSize] # B x
# feed for training using tensorflow graph based gradient descent approach......
_feed_dict = {self.X: mini_batchX, self.Y: mini_batchY,
self.sigmaI: treeSigmaI}
# training the tensorflow graph
_, batchLoss, batchAcc = sess.run(
[self.trainStep, self.loss, self.accuracy],
feed_dict=_feed_dict)
# calculating acc....
trainAcc += batchAcc
trainLoss += batchLoss
# to update sigmaI.....
if (itersInPhase % 100 == 0):
# Making a random batch....
indices = np.random.choice(Xtrain.shape[0], 100)
rand_batchX = Xtrain[indices, :]
rand_batchY = Ytrain[indices, :]
rand_batchY = np.reshape(rand_batchY, [-1, self.tree.nClasses])
_feed_dict = {self.X: rand_batchX,
self.sigmaI: treeSigmaI}
# Projected matrix...
Xcapeval = self.X_.eval(feed_dict=_feed_dict) # D^ x 1
# theta value... current...
Teval = self.tree.T.eval() # iNodes x D^
# current sum of all internal nodes sum(abs(theta^T.Z.x): iNodes x miniBS) : 1x1
sum_tr = 0.0
for k in range(0, self.tree.iNodes):
sum_tr += (np.sum(np.abs(np.dot(Teval[k], Xcapeval))))
if(self.tree.iNodes > 0):
sum_tr /= (100 * self.tree.iNodes) # normalizing all sums
sum_tr = 0.1 / sum_tr # inverse of average sum
else:
sum_tr = 0.1
# thresholding inverse of sum as min(1000, sum_inv*2^(current batch number / (total batches / 30)))
sum_tr = min(
1000, sum_tr * (2**(float(itersInPhase) /
(float(totalBatches) / 30.0))))
# assigning higher values as convergence is reached...
treeSigmaI = sum_tr
itersInPhase+=1
# to start hard thresholding after half_time(could vary) ......
if((itersInPhase//numIters > htc*totalEpochs) and (not self.isDenseTraining)):
if(iht == 0):
print('\n\nHard Thresolding Started\n\n')
iht = 1
# getting the current estimates of W,V,Z,T...
currW = self.tree.W.eval()
currV = self.tree.V.eval()
currZ = self.tree.Z.eval()
currT = self.tree.T.eval()
# Setting a method to make some values of matrix zero....
self.__thrsdW = self.hardThreshold(currW, self.sW)
self.__thrsdV = self.hardThreshold(currV, self.sV)
self.__thrsdZ = self.hardThreshold(currZ, self.sZ)
self.__thrsdT = self.hardThreshold(currT, self.sT)
# running the hard thresholding graph....
fd_thrsd = {self.__Wth: self.__thrsdW, self.__Vth: self.__thrsdV,
self.__Zth: self.__thrsdZ, self.__Tth: self.__thrsdT}
sess.run(self.hardThresholdGroup, feed_dict=fd_thrsd)
print("Train Loss: " + str(trainLoss / numIters) +
" Train accuracy: " + str(trainAcc / numIters))
# calculating the test accuracies with sigmaI as expected -> inf.. = 10^9
oldSigmaI = treeSigmaI
treeSigmaI = 1e9
# test feed for tf...
_feed_dict = {self.X: Xval, self.Y: Yval,
self.sigmaI: treeSigmaI}
# calculating losses....
testAcc, testLoss, regTestLoss = sess.run([self.accuracy, self.loss, self.regLoss], feed_dict=_feed_dict)
if maxTestAcc <= testAcc:
maxTestAccEpoch = i
maxTestAcc = testAcc
print("Test accuracy %g" % testAcc)
print("MarginLoss + RegLoss: " + str(testLoss - regTestLoss) +
" + " + str(regTestLoss) + " = " + str(testLoss) + "\n", end='\r')
# time.sleep(0.1)
# clear_output()
treeSigmaI = oldSigmaI
# sigmaI has to be set to infinity to ensure
# only a single path is used in inference
treeSigmaI = 1e9
print("\nMaximum Test accuracy at compressed" +
" model size(including early stopping): " +
str(maxTestAcc) + " at Epoch: " +
str(maxTestAccEpoch + 1) + "\nFinal Test" +
" Accuracy: " + str(testAcc))
tf.reset_default_graph()
sess = tf.InteractiveSession()
tree = Bonsai(nClasses = nClasses, dDims = dDims, pDims =20, tDepth = 3, sigma = 1.0)
X = tf.placeholder("float32", [None, dDims])
Y = tf.placeholder("float32", [None, nClasses])
bonsaiTrainer = BonsaiTrainer(tree, lW = 0.00001, lT = 0.00001, lV = 0.00001, lZ = 0.0000001, lr = 0.01, X = X, Y = Y,
sZ = 0.6999, sW = 0.3999, sV = 0.3999, sT = 0.3999)
sess.run(tf.global_variables_initializer())
totalEpochs = 100
batchSize = np.maximum(1000, int(np.ceil(np.sqrt(Ytrain.shape[0]))))
bonsaiTrainer.train(batchSize, totalEpochs, sess, Xtrain, Xtest, Ytrain, Ytest, htc = 0.00)
# print('Time taken',end-start)
def calc_zero_ratios(tree):
xZ = tree.Z.eval()
xW = tree.W.eval()
xV = tree.V.eval()
xT = tree.T.eval()
zs = np.sum(np.abs(xZ)>0.0000000000000001)
ws = np.sum(np.abs(xW)>0.0000000000000001)
vs = np.sum(np.abs(xV)>0.0000000000000001)
ts = np.sum(np.abs(xT)>0.0000000000000001)
print('Sparse ratios achieved...\nW:',ws,xW.shape,'\nV:',vs,xV.shape,'\nT:',ts,xT.shape,'\nZ:',zs,xZ.shape)
_feed_dict = {bonsaiTrainer.X: Xtest, bonsaiTrainer.Y: Ytest,
bonsaiTrainer.sigmaI: 10e9}
print('Net',ws+zs+vs+ts)
start = time.time()
sess.run(bonsaiTrainer.tree.prediction, feed_dict=_feed_dict)
end = time.time()
print('Time taken :', end-start)
calc_zero_ratios(tree)
###Output
Sparse ratios achieved...
W: 1200 (150, 20)
V: 1200 (150, 20)
T: 56 (7, 20)
Z: 10974 (20, 784)
Net 13430
Time taken : 0.19434881210327148
|
notebooks/inferance.ipynb | ###Markdown
Inference
###Code
import pickle
import os
dir_path = os.path.dirname(os.getcwd())
import sys
sys.path.append(os.path.join(dir_path, "src"))
from clean_comments import clean
from processing import process_txt
### load model
pkl_file = os.path.join(dir_path, 'model', 'final_model.pkl')
open_file = open(pkl_file, "rb")
model = pickle.load(open_file)
open_file.close()
### load vectorizer
pkl_file = os.path.join(dir_path, 'model', 'final_vectorizer.pkl')
open_file = open(pkl_file, "rb")
bw_vectorizer = pickle.load(open_file)
open_file.close()
i1 = ["that is so good, i am so happy bitch!"]
i2 = ['This project is quite interesting to work on']
i3 = ["i'm going to kill you nigga, you are you sick or mad, i don't like you at all"]
i4 = ["D'aww! He matches this background colour I'm seemingly stuck with. Thanks. (talk) 21:51, January 11, 2016 (UTC)"]
input_str = clean(i1[0])
input_str = process_txt(input_str, stemm= True)
input_str = bw_vectorizer.transform([input_str])
prediction = model.predict(input_str)
prediction
labels = ['toxic', 'severe_toxic', 'obscene', 'threat', 'insult', 'identity_hate']
predc = [labels[i] for i in range (0,len(prediction[0])) if prediction[0][i] == 1]
if len(predc) == 0:
    print('comment is not toxic')
else:
    print("Prediction : {}".format(" | ".join(predc)))
###Output
Prediction : toxic | obscene | insult
|
BlackJacks.ipynb | ###Markdown
Problem - Using the previous 52-card deck code - Simulate a game of Blackjack, also known as 21
###Code
import random
def dealer(num_cards=2):
suits = ['Clubs', 'Diamonds', 'Hearts', 'Spades']
values = [1,2,3,4,5,6,7,8,9,10,10,10,10]
deck = []
for i in suits:
for j in values:
temp = (i,j)
deck.append(temp)
cards = len(deck)
full_deck = []
while cards > 0:
index = random.randrange(cards)
temp = deck.pop(index)
full_deck.append(temp)
cards = len(deck)
hand = full_deck[:num_cards]
pile = full_deck [num_cards:]
dic = {'full_deck':full_deck,'hand':hand,'pile':pile}
return dic
import numpy as np
def twenty1():
wallet = 100
games = 0
twenty = []
while wallet >= 10 and games < 100:
wallet -= 10
games += 1
dic = {}
num_cards = np.random.choice([2,3], size=1, p=[.5,.5])[0]
deal = dealer(num_cards)
hand = deal['hand']
handtotal = 0
for i in range(len(hand)):
temp = hand[i][1]
handtotal += temp
if handtotal <= 16:
wallet += 0
elif handtotal in [17,18,19,20]:
wallet += 10
elif handtotal == 21:
wallet += 50
elif handtotal >= 22:
wallet += 0
dic = {'gp': games, 'hand':hand,'handtotal':handtotal,'wallet':wallet}
twenty.append(dic)
return twenty
twenty1()
###Output
_____no_output_____ |
notebooks/census_dataprepare.ipynb | ###Markdown
Data Prepare census
###Code
from sklearn.preprocessing import LabelEncoder
from pandas import read_csv
from pandas import get_dummies
from pandas import concat
import sqlite3
#from pandas.core.generic import NDFrame as dataframe
# useful functions
def fill_na_median(dataframe, grupo, valor, tipo='median'):
return dataframe[valor].fillna(dataframe.groupby(grupo)[valor].transform(tipo))
# normalize the column using standardization
def nomaliza_std(dataframe, coluna):
return (dataframe[coluna]-dataframe[coluna].mean())/dataframe[coluna].std()
#return columns with one hot encoding
def set_onehotencoding(dataframe, coluna):
cols = get_dummies(dataframe[coluna], prefix=coluna, drop_first=False)
dataframe.drop(coluna, axis=1, inplace=True)
return concat([dataframe,cols],axis=1)
df = read_csv('../data/census.csv')
# use of Label Encoder (alternative encoding, left disabled below)
'''
le = LabelEncoder()
features_to_encoder = ['workclass', 'education','marital-status',\
'occupation', 'relationship', 'race', 'sex', 'native-country']
for feature in features_to_encoder:
df[feature] = le.fit_transform(df[feature])
'''
#encoder and normalize
features_to_encoder = ['workclass', 'education','marital-status',\
'occupation', 'relationship', 'race', 'sex', 'native-country']
features_to_normalize = ['age','final-weight', 'education-num', 'capital-loos','hour-per-week', 'capital-gain']
income_dict = { ' <=50K': 0,' >50K': 1}
for feature in features_to_encoder:
df = set_onehotencoding(df, feature)
for feature in features_to_normalize:
df[feature] = nomaliza_std(df, feature)
le = LabelEncoder()
df['income'] = le.fit_transform(df['income'])
#df['income'] = df['income'].map(income_dict)
conn = sqlite3.connect('../data/db.db')
cursor = conn.cursor()
df.to_sql('census', con=conn, if_exists='replace', index=False)
###Output
/Users/alexssandroos/Public/dev/python/datascience/learn_formacaods_udmy/venv/lib/python3.6/site-packages/pandas/core/generic.py:2130: UserWarning: The spaces in these column names will not be changed. In pandas versions < 0.14, spaces were converted to underscores.
dtype=dtype)
|
beta-vae-normalizing-flows/thoracic-surgery/beta-vae-realnvp-thoracic-surgery.ipynb | ###Markdown
Define prior distribution
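The prior set up in the cell below is a mixture of `num_modes` multivariate Gaussians with trainable means and softplus-parameterised diagonal scales, i.e. $p(z) = \frac{1}{K}\sum_{k=1}^{K}\mathcal{N}\big(z;\mu_k,\mathrm{diag}(\sigma_k^2)\big)$ with equal (fixed) mixture weights $1/K$.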
###Code
# Library imports for this notebook (the earlier data-preparation steps defining
# X_train, X_test, y_train, y_test, scaler, keepdims and the ks2d2s two-sample
# KS-test helper are assumed to have been run beforehand).
import os
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow.keras import Model, Sequential
from tensorflow.keras.layers import Dense, Layer
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.losses import Loss
tfd = tfp.distributions
tfb = tfp.bijectors
tfpl = tfp.layers
def get_prior(num_modes, latent_dim):
mixture_distribution = tfd.Categorical(probs=[1./num_modes] * num_modes)
components_distribution = tfd.MultivariateNormalDiag(loc=tf.Variable(tf.random.normal((num_modes, latent_dim))),
scale_diag=tfp.util.TransformedVariable(tf.ones((num_modes,latent_dim)),
bijector=tfb.Softplus())
)
prior = tfd.MixtureSameFamily(mixture_distribution,
components_distribution
)
return prior
latent_dim = 2
input_shape = 2
prior = get_prior(num_modes=latent_dim, latent_dim=input_shape)
print(f'Prior event shape: {prior.event_shape[0]}')
print(f'# of Gaussians: {prior.components_distribution.batch_shape[0]}')
print(f'Covariance matrix: {prior.components_distribution.name}')
###Output
Prior event shape: 2
# of Gaussians: 2
Covariance matrix: MultivariateNormalDiag
###Markdown
Define KL divergence
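The KL term defined below is attached as an activity regulariser with `weight=4.`, so together with the reconstruction term added later the model optimises a β-VAE style objective $\mathcal{L} = \mathbb{E}_{q(z|x)}\big[-\log p(x|z)\big] + \beta\,\mathrm{KL}\big(q(z|x)\,\|\,p(z)\big)$ with $\beta = 4$.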
###Code
# set weight for more emphasis on KLDivergence term rather than reconstruction loss
# average over both samples and batches
def get_KL_regularizer(prior, weight=4.):
regularizer = tfpl.KLDivergenceRegularizer(prior,
use_exact_kl=False,
test_points_reduce_axis=(),
test_points_fn=lambda q: q.sample(10),
weight=weight
)
return regularizer
KLDivergence_regularizer = get_KL_regularizer(prior)
###Output
_____no_output_____
###Markdown
Define the encoder
###Code
def get_encoder(input_shape, latent_dim, KL_regularizer):
encoder = Sequential([
Dense(input_shape=input_shape, units=256, activation='relu'),
Dense(units=128, activation='relu'),
Dense(units=64, activation='relu'),
Dense(units=32, activation='relu'),
Dense(tfpl.MultivariateNormalTriL.params_size(latent_dim)),
tfpl.MultivariateNormalTriL(latent_dim,
activity_regularizer=KL_regularizer),
])
return encoder
encoder = get_encoder(input_shape=(input_shape,), latent_dim=latent_dim, KL_regularizer=KLDivergence_regularizer)
encoder.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense (Dense) (None, 256) 768
_________________________________________________________________
dense_1 (Dense) (None, 128) 32896
_________________________________________________________________
dense_2 (Dense) (None, 64) 8256
_________________________________________________________________
dense_3 (Dense) (None, 32) 2080
_________________________________________________________________
dense_4 (Dense) (None, 5) 165
_________________________________________________________________
multivariate_normal_tri_l (M multiple 8
=================================================================
Total params: 44,173
Trainable params: 44,173
Non-trainable params: 0
_________________________________________________________________
###Markdown
Define the decoder
###Code
def get_decoder(latent_dim):
decoder = Sequential([
Dense(input_shape=(latent_dim,), units=5, activation='relu'),
Dense(units=64, activation='relu'),
Dense(units=128, activation='relu'),
Dense(units=256, activation='relu'),
Dense(tfpl.MultivariateNormalTriL.params_size(latent_dim)),
tfpl.MultivariateNormalTriL(latent_dim)
])
return decoder
decoder = get_decoder(latent_dim)
decoder.summary()
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_5 (Dense) (None, 5) 15
_________________________________________________________________
dense_6 (Dense) (None, 64) 384
_________________________________________________________________
dense_7 (Dense) (None, 128) 8320
_________________________________________________________________
dense_8 (Dense) (None, 256) 33024
_________________________________________________________________
dense_9 (Dense) (None, 5) 1285
_________________________________________________________________
multivariate_normal_tri_l_1 multiple 0
=================================================================
Total params: 43,028
Trainable params: 43,028
Non-trainable params: 0
_________________________________________________________________
###Markdown
Connect encoder to decoder
###Code
vae = Model(inputs=encoder.inputs, outputs=decoder(encoder.outputs))
vae.summary()
###Output
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_input (InputLayer) [(None, 2)] 0
_________________________________________________________________
dense (Dense) (None, 256) 768
_________________________________________________________________
dense_1 (Dense) (None, 128) 32896
_________________________________________________________________
dense_2 (Dense) (None, 64) 8256
_________________________________________________________________
dense_3 (Dense) (None, 32) 2080
_________________________________________________________________
dense_4 (Dense) (None, 5) 165
_________________________________________________________________
multivariate_normal_tri_l (M multiple 8
_________________________________________________________________
sequential_1 (Sequential) multiple 43028
=================================================================
Total params: 87,201
Trainable params: 87,201
Non-trainable params: 0
_________________________________________________________________
###Markdown
Specify the loss function
###Code
# the KL divergence term is implicitly incorporated into the loss beforehand (via the activity regularizer)
# add the reconstruction error to the loss function
def reconstruction_error(decoding_dist, x_true):
return -tf.reduce_mean(decoding_dist.log_prob(x_true))
class custom_reconstruction_error(Loss):
def call(self, decoding_dist, x_true):
return -tf.reduce_mean(decoding_dist.log_prob(x_true))
###Output
_____no_output_____
###Markdown
Selection process
###Code
print(f'# of training samples: {X_train.shape[0]}')
print(f'# of test samples: {X_test.shape[0]}')
y_train.reset_index(drop=True, inplace=True)
X_train['Risk1Yr'] = y_train
X_train[keepdims[1:]] = X_train[keepdims[1:]].astype(np.float32)
X_train_copy = X_train.copy()
X_test_copy = X_test.copy()
###Output
_____no_output_____
###Markdown
Compile and fit the model
###Code
X_test.drop(['DGN'], axis=1, inplace=True)
X_train[keepdims[1:]] = X_train[keepdims[1:]].astype(np.float32)
X_test[keepdims[1:]] = X_test[keepdims[1:]].astype(np.float32)
X_train.drop(['DGN', 'Risk1Yr'],axis=1,inplace=True)
samples = pd.DataFrame(prior.sample(X_train_copy.shape[0]).numpy(), columns=keepdims[1:])
f, axs = plt.subplots(1, 3, figsize=(12,5))
sns.kdeplot(data=pd.DataFrame(X_train_copy.drop(['DGN', 'Risk1Yr'], axis=1), columns=['PRE4','PRE5']), ax=axs[0], multiple="stack", palette='Set1').set_title('train')
sns.kdeplot(data=samples, ax=axs[1], multiple="stack", palette='Set2').set_title('qtrainable')
sns.kdeplot(data=pd.DataFrame(X_test_copy.drop('DGN', axis=1), columns=['PRE4','PRE5']), ax=axs[2], multiple="stack", palette='Set3').set_title('test')
f = f.get_figure()
sns.despine()
f.savefig(os.getcwd() + '/realnvp-results/p.jpeg')
optimizer = Adam(learning_rate=3e-4)
epochs = 300
epoch_callback = LambdaCallback(on_epoch_end=lambda epoch, logs: print('\n Epoch {}/{}'.format(epoch+1, epochs, logs),
'\n\t ' + (': {:.4f}, '.join(logs.keys()) + ': {:.4f}').format(*logs.values()))
if epoch % 100 == 0 else False
)
vae.compile(optimizer=optimizer, loss=reconstruction_error)
history = vae.fit(X_train,
validation_data=(X_test,),
epochs=epochs,
batch_size=32,
verbose=0,
shuffle=True,
callbacks=[epoch_callback]
)
###Output
Epoch 1/300
loss: 29.9215, val_loss: 24.8619
Epoch 101/300
loss: 0.0731, val_loss: 0.2223
Epoch 201/300
loss: 0.0624, val_loss: 0.1429
###Markdown
Plot training and validation losses
###Code
cutoff=10
train_losses = history.history['loss'][cutoff:]
valid_losses = history.history['val_loss'][cutoff:]
plt.plot(train_losses, label='train')
plt.plot(valid_losses, label='valid')
plt.legend()
plt.xlabel('Epochs')
plt.ylabel('KL Divergence')
plt.tight_layout()
plt.show()
# Loss function is ELBO maximization
# ELBO maximization is equivalent to KL divergence minimization
###Output
_____no_output_____
###Markdown
Sample from the generative model
###Code
X_train_sample = decoder(X_train.to_numpy()).sample()
X_train_sample = pd.DataFrame(X_train_sample.numpy(), columns=X_test.columns)
X_train_sample.head()
X_test_sample = decoder(X_test.to_numpy()).sample()
X_test_sample = pd.DataFrame(X_test_sample.numpy(), columns=X_test.columns)
X_test_sample.head()
y_train.reset_index(drop=True,inplace=True)
y_test.reset_index(drop=True,inplace=True) # for concatenation
X_train_sample['Risk1Yr'] = y_train
X_train_sample['DGN'] = X_train_copy['DGN']
###Output
_____no_output_____
###Markdown
RealNVP Flow
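Each RealNVP coupling layer defined below splits its input as $x = (x_a, x_b)$, computes $(\log s, t) = \mathrm{NN}(x_b)$ and applies the affine transform $y_a = s \odot x_a + t$, $y_b = x_b$, so the forward log-determinant of the Jacobian is simply $\sum \log s$; the permutation layers between couplings swap which half gets transformed.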
###Code
loc = [X_train_sample[i].mean().astype('float32') for i in list(X_train.columns)]
scale_diag = [X_train_sample[i].std().astype('float32') for i in list(X_train.columns)]
mvn = tfd.MultivariateNormalDiag(loc=loc, scale_diag=scale_diag)
mvn
class NN(Layer):
def __init__(self, input_shape, n_hidden=[512, 512], activation="relu", name="nn"):
super(NN, self).__init__(name="nn")
layer_list = []
for i, hidden in enumerate(n_hidden):
layer_list.append(Dense(hidden, activation=activation, name='dense_{}_1'.format(i)))
layer_list.append(Dense(hidden, activation=activation, name='dense_{}_2'.format(i)))
self.layer_list = layer_list
self.log_s_layer = Dense(input_shape, activation="tanh", name='log_s')
self.t_layer = Dense(input_shape, name='t')
def call(self, x):
y = x
for layer in self.layer_list:
y = layer(y)
log_s = self.log_s_layer(y)
t = self.t_layer(y)
return log_s, t
class RealNVP(tfb.Bijector):
def __init__(
self,
input_shape,
n_hidden=[512, 512],
forward_min_event_ndims=1,
validate_args: bool = False,
name="real_nvp",
):
super(RealNVP, self).__init__(
validate_args=validate_args, forward_min_event_ndims=forward_min_event_ndims, name=name
)
assert input_shape[-1] % 2 == 0
self.input_shape = input_shape
nn_layer = NN(input_shape[-1] // 2, n_hidden)
nn_input_shape = input_shape.copy()
nn_input_shape[-1] = input_shape[-1] // 2
x = tf.keras.Input(nn_input_shape)
log_s, t = nn_layer(x)
self.nn = Model(x, [log_s, t], name="nn")
def _forward(self, x):
x_a, x_b = tf.split(x, 2, axis=-1)
y_b = x_b
log_s, t = self.nn(x_b)
s = tf.exp(log_s)
y_a = s * x_a + t
y = tf.concat([y_a, y_b], axis=-1)
return y
def _inverse(self, y):
y_a, y_b = tf.split(y, 2, axis=-1)
x_b = y_b
log_s, t = self.nn(y_b)
s = tf.exp(log_s)
x_a = (y_a - t) / s
x = tf.concat([x_a, x_b], axis=-1)
return x
def _forward_log_det_jacobian(self, x):
_, x_b = tf.split(x, 2, axis=-1)
log_s, t = self.nn(x_b)
return log_s
n_samples = X_train.shape[0]
X_train_np = X_train_sample.to_numpy()
X_test_np = X_test_sample.to_numpy()
X_train_np = X_train_np[:,0:2]
X_test_np = X_test_np[:,0:2]
# standardize once again before feeding into network
scaler.fit(X_train_np)
X_train_np = scaler.transform(X_train_np)
X_test_np = scaler.transform(X_test_np)
X_train = X_train_np.astype(np.float32)
X_train = tf.data.Dataset.from_tensor_slices(X_train)
X_train = X_train.batch(32)
X_valid = X_test_np.astype(np.float32)
X_valid = tf.data.Dataset.from_tensor_slices(X_valid)
X_valid = X_valid.batch(32)
num_layers = 4
flow_bijector = []
for i in range(num_layers):
flow_i = RealNVP(input_shape=[2], n_hidden=[256,256])
flow_bijector.append(flow_i)
flow_bijector.append(tfb.Permute([1,0]))
# discard the last permute layer
flow_bijector = tfb.Chain(list(reversed(flow_bijector[:-1])))
trainable_dist = tfd.TransformedDistribution(distribution=mvn,
bijector=flow_bijector)
trainable_dist
def make_samples():
x = mvn.sample(n_samples)
samples = [x]
names = [mvn.name]
for bijector in reversed(trainable_dist.bijector.bijectors):
x = bijector.forward(x)
samples.append(x)
names.append(bijector.name)
return names, samples
num_epochs = 600
opt = tf.keras.optimizers.Adam(3e-4)
train_losses = []
valid_losses = []
for epoch in range(num_epochs):
if epoch % 100 == 0:
print("Epoch {}...".format(epoch))
train_loss = tf.keras.metrics.Mean()
val_loss = tf.keras.metrics.Mean()
for train_batch in X_train:
with tf.GradientTape() as tape:
tape.watch(trainable_dist.bijector.trainable_variables)
loss = -trainable_dist.log_prob(train_batch)
train_loss(loss)
grads = tape.gradient(loss, trainable_dist.bijector.trainable_variables)
opt.apply_gradients(zip(grads, trainable_dist.bijector.trainable_variables))
train_losses.append(train_loss.result().numpy())
# Validation
for valid_batch in X_valid:
loss = -trainable_dist.log_prob(valid_batch)
val_loss(loss)
valid_losses.append(val_loss.result().numpy())
cutoff=10
train_losses = train_losses[cutoff:]  # RealNVP flow losses collected above
valid_losses = valid_losses[cutoff:]
plt.plot(np.arange(cutoff, num_epochs), train_losses, label='training')
plt.plot(np.arange(cutoff, num_epochs), valid_losses, label='validation')
plt.legend()
plt.xlabel("Epochs")
plt.ylabel("Negative log likelihood")
plt.title("Training and validation loss curves")
plt.show()
names, samples = make_samples()
def visualize_training_data(samples):
f, arr = plt.subplots(1, 2, figsize=(20,5))
names = ['Data', 'Trainable']
samples = [tf.constant(X_train_np), samples[-1]]
for i in range(2):
res = samples[i]
X, Y = res[..., 0].numpy(), res[..., 1].numpy()
arr[i].scatter(X, Y, s=10)
z = np.polyfit(X, Y, 1)
p = np.poly1d(z)
arr[i].plot(X, p(X), color='green')
arr[i].set_xlim([-2, 2])
arr[i].set_ylim([-2, 2])
arr[i].set_title(names[i])
visualize_training_data(samples)
samples = pd.DataFrame(samples[-1].numpy(), columns=['PRE4','PRE5'])
samples.head()
f, axs = plt.subplots(1, 3, figsize=(12,5))
sns.kdeplot(data=pd.DataFrame(X_train_np, columns=['PRE4','PRE5']), ax=axs[0], multiple="stack", palette='Set1').set_title('ptrain')
sns.kdeplot(data=samples, ax=axs[1], multiple="stack", palette='Set2').set_title('qtrained')
sns.kdeplot(data=pd.DataFrame(X_test_np, columns=['PRE4','PRE5']), ax=axs[2], multiple="stack", palette='Set3').set_title('ptest')
f = f.get_figure()
sns.despine()
f.savefig(os.getcwd() + '/realnvp-results/q.jpeg')
# KS-test
post_sample = trainable_dist.sample(X_train_copy.shape[0]).numpy()
with open('ks-test.txt', 'a') as f:
f.write('REALNVP FLOW\n')
f.write('p-val[training - qtrained] = {}\n'.format(ks2d2s(X_train_np[...,0], X_train_np[...,1], post_sample[...,0], post_sample[...,1])))
f.write('p-val[testing - qtrained] = {}\n'.format(ks2d2s(X_test_np[...,0], X_test_np[...,1], post_sample[...,0], post_sample[...,1])))
f.close()
###Output
_____no_output_____
###Markdown
Measure KL Divergence
###Code
# training
prior_train = mvn.prob(X_train_np).numpy()
learned_train = trainable_dist.prob(X_train_np).numpy()
kl = tf.keras.metrics.KLDivergence()
kl.update_state(prior_train, learned_train)
print(kl.result().numpy())
kl.reset_state()
# testing
prior_test = mvn.prob(X_test_np).numpy()
learned_test = trainable_dist.prob(X_test_np).numpy()
kl = tf.keras.metrics.KLDivergence()
kl.update_state(prior_test, learned_test)
print(kl.result().numpy())
kl.reset_state()
###Output
4.562646
17.873766
###Markdown
Measure Shannon Entropy
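The quantity accumulated below is $-\sum_i p_i \log q_i$, i.e. the cross-entropy between the decoder ('prior') probabilities $p$ and the flow ('learned') probabilities $q$, with non-finite terms masked out.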
###Code
# training
cross_entropy = prior_train * np.log(learned_train)
print(-np.ma.masked_invalid(cross_entropy).sum())
# testing
cross_entropy = prior_test * np.log(learned_test)
print(-np.ma.masked_invalid(cross_entropy).sum())
###Output
11025.344
14063.683
###Markdown
Measure Poisson
###Code
# training
poisson = tf.keras.metrics.Poisson()
poisson.update_state(prior_train, learned_train)
print(poisson.result().numpy())
poisson.reset_state()
# testing
poisson = tf.keras.metrics.Poisson()
poisson.update_state(prior_test, learned_test)
print(poisson.result().numpy())
poisson.reset_state()
###Output
0.35982814
0.37691572
###Markdown
Measure MAE
###Code
# training
mae = tf.keras.losses.MeanAbsoluteError()
print(mae(prior_train, learned_train).numpy())
# testing
print(mae(prior_test, learned_test).numpy())
###Output
0.08932299
0.07830838
###Markdown
Performance Evaluation
###Code
# collect metrics
def _collect(prior_train, prior_test, approx_dist_train, approx_dist_test, results, number_of_run):
# KL training
kl = tf.keras.losses.KLDivergence()
results['Kullback-Leibler Divergence'][number_of_run][0] = kl(prior_train, approx_dist_train).numpy()
# KL testing
results['Kullback-Leibler Divergence'][number_of_run][1] = kl(prior_test, approx_dist_test).numpy()
# Cross Entropy training
ce = tf.keras.losses.CategoricalCrossentropy()
results['Cross Entropy'][number_of_run][0] = ce(prior_train, approx_dist_train).numpy()
# Cross Entropy testing
results['Cross Entropy'][number_of_run][1] = ce(prior_test, approx_dist_test).numpy()
# MAE training
mae = tf.keras.losses.MeanAbsoluteError()
results['Mean Absolute Error'][number_of_run][0] = mae(prior_train, approx_dist_train).numpy()
# MAE testing
results['Mean Absolute Error'][number_of_run][1] = mae(prior_test, approx_dist_test).numpy()
return results
# metrics to measure KL, Cross Entropy, Mean Absolute Error
def init_results():
results = {'Kullback-Leibler Divergence': [[None, None] for _ in range(NUMBER_OF_RUNS)],
'Cross Entropy': [[None, None] for _ in range(NUMBER_OF_RUNS)],
'Mean Absolute Error': [[None, None] for _ in range(NUMBER_OF_RUNS)]
}
return results
NUMBER_OF_RUNS = 5
results = init_results()
# generate new observations and measure with the generative model
def _results(X_train_np, X_test_np, NUMBER_OF_RUNS, results):
for num in range(NUMBER_OF_RUNS):
prior_train = decoder(X_train_np).sample().numpy()
prior_test = decoder(X_test_np).sample().numpy()
approx_dist_train = trainable_dist.sample(X_train_np.shape[0]).numpy()
approx_dist_test = trainable_dist.sample(X_test_np.shape[0]).numpy()
results = _collect(prior_train, prior_test, approx_dist_train, approx_dist_test, results, num)
return results
# convert output of decoder to probabilities
trial_data = decoder(X_train_np).sample().numpy()
tf.reduce_sum(tf.exp(-trial_data) / tf.reduce_sum(tf.exp(-trial_data), axis=0),axis=0)
# sanity check
_results(X_train_np, X_test_np, NUMBER_OF_RUNS, results)
###Output
_____no_output_____
###Markdown
[Pooled Estimate of Common Std. Deviation](https://sphweb.bumc.bu.edu/otlt/mph-modules/bs/bs704_confidence_intervals/bs704_confidence_intervals5.html)
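Restating the formulas implemented below (assuming equal sample sizes $n$ for the training- and test-derived metric samples): the pooled standard deviation is $S_p = \sqrt{\frac{(n-1)S_{train}^2 + (n-1)S_{test}^2}{2(n-1)}}$ and the 95% confidence interval for the difference in means is $(\bar{x}_{test} - \bar{x}_{train}) \pm 1.96\,S_p\sqrt{2/n}$.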
###Code
NUMBER_OF_RUNS=100
results = init_results()
start_time = time.time()
stats = _results(X_train_np, X_test_np, NUMBER_OF_RUNS, results)
print('Metric sampling in seconds: {sec}'.format(sec=round(time.time()-start_time)))
# calculate stds and xbars
stdxbars = {'Kullback-Leibler Divergence': [None, None],
'Cross Entropy': [None, None],
'Mean Absolute Error': [None, None]
}
def _stdxbar(stdxbars, stats):
for k, v in stats.items():
tensored_samples = tf.convert_to_tensor(v)
xbars = tf.reduce_mean(tensored_samples, axis=0).numpy()
xbar_train, xbar_test = xbars[0], xbars[1]
stds = tf.math.reduce_std(tensored_samples, axis=0).numpy()
std_train, std_test = stds[0], stds[1]
stdxbars[k] = [[xbar_train, std_train], [xbar_test, std_test]]
return stdxbars
stdxbars = _stdxbar(results, stats)
stdxbars
def _pooled(stdxbars, n):
poolingstds = {'Kullback-Leibler Divergence': .0,
'Cross Entropy': .0,
'Mean Absolute Error': .0
}
for k, v in stdxbars.items():
trainstd = v[0][1]
teststd = v[1][1]
estimate = np.sqrt((((n-1)*np.square(trainstd))+((n-1)*np.square(teststd)))/(2*(n-1)))
assertion = False
try:
assert .5 <= trainstd/teststd <= 2., '{metric}: One sample variance cannot be the double of the other!'.format(metric=k)
except AssertionError as e:
assertion = True
print(e)
# discard metrics that don't pass sample variance check
if assertion == False:
poolingstds[k] = estimate
else:
del poolingstds[k]
return poolingstds
pooled_estimates = _pooled(stdxbars, n=NUMBER_OF_RUNS)
pooled_estimates
# calculate 95% CI
def _CI(stdxbars, pooled_estimates, n):
zval95 = 1.96
conf_intervals = {'Kullback-Leibler Divergence': .0,
'Cross Entropy': .0,
'Mean Absolute Error': .0
}
trainxbar = stdxbars['Kullback-Leibler Divergence'][0][0]
testxbar = stdxbars['Kullback-Leibler Divergence'][1][0]
mean_diff = testxbar-trainxbar
estimate = pooled_estimates['Kullback-Leibler Divergence']
upper_bound = np.round(mean_diff+(zval95*estimate*np.sqrt(2/n)),3)
lower_bound = np.round(mean_diff-(zval95*estimate*np.sqrt(2/n)),3)
conf_intervals['Kullback-Leibler Divergence'] = [lower_bound, upper_bound]
trainxbar = stdxbars['Cross Entropy'][0][0]
testxbar = stdxbars['Cross Entropy'][1][0]
mean_diff = testxbar-trainxbar
estimate = pooled_estimates['Cross Entropy']
upper_bound = np.round(mean_diff+(zval95*estimate*np.sqrt(2/n)),3)
lower_bound = np.round(mean_diff-(zval95*estimate*np.sqrt(2/n)),3)
conf_intervals['Cross Entropy'] = [lower_bound, upper_bound]
trainxbar = stdxbars['Mean Absolute Error'][0][0]
testxbar = stdxbars['Mean Absolute Error'][1][0]
mean_diff = testxbar-trainxbar
estimate = pooled_estimates['Mean Absolute Error']
upper_bound = np.round(mean_diff+(zval95*estimate*np.sqrt(2/n)),3)
lower_bound = np.round(mean_diff-(zval95*estimate*np.sqrt(2/n)),3)
conf_intervals['Mean Absolute Error'] = [lower_bound, upper_bound]
return conf_intervals
conf_intervals = _CI(stdxbars, pooled_estimates, n=NUMBER_OF_RUNS)
conf_intervals
###Output
_____no_output_____ |
matplotlib_exampleplot.ipynb | ###Markdown
###Code
%autosave 100
#matplotlib makes plots work like matlab
import matplotlib.pyplot as plt
p1=plt.plot([1,2,3,4],[3,5,2,8])
plt.show()
%whos
###Output
Variable Type Data/Info
------------------------------
p1 list n=1
plt module <module 'matplotlib.pyplo<...>es/matplotlib/pyplot.py'>
|
.ipynb_checkpoints/datas_savings.ipynb | ###Markdown
Imports
###Code
import pandas as pd
import numpy as np
import sqlalchemy as sc
###Output
_____no_output_____
###Markdown
Make a request
###Code
url_100 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25135/dados?formato=json'
url_500 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25136/dados?formato=json'
url_1000 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25137/dados?formato=json'
url_2000 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25138/dados?formato=json'
url_5000 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25139/dados?formato=json'
url_10000 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25140/dados?formato=json'
url_15000 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25141/dados?formato=json'
url_20000 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25142/dados?formato=json'
url_25000 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25143/dados?formato=json'
url_30000 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25144/dados?formato=json'
url_more_30000 = 'http://api.bcb.gov.br/dados/serie/bcdata.sgs.25145/dados?formato=json'
pd.read_json(url_100)
###Output
_____no_output_____
###Markdown
Export to csv
###Code
# # Export data less than 100 to csv
# dados_100 = pd.read_json(url_100)
# dados_100.to_csv('dados_100.csv')
# # Export data less than 500 to csv
# dados_500 = pd.read_json(url_500)
# dados_500.to_csv('dados_500.csv')
# # Export data less than 1000 to csv
# dados_1000 = pd.read_json(url_1000)
# dados_1000.to_csv('dados_1000.csv')
# # Export data less than 2000 to csv
# dados_2000 = pd.read_json(url_2000)
# dados_2000.to_csv('dados_2000.csv')
# # Export data less than 5000 to csv
# dados_5000 = pd.read_json(url_5000)
# dados_5000.to_csv('dados_5000.csv')
# # Export data less than 10000 to csv
# dados_10000 = pd.read_json(url_10000)
# dados_10000.to_csv('dados_10000.csv')
# # Export data less than 15000 to csv
# dados_15000 = pd.read_json(url_15000)
# dados_15000.to_csv('dados_15000.csv')
# # Export data less than 20000 to csv
# dados_20000 = pd.read_json(url_20000)
# dados_20000.to_csv('dados_20000.csv')
# # Export data less than 25000 to csv
# dados_25000 = pd.read_json(url_25000)
# dados_25000.to_csv('dados_25000.csv')
# # Export data less than 30000 to csv
# dados_30000 = pd.read_json(url_30000)
# dados_30000.to_csv('dados_30000.csv')
# # # Export data bigger than 3000 to csv
# dados_3000 = pd.read_json(url_more_30000)
# dados_3000.to_csv('dados_more_3000.csv')
###Output
_____no_output_____
###Markdown
connect to database
###Code
engine = sc.create_engine('postgresql://postgres:Pos2021*-@localhost:5432/economics_datas')
data = pd.read_sql_table('f_savings', engine)
data.dtypes
###Output
_____no_output_____
###Markdown
tests in database
###Code
data.dtypes
###Output
_____no_output_____ |
tutorials/finite_elasticity/finite_elasticity.ipynb | ###Markdown
Finite elasticity Introduction This tutorial demonstrates how to set up and solve a nonlinear continuum mechanics problem using OpenCMISS-Iron in Python. For the purpose of this tutorial, we will be solving a 3D finite elasticity problem. This tutorial has been set up to solve the deformation of an isotropic, unit cube under a range of loading conditions. See the [OpenCMISS-Iron tutorial documentation page](https://opencmiss-iron-tutorials.readthedocs.io/en/latest/tutorials.htmlhow-to-run-tutorials) for instructions on how to run this tutorial. Learning outcomes At the end of this tutorial you will:- Know the steps involved in setting up and solving a nonlinear finite elasticity simulation with OpenCMISS-Iron.- Know how different mechanical loads can be applied to a geometry.- Know how information about the deformation can be extracted for analysis. Problem summary The finite elasticity stress equilibrium equation In this example we are solving the stress equilibrium equation for nonlinear finite elasticity. The stress equilibrium equation represents the principle of linear momentum as follows:$$\displaystyle \nabla \cdot \sigma + \rho \mathbf{b} - \rho \mathbf{a} = 0 \qquad \text{in} \qquad \Omega $$where $\sigma$ is the Cauchy stress tensor, $\mathbf{b}$ is the body force vector and $\mathbf{a}$ is the vector representing the acceleration due to any unbalanced forces. The boundary equations for the stress equilibrium equation are partitioned into the Dirichlet boundary conditions representing a fixed displacement $u_d$ over the boundary $\Gamma_d$, and the Neumann boundary conditions representing the traction forces $\mathbf{t}$ applied on the boundary $\Gamma_t$ along the normal $\mathbf{n}$ as follows:$$\begin{aligned}\displaystyle u &= u_d \quad &\text{on} \quad \Gamma_d \\\displaystyle \sigma^T \mathbf{n} &= \mathbf{t} \quad &\text{on} \quad \Gamma_t \end{aligned}$$In this example we will solve the stress equilibrium equation for an incompressible material, which we shall enforce using **the incompressibility constraint**:$$\displaystyle (I_3-1) = 0$$We will also use a Mooney Rivlin constitutive equation to describe the behaviour of the material. Solution variables The problem we are about to solve includes 4 dependent variables. The first three variables represent each of the 3D coordinates $(x,y,z)$ of the mesh nodes in the deformed state. The incompressibility constraint is satisfied using Lagrange multipliers, which are represented as a scalar variable, $p$. This variable is often referred to as the **hydrostatic pressure**. Constitutive relationship Another important set of equations that need to be included when solving a mechanics problem are the stress-strain relationships or **constitutive relationships**. A set of constitutive equations has already been included within the OpenCMISS-Iron library as [EquationSet subtypes](http://cmiss.bioeng.auckland.ac.nz/OpenCMISS/doc/user/group___o_p_e_n_c_m_i_s_s___equations_set_subtypes.html). We will demonstrate how these constants are used to incorporate the constitutive equation in the simulation when setting up the equation set.
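For orientation, the incompressible Mooney Rivlin material used in this tutorial is commonly written in terms of a strain energy density function $W = c_{10}(I_1 - 3) + c_{01}(I_2 - 3)$, where $I_1$ and $I_2$ are the first and second invariants of the right Cauchy-Green deformation tensor and $c_{10}$, $c_{01}$ are the material parameters that we will set in the material field below (this is the standard textbook form, quoted here for reference; the implementation itself is internal to the OpenCMISS-Iron library).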
Loading conditions We will see in this tutorial how five different types of mechanical loads can be applied, as follows: * Model 1 (Uniaxial extension of a unit cube) * Model 2 (Equibiaxial extension of a unit cube) * Model 3 (Simple shear of a unit cube) * Model 4 (Shear of a unit cube) * Model 5 (Extension and shear of a unit cube) Setup Loading the OpenCMISS-Iron library In order to use OpenCMISS-Iron we have to first import the opencmiss.iron module from the OpenCMISS-Iron package.
###Code
import numpy
# Initialise OpenCMISS-Iron.
from opencmiss.iron import iron
###Output
_____no_output_____
###Markdown
Set up parameters We first specify a set of variables that will be used throughout the tutorial. It is common practice among experienced users to set these variables up at the start, because we can then change them easily and re-run the rest of the code as required.
###Code
# Set constants
X, Y, Z = (1, 2, 3)
# Set model number to solve (these specify different loading conditions).
model = 1
# Specify the number of local element directions.
number_of_xi = 3
# Specify the number of elements along each element direction.
number_global_x_elements = 1
number_global_y_elements = 1
number_global_z_elements = 1
# Set dimensions of the cube.
width = 1.0
length = 1.0
height = 1.0
interpolation_type = 1
number_of_gauss_per_xi = 2 # Gauss points along each local element coordinate direction (xi).
use_pressure_basis = False
number_of_load_increments = 1
###Output
_____no_output_____
###Markdown
- `interpolation_type` is an integer variable that can be changed to choose one of the nine basis interpolation types defined in the OpenCMISS-Iron [Basis Interpolation Specifications Constants](http://opencmiss.org/documentation/apidoc/iron/latest/python/classiron_1_1_basis_interpolation_specifications.html) - `number_of_gauss_per_xi` is the number of Gauss points used along a given local element direction for integrating the equilibrium equations. Note that there is a minimum number of Gauss points required for a given interpolation scheme (e.g. linear Lagrange interpolation requires at least 2 Gauss points along each local element direction). Using less than the minimum number will result in the solver not converging. - `use_pressure_basis` is a boolean variable that we can set to true to set up a node-based interpolation scheme for the incompressibility constraint, or set to false if we want the pressure basis to vary only between elements. This choice is typically determined by the basis function used to interpolate the geometric variables. The pressure basis is typically one order lower than the geometric and displacement field interpolation schemes. If you choose linear Lagrange for displacements, then the lower order is a constant, element-based interpolation. If you choose quadratic Lagrange for the geometry then a linear interpolation can be chosen. - `number_of_load_increments` sets the number of load steps to be taken to solve the nonlinear mechanics problem. This concept is briefly explained below. The weak form finite element matrix equations for the stress equilibrium equations above have been implemented in the OpenCMISS-Iron libraries. Numerical treatment of the equations will show that the equations are nonlinear and, specifically, the determination of the deformed configuration coordinates turns out to be like a root finding exercise. We need to find the deformed coordinates $\mathbf{x}$ such that $f(x)=0$. These equations are therefore solved using a nonlinear iterative solver, the **Newton-Raphson technique** to be exact. Nonlinear solvers require an initial guess at the root of the equation and if we are far away from the actual solution then the solvers can diverge and give spurious $x$ values. Therefore, in most simulations it is best to split an entire mechanical load into lots of smaller loads. This is what `number_of_load_increments` allows us to do. You will see it come into use in the control loop section of the tutorial below. The next section describes how we can interact with the OpenCMISS-Iron library through its object-oriented API to create and solve the mechanics problem. Step by step guide 1. Creating a coordinate system First we construct a coordinate system that will be used to describe the geometry in our problem. The 3D geometry will exist in a 3D space, so we need a 3D coordinate system.
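For example (illustrative values only, not used in the run below): a quadratic Lagrange basis would need at least `number_of_gauss_per_xi = 3`, `use_pressure_basis = True` would then pair a linear, node-based pressure basis with the quadratic geometric basis, and setting `number_of_load_increments = 5` would apply the total load over five successive Newton solves rather than one.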
###Code
# Create a 3D rectangular cartesian coordinate system.
coordinate_system_user_number = 1
coordinate_system = iron.CoordinateSystem()
coordinate_system.CreateStart(coordinate_system_user_number)
coordinate_system.DimensionSet(3)
coordinate_system.CreateFinish()
###Output
_____no_output_____
###Markdown
2. Creating basis functions The finite element description of our fields requires a basis function to interpolate field values over elements. In OpenCMISS-Iron, we start by initialising a `basis` object on which we can specify an interpolation scheme. In the following section, you can choose from a number of basis function interpolation types. These are defined in the OpenCMISS-Iron [Basis Interpolation Specifications Constants](http://opencmiss.org/documentation/apidoc/iron/latest/python/classiron_1_1_basis_interpolation_specifications.html). The `interpolation_type` variable that was set at the top of the tutorial will be used to determine which basis interpolation will be used in this simulation. Note that you will need to ensure that an appropriate `number_of_gauss_per_xi` has been set if you change the basis interpolation. We have also initialised a second `pressure_basis` object for the hydrostatic pressure. In mechanics theory, it is generally well understood that the interpolation scheme for the hydrostatic pressure basis should be one order lower than the geometric basis. `use_pressure_basis`, which is set at the top of the tutorial, determines whether the incompressibility constraint equation will be interpolated using a nodally based interpolation scheme or interpolated as a constant within elements. If the pressure basis is not defined then the simulation is set up with element-based interpolation. Note that if you want to describe nearly incompressible or compressible materials then you need to choose a different equation subtype that describes a constitutive equation for compressible/nearly incompressible materials.
###Code
basis_user_number = 1
pressure_basis_user_number = 2
# Define geometric basis.
basis = iron.Basis()
basis.CreateStart(basis_user_number)
if interpolation_type in (1,2,3,4):
basis.TypeSet(iron.BasisTypes.LAGRANGE_HERMITE_TP)
elif interpolation_type in (7,8,9):
basis.TypeSet(iron.BasisTypes.SIMPLEX)
basis.NumberOfXiSet(number_of_xi)
basis.InterpolationXiSet(
[iron.BasisInterpolationSpecifications.LINEAR_LAGRANGE]*number_of_xi)
if number_of_gauss_per_xi>0:
basis.QuadratureNumberOfGaussXiSet( [number_of_gauss_per_xi]*number_of_xi)
basis.CreateFinish()
if use_pressure_basis:
# Define hydrostatic pressure basis.
pressure_basis = iron.Basis()
pressure_basis.CreateStart(pressure_basis_user_number)
if interpolation_type in (1,2,3,4):
pressure_basis.TypeSet(iron.BasisTypes.LAGRANGE_HERMITE_TP)
elif interpolation_type in (7,8,9):
pressure_basis.TypeSet(iron.BasisTypes.SIMPLEX)
pressure_basis.NumberOfXiSet(number_of_xi)
pressure_basis.InterpolationXiSet(
[iron.BasisInterpolationSpecifications.LINEAR_LAGRANGE]*number_of_xi)
if number_of_gauss_per_xi > 0:
pressure_basis.QuadratureNumberOfGaussXiSet(
[number_of_gauss_per_xi]*number_of_xi)
pressure_basis.CreateFinish()
###Output
_____no_output_____
###Markdown
3. Creating a region Next we create a region that our fields will be defined on, and tell it to use the 3D coordinate system we created previously.
###Code
# Create a region and assign the coordinate system to the region.
region_user_number = 1
region = iron.Region()
region.CreateStart(region_user_number,iron.WorldRegion)
region.LabelSet("Region")
region.CoordinateSystemSet(coordinate_system)
region.CreateFinish()
###Output
_____no_output_____
###Markdown
4. Setting up a simple cuboid meshIn this example we will use the `iron.GeneratedMesh()` class of OpenCMISS to automatically create a 3D geometric mesh on which to solve the mechanics problem. We will create a regular mesh of size `width` (defined along x), `height` (defined along y) and `depth` (defined along z) and divide the mesh into `number_global_x_elements` in the X direction, `number_global_y_elements` in the Y direction and `number_global_z_elements` in the Z direction. We will then tell it to use the basis we created previously.
###Code
# Start the creation of a generated mesh in the region.
generated_mesh_user_number = 1
generated_mesh = iron.GeneratedMesh()
generated_mesh.CreateStart(generated_mesh_user_number, region)
generated_mesh.TypeSet(iron.GeneratedMeshTypes.REGULAR)
if use_pressure_basis:
generated_mesh.BasisSet([basis, pressure_basis])
else:
generated_mesh.BasisSet([basis])
generated_mesh.ExtentSet([width, length, height])
generated_mesh.NumberOfElementsSet(
[number_global_x_elements,
number_global_y_elements,
number_global_z_elements])
# Finish the creation of a generated mesh in the region.
mesh_user_number = 1
mesh = iron.Mesh()
generated_mesh.CreateFinish(mesh_user_number,mesh)
###Output
_____no_output_____
###Markdown
5. Decomposing the mesh Once the mesh has been created we can decompose it into a number of domains in order to allow for parallelism. We choose the options to let OpenCMISS-Iron calculate the best way to break up the mesh. We also set the number of domains to be equal to the number of computational nodes this example is running on. In this example, we will only be using a single domain. Look for our parallelisation example for a demonstration of how to execute simulations using parallel processing techniques.
###Code
# Get the number of computational nodes.
number_of_computational_nodes = iron.ComputationalNumberOfNodesGet()
# Create a decomposition for the mesh.
decomposition_user_number = 1
decomposition = iron.Decomposition()
decomposition.CreateStart(decomposition_user_number,mesh)
decomposition.TypeSet(iron.DecompositionTypes.CALCULATED)
decomposition.NumberOfDomainsSet(number_of_computational_nodes)
decomposition.CreateFinish()
###Output
_____no_output_____
###Markdown
6. Creating a geometric field Now that the mesh has been decomposed we are in a position to create fields. The first field we need to create is the geometric field. Once we have finished creating the field, we can change the field degrees of freedom (DOFs) to give us our geometry. Since the mesh was constructed using the OpenCMISS-Iron `GeneratedMesh` class, we can use its `GeometricParametersCalculate` method to automatically calculate and populate the geometric field parameters of the regular mesh.
###Code
# Create a field for the geometry.
geometric_field_user_number = 1
geometric_field = iron.Field()
geometric_field.CreateStart(geometric_field_user_number,region)
geometric_field.MeshDecompositionSet(decomposition)
geometric_field.TypeSet(iron.FieldTypes.GEOMETRIC)
geometric_field.VariableLabelSet(iron.FieldVariableTypes.U,"Geometry")
geometric_field.ComponentMeshComponentSet(iron.FieldVariableTypes.U,1,1)
geometric_field.ComponentMeshComponentSet(iron.FieldVariableTypes.U,2,1)
geometric_field.ComponentMeshComponentSet(iron.FieldVariableTypes.U,3,1)
if interpolation_type == 4:
# Set arc length scaling for cubic-Hermite elements.
geometric_field.FieldScalingTypeSet(iron.FieldScalingTypes.ARITHMETIC_MEAN)
geometric_field.CreateFinish()
# Update the geometric field parameters from generated mesh.
generated_mesh.GeometricParametersCalculate(geometric_field)
###Output
_____no_output_____
###Markdown
Visualising the geometry We now visualise the geometry using pythreejs. Note that this visualisation currently only supports elements with linear Lagrange interpolation. The visualisation also includes the node numbers for all elements.
###Code
import sys
sys.path.insert(1, '../../tools/')
import threejs_visualiser
renderer = threejs_visualiser.visualise(
mesh, geometric_field, variable=iron.FieldVariableTypes.U, node_labels=True)
renderer
###Output
_____no_output_____
###Markdown
7. Creating fields Dependent field When solving the mechanics equations set, we require somewhere to store the deformed geometry (our solution). In OpenCMISS-Iron, we store the solutions to equations sets in a dependent field that contains our dependent variables. Note that the dependent field has been pre-defined in the OpenCMISS-Iron library to contain four components when solving `ProblemTypes.FINITE_ELASTICITY` with an `EquationsSetSubTypes.MOONEY_RIVLIN` subtype. The first three components store the deformed coordinates and the fourth stores the hydrostatic pressure. Remember that `use_pressure_basis` can be set to true or false to switch between element-based or nodally based interpolation for the incompressibility constraint. One can make the hydrostatic pressure constant within an element by calling the `dependent_field.ComponentInterpolationSet` method with the `iron.FieldInterpolationTypes.ELEMENT_BASED` option. Alternatively, we can make the hydrostatic pressure vary across nodes by calling the `dependent_field.ComponentInterpolationSet` method with the `iron.FieldInterpolationTypes.NODE_BASED` option. In this tutorial, we have chosen the hydrostatic pressure to be nodally interpolated if `use_pressure_basis` is true, or to use element-based interpolation if `use_pressure_basis` is false.
###Code
dependent_field_user_number = 2
dependent_field = iron.Field()
dependent_field.CreateStart(dependent_field_user_number, region)
dependent_field.MeshDecompositionSet(decomposition)
dependent_field.TypeSet(iron.FieldTypes.GEOMETRIC_GENERAL)
dependent_field.GeometricFieldSet(geometric_field)
dependent_field.DependentTypeSet(iron.FieldDependentTypes.DEPENDENT)
dependent_field.VariableLabelSet(iron.FieldVariableTypes.U, "Dependent")
dependent_field.NumberOfVariablesSet(2)
# Set the number of componets for the U variable (position) and the DELUDELN
# (forces).
dependent_field.NumberOfComponentsSet(iron.FieldVariableTypes.U, 4)
dependent_field.NumberOfComponentsSet(iron.FieldVariableTypes.DELUDELN, 4)
if use_pressure_basis:
# Set the hydrostatic pressure to be nodally based and use the second mesh component.
# U variable (position)
dependent_field.ComponentInterpolationSet(
iron.FieldVariableTypes.U, 4, iron.FieldInterpolationTypes.NODE_BASED)
dependent_field.ComponentMeshComponentSet(
iron.FieldVariableTypes.U, 4, 2)
# DELUDELN variable (forces)
dependent_field.ComponentInterpolationSet(
iron.FieldVariableTypes.DELUDELN, 4,
iron.FieldInterpolationTypes.NODE_BASED)
dependent_field.ComponentMeshComponentSet(
iron.FieldVariableTypes.DELUDELN, 4, 2)
if interpolation_type == 4:
# Set arc length scaling for cubic-Hermite elements.
dependent_field.FieldScalingTypeSet(
iron.FieldScalingTypes.ARITHMETIC_MEAN)
else:
# Set the hydrostatic pressure to be constant within each element.
dependent_field.ComponentInterpolationSet(
iron.FieldVariableTypes.U, 4,
iron.FieldInterpolationTypes.ELEMENT_BASED)
dependent_field.ComponentInterpolationSet(
iron.FieldVariableTypes.DELUDELN, 4,
iron.FieldInterpolationTypes.ELEMENT_BASED)
dependent_field.CreateFinish()
###Output
_____no_output_____
###Markdown
This dependent field needs to be initialised before the simulation is run. To this end, we copy the values of the coordinates from the geometric field into the dependent field in the below code snippet. The hydrostatic pressure field is set to 0.0.
###Code
# Initialise dependent field from undeformed geometry and displacement bcs and set hydrostatic pressure.
iron.Field.ParametersToFieldParametersComponentCopy(
geometric_field, iron.FieldVariableTypes.U, iron.FieldParameterSetTypes.VALUES, 1,
dependent_field, iron.FieldVariableTypes.U, iron.FieldParameterSetTypes.VALUES, 1)
iron.Field.ParametersToFieldParametersComponentCopy(
geometric_field, iron.FieldVariableTypes.U, iron.FieldParameterSetTypes.VALUES, 2,
dependent_field, iron.FieldVariableTypes.U, iron.FieldParameterSetTypes.VALUES, 2)
iron.Field.ParametersToFieldParametersComponentCopy(
geometric_field, iron.FieldVariableTypes.U, iron.FieldParameterSetTypes.VALUES, 3,
dependent_field, iron.FieldVariableTypes.U, iron.FieldParameterSetTypes.VALUES, 3)
iron.Field.ComponentValuesInitialiseDP(
dependent_field, iron.FieldVariableTypes.U, iron.FieldParameterSetTypes.VALUES, 4, 0.0)
###Output
_____no_output_____
###Markdown
Material field We now set up a new field called the material field, which will store the constitutive equation parameters of the Mooney Rivlin equation. This field can be set to have the same values throughout the mesh to represent a homogeneous material as shown below. If you want to describe heterogeneous materials, you can set the values of these parameters differently across the mesh (e.g. using either a nodally varying field or an element-constant field). This is not shown here but we'll give you a little example of this at the end of this tutorial. Below, the ```ComponentValuesInitialiseDP``` function sets all the nodal values to be the same: 1.0 for `c10` and 0.2 for `c01`.
###Code
# Create the material field.
material_field_user_number = 3
material_field = iron.Field()
material_field.CreateStart(material_field_user_number, region)
material_field.TypeSet(iron.FieldTypes.MATERIAL)
material_field.MeshDecompositionSet(decomposition)
material_field.GeometricFieldSet(geometric_field)
material_field.VariableLabelSet(iron.FieldVariableTypes.U, "Material")
# Set the number of components for the Mooney Rivlin constitutive equation (2).
material_field.NumberOfComponentsSet(iron.FieldVariableTypes.U, 2)
for component in [1, 2]:
material_field.ComponentInterpolationSet(
iron.FieldVariableTypes.U, component,
iron.FieldInterpolationTypes.ELEMENT_BASED)
if interpolation_type == 4:
# Set arc length scaling for cubic-Hermite elements.
material_field.FieldScalingTypeSet(iron.FieldScalingTypes.ARITHMETIC_MEAN)
material_field.CreateFinish()
# Set Mooney-Rivlin constants c10 and c01 respectively.
material_field.ComponentValuesInitialiseDP(
iron.FieldVariableTypes.U, iron.FieldParameterSetTypes.VALUES, 1, 1.0)
material_field.ComponentValuesInitialiseDP(
iron.FieldVariableTypes.U, iron.FieldParameterSetTypes.VALUES, 2, 0.2)
###Output
_____no_output_____
###Markdown
Equation set field
###Code
# Equation set field.
equations_set_field_user_number = 4
equations_set_field = iron.Field()
equations_set = iron.EquationsSet()
###Output
_____no_output_____
###Markdown
8. Defining the finite elasticity equations setWe now specify that we want to solve a finite elasticity equation, and identify the specific constitutive equation that we wish to use to describe the mechanical behaviour of the cube.The key constants used to define the equation set are:- `ProblemClasses.ELASTICITY` defines that the equation set is of the elasticity class.- `ProblemTypes.FINITE_ELASTICITY` defines that the finite elasticity equations set will be used.- `EquationsSetSubtypes.MOONEY_RIVLIN` selects the Mooney-Rivlin constitutive equation from the range of constitutive equations implemented within the OpenCMISS-Iron library. You can find more information on these by browsing the OpenCMISS-Iron [Equations Set Subtypes Constants](http://opencmiss.org/documentation/apidoc/iron/latest/python/classiron_1_1_equations_set_subtypes.html). Future tutorials will demonstrate how you can dynamically specify constitutive relations using CellML.
###Code
equations_set_user_number = 1
# Finite elasticity equation specification.
equations_set_specification = [iron.ProblemClasses.ELASTICITY,
iron.ProblemTypes.FINITE_ELASTICITY,
iron.EquationsSetSubtypes.MOONEY_RIVLIN]
# Add the geometric field and equations set field that we created earlier (note
# that while we defined the geometric field above, we only initialised an empty
# field for the equations_set_field. When an empty field is provided to the
# equations_set, it will automatically populate it with default values).
equations_set.CreateStart(
equations_set_user_number, region, geometric_field, equations_set_specification,
equations_set_field_user_number, equations_set_field)
# Add the dependent field that we created earlier.
equations_set.DependentCreateStart(dependent_field_user_number, dependent_field)
equations_set.DependentCreateFinish()
# Add the material field that we created earlier.
equations_set.MaterialsCreateStart(material_field_user_number, material_field)
equations_set.MaterialsCreateFinish()
equations_set.CreateFinish()
###Output
_____no_output_____
###Markdown
Once the equations set is defined, we create the equations that use our fields to construct equations matrices and vectors.
###Code
# Create equations.
equations = iron.Equations()
equations_set.EquationsCreateStart(equations)
equations.SparsityTypeSet(iron.EquationsSparsityTypes.SPARSE)
equations.OutputTypeSet(iron.EquationsOutputTypes.NONE)
equations_set.EquationsCreateFinish()
###Output
_____no_output_____
###Markdown
9. Defining the problemNow that we have defined the equations, we can create our problem to be solved by OpenCMISS-Iron. We create a standard finite elasticity problem, which is a member of the elasticity problem class. The problem control loop uses the default load increment loop and hence does not require a subtype.
###Code
# Define the problem.
problem_user_number = 1
problem = iron.Problem()
problem_specification = (
[iron.ProblemClasses.ELASTICITY,
iron.ProblemTypes.FINITE_ELASTICITY,
iron.ProblemSubtypes.NONE])
problem.CreateStart(problem_user_number, problem_specification)
problem.CreateFinish()
###Output
_____no_output_____
###Markdown
10. Defining control loopsThe problem type defines a control loop structure that is used when solving the problem. The OpenCMISS-Iron control loop is a "supervisor" for the computational process. We may have multiple control loops with nested sub loops, and control loops can have different types, for example load incremented loops or time loops for dynamic problems. These control loops have been defined in the OpenCMISS-Iron library for the finite elasticity type of equations as a load increment loop. If we wanted to access the control loop and modify it we would use the `problem.ControlLoopGet` method before finishing the creation of the control loops. In the below code snippet we get the control loop to set the number of load increments to be used to solve the problem using the variable `number_of_load_increments`.
###Code
# Create the problem control loop.
problem.ControlLoopCreateStart()
control_loop = iron.ControlLoop()
problem.ControlLoopGet([iron.ControlLoopIdentifiers.NODE], control_loop)
control_loop.MaximumIterationsSet(number_of_load_increments)
problem.ControlLoopCreateFinish()
###Output
_____no_output_____
###Markdown
11. Defining solversAfter defining the problem structure we can create the solvers that will be run to actually solve our problem. As the finite elasticity equations are nonlinear, we require a nonlinear solver. Nonlinear solvers typically involve a linearisation step, and therefore a linear solver is also required. In OpenCMISS-Iron, we start the creation of solvers by calling the `problem.SolversCreateStart()` method, specify the solvers' properties, and then finalise them with a call to the `problem.SolversCreateFinish()` method. Once finalised, only solver parameters (e.g. tolerances) can be changed; fundamental properties (e.g. which solver library to use) cannot. If an additional solver is required, the existing solver can be destroyed and recreated, or another solver can be constructed.
###Code
nonlinear_solver = iron.Solver()
linear_solver = iron.Solver()
problem.SolversCreateStart()
problem.SolverGet([iron.ControlLoopIdentifiers.NODE], 1, nonlinear_solver)
nonlinear_solver.OutputTypeSet(iron.SolverOutputTypes.NONE)
nonlinear_solver.NewtonJacobianCalculationTypeSet(
iron.JacobianCalculationTypes.EQUATIONS)
nonlinear_solver.NewtonLinearSolverGet(linear_solver)
linear_solver.LinearTypeSet(iron.LinearSolverTypes.DIRECT)
problem.SolversCreateFinish()
###Output
_____no_output_____
###Markdown
12. Defining solver equationsAfter defining our solver we can create the equations for the solver to solve. This is achieved by adding our equations set to an OpenCMISS-Iron `solver_equations` object. In this example we have just one equations set to add; for coupled problems, we may have multiple equations sets in the solver equations.
###Code
solver = iron.Solver()
solver_equations = iron.SolverEquations()
problem.SolverEquationsCreateStart()
problem.SolverGet([iron.ControlLoopIdentifiers.NODE], 1, solver)
solver.SolverEquationsGet(solver_equations)
solver_equations.SparsityTypeSet(iron.SolverEquationsSparsityTypes.SPARSE)
_ = solver_equations.EquationsSetAdd(equations_set)
problem.SolverEquationsCreateFinish()
###Output
_____no_output_____
###Markdown
13. Defining the boundary conditionsThe final step in configuring the problem is to define the boundary conditions for the simulations. Here, as stated at the top of the tutorial, we have set up five different boundary condition settings to represent five independent loading conditions on the cube geometry.- Model 1 (Uniaxial extension of a unit cube)- Model 2 (Equibiaxial extension of a unit cube)- Model 3 (Simple shear of a unit cube)- Model 4 (Shear of a unit cube)- Model 5 (Extension and shear of a unit cube)The variable `model` set at the top of the tutorial program can be used to switch between these deformations. Each line of code below sets a Dirichlet boundary condition that prescribes a nodal coordinate value. The constant `iron.BoundaryConditionsTypes.FIXED` indicates that the value is fixed to the number given in the final argument of the `boundary_conditions.AddNode` method.
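To make the long block of `AddNode` calls below easier to read, here is one of them with the arguments annotated. The labels are our reading of the call pattern used in this tutorial rather than text taken from the API documentation, so treat them as an interpretation.

```python
# One AddNode call from the block below, annotated (our interpretation of the arguments):
# boundary_conditions.AddNode(
#     dependent_field,                     # field the condition is applied to
#     iron.FieldVariableTypes.U,           # field variable (displacement)
#     1,                                   # version number
#     1,                                   # derivative number
#     1,                                   # user node number (cube corners are 1-8)
#     X,                                   # component to constrain (X, Y or Z)
#     iron.BoundaryConditionsTypes.FIXED,  # condition type
#     0.0)                                 # prescribed coordinate value
```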
###Code
# Prescribe boundary conditions (absolute nodal parameters).
boundary_conditions = iron.BoundaryConditions()
solver_equations.BoundaryConditionsCreateStart(boundary_conditions)
if model == 1:
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,1,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,3,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,5,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,7,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,2,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,4,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,6,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,8,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,1,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,2,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,5,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,6,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,1,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,2,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,3,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field, iron.FieldVariableTypes.U,1,1,4,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
elif model == 2:
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,3,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,5,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,7,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,2,X,iron.BoundaryConditionsTypes.FIXED,0.25)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,4,X,iron.BoundaryConditionsTypes.FIXED,0.25)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,X,iron.BoundaryConditionsTypes.FIXED,0.25)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,8,X,iron.BoundaryConditionsTypes.FIXED,0.25)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,2,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,5,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,3,Y,iron.BoundaryConditionsTypes.FIXED,0.25)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,4,Y,iron.BoundaryConditionsTypes.FIXED,0.25)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,7,Y,iron.BoundaryConditionsTypes.FIXED,0.25)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,8,Y,iron.BoundaryConditionsTypes.FIXED,0.25)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,2,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,3,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,4,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
elif model == 3:
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,3,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,5,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,7,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,2,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,4,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,8,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,2,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,5,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,2,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,3,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,4,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,5,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,7,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,8,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
elif model == 4:
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,3,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,8,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,2,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,5,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,3,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,Z,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,8,Z,iron.BoundaryConditionsTypes.FIXED,0.5)
elif model == 5:
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,3,X,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,5,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,7,X,iron.BoundaryConditionsTypes.FIXED,0.5)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,2,X,iron.BoundaryConditionsTypes.FIXED,0.25)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,4,X,iron.BoundaryConditionsTypes.FIXED,0.25)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,X,iron.BoundaryConditionsTypes.FIXED,0.75)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,8,X,iron.BoundaryConditionsTypes.FIXED,0.75)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,2,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,5,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,Y,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,1,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,2,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,3,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,4,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,5,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,6,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,7,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
boundary_conditions.AddNode(dependent_field,iron.FieldVariableTypes.U,1,1,8,Z,iron.BoundaryConditionsTypes.FIXED,0.0)
###Output
_____no_output_____
###Markdown
We then construct the solver matrices and vectors by making a call to the `solver_equations.BoundaryConditionsCreateFinish()` method.
###Code
solver_equations.BoundaryConditionsCreateFinish()
###Output
_____no_output_____
###Markdown
14. Solving the problemAfter the solver equations have been fully defined, we are ready to solve the problem. When we call the `Solve` method of the problem, it loops over the control loops and the control loop solvers to compute the solution.
###Code
# Solve the problem.
problem.Solve()
###Output
_____no_output_____
###Markdown
Visualising resultsWe can now visualise the resulting deformation as an animation using pythreejs.
###Code
threejs_visualiser.visualise(
mesh, geometric_field, dependent_field=dependent_field,
variable=iron.FieldVariableTypes.U, resolution=8, mechanics_animation=True)
###Output
_____no_output_____
###Markdown
Exporting resultsBefore we export the results in Cmgui format, we will first create a new `deformed_field` and `pressure_field` to separately hold the solution for the deformed geometry and hydrostatic pressure for visualisation in Cmgui (this enables simplified access to these fields in Cmgui visualisation scripts).
###Code
deformed_field_user_number = 5
deformed_field = iron.Field()
deformed_field.CreateStart(deformed_field_user_number, region)
deformed_field.MeshDecompositionSet(decomposition)
deformed_field.TypeSet(iron.FieldTypes.GEOMETRIC)
deformed_field.VariableLabelSet(iron.FieldVariableTypes.U, "DeformedGeometry")
for component in [1, 2, 3]:
deformed_field.ComponentMeshComponentSet(
iron.FieldVariableTypes.U, component, 1)
if interpolation_type == 4:
# Set arc length scaling for cubic-Hermite elements.
deformed_field.ScalingTypeSet(iron.FieldScalingTypes.ARITHMETIC_MEAN)
deformed_field.CreateFinish()
pressure_field_user_number = 6
pressure_field = iron.Field()
pressure_field.CreateStart(pressure_field_user_number, region)
pressure_field.MeshDecompositionSet(decomposition)
pressure_field.VariableLabelSet(iron.FieldVariableTypes.U, "Pressure")
pressure_field.ComponentMeshComponentSet(iron.FieldVariableTypes.U, 1, 1)
pressure_field.ComponentInterpolationSet(
iron.FieldVariableTypes.U, 1, iron.FieldInterpolationTypes.ELEMENT_BASED)
pressure_field.NumberOfComponentsSet(iron.FieldVariableTypes.U, 1)
pressure_field.CreateFinish()
# Copy deformed geometry into deformed field.
for component in [1, 2, 3]:
dependent_field.ParametersToFieldParametersComponentCopy(
iron.FieldVariableTypes.U,
iron.FieldParameterSetTypes.VALUES, component,
deformed_field, iron.FieldVariableTypes.U,
iron.FieldParameterSetTypes.VALUES, component)
# Copy the hydrostatic pressure solutions from the dependent field into the
# pressure field.
dependent_field.ParametersToFieldParametersComponentCopy(
iron.FieldVariableTypes.U,
iron.FieldParameterSetTypes.VALUES, 4,
pressure_field, iron.FieldVariableTypes.U,
iron.FieldParameterSetTypes.VALUES, 1)
# Export results to exnode and exelem format.
fields = iron.Fields()
fields.CreateRegion(region)
fields.NodesExport("cube", "FORTRAN")
fields.ElementsExport("cube", "FORTRAN")
fields.Finalise()
###Output
_____no_output_____
###Markdown
Evaluating mechanics tensor fields
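The quantities computed in the cell below follow the standard continuum-mechanics definitions; the formulas here simply restate what the code evaluates, with $C$ the right Cauchy-Green deformation tensor, $\sigma$ the Cauchy stress (`TC` in the code) and $F$ the deformation gradient:

$$I_1 = \mathrm{tr}(C), \qquad I_2 = \tfrac{1}{2}\left[(\mathrm{tr}\,C)^2 - \mathrm{tr}(C^2)\right], \qquad I_3 = \det(C)$$

$$T^G = J\,F^{-1}\,\sigma\,F^{-T}, \qquad \text{with } J = \det F \text{ taken as } 1 \text{ for this incompressible problem.}$$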
###Code
results = {}
elementNumber = 1
xiPosition = [0.5, 0.5, 0.5]
F = equations_set.TensorInterpolateXi(
iron.EquationsSetDerivedTensorTypes.DEFORMATION_GRADIENT,
elementNumber, xiPosition,(3,3))
results['Deformation Gradient Tensor'] = F
print("Deformation Gradient Tensor")
print(F)
C = equations_set.TensorInterpolateXi(
iron.EquationsSetDerivedTensorTypes.R_CAUCHY_GREEN_DEFORMATION,
elementNumber, xiPosition,(3,3))
results['Right Cauchy-Green Deformation Tensor'] = C
print("Right Cauchy-Green Deformation Tensor")
print(C)
E = equations_set.TensorInterpolateXi(
iron.EquationsSetDerivedTensorTypes.GREEN_LAGRANGE_STRAIN,
elementNumber, xiPosition,(3,3))
results['Green-Lagrange Strain Tensor'] = E
print("Green-Lagrange Strain Tensor")
print(E)
I1=numpy.trace(C)
I2=0.5*(numpy.trace(C)**2.-numpy.tensordot(C,C))
I3=numpy.linalg.det(C)
results['Invariants'] = [I1, I2, I3]
print("Invariants")
print("I1={0}, I2={1}, I3={2}".format(I1,I2,I3))
TC = equations_set.TensorInterpolateXi(
iron.EquationsSetDerivedTensorTypes.CAUCHY_STRESS,
elementNumber, xiPosition,(3,3))
results['Cauchy Stress Tensor'] = TC
print("Cauchy Stress Tensor")
print(TC)
# Calculate the second Piola-Kirchhoff stress tensor from
# TG = J*F^(-1)*TC*F^(-T), where F^(-T) is the inverse transpose of the
# deformation gradient and J is assumed to be 1 (incompressible deformation).
TG = numpy.dot(numpy.linalg.inv(F),numpy.dot(
TC,numpy.linalg.inv(numpy.matrix.transpose(F))))
results['Second Piola-Kirchhoff Stress Tensor'] = TG
print("Second Piola-Kirchhoff Stress Tensor")
print(TG)
p = dependent_field.ParameterSetGetElement(
iron.FieldVariableTypes.U,
iron.FieldParameterSetTypes.VALUES,elementNumber,4)
results['Hydrostatic pressure'] = p
print("Hydrostatic pressure")
print(p)
###Output
Hydrostatic pressure
1.1962935699867334
###Markdown
Finalising session
###Code
problem.Destroy()
coordinate_system.Destroy()
region.Destroy()
basis.Destroy()
iron.Finalise()
###Output
_____no_output_____ |
Lectures/Data Preprocessing/2. Advanced Pipelines - Complete.ipynb | ###Markdown
Advanced PipelinesWe now have a pretty strong repertoire of regression models. Depending on the data set there may be a number of preprocessing steps that should be taken prior to fitting the model. While we've learned basic pipelines and out-of-the-box transformer objects, you may need to perform preprocessing tasks that are too complicated for these simple tools. What We'll Accomplish in This Notebook- We'll review the differences between, and the need for, fit, transform and fit_transform- Introduce the popular California Housing Data Set- Demonstrate how to construct custom transformer objects for more advanced pipelines
###Code
## Import packages
## For data handling
import pandas as pd
import numpy as np
## For plotting
import matplotlib.pyplot as plt
import seaborn as sns
## This sets the plot style
## to have a grid on a white background
sns.set_style("whitegrid")
###Output
_____no_output_____
###Markdown
`fit`, `transform`, and `fit_transform`Hopefully you remember the terms `fit`, `transform` and `fit_transform` from the `Basic Pipelines` notebook. Let's return to the `StandardScaler` object as a reminder.https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html
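As a compact illustration of how these three methods relate (the variable names in this sketch are made up for the example and are not part of the notebook's data):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X_train_demo = np.random.randn(100, 1)  # illustrative training data
X_test_demo = np.random.randn(50, 1)    # illustrative test data

scaler_demo = StandardScaler()
# fit_transform = fit (learn the training mean/variance) followed by transform
X_train_scaled_demo = scaler_demo.fit_transform(X_train_demo)
# test data is only transformed, reusing the statistics learned from the training data
X_test_scaled_demo = scaler_demo.transform(X_test_demo)
```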
###Code
from sklearn.preprocessing import StandardScaler
###Output
_____no_output_____
###Markdown
From the documentation listed above we know that the standard scaler will take in the features, `X`, and scale them like so:$$\frac{X_i - \overline{X_i}}{s_{X_i}}.$$Let's generate some data.
###Code
X = 10*np.random.randn(100,1)-5
print("The mean of X is",np.mean(X))
print("The variance of X is",np.var(X))
###Output
The mean of X is -4.637962480205209
The variance of X is 113.07071862638341
###Markdown
Now we'll scale $X$.
###Code
## first we make a scaler object
scaler = StandardScaler()
## Then we fit it
scaler.fit(X)
print("The scaler was fit to have mean",scaler.mean_)
print("and variance",scaler.var_)
## The we transform the data, aka scale it
X_scaled = scaler.transform(X)
print("The mean of X is",np.mean(X_scaled))
print("The standard deviation of X is",np.std(X_scaled))
###Output
The mean of X is 2.4424906541753444e-17
The standard deviation of X is 0.9999999999999999
###Markdown
Now let's imagine we're ready to check the test error on our model. So we have to scale the test features.
###Code
X_test = 10*np.random.randn(100,1)-5.1
np.shape(X_test)
print("The mean of X_test is",np.mean(X_test))
print("The variance of X_test is",np.var(X_test))
###Output
The mean of X_test is -3.18717080849431
The variance of X_test is 111.89226947019259
###Markdown
Now what code should we write to scale the test data?
###Code
X_test_scaled = scaler.transform(X_test)
print(np.mean(X_test_scaled))
print(np.var(X_test_scaled))
###Output
0.13643631393267291
0.9895777689351671
###Markdown
The order in which these sorts of steps get done is important. This is because you only fit the model on the training data, and the scaler (and other preprocessing steps) is thought of as part of the model. Let's do a short practice. You Code A New ScalerGo to the documentation and read about the `MinMaxScaler` object, https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.MinMaxScaler.html.Use the `MinMaxScaler` to scale the following training and test data.
###Code
## Your train and test data
X_train = np.random.randint(1,1000,1000)
X_test = np.random.randint(1,1000,1000)
## Import MinMaxScaler here
## Sample Answer
from sklearn.preprocessing import MinMaxScaler
## Fit and transform the training and test data
## using a MinMaxScaler here
## Sample Answer
min_max = MinMaxScaler()
print("The min of X_train is",np.min(X_train))
print("The max of X_train is",np.max(X_train))
min_max.fit(X_train.reshape(-1,1))
X_train_scaled = min_max.transform(X_train.reshape(-1,1))
print("The min of scaled X_train is",np.min(X_train_scaled))
print("The max of scaled X_train is",np.max(X_train_scaled))
print()
print()
print("The min of X_test is",np.min(X_test))
print("The max of X_test is",np.max(X_test))
X_test_scaled = min_max.transform(X_test.reshape(-1,1))
print("The min of scaled X_test is",np.min(X_test_scaled))
print("The max of scaled X_test is",np.max(X_test_scaled))
###Output
The min of X_train is 2
The max of X_train is 999
The min of scaled X_train is 0.0
The max of scaled X_train is 1.0
The min of X_test is 1
The max of X_test is 999
The min of scaled X_test is -0.0010030090270812437
The max of scaled X_test is 1.0
###Markdown
Imputing ValuesSometimes your data may have missing values. It is often bad practice to throw away observations with missing values; one option is to instead impute them.Imputation is when you use the non-missing values to fill in the missing values. Three simple ways would be to replace the missing values with the mean, median, or mode of the training data.Here is the documentation on the `SimpleImputer`, https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html.We'll now impute the missing values in the following data using the median of the data.
###Code
## Here is some data
X_train = np.random.randn(1000)
X_test = np.random.randn(1000)
## With some values missing
X_train[np.random.choice(range(1000),20)] = np.nan
X_test[np.random.choice(range(1000),20)] = np.nan
## Import the SimpleImputer
from sklearn.impute import SimpleImputer
## Make the imputer object with the desired "strategy"
imp = SimpleImputer(strategy = 'median')
print("X_train has", sum(np.isnan(X_train)), "missing values.")
## impute the missing values
# first fit the imputer
imp.fit(X_train.reshape(-1,1))
# then transform
X_train_imp = imp.transform(X_train.reshape(-1,1))
print("After imputing X_train has", sum(np.isnan(X_train_imp)), "missing values.")
## Now impute on the test data
## note that we don't use the "fit" step here
print("X_test has", sum(np.isnan(X_test)), "missing values.")
X_test_imp = imp.transform(X_test.reshape(-1,1))
print("After imputing X_test has", sum(np.isnan(X_test_imp)), "missing values.")
###Output
X_test has 20 missing values.
After imputing X_test has [0] missing values.
###Markdown
The California Housing Data SetWe'll now introduce a popular machine learning data set, the California Housing data set. The data is used in the book Hands-On Machine Learning with Scikit-Learn and TensorFlow as an example of a machine learning workflow. This is an excellent book and a useful reference if you're looking to purchase a book about machine learning with python.We won't be using this data to build a predictive model, but rather to demonstrate the need for advanced pipelines.
###Code
## Read the data
df = pd.read_csv("https://raw.githubusercontent.com/ageron/handson-ml/master/datasets/housing/housing.csv")
df_train = df.copy().sample(frac=.75, random_state = 440)
df_test = df.copy().drop(df_train.index)
## Let's look at the dataframe info
df_train.info()
## What kind of categories are possible for ocean proximity?
df_train.ocean_proximity.value_counts()
## Each dot is at it's longitude and latitude
## the size of the dot is proportional to its population
## the color of the dot represents the median_house_value of the dot
df_train.plot(kind="scatter", x = "longitude", y = "latitude",
alpha = .9, s = df_train["population"]/50, label="population",
figsize=(12,14), c="median_house_value",cmap = plt.get_cmap("viridis"),
colorbar=True)
plt.xlabel("Longitude", fontsize=16)
plt.ylabel("Latitude", fontsize=16)
plt.show()
plt.figure(figsize=(12,14))
sns.scatterplot(data=df_train,x="longitude",y="latitude",hue="ocean_proximity")
plt.xlabel("Longitude", fontsize=16)
plt.ylabel("Latitude", fontsize=16)
plt.show()
###Output
_____no_output_____
###Markdown
Now from our exploration of the data we can see that this data set has a number of preprocessing steps:

1. `total_bedrooms` has a number of missing values that could be imputed
2. `ocean_proximity` needs to be one-hot-encoded
3. Many columns have vastly differing scales, so we should scale them
4. We may want to create additional features from our other features.

Now we'll review how to do 1. and 2., then it will be your job to incorporate 3. and 4. As we go through let's remember two main points:

- Fitting should only be performed on the training set
- A good pipeline takes in the features and target without any preprocessing and outputs the fit or prediction

Imputing `total_bedrooms`Recall that we only want to impute the column for `total_bedrooms`. If we were to put `SimpleImputer` as is into the pipeline we'd be imputing the entire dataframe. While this isn't an issue for this dataset (because only `total_bedrooms` is missing data), it's an excellent time to introduce how you can create a custom imputer object.`sklearn` is quite nice because it gives us the functionality to make custom transformers relatively easily. To do this we make our own transformer object. To fully grasp everything going on, check out the bonus content notebook in the `python prep` folder where I review objects and classes in python. If you're happy just copying and pasting the code for your own transformers (no shame in that for now, we're learning a lot of data science), there's no need to read through those notes.
###Code
## We'll need these
from sklearn.impute import SimpleImputer
from sklearn.base import BaseEstimator, TransformerMixin
###Output
_____no_output_____
###Markdown
A python object is an instance of a python class.Below we define our `BedroomImputer` class.
###Code
## Define our custom imputer
class BedroomImputer(BaseEstimator, TransformerMixin):
# Class Constructor
# This allows you to initiate the class when you call
# BedroomImputer
def __init__(self):
# I want to initiate each object with
# the SimpleImputer method
self.SimpleImputer = SimpleImputer(strategy = "median")
# For my fit method I'm just going to "steal"
# SimpleImputer's fit method using only the
# 'total_bedrooms' column
def fit(self, X, y = None ):
self.SimpleImputer.fit(X['total_bedrooms'].values.reshape(-1,1))
return self
# Now I want to transform the total_bedrooms columns
# and return it with imputed values
def transform(self, X, y = None):
copy_X = X.copy()
copy_X['total_bedrooms'] = self.SimpleImputer.transform(copy_X['total_bedrooms'].values.reshape(-1,1))
return copy_X
###Output
_____no_output_____
###Markdown
We now have a custom imputer let's put it to work.
###Code
imputer = BedroomImputer()
df_train.total_bedrooms.describe()
imputer.fit(df_train)
imputer.transform(df_train).total_bedrooms.describe()
imputer.fit_transform(df_train)
###Output
_____no_output_____
###Markdown
One-Hot-Encoding `ocean_proximity`Now let's see how we can one-hot-encode `ocean_proximity`.Here we can use the `FunctionTransformer` object.
###Code
from sklearn.preprocessing import FunctionTransformer
# define our preprocessing function
# This creates bedrooms_per_room
# and one hot encodes ocean_proximity
def one_hot_encode(df):
df_copy = df.copy()
hot_encoding = pd.get_dummies(df_copy['ocean_proximity'])
df_copy[hot_encoding.columns[:-1]] = hot_encoding[hot_encoding.columns[:-1]]
return df_copy
one_hot = FunctionTransformer(one_hot_encode)
one_hot.transform(df_train)
###Output
_____no_output_____
###Markdown
Great!Now it's your turn. You CodeYour boss has told you that her end goal is to regress `median_house_value` on `median_income`, `ocean_proximity`, and a new feature, `bedrooms_per_room`.Write a function called `get_feats` that takes in a feature dataframe and returns the columns for `median_income`, the one-hot-encoded `ocean_proximity`, and `bedrooms_per_room`. Feel free to use the `one_hot_encode` function or not. Then create a `FunctionTransformer` object using `get_feats`, and check that running `df_train` through your transformer object returns a dataframe with the desired columns, i.e. `median_income`, the one-hot-encoded `ocean_proximity` and `bedrooms_per_room`.
###Code
# df should hold the features not the target
def get_feats(df):
# make a copy of the dataframe
# I'll assume that I've already created the
# one-hot-encoded columns
df_copy = df.copy()
# calculate bedrooms_per_room
df_copy['bedrooms_per_room'] = df_copy['total_bedrooms']/df_copy['total_rooms']
return df_copy[['median_income', 'bedrooms_per_room', '<1H OCEAN',
'INLAND','ISLAND', 'NEAR BAY']]
## Code here
test_transformer = FunctionTransformer(get_feats)
test_df = one_hot.transform(df_train)
test_transformer.transform(test_df)
###Output
_____no_output_____
###Markdown
Now you remember that it's important to scale the data prior to fitting your model. However, you only want to scale the columns for `median_income` and `bedrooms_per_room`, not the one-hot-encoded columns. Following the approach we took for `BedroomImputer`, define a custom scaler called `NumericScale` that takes in the dataframe produced by `get_feats` and scales the `median_income` and `bedrooms_per_room` columns. Hint: use `StandardScaler` in a manner similar to how `SimpleImputer` was used above.
###Code
## Below is a SAMPLE SOLUTION
from sklearn.preprocessing import StandardScaler
## Code here
# Define our custom Scaler
class NumericScale(BaseEstimator, TransformerMixin):
#Class Constructor
# This allows you to initiate the class when you call
# NumericScale
def __init__(self):
# I want to initiate each object with
# the StandardScaler
self.StandardScaler = StandardScaler()
# For my fit method I'm just going to "steal"
# StandardScaler's fit method using only the
# 'median_income' and 'bedrooms_per_room' columns
def fit(self, X, y = None ):
self.StandardScaler.fit(X[['median_income', 'bedrooms_per_room']])
return self
# Now I want to transform the 'median_income' and
# 'bedrooms_per_room' columns and return it with scaled values
def transform(self, X, y = None):
copy_X = X.copy()
copy_X[['median_income', 'bedrooms_per_room']] = \
self.StandardScaler.transform(copy_X[['median_income', 'bedrooms_per_room']])
return copy_X
## Code here
# I'll test the scaler here!
test_scale = NumericScale()
test_scale.fit_transform(test_transformer.transform(test_df)).describe()
###Output
_____no_output_____
###Markdown
Now we can put it all together!
###Code
X_train = df_train[['total_rooms','total_bedrooms','median_income','ocean_proximity']]
y_train = df_train['median_house_value']
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LinearRegression
pipe = Pipeline([('impute',BedroomImputer()),
('one_hot',FunctionTransformer(one_hot_encode)),
('get_feats',FunctionTransformer(get_feats)),
('scale',NumericScale()),
('reg',LinearRegression(copy_X = True))])
pipe.fit(X_train,y_train)
train_res = pipe.predict(X_train) - y_train.values
print("The training RMSE is",
np.round(np.sqrt( np.sum(np.power(train_res,2))/len(train_res) ),2) )
###Output
The training RMSE is 73064.85
|
Arrhythmia_CNN_H.ipynb | ###Markdown
Reference[Paper](https://www.sciencedirect.com/science/article/abs/pii/S0010482518302713) [Link](https://github.com/tom-beer/Arrhythmia-CNN)
###Code
# from google.colab import drive
# drive.mount('/content/drive')
# !git clone https://github.com/tom-beer/Arrhythmia-CNN.git
# %cd Arrhythmia-CNN/
from __future__ import print_function
import torch
import torch.utils.data
import numpy as np
import pandas as pd
from torch import nn, optim
from torch.utils.data.dataset import Dataset
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler
is_cuda = True
num_epochs = 100
batch_size = 10
torch.manual_seed(46)
log_interval = 10
in_channels_ = 1
num_segments_in_record = 100
segment_len = 3600
num_records = 48
num_classes = 16
allow_label_leakage = True
device = torch.device("cuda:0" if is_cuda else "cpu")
# train_ids, test_ids = train_test_split(np.arange(index_set), train_size=.8, random_state=46)
# scaler = MinMaxScaler(feature_range=(0, 1), copy=False)
class CustomDatasetFromCSV(Dataset):
def __init__(self, data_path, transforms_=None):
self.df = pd.read_pickle(data_path)
self.transforms = transforms_
def __getitem__(self, index):
row = self.df.iloc[index]
signal = row['signal']
target = row['target']
if self.transforms is not None:
signal = self.transforms(signal)
signal = signal.reshape(1, signal.shape[0])
return signal, target
def __len__(self):
return self.df.shape[0]
train_dataset = CustomDatasetFromCSV('/content/drive/MyDrive/Arrhythmia-CNN-master/data/Arrhythmia_dataset.pkl')
train_loader = torch.utils.data.DataLoader(dataset=train_dataset, batch_size=batch_size, shuffle=False)
test_dataset = CustomDatasetFromCSV('/content/drive/MyDrive/Arrhythmia-CNN-master/data/Arrhythmia_dataset.pkl')
test_loader = torch.utils.data.DataLoader(dataset=test_dataset, batch_size=1, shuffle=False)
class Flatten(torch.nn.Module):
def forward(self, x):
batch_size = x.shape[0]
return x.view(batch_size, -1)
def basic_layer(in_channels, out_channels, kernel_size, batch_norm=False, max_pool=True, conv_stride=1, padding=0
, pool_stride=2, pool_size=2):
layer = nn.Sequential(
nn.Conv1d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, stride=conv_stride,
padding=padding),
nn.ReLU())
if batch_norm:
layer = nn.Sequential(
layer,
nn.BatchNorm1d(num_features=out_channels))
if max_pool:
layer = nn.Sequential(
layer,
nn.MaxPool1d(kernel_size=pool_size, stride=pool_stride))
return layer
class arrhythmia_classifier(nn.Module):
def __init__(self, in_channels=in_channels_):
super(arrhythmia_classifier, self).__init__()
self.cnn = nn.Sequential(
basic_layer(in_channels=in_channels, out_channels=128, kernel_size=50, batch_norm=True, max_pool=True,
conv_stride=3, pool_stride=3),
basic_layer(in_channels=128, out_channels=32, kernel_size=7, batch_norm=True, max_pool=True,
conv_stride=1, pool_stride=2),
basic_layer(in_channels=32, out_channels=32, kernel_size=10, batch_norm=False, max_pool=False,
conv_stride=1),
basic_layer(in_channels=32, out_channels=128, kernel_size=5, batch_norm=False, max_pool=True,
conv_stride=2, pool_stride=2),
basic_layer(in_channels=128, out_channels=256, kernel_size=15, batch_norm=False, max_pool=True,
conv_stride=1, pool_stride=2),
basic_layer(in_channels=256, out_channels=512, kernel_size=5, batch_norm=False, max_pool=False,
conv_stride=1),
basic_layer(in_channels=512, out_channels=128, kernel_size=3, batch_norm=False, max_pool=False,
conv_stride=1),
Flatten(),
nn.Linear(in_features=1152, out_features=512),
nn.ReLU(),
nn.Dropout(p=.1),
nn.Linear(in_features=512, out_features=num_classes),
            nn.LogSoftmax(dim=1)  # log-probabilities, required by the NLLLoss criterion used below (plain Softmax here was a bug)
)
def forward(self, x, ex_features=None):
return self.cnn(x)
def calc_next_len_conv1d(current_len=112500, kernel_size=16, stride=8, padding=0, dilation=1):
return int(np.floor((current_len + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1))
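# Example: with the default padding/dilation, the first Conv1d above (kernel_size=50,
# stride=3) maps a segment_len=3600 input to floor((3600 - 50)/3) + 1 = 1184 time steps:
# calc_next_len_conv1d(3600, kernel_size=50, stride=3)  # -> 1184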
model = arrhythmia_classifier().to(device).double()
lr = 3e-1
num_of_iteration = len(train_dataset) // batch_size
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=1e-6)
criterion = nn.NLLLoss()
def train(epoch):
model.train()
train_loss = 0
for batch_idx, (data, target) in enumerate(train_loader):
data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = criterion(output, target)
loss.backward()
train_loss += loss.item()
optimizer.step()
if batch_idx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader),
loss.item() / len(data)))
print('====> Epoch: {} Average loss: {:.4f}'.format(
epoch, train_loss / len(train_loader.dataset)))
def test(epoch):
model.eval()
test_loss = 0
with torch.no_grad():
all_=0
t_=0
for batch_idx, (data, target) in enumerate(test_loader):
all_+=1
data, target = data.to(device), target.to(device)
output = model(data)
loss = criterion(output, target)
test_loss += loss.item()
if batch_idx == 0:
n = min(data.size(0), 4)
list_=output[0].tolist()
pridict_=int(list_.index(max(list_)))
if int(target)==pridict_:
t_+=1
accuracy_=t_/all_
test_loss /= len(test_loader.dataset)
print('====> Test set loss: {:.5f}'.format(test_loss))
print('====> Test Accuracy: {:.5f}'.format(accuracy_))
# print(f'Learning rate: {optimizer.param_groups[0]["lr"]:.6f}')
for epoch in range(1, num_epochs + 1):
train(epoch)
test(epoch)
###Output
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/container.py:119: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
input = module(input)
|
Applied_Capstone_Week_3_Assignment.ipynb | ###Markdown
Battle of the Neighbourhoods - Toronto This notebook contains Questions 1, 2 & 3 of the Assignment. They have been segregated by Section headers
###Code
import pandas as pd
###Output
_____no_output_____
###Markdown
Question 1 Importing Data
###Code
import requests
url = "https://en.wikipedia.org/wiki/List_of_postal_codes_of_Canada:_M"
wiki_url = requests.get(url)
wiki_url
###Output
_____no_output_____
###Markdown
Response 200 means that we are able to make the connection to the page
###Code
wiki_data = pd.read_html(wiki_url.text)
wiki_data
len(wiki_data), type(wiki_data)
###Output
_____no_output_____
###Markdown
We only need the first table, so we drop the other tables
###Code
wiki_data = wiki_data[0]
wiki_data
###Output
_____no_output_____
###Markdown
Dropping rows whose Borough is "Not assigned"
###Code
df = wiki_data[wiki_data["Borough"] != "Not assigned"]
df
###Output
_____no_output_____
###Markdown
Grouping the records based on Postal Code
###Code
df = df.groupby(['Postal Code']).head()
df
###Output
_____no_output_____
###Markdown
Checking for number of records where Neighbourhood is "Not assigned"
###Code
df.Neighbourhood.str.count("Not assigned").sum()
df = df.reset_index()
df
df.drop(['index'], axis = 'columns', inplace = True)
df
df.shape
###Output
_____no_output_____
###Markdown
Answer to Question 1: We have 103 rows and 3 columns Question 2 Installing geocoder
###Code
pip install geocoder
import geocoder # import geocoder
###Output
_____no_output_____
###Markdown
Tried the below approach, ran for 20 mins, then killed it. Changing the code cell to Text for now so that the run all execution doesn't stop. ```python initialize your variable to Nonelat_lng_coords = Nonepostal_code = 'M3A' loop until you get the coordinateswhile(lat_lng_coords is None): g = geocoder.google('{}, Toronto, Ontario'.format(postal_code)) lat_lng_coords = g.latlnglatitude = lat_lng_coords[0]longitude = lat_lng_coords[1]``` Alternatively, as suggested in the assignment, Importing the CSV file from the URL
###Code
data = pd.read_csv("https://cocl.us/Geospatial_data")
data
print("The shape of our wiki data is: ", df.shape)
print("the shape of our csv data is: ", data.shape)
###Output
The shape of our wiki data is: (103, 3)
the shape of our csv data is: (103, 3)
###Markdown
Since the dimensions are the same, we can join on the postal codes to get the required data.Checking the column types of both dataframes, especially the Postal Code column, since we are joining on it
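An equivalent way to express the same inner join (shown only as an alternative; the notebook itself uses `DataFrame.join` below) is `pd.merge`:

```python
# Alternative to the join used below; produces the same combined table on this data.
combined_data_alt = pd.merge(df, data, on='Postal Code', how='inner')
```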
###Code
df.dtypes
data.dtypes
combined_data = df.join(data.set_index('Postal Code'), on='Postal Code', how='inner')
combined_data
combined_data.shape
###Output
_____no_output_____
###Markdown
**Solution:** We get 103 rows as expected when we do an inner join, so we have good data. Question 3 Drawing inspiration from the previous lab where we clustered the neighbourhoods of NYC, we cluster Toronto based on the similarity of venue categories using k-means clustering and the Foursquare API.
###Code
from geopy.geocoders import Nominatim
address = 'Toronto, Ontario'
geolocator = Nominatim(user_agent="toronto_explorer")
location = geolocator.geocode(address)
latitude = location.latitude
longitude = location.longitude
print('The coordinates of Toronto are {}, {}.'.format(latitude, longitude))
###Output
The coordinates of Toronto are 43.6534817, -79.3839347.
###Markdown
Let's visualize the map of Toronto
###Code
import folium
# Creating the map of Toronto
map_Toronto = folium.Map(location=[latitude, longitude], zoom_start=11)
# adding markers to map
for latitude, longitude, borough, neighbourhood in zip(combined_data['Latitude'], combined_data['Longitude'], combined_data['Borough'], combined_data['Neighbourhood']):
label = '{}, {}'.format(neighbourhood, borough)
label = folium.Popup(label, parse_html=True)
folium.CircleMarker(
[latitude, longitude],
radius=5,
popup=label,
color='red',
fill=True
).add_to(map_Toronto)
map_Toronto
###Output
_____no_output_____
###Markdown
Initializing Foursquare API credentials
###Code
CLIENT_ID = 'LDIJF4KI5VGMMA3NNDLFZWHR12TCMNTUL0TUC3QPZ3SJD040'
CLIENT_SECRET = 'EOOOZ3EF5N0FOMNUJVTDV0SXVUVVEBMWPFXMNBK1R5K4H55A'
VERSION = '20180605' # Foursquare API version
print('Your credentails:')
print('CLIENT_ID: ' + CLIENT_ID)
print('CLIENT_SECRET:' + CLIENT_SECRET)
###Output
Your credentails:
CLIENT_ID: LDIJF4KI5VGMMA3NNDLFZWHR12TCMNTUL0TUC3QPZ3SJD040
CLIENT_SECRET:EOOOZ3EF5N0FOMNUJVTDV0SXVUVVEBMWPFXMNBK1R5K4H55A
###Markdown
Next, we create a function to get all the venue categories in Toronto
###Code
def getNearbyVenues(names, latitudes, longitudes, radius=500):
venues_list=[]
for name, lat, lng in zip(names, latitudes, longitudes):
print(name)
# create the API request URL
url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}'.format(
CLIENT_ID,
CLIENT_SECRET,
VERSION,
lat,
lng,
radius
)
# make the GET request
results = requests.get(url).json()["response"]['groups'][0]['items']
# return only relevant information for each nearby venue
venues_list.append([(
name,
lat,
lng,
v['venue']['name'],
v['venue']['categories'][0]['name']) for v in results])
nearby_venues = pd.DataFrame([item for venue_list in venues_list for item in venue_list])
nearby_venues.columns = ['Neighbourhood',
'Neighbourhood Latitude',
'Neighbourhood Longitude',
'Venue',
'Venue Category']
return(nearby_venues)
###Output
_____no_output_____
###Markdown
Collecting the venues in Toronto for each Neighbourhood
###Code
venues_in_toronto = getNearbyVenues(combined_data['Neighbourhood'], combined_data['Latitude'], combined_data['Longitude'])
venues_in_toronto.shape
###Output
_____no_output_____
###Markdown
So we have 1317 records and 5 columns. Checking sample data
###Code
venues_in_toronto.head()
###Output
_____no_output_____
###Markdown
Checking the Venues based on Neighbourhood
###Code
venues_in_toronto.groupby('Neighbourhood').head()
###Output
_____no_output_____
###Markdown
So there are 405 records for each neighbourhood.Checking for the maximum venue categories
###Code
venues_in_toronto.groupby('Venue Category').max()
###Output
_____no_output_____
###Markdown
There are around 232 different types of Venue Categories. Interesting! One Hot encoding the venue Categories
###Code
toronto_venue_cat = pd.get_dummies(venues_in_toronto[['Venue Category']], prefix="", prefix_sep="")
toronto_venue_cat
###Output
_____no_output_____
###Markdown
Adding the neighbourhood to the encoded dataframe
###Code
toronto_venue_cat['Neighbourhood'] = venues_in_toronto['Neighbourhood']
# moving neighborhood column to the first column
fixed_columns = [toronto_venue_cat.columns[-1]] + list(toronto_venue_cat.columns[:-1])
toronto_venue_cat = toronto_venue_cat[fixed_columns]
toronto_venue_cat.head()
###Output
_____no_output_____
###Markdown
We will group by Neighbourhood and calculate the mean frequency of each venue category in each neighbourhood
###Code
toronto_grouped = toronto_venue_cat.groupby('Neighbourhood').mean().reset_index()
toronto_grouped.head()
###Output
_____no_output_____
###Markdown
Let's make a function to get the top most common venue categories
###Code
def return_most_common_venues(row, num_top_venues):
row_categories = row.iloc[1:]
row_categories_sorted = row_categories.sort_values(ascending=False)
return row_categories_sorted.index.values[0:num_top_venues]
import numpy as np
###Output
_____no_output_____
###Markdown
There are way too many venue categories, so we take the top 10 in each neighbourhood to cluster the neighbourhoods
###Code
num_top_venues = 10
indicators = ['st', 'nd', 'rd']
# create columns according to number of top venues
columns = ['Neighbourhood']
for ind in np.arange(num_top_venues):
try:
columns.append('{}{} Most Common Venue'.format(ind+1, indicators[ind]))
except:
columns.append('{}th Most Common Venue'.format(ind+1))
# create a new dataframe
neighborhoods_venues_sorted = pd.DataFrame(columns=columns)
neighborhoods_venues_sorted['Neighbourhood'] = toronto_grouped['Neighbourhood']
for ind in np.arange(toronto_grouped.shape[0]):
neighborhoods_venues_sorted.iloc[ind, 1:] = return_most_common_venues(toronto_grouped.iloc[ind, :], num_top_venues)
neighborhoods_venues_sorted.head()
###Output
_____no_output_____
###Markdown
Let's make the model to cluster our Neighbourhoods
###Code
# import k-means from clustering stage
from sklearn.cluster import KMeans
# set number of clusters
k_num_clusters = 5
toronto_grouped_clustering = toronto_grouped.drop('Neighbourhood', axis=1)
# run k-means clustering
kmeans = KMeans(n_clusters=k_num_clusters, random_state=0).fit(toronto_grouped_clustering)
kmeans
###Output
_____no_output_____
###Markdown
Checking the labelling of our model
###Code
kmeans.labels_[0:100]
###Output
_____no_output_____
###Markdown
Let's add the clustering Label column to the top 10 common venue categories
###Code
neighborhoods_venues_sorted.insert(0, 'Cluster Labels', kmeans.labels_)
###Output
_____no_output_____
###Markdown
Join neighborhoods_venues_sorted with combined_data on Neighbourhood to add the latitude & longitude of each neighbourhood, preparing the data for plotting
###Code
toronto_merged = combined_data
toronto_merged = toronto_merged.join(neighborhoods_venues_sorted.set_index('Neighbourhood'), on='Neighbourhood')
toronto_merged.head()
###Output
_____no_output_____
###Markdown
Drop all the NaN values to prevent data skew
###Code
toronto_merged_nonan = toronto_merged.dropna(subset=['Cluster Labels'])
###Output
_____no_output_____
###Markdown
Plotting the clusters on the map
###Code
import matplotlib.cm as cm
import matplotlib.colors as colors
map_clusters = folium.Map(location=[latitude, longitude], zoom_start=11)
# set color scheme for the clusters
x = np.arange(k_num_clusters)
ys = [i + x + (i*x)**2 for i in range(k_num_clusters)]
colors_array = cm.rainbow(np.linspace(0, 1, len(ys)))
rainbow = [colors.rgb2hex(i) for i in colors_array]
# add markers to the map
markers_colors = []
for lat, lon, poi, cluster in zip(toronto_merged_nonan['Latitude'], toronto_merged_nonan['Longitude'], toronto_merged_nonan['Neighbourhood'], toronto_merged_nonan['Cluster Labels']):
label = folium.Popup('Cluster ' + str(int(cluster) +1) + '\n' + str(poi) , parse_html=True)
folium.CircleMarker(
[lat, lon],
radius=5,
popup=label,
color=rainbow[int(cluster-1)],
fill=True,
fill_color=rainbow[int(cluster-1)]
).add_to(map_clusters)
map_clusters
###Output
_____no_output_____
###Markdown
Let's verify each of our clustersCluster 1
###Code
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 0, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 2
###Code
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 1, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 3
###Code
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 2, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 4
###Code
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 3, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
###Output
_____no_output_____
###Markdown
Cluster 5
###Code
toronto_merged_nonan.loc[toronto_merged_nonan['Cluster Labels'] == 4, toronto_merged_nonan.columns[[1] + list(range(5, toronto_merged_nonan.shape[1]))]]
###Output
_____no_output_____ |
notebooks/parcels/Remote/beaching.ipynb | ###Markdown
**Template OP on salish**
###Code
%matplotlib inline
import sys
import xarray as xr
import numpy as np
import os
import math
from datetime import datetime, timedelta
from parcels import FieldSet, Field, VectorField, ParticleSet, JITParticle, ErrorCode, ParcelsRandom
sys.path.append('/home/jvalenti/MOAD/analysis-jose/notebooks/parcels')
from Kernels_beaching import DeleteParticle, Buoyancy, AdvectionRK4_3D, Stokes_drift, Beaching, Unbeaching
from OP_functions_beaching import *
# Define paths
local = 0 #Set to 0 when working on server
paths = path(local)
Dat=xr.open_dataset(get_WW3_path(datetime(2018, 12, 23)))
path_NEMO = make_prefix(datetime(2018, 12, 23), paths['NEMO'])
Dat0=xr.open_dataset(path_NEMO + '_grid_W.nc')
Dat=xr.open_dataset(path_NEMO + '_grid_T.nc')
coord=xr.open_dataset(paths['coords'],decode_times=False)
WW3 = xr.open_dataset(get_WW3_path(datetime(2018, 12, 23)))
batt=xr.open_dataset(paths['mask'],decode_times=False)
###Output
_____no_output_____
###Markdown
Define and save a mask for distance to coast (kept commented out below; the pre-computed coastal points are read from clat.txt and clon.txt instead)
###Code
# def maskcoast():
# maskd=np.zeros(batt.mbathy.shape)
# for ilat in range(baty.shape[0]):
# for jlon in range(baty.shape[1]):
# if baty[ilat,jlon]>0:
# if baty[ilat+1,jlon] == 0 or baty[ilat+1,jlon+1] == 0 \
# or baty[ilat+1,jlon-1] == 0 or baty[ilat,jlon] == 0 \
# or baty[ilat,jlon+1] == 0 or baty[ilat,jlon-1] == 0 \
# or baty[ilat-1,jlon] == 0 or baty[ilat-1,jlon+1] == 0 or baty[ilat-1,jlon-1] == 0:
# maskd[0,ilat,jlon] = 1
# return maskd
# baty = batt.mbathy[0,:,:]
# maskd=maskcoast()
# def maskcoast2(maskd):
# maskd2=np.zeros(batt.mbathy.shape)
# Ilat=np.where(maskd[0,:,:]==1)[0]
# Jlon=np.where(maskd[0,:,:]==1)[1]
# for i in range(len(np.where(maskd[0,:,:]==1)[0])):
# ilat=Ilat[i]
# jlon=Jlon[i]
# maskd2[0,ilat,jlon]=1
# #if ilat<baty.shape[0] or jlon<baty.shape[1]:
# return maskd2
# maskd2=maskcoast2(maskd)
# dist=xr.DataArray(attrs={'Distc':maskd2})
# batt['Distc']=(batt.mbathy.dims,maskd)
# batt.to_netcdf(path='/ocean/jvalenti/MOAD/grid/mesh_maskd2T201702.nc')
#clat,clon = p_unidist(coord.gphif[0,:,:],coord.glamf[0,:,:],batt.mbathy[0,:,:],10,10)
# Read the pre-computed coastal release points, stored as bracketed, comma-separated lists
with open('clat.txt') as f:
    clat = f.read()
clat = clat[1:-1]                 # strip the surrounding brackets
clat0 = clat.split(",")
with open('clon.txt') as f:
    clon = f.read()
clon = clon[1:-1]                 # strip the surrounding brackets
clon0 = clon.split(",")
# Convert the string entries to floats
clat, clon = [], []
for i in range(len(clat0)):
    clat.append(float(clat0[i]))
    clon.append(float(clon0[i]))
###Output
_____no_output_____
###Markdown
Definitions
###Code
start = datetime(2018, 10, 1) #Start date
# Set Time length [days] and timestep [seconds]
length = 50
duration = timedelta(days=length)
dt = 90 #toggle between - or + to pick backwards or forwards
N = len(clat) # number of deploying locations
n = 1 # 1000 # number of particles per location
dmin = list(np.zeros(len(clat))) #minimum depth
dd = 5 #max depth difference from dmin
x_offset, y_offset, zvals = p_deploy(N,n,dmin,dd)
from parcels import Variable
class MPParticle(JITParticle):
ro = Variable('ro', initial = 1025)
diameter = Variable('diameter', initial = 1.6e-5)
length = Variable('length', initial = 61e-5)
    Lb = Variable('Lb', initial = 0.26) # days needed for a particle to have a 67% probability of beaching while in the beaching zone (500 m)
Db = Variable('Db', initial = 1000) #Distance at which particles can randomly beach.
Ub = Variable('Ub', initial = 69) #days to have 67% probability of unbeaching
sediment = Variable('sediment', initial = 0)
beached = Variable('beached', initial = 0)
lon = np.zeros([N,n])
lat = np.zeros([N,n])
for i in range(N):
lon[i,:]=(clon[i] + x_offset[i,:])
lat[i,:]=(clat[i] + y_offset[i,:])
z = zvals
#Set start date time and the name of the output file
name = 'Beaching-UnbeachingL' #name output file
daterange = [start+timedelta(days=i) for i in range(length)]
fn = name + '_'.join(d.strftime('%Y%m%d')+'_1n' for d in [start, start+duration]) + '.nc'
outfile = os.path.join(paths['out'], fn)
print(outfile)
###Output
/home/jvalenti/MOAD/results/Beaching-UnbeachingL20181001_1n_20181120_1n.nc
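###Markdown
The `Lb` and `Ub` values above are described as the number of days needed for roughly a 67% beaching/unbeaching probability. One common way to turn such a timescale into a per-timestep probability is an exponential waiting-time model, P(event within t) = 1 - exp(-t/tau), which gives 1 - 1/e (about 63%) at t = tau. The sketch below only illustrates that conversion and is an assumption here, since the actual kernels are imported from `Kernels_beaching` and not shown in this notebook.
###Code
# Illustration only: per-timestep probability implied by an exponential waiting time.
import numpy as np
tau_days = 0.26                      # Lb from the particle class above
dt_seconds = 90                      # model timestep used in this notebook
p_per_step = 1 - np.exp(-dt_seconds / (tau_days * 86400))
print(p_per_step)                    # small probability applied at every step spent in the beaching zone
###Output
_____no_output_____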
###Markdown
Simulation
###Code
#Fill in the list of variables that you want to use as fields
varlist=['U','V','W','R']
filenames,variables,dimensions=filename_set(start,length,varlist,local)
field_set=FieldSet.from_nemo(filenames, variables, dimensions, allow_time_extrapolation=True)
varlist=['US','VS','WL']
filenames,variables,dimensions=filename_set(start,length,varlist,local)
us = Field.from_netcdf(filenames['US'], variables['US'], dimensions,allow_time_extrapolation=True)
vs = Field.from_netcdf(filenames['VS'], variables['VS'], dimensions,allow_time_extrapolation=True)
wl = Field.from_netcdf(filenames['WL'], variables['WL'], dimensions,allow_time_extrapolation=True)
field_set.add_field(us)
field_set.add_field(vs)
field_set.add_field(wl)
field_set.add_vector_field(VectorField("stokes", us, vs, wl))
filenames,variables,dimensions=filename_set(start,length,['Bathy'],local)
Bth = Field.from_netcdf(filenames['Bathy'], variables['Bathy'], dimensions,allow_time_extrapolation=True)
field_set.add_field(Bth)
# filenames,variables,dimensions=filename_set(start,length,['D'],local)
# Distc = Field.from_netcdf(filenames['D'], variables['D'], dimensions,allow_time_extrapolation=True)
# field_set.add_field(Distc)
# #Load Salish output as fields
#field_set = FieldSet.from_nemo(filenames, variables, dimensions, allow_time_extrapolation=True)
pset = ParticleSet.from_list(field_set, MPParticle, lon=lon, lat=lat, depth=z, time=start+timedelta(hours=2))
k_sink = pset.Kernel(Buoyancy)
k_waves = pset.Kernel(Stokes_drift)
k_beach = pset.Kernel(Beaching)
k_unbeach = pset.Kernel(Unbeaching)
pset.execute(AdvectionRK4_3D + k_sink + k_waves + k_beach + k_unbeach,
runtime=duration,
dt=dt,
output_file=pset.ParticleFile(name=outfile, outputdt=timedelta(hours=1)),
recovery={ErrorCode.ErrorOutOfBounds: DeleteParticle})
###Output
INFO: Compiled ArrayMPParticleAdvectionRK4_3DBuoyancyStokes_driftBeachingUnbeaching ==> /tmp/parcels-2894/lib1ac38246b603dfd8b112acf9ff7df34b_0.so
|
WeedCode/Synthetic_sugarbeets/train.ipynb | ###Markdown
Mask R-CNN - Train on the Synthetic Sugarbeets Dataset. This notebook shows how to train Mask R-CNN on your own dataset; here we use a synthetic sugarbeet/weed dataset (classes: sugarbeet, Capsella and Galium). You still need a GPU, because the network backbone is a ResNet-101, which would be too slow to train on a CPU. On a GPU you can start to get okay-ish results in a few minutes, and good results in less than an hour. The dataset class that loads the synthetic images and their instance masks is defined below.
###Code
import os
import sys
import random
import math
import re
import time
import numpy as np
import cv2
import glob
import matplotlib
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore", message=r"Passing", category=FutureWarning)
# Root directory of the project
ROOT_DIR = os.path.abspath("..\\..\\")
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn.config import Config
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
from mrcnn.model import log
%matplotlib inline
# Directory to save logs and trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_coco.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
utils.download_trained_weights(COCO_MODEL_PATH)
###Output
Using TensorFlow backend.
###Markdown
Configurations
###Code
class SyntheticSugarbeetsConfig(Config):
"""Configuration for training on the Synthetic Sugarbeets dataset.
Derives from the base Config class and overrides values specific
to the Sythetic Sugarbeets dataset.
"""
# Give the configuration a recognizable name
NAME = "SyntheticSugarbeets"
    # Train on 1 GPU with 2 images per GPU. We can put multiple images on each
    # GPU because the images are small. Batch size is 2 (GPUs * images/GPU).
GPU_COUNT = 1
IMAGES_PER_GPU = 2 #40,000 max
# Number of classes (including background)
NUM_CLASSES = 1 + 3 # background + 3 plants
# Use small images for faster training. Set the limits of the small side
# the large side, and that determines the image shape.
IMAGE_MIN_DIM = 512
IMAGE_MAX_DIM = 512
# Use smaller anchors because our image and objects are small
#RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128) # anchor side in pixels
# Reduce training ROIs per image because the images are small and have
# few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
#TRAIN_ROIS_PER_IMAGE = 32
# Use a small epoch since the data is simple
#STEPS_PER_EPOCH = 100
# use small validation steps since the epoch is small
#VALIDATION_STEPS = 5
config = SyntheticSugarbeetsConfig()
config.display()
###Output
Configurations:
BACKBONE resnet101
BACKBONE_STRIDES [4, 8, 16, 32, 64]
BATCH_SIZE 2
BBOX_STD_DEV [0.1 0.1 0.2 0.2]
COMPUTE_BACKBONE_SHAPE None
DETECTION_MAX_INSTANCES 100
DETECTION_MIN_CONFIDENCE 0.7
DETECTION_NMS_THRESHOLD 0.3
FPN_CLASSIF_FC_LAYERS_SIZE 1024
GPU_COUNT 1
GRADIENT_CLIP_NORM 5.0
IMAGES_PER_GPU 2
IMAGE_CHANNEL_COUNT 3
IMAGE_MAX_DIM 512
IMAGE_META_SIZE 16
IMAGE_MIN_DIM 512
IMAGE_MIN_SCALE 0
IMAGE_RESIZE_MODE square
IMAGE_SHAPE [512 512 3]
LEARNING_MOMENTUM 0.9
LEARNING_RATE 0.001
LOSS_WEIGHTS {'rpn_class_loss': 1.0, 'rpn_bbox_loss': 1.0, 'mrcnn_class_loss': 1.0, 'mrcnn_bbox_loss': 1.0, 'mrcnn_mask_loss': 1.0}
MASK_POOL_SIZE 14
MASK_SHAPE [28, 28]
MAX_GT_INSTANCES 100
MEAN_PIXEL [123.7 116.8 103.9]
MINI_MASK_SHAPE (56, 56)
NAME SyntheticSugarbeets
NUM_CLASSES 4
POOL_SIZE 7
POST_NMS_ROIS_INFERENCE 1000
POST_NMS_ROIS_TRAINING 2000
PRE_NMS_LIMIT 6000
ROI_POSITIVE_RATIO 0.33
RPN_ANCHOR_RATIOS [0.5, 1, 2]
RPN_ANCHOR_SCALES (32, 64, 128, 256, 512)
RPN_ANCHOR_STRIDE 1
RPN_BBOX_STD_DEV [0.1 0.1 0.2 0.2]
RPN_NMS_THRESHOLD 0.7
RPN_TRAIN_ANCHORS_PER_IMAGE 256
STEPS_PER_EPOCH 1000
TOP_DOWN_PYRAMID_SIZE 256
TRAIN_BN False
TRAIN_ROIS_PER_IMAGE 200
USE_MINI_MASK True
USE_RPN_ROIS True
VALIDATION_STEPS 50
WEIGHT_DECAY 0.0001
###Markdown
Notebook Preferences
###Code
def get_ax(rows=1, cols=1, size=8):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Change the default size attribute to control the size
of rendered images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
###Output
_____no_output_____
###Markdown
Dataset. Create the dataset class: extend `utils.Dataset`, add a method to load the synthetic sugarbeet data (`load_SyntheticSugarbeets()`), and override the following methods: `load_image()`, `load_mask()`, `image_reference()`.
###Code
class SyntheticSugarbeetsDataset(utils.Dataset):
"""Generates the synthetic sugarbeet dataset."""
def load_SyntheticSugarbeets(self, count, width, height, folder, seed, purpose):
"""Load the requested number of synthetic images.
count: number of images to generate.
"""
# Add classes
self.add_class("SyntheticSugarbeets", 1, "Sugarbeat")
self.add_class("SyntheticSugarbeets", 2, "Capsella")
self.add_class("SyntheticSugarbeets", 3, "Galium")
trainfilepath = os.path.join(folder, 'train1.txt')
my_file = open(trainfilepath, "r")
contents = my_file.read().splitlines()
my_file.close()
image_filepaths = []
for tail in contents:
image_filepaths.append(os.path.join(os.path.join(folder, 'rgb'), tail))
if purpose == 'train':
sampled_image_filepaths = image_filepaths[:count]
else:
sampled_image_filepaths = image_filepaths[-count:]
for i in range(count):
self.add_image("SyntheticSugarbeets", image_id=i, path=sampled_image_filepaths[i],
width=width, height=height)
def load_image(self, image_id):
"""Load the specified image and return a [H,W,3] Numpy array.
"""
# Load image
info = self.image_info[image_id]
image = cv2.imread(info['path'])
resized_image_results = utils.resize_image(image, max_dim=info['width'], mode="square")
self.image_info[image_id].update(scale=resized_image_results[2], padding=resized_image_results[3])
return cv2.cvtColor(resized_image_results[0], cv2.COLOR_BGR2RGB)
def load_mask(self, image_id):
"""Generate instance masks for shapes of the given image ID.
"""
info = self.image_info[image_id]
img_path = info['path']
head,tail = os.path.split(img_path)
instance_path = head[:-3]+"instance_mask\\"+tail
lbl_path = head[:-3]+"label\\"+tail
instance_mask = cv2.imread(instance_path,cv2.IMREAD_GRAYSCALE)
lbl = cv2.imread(lbl_path,cv2.IMREAD_GRAYSCALE)
count = np.max(instance_mask)
mask = np.zeros([np.shape(instance_mask)[0], np.shape(instance_mask)[1], count], dtype=np.bool)
class_ids = np.zeros(count,dtype=np.int32)
for i in range(count):
mask[:, :, i] = instance_mask == (i+1)
class_ids[i] = np.max(np.multiply(mask[:, :, i].astype('uint8'),lbl))
mask = utils.resize_mask(mask, scale=info['scale'], padding=info['padding'])
return mask, class_ids
# Training dataset
dataset_train = SyntheticSugarbeetsDataset()
FOLDER = ".\\data\\occlusion_00\\"
dataset_train.load_SyntheticSugarbeets(1600, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1],FOLDER,seed=123,purpose='train')
dataset_train.prepare()
# Validation dataset
dataset_val = SyntheticSugarbeetsDataset()
dataset_val.load_SyntheticSugarbeets(290, config.IMAGE_SHAPE[0], config.IMAGE_SHAPE[1],FOLDER,seed=123,purpose='validation')
dataset_val.prepare()
###Output
_____no_output_____
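###Markdown
A quick way to spot-check the loaded data, sketched along the lines of the standard Mask R-CNN sample notebooks (this cell is an addition and was not run here):
###Code
# Load and display a few random samples to verify that images, masks and class ids line up.
# Note: load_image() must be called before load_mask() because it stores the resize scale/padding.
image_ids = np.random.choice(dataset_train.image_ids, 2)
for image_id in image_ids:
    image = dataset_train.load_image(image_id)
    mask, class_ids = dataset_train.load_mask(image_id)
    visualize.display_top_masks(image, mask, class_ids, dataset_train.class_names)
###Output
_____no_output_____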
###Markdown
Create Model
###Code
# Create model in training mode
model = modellib.MaskRCNN(mode="training", config=config,
model_dir=MODEL_DIR)
# Which weights to start with?
init_with = "coco" # imagenet, coco, or last
if init_with == "imagenet":
model.load_weights(model.get_imagenet_weights(), by_name=True)
elif init_with == "coco":
# Load weights trained on MS COCO, but skip layers that
# are different due to the different number of classes
# See README for instructions to download the COCO weights
model.load_weights(COCO_MODEL_PATH, by_name=True,
exclude=["mrcnn_class_logits", "mrcnn_bbox_fc",
"mrcnn_bbox", "mrcnn_mask"])
elif init_with == "last":
# Load the last model you trained and continue training
model.load_weights(model.find_last(), by_name=True)
###Output
_____no_output_____
###Markdown
Training. Train in two stages: 1. Only the heads. Here we're freezing all the backbone layers and training only the randomly initialized layers (i.e. the ones for which we didn't use pre-trained weights from MS COCO). To train only the head layers, pass `layers='heads'` to the `train()` function. 2. Fine-tune all layers. For this simple example it's not necessary, but we're including it to show the process. Simply pass `layers='all'` to train all layers.
###Code
# Train the head branches
# Passing layers="heads" freezes all layers except the head
# layers. You can also pass a regular expression to select
# which layers to train by name pattern.
#model.train(dataset_train, dataset_val,
# learning_rate=config.LEARNING_RATE,
# epochs=5,
# layers='heads')
# Fine tune all layers
# Passing layers="all" trains all layers. You can also
# pass a regular expression to select which layers to
# train by name pattern.
#model.train(dataset_train, dataset_val,
# learning_rate=config.LEARNING_RATE / 10,
# epochs=15,
# layers="all")
# Save weights
# Typically not needed because callbacks save after every epoch
# Uncomment to save manually
#model_path = os.path.join(MODEL_DIR, "mask_rcnn_shapes.h5")
#model.keras_model.save_weights(model_path)
###Output
_____no_output_____
###Markdown
Detection
###Code
class InferenceConfig(SyntheticSugarbeetsConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = 1
inference_config = InferenceConfig()
# Recreate the model in inference mode
model = modellib.MaskRCNN(mode="inference",
config=inference_config,
model_dir=MODEL_DIR)
# Get path to saved weights
# Either set a specific path or find last trained weights
model_path = os.path.join(ROOT_DIR, "logs\\syntheticsugarbeets_occlusion30\\mask_rcnn_syntheticsugarbeets_0015.h5")
#model_path = model.find_last()[0]
#
# Load trained weights
print("Loading weights from ", model_path)
model.load_weights(model_path, by_name=True)
# Test on a random image
image_id = dataset_val.image_ids[0]
original_image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
#gt_class_id -= 1
log("original_image", original_image)
log("image_meta", image_meta)
log("gt_class_id", gt_class_id)
log("gt_bbox", gt_bbox)
log("gt_mask", gt_mask)
visualize.display_instances(original_image, gt_bbox, gt_mask, gt_class_id,
dataset_train.class_names, figsize=(8, 8))
results = model.detect([original_image], verbose=1)
r = results[0]
visualize.display_instances(original_image, r['rois'], r['masks'], r['class_ids'],
dataset_val.class_names, r['scores'], ax=get_ax(), show_bbox=False)
###Output
Processing 1 images
image shape: (512, 512, 3) min: 0.00000 max: 159.00000 uint8
molded_images shape: (1, 512, 512, 3) min: -123.70000 max: 35.30000 float64
image_metas shape: (1, 16) min: 0.00000 max: 512.00000 int32
anchors shape: (1, 65472, 4) min: -0.70849 max: 1.58325 float32
WARNING:tensorflow:From C:\Users\danie\miniconda3\envs\Mask_RCNN_py3_5_2\lib\site-packages\keras\backend\tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
WARNING:tensorflow:From C:\Users\danie\miniconda3\envs\Mask_RCNN_py3_5_2\lib\site-packages\keras\backend\tensorflow_backend.py:431: The name tf.is_variable_initialized is deprecated. Please use tf.compat.v1.is_variable_initialized instead.
WARNING:tensorflow:From C:\Users\danie\miniconda3\envs\Mask_RCNN_py3_5_2\lib\site-packages\keras\backend\tensorflow_backend.py:438: The name tf.variables_initializer is deprecated. Please use tf.compat.v1.variables_initializer instead.
###Markdown
Evaluation
###Code
# Compute VOC-Style mAP @ IoU=0.5
# Running on all validation images; restrict image_ids to a subset for a quicker estimate.
image_ids = dataset_val.image_ids
APs = []
for image_id in image_ids:
# Load image and ground truth data
image, image_meta, gt_class_id, gt_bbox, gt_mask =\
modellib.load_image_gt(dataset_val, inference_config,
image_id, use_mini_mask=False)
molded_images = np.expand_dims(modellib.mold_image(image, inference_config), 0)
# Run object detection
results = model.detect([image], verbose=0)
r = results[0]
# Compute AP
AP, precisions, recalls, overlaps =\
utils.compute_ap(gt_bbox, gt_class_id, gt_mask,
r["rois"], r["class_ids"], r["scores"], r['masks'])
APs.append(AP)
print("mAP: ", np.mean(APs))
###Output
mAP: 0.8124505661724002
|
ft8_mapper.ipynb | ###Markdown
FT8 Mapper. TL;DR: Click on __Restart the kernel then re-run the whole notebook__ on the JupyterLab toolbar, then scroll down to the bottom of this page and click on the __Build Map__ button. Overview: This script was inspired by the [presentation of Jose CT1BOH](https://www.contestuniversity.com/wp-content/uploads/2021/05/There-is-Nothing-Magic-About-Propagation-CTU-2021-CT1BOH.pdf) on the use of FT8 spot data for HF propagation nowcasting. It downloads real-time data from different sources and plots them all on the same map. The data include: FT8 spots from the [PSKReporter web site](https://www.pskreporter.info/pskmap.html); a MUF(3000) map from [KC2G](https://prop.kc2g.com/) or [IZMIRAN](https://www.izmiran.ru/ionosphere/weather/daily/index.shtml); the auroral oval from [NOAA](https://www.swpc.noaa.gov/products/aurora-30-minute-forecast); magnetic dip (inclination) from [NOAA](https://www.ngdc.noaa.gov/geomag/calculators/magcalc.shtmligrfgrid); and the gray line (computed). The map is available in the Geographic, Polar and Azimuthal projections. The script is provided as a Jupyter notebook that includes Python code, narrative text and visual output, all in one page. Several sections below contain the code; the last section presents a number of controls that you can use to set the desired map parameters and build the map. License: Copyright © 2021 Alex Shovkoplyas VE3NEA; [MIT](https://opensource.org/licenses/MIT) (_you can do whatever you want as long as you include the original copyright and license notice_).
###Code
import zipfile
from datetime import datetime
import re
from enum import Enum
import pylab
import cartopy.crs as ccrs
from cartopy.feature.nightshade import Nightshade
import matplotlib.pyplot as plt
import json
from IPython.display import clear_output, FileLink, FileLinks
from colorama import Fore, Style
from io import StringIO
import urllib.request as urllib
import requests
import base64
import png
import numpy as np
import matplotlib
import ipywidgets as widgets
import cartopy
###Output
_____no_output_____
###Markdown
Convert grid square to lon/lat
###Code
letters='ABCDEFGHIJKLMNOPQRSTUVWX'
digits = '0123456789'
letters12 = dict([(letters[i] + letters[j], np.array([i*20.-180, j*10.-90])) for i in range(18) for j in range(18)])
digits34 = dict([(digits[i] + digits[j], np.array([2.*i, 1.*j])) for i in range(10) for j in range(10)])
letters56 = dict([(letters[i] + letters[j], np.array([2./24*i, 1./24*j])) for i in range(24) for j in range(24)])
def square_to_lon_lat(square):
try:
if len(square) == 2: return letters12[square[:2]] + [10, 5]
if len(square) == 4: return letters12[square[:2]] + digits34[square[2:4]] + [1, 0.5]
if len(square) == 6: return letters12[square[:2]] + digits34[square[2:4]] + letters56[square[4:6]] + [1./24, 0.5/24]
except:
return None
###Output
_____no_output_____
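###Markdown
A quick sanity check of the conversion (an added example): the 4-character square FN03, used later as the default home square, should map to the centre of its 2° by 1° cell.
###Code
# F,N -> lon -80, lat 40; digits 0,3 -> +0, +3; 4-character centre offset -> +1, +0.5
print(square_to_lon_lat('FN03'))   # expected: [-79.  43.5]
###Output
_____no_output_____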
###Markdown
Download IZMIRAN MUF map. [Source](https://www.izmiran.ru/ionosphere/weather/daily/index.shtml)
###Code
headers = 'START OF foF2 MAP|START OF hmF2 MAP|EPOCH OF CURRENT MAP|LAT/LON1/LON2/DLON'
def __get_ionex(url):
data, _ = urllib.urlretrieve(url)
data = zipfile.ZipFile(data, 'r')
data = data.open(data.namelist()[0])
data = [line.decode('UTF-8') for line in data]
data = [line for line in data if re.search(headers, line) is None][7:]
data = ''.join(data)
data = np.fromstring(data, dtype='float', sep=' ')
data.shape = (24, 71, 73)
return data[datetime.utcnow().hour][-1::-1,:]
def download_muf_izmiran():
date_string = datetime.utcnow().strftime('%y/%m/%y%m%d')
foF2_url = f'https://www.izmiran.ru/ionosphere/weather/gram/dfc/{date_string}f1.zip'
hmF2_url = f'https://www.izmiran.ru/ionosphere/weather/gram/dhc/{date_string}h1.zip'
foF2 = __get_ionex(foF2_url) / 10
hmF2 = __get_ionex(hmF2_url)
return foF2 * 1490 / (hmF2 + 176)
###Output
_____no_output_____
###Markdown
Download KC2G MUF map, with kind permission from Andrew Rodland, KC2G. [Source](https://prop.kc2g.com/)
###Code
MIN_MHZ = 4
MAX_MHZ = 35
DECIMATION_FACTOR = 4
# palette
cmap = pylab.cm.get_cmap('viridis', 256)
colors = [cmap(i, alpha=0.35, bytes=True) for i in range(256)]
colors = [c[0] + (c[1] << 8) + (c[2] << 16) + (c[3] << 24) for c in colors]
color_to_mhz = dict([(colors[i], MIN_MHZ * (MAX_MHZ / MIN_MHZ)**(i/255)) for i in range(256)])
def download_muf_kc2g():
# download
url = 'https://prop.kc2g.com/renders/current/mufd-normal-now.svg'
svg = requests.get(url).text
# extract pixels
png_data = re.search('xlink:href="data:image/png;base64, ([^"]+)"', svg).group(1)
png_data = base64.b64decode(png_data)
_, _, pixel_bytes, _ = png.Reader(bytes=png_data).read()
pixel_bytes = list(pixel_bytes)[::DECIMATION_FACTOR]
rows = [np.frombuffer(row, dtype=np.uint32) for row in pixel_bytes]
# pixels to MHz
return [[color_to_mhz[v] for v in row[::DECIMATION_FACTOR]] for row in rows]
###Output
_____no_output_____
###Markdown
Download FT8 spots, with kind permission from Philip Gladstone, N1DQ. [Source](https://www.pskreporter.info/pskmap.html)
###Code
bands = {
'5.3': '5000000-6000000', '7': '6000000-8000000', '10': '8000000-12000000', '14': '12000000-16000000',
'18': '16000000-19000000', '21': '19000000-22000000', '24.8': '22000000-26000000', '28': '27999999-31000000'
}
def download_ft8(home_square, mhz):
# download
if home_square == '': home_square = 'ZZZZZ'
modify = 'all' if home_square == 'ZZZZZ' else 'grid'
url = 'https://www.pskreporter.info/cgi-bin/pskquery5.pl?encap=1&callback=doNothing&statistics=1&noactive=1&nolocator=0' \
f'&flowStartSeconds=-900&frange={bands[mhz]}&mode=FT8&modify={modify}&callsign={home_square}'
xml_str = requests.get(url, timeout=180).text
with open('./ft8.txt', 'w') as f: f.write(xml_str) # for debugging
# parse json
json_str = re.search('doNothing\((.+)\);', xml_str, re.DOTALL)
json_str = json_str.group(1)
json_struct = json.loads(json_str)
# paths
report = json_struct['receptionReport']
paths = [[r['receiverLocator'][:6].upper(), r['senderLocator'][:6].upper()] for r in report]
paths = np.unique([[min(p), max(p)] for p in paths], axis=0)
paths = np.array([p for p in paths if p[0] != '' and p[1] != ''])
# reporting stations
ends = np.unique(paths.flatten())
ends.sort()
# non-reporting stations
report = json_struct['activeReceiver']
stations = [r['locator'][:6].upper() for r in report if r['mode'] == 'FT8']
stations.sort()
stations = np.unique(stations)
stations = list(set(stations) - set(ends))
# grid square to lon/lat
paths = [[square_to_lon_lat(p[0]), square_to_lon_lat(p[1])] for p in paths]
ends = [square_to_lon_lat(s) for s in ends]
stations = [square_to_lon_lat(s) for s in stations]
print(f' paths: {len(paths)}, blue stations: {len(ends)}, gray stations: {len(stations)}')
return {'paths': paths, 'path_ends': ends, 'stations': stations}
###Output
_____no_output_____
###Markdown
Download Aurora map. [Source](https://www.swpc.noaa.gov/products/aurora-30-minute-forecast)
###Code
def download_aurora():
url = 'https://services.swpc.noaa.gov/json/ovation_aurora_latest.json'
json_str = requests.get(url).text;
json_struct = json.loads(json_str)
aurora = json_struct['coordinates']
return np.array(aurora)[:,2].reshape(360, 181).T
###Output
_____no_output_____
###Markdown
Load Dip map. [Source](https://www.ngdc.noaa.gov/geomag/calculators/magcalc.shtmligrfgrid)
###Code
def load_dip():
dip_data = np.loadtxt('igrfgridData.csv',delimiter=',')
return np.array(dip_data)[:,4].reshape(180, 361)
###Output
_____no_output_____
###Markdown
Plot the map
###Code
class Projection(Enum):
Geographic = 1
Polar = 2
Azimuthal = 3
def plot_map(muf_data=None, dip_data=None, aurora=None, grayline=False, spot_data=None, mhz=None, projection=Projection.Azimuthal,
home_square='', time=datetime.utcnow(), all_bands_muf=True):
# projection
geodetic_proj = ccrs.Geodetic()
geographic_proj = ccrs.PlateCarree()
geographic_proj._threshold = 0.1
home = square_to_lon_lat(home_square) if home_square != '' else [0, 90]
if projection == Projection.Geographic:
current_proj = geographic_proj
elif projection == Projection.Polar:
current_proj = ccrs.AzimuthalEquidistant(home[0], 90)
else:
current_proj = ccrs.AzimuthalEquidistant(home[0], home[1])
# base map
fig = plt.figure(figsize=(22, 25))
ax = plt.axes(projection=current_proj)
ax.set_global()
ax.coastlines(resolution='110m', alpha=0.9)
ax.add_feature(cartopy.feature.BORDERS, edgecolor='black', linestyle='--', alpha=0.8)
if grayline: ax.add_feature(Nightshade(time), alpha=0.2, color='black', edgecolor='red')
# title
title = time.strftime('%Y-%m-%d %H:%M UTC')
if mhz is not None: title = f'{title} {mhz} MHz'
if home_square != '': title = f'{title} {home_square}'
plt.title(title + '\n', fontsize=15, pad=0.02)
# MUF
if not muf_data is None:
lons, lats = np.meshgrid(np.linspace(-180, 180, len(muf_data[0])), np.linspace(-90,90, len(muf_data)))
levels = [4,5.3,7,10.1,14,18,21,24.8,28,35] if all_bands_muf else [4, mhz, 35]
# filled countours
contours = plt.contourf(lons, lats, muf_data, levels=levels, transform=geographic_proj, cmap='jet', alpha=0.5)
plt.colorbar(orientation='horizontal', shrink=0.3, pad=0.02).ax.set_xlabel('MUF(3000), MHz')
# labels
isolines = plt.contour(lons, lats, muf_data, levels=levels, colors=['None'], transform=geographic_proj)
ax.clabel(isolines, colors=['gray'], manual=False, inline=True, fmt=' {:.0f} '.format)
# dip
if not dip_data is None:
lons, lats = np.meshgrid(np.linspace(-180, 180, 361), np.linspace(89,-90, 180))
levels = [-90,-80,-70,-60,-50,-40,-30,-20,-10,0,10,20,30,40,50,60,70,80,90]
# contours
isolines = plt.contour(lons, lats, dip_data, levels=levels, alpha=.5, linewidths=[.5, .5, .5, .5] , colors='green', transform=geographic_proj)
# labels
ax.clabel(isolines, colors='green', manual=False, inline=True, fmt=' {:.0f} '.format)
# aurora
if aurora is not None:
ax.imshow(aurora, transform=geographic_proj, extent=(0, 360, 90, -90), cmap='Greys', vmin=1, vmax=10)
# spots
if not spot_data is None:
ax.scatter([s[0] for s in spot_data['stations']], [s[1] for s in spot_data['stations']], s=60, color='silver', edgecolors='gray', transform=geographic_proj, zorder=3)
for p in spot_data['paths']: plt.plot([p[0][0], p[1][0]], [p[0][1], p[1][1]], color='blue', transform=geodetic_proj, zorder=4)
ax.scatter([s[0] for s in spot_data['path_ends']], [s[1] for s in spot_data['path_ends']], s=60, color='aqua', edgecolors='blue', transform=geographic_proj, zorder=5)
# grid
if home_square == '':
# no home location, plot lat/lon
xlocs=np.linspace(-180,180,19)
ylocs=np.linspace(-90,90,19)
grid = ax.gridlines(color='magenta', alpha=0.7, xlocs=xlocs, ylocs=ylocs)
else:
if projection == Projection.Geographic:
ax2 = fig.add_axes(ax.get_position(), projection=ccrs.RotatedPole(central_rotated_longitude=home[0] + 180, pole_latitude=-home[1]))
elif projection == Projection.Polar:
ax2 = fig.add_axes(ax.get_position(), projection=ccrs.AzimuthalEquidistant(0, -home[1]))
else:
ax2 = fig.add_axes(ax.get_position(), projection=ccrs.AzimuthalEquidistant(0, -90))
ax2.set_global()
ax2.patch.set_facecolor('none')
xlocs=np.linspace(-180,180,13)
ylocs=np.linspace(77,-90,7)
grid = ax2.gridlines(color='teal', alpha=0.7, xlocs=xlocs, ylocs=ylocs)
ax.plot(home[0], home[1], marker='*', color='yellow', mec='olive', ms=20, zorder=9, transform=geographic_proj)
grid.n_steps = 1000
plt.savefig('map.png')
plt.show()
###Output
_____no_output_____
###Markdown
User interface
###Code
ui = {}
muf_data = None
spot_data = None
dip_data = None
aurora = None
def show_widgets():
# widgets
options = [('Geographic', Projection.Geographic), ('Polar', Projection.Polar), ('Azimuthal', Projection.Azimuthal)]
ui['projection'] = widgets.RadioButtons(options=options, value=Projection.Polar, description='Projection:')
ui['mhz'] = widgets.Dropdown(options=['5.3', '7', '10.1', '14', '18', '21', '24.8', '28'], value='14', description='MHz:')
ui['home_square'] = widgets.Text(value='FN03', description='Square:', placeholder='All squares')
ui['square_valid'] = widgets.Valid(value=True)
ui['square'] = widgets.Box([ui['home_square'], ui['square_valid']])
ui['show_spots'] = widgets.Checkbox(value=True, description='Show FT8 Spots', indent=False)
ui['show_muf'] = widgets.Checkbox(value=True, description='Show MUF(3000) Map', disabled=False, indent=False)
ui['all_bands_muf'] = widgets.Checkbox(value=True, description='MUF for All Bands', disabled=False, indent=False)
ui['show_aurora'] = widgets.Checkbox(description='Show Aurora', indent=False)
ui['show_dip'] = widgets.Checkbox(description='Show Magnetic Dip', indent=False)
ui['show_grayline'] = widgets.Checkbox(description='Show Grayline', indent=False)
ui['download_spots'] = widgets.Checkbox(description='Re-download Spot Data', indent=False)
ui['download_aurora'] = widgets.Checkbox(description='Re-download Auroral Data', indent=False)
ui['download_muf'] = widgets.Checkbox(description='Re-download MUF Data', indent=False)
ui['muf_source'] = widgets.RadioButtons(options=['KC2G', 'IZMIRAN'], description='MUF Source:')
ui['button'] = widgets.Button(description='Build Map')
# layout
ui['box1'] = widgets.VBox([ui['projection'], ui['mhz'], ui['square']])
ui['box2']= widgets.VBox([ui['show_spots'], ui['show_muf'], ui['all_bands_muf'], ui['show_dip'], ui['show_aurora'], ui['show_grayline']])
ui['box3'] = widgets.VBox([ui['download_spots'], ui['download_aurora'], ui['download_muf'], ui['muf_source']])
layout = widgets.Layout(height='200px', width='100%', border='1px solid gray')
display(widgets.HBox(children=[ui['box1'], ui['box2'], ui['box3']], layout=layout))
layout = widgets.Layout(height='70px', width='100%', align_items='center')
display(widgets.HBox(children=[ui['button']], layout=layout))
# output
ui['out'] = widgets.Output()
display(ui['out'])
# callbacks
ui['home_square'].observe(on_text_change, names='value')
ui['button'].on_click(on_button_click)
def download_data():
global muf_data, spot_data, dip_data, aurora
if ui['download_aurora'].value or (ui['show_aurora'].value and aurora is None):
print('Downloading aurora...')
try:
aurora = download_aurora()
except:
print(Fore.RED + ' download failed' + Style.RESET_ALL)
if ui['download_muf'].value or (ui['show_muf'].value and muf_data is None):
print('Downloading muf...')
try:
muf_data = download_muf_kc2g() if ui['muf_source'].value == 'KC2G' else download_muf_izmiran()
except:
print(Fore.RED + ' download failed' + Style.RESET_ALL)
if ui['show_dip'].value and dip_data is None:
print('Loading dip...')
try:
dip_data = load_dip()
except:
print(Fore.RED + ' dip load failed' + Style.RESET_ALL)
if ui['download_spots'].value or (ui['show_spots'].value and spot_data is None):
print('Downloading spots...')
try:
spot_data = download_ft8(home_square=ui['home_square'].value.upper(), mhz=ui['mhz'].value)
except:
print(Fore.RED + ' download failed' + Style.RESET_ALL)
ui['download_spots'].value = False
ui['download_muf'].value = False
ui['download_aurora'].value = False
def on_text_change(change):
square = change['new'].upper()
if ui['square_valid'] is not None:
ui['square_valid'].value = square == '' or square_to_lon_lat(square) is not None
def on_button_click(button):
with ui['out']:
clear_output(True)
download_data()
print('Building the map...\n')
plot_map(
muf_data = muf_data if ui['show_muf'].value else None,
dip_data = dip_data if ui['show_dip'].value else None,
aurora = aurora if ui['show_aurora'].value else None,
spot_data = spot_data if ui['show_spots'].value else None,
grayline = ui['show_grayline'].value,
mhz=ui['mhz'].value,
projection = ui['projection'].value,
home_square=ui['home_square'].value.upper(),
all_bands_muf=ui['all_bands_muf'].value
)
###Output
_____no_output_____
###Markdown
Enter the settings and build the map. __Note:__ By default, the script downloads every dataset only once. If you change some parameter that affects the data, e.g. select a different band or a different MUF source, tick the corresponding _Re-download_ checkbox before building the map. To keep the map current, re-download all data every 15 minutes. The map will be shown below and saved to the [map.png](./map.png) file.
###Code
show_widgets()
###Output
_____no_output_____ |
How_to_draw_Derivative_graph_of_a_function.ipynb | ###Markdown
Copyright 2020 Masahiko Isshiki - https://blog.masahiko.info/.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
"How to plot the graph of the derivative of an activation function in Python". Preparation. Python version: 3.x
###Code
import sys
print('Python', sys.version)
# Prints something like: Python 3.6.9 (default, Nov 7 2019, 10:44:02)
###Output
Python 3.6.9 (default, Nov 7 2019, 10:44:02)
[GCC 8.3.0]
###Markdown
Implementation. Use NumPy to generate values from -6 to 6 in steps of 0.001
###Code
import numpy as np
xn = np.arange(-6.0, 6.0, 0.001)
print(xn)
# Prints something like: [-6. -5.999 -5.998 ... 5.997 5.998 5.999]
###Output
[-6. -5.999 -5.998 ... 5.997 5.998 5.999]
###Markdown
Define the sigmoid function (with gain)
###Code
def sigmoid(x, a=1):
    # a: gain (coefficient)
return 1.0 / (1.0 + np.exp(-a*x))
print(sigmoid(xn))
# Prints something like: [0.00247262 0.00247509 0.00247756 ... 0.99751997 0.99752244 0.99752491]
###Output
[0.00247262 0.00247509 0.00247756 ... 0.99751997 0.99752244 0.99752491]
###Markdown
Plot the sigmoid function
###Code
import matplotlib.pyplot as plt
plt.plot(xn, sigmoid(xn), label = "Sigmoid(a=1)")
plt.plot(xn, sigmoid(xn, 2), label = " (a=2)")
plt.plot(xn, sigmoid(xn, 10), label = " (a=10)")
plt.plot(xn, sigmoid(xn, 100), label = " (a=100)")
plt.xlim(-6, 6)
plt.ylim(-0.2, 1.2)
plt.grid()
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Define the "derivative" of the sigmoid function
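For reference: with gain, the sigmoid is f(x) = 1 / (1 + exp(-a*x)) and the chain rule gives f'(x) = a * f(x) * (1 - f(x)). The `der_sigmoid()` function below returns f(x) * (1 - f(x)), i.e. the gain factor a is not included in the plotted expression.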
###Code
def der_sigmoid(x, a=1):
    # a: gain (coefficient)
return sigmoid(x, a) * (1.0 - sigmoid(x, a))
print(der_sigmoid(xn))
# Prints something like: [0.00246651 0.00246896 0.00247142 ... 0.00247388 0.00247142 0.00246896]
###Output
[0.00246651 0.00246896 0.00247142 ... 0.00247388 0.00247142 0.00246896]
###Markdown
Plot the "derivative" of the sigmoid function
###Code
import matplotlib.pyplot as plt
plt.plot(xn, der_sigmoid(xn), label = "Derivative of Sigmoid(a=1)")
plt.plot(xn, der_sigmoid(xn, 2), label = " (a=2)")
plt.plot(xn, der_sigmoid(xn, 10), label = " (a=10)")
plt.plot(xn, der_sigmoid(xn, 100), label = " (a=100)")
plt.xlim(-6, 6)
plt.ylim(-0.2, 0.3)
plt.grid()
plt.legend()
plt.show()
###Output
_____no_output_____ |
docs/memo/notebooks/tutorials/2_pipeline_lesson5/.ipynb_checkpoints/notebook-checkpoint.ipynb | ###Markdown
Filters. A Filter is a function from an asset and a moment in time to a boolean:```F(asset, timestamp) -> boolean```In Pipeline, [Filters](https://www.quantopian.com/helpquantopian_pipeline_filters_Filter) are used for narrowing down the set of securities included in a computation or in the final output of a pipeline. There are two common ways to create a `Filter`: comparison operators and `Factor`/`Classifier` methods. Comparison Operators: comparison operators on `Factors` and `Classifiers` produce Filters. Since we haven't looked at `Classifiers` yet, let's stick to examples using `Factors`. The following example produces a filter that returns `True` whenever the latest close price is above $20.
###Code
last_close_price = USEquityPricing.close.latest
close_price_filter = last_close_price > 20
###Output
_____no_output_____
###Markdown
And this example produces a filter that returns True whenever the 10-day mean is below the 30-day mean.
###Code
mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)
mean_close_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=30)
mean_crossover_filter = mean_close_10 < mean_close_30
###Output
_____no_output_____
###Markdown
Remember, each security will get its own `True` or `False` value each day. Factor/Classifier Methods: various methods of the `Factor` and `Classifier` classes return `Filters`. Again, since we haven't yet looked at `Classifiers`, let's stick to `Factor` methods for now (we'll look at `Classifier` methods later). The `Factor.top(n)` method produces a `Filter` that returns `True` for the top `n` securities of a given `Factor`. The following example produces a filter that returns `True` for exactly 200 securities every day, indicating that those securities were in the top 200 by last close price across all known securities.
###Code
last_close_price = USEquityPricing.close.latest
top_close_price_filter = last_close_price.top(200)
###Output
_____no_output_____
###Markdown
For a full list of `Factor` methods that return `Filters`, see [this link](https://www.quantopian.com/helpquantopian_pipeline_factors_Factor). For a full list of `Classifier` methods that return `Filters`, see [this link](https://www.quantopian.com/helpquantopian_pipeline_classifiers_Classifier). Dollar Volume Filter: as a starting example, let's create a filter that returns `True` if a security's 30-day average dollar volume is above $10,000,000. To do this, we'll first need to create an `AverageDollarVolume` factor to compute the 30-day average dollar volume. Let's include the built-in `AverageDollarVolume` factor in our imports:
###Code
from zipline.pipeline.factors import AverageDollarVolume
###Output
_____no_output_____
###Markdown
And then, let's instantiate our average dollar volume factor.
###Code
dollar_volume = AverageDollarVolume(window_length=30)
###Output
_____no_output_____
###Markdown
By default, `AverageDollarVolume` uses `USEquityPricing.close` and `USEquityPricing.volume` as its `inputs`, so we don't specify them. Now that we have a dollar volume factor, we can create a filter with a boolean expression. The following line creates a filter returning `True` for securities with a `dollar_volume` greater than 10,000,000:
###Code
high_dollar_volume = (dollar_volume > 10000000)
###Output
_____no_output_____
###Markdown
To see what this filter looks like, let's can add it as a column to the pipeline we defined in the previous lesson.
###Code
def make_pipeline():
mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)
mean_close_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=30)
percent_difference = (mean_close_10 - mean_close_30) / mean_close_30
dollar_volume = AverageDollarVolume(window_length=30)
high_dollar_volume = (dollar_volume > 10000000)
return Pipeline(
columns={
'percent_difference': percent_difference,
'high_dollar_volume': high_dollar_volume
}
)
###Output
_____no_output_____
###Markdown
If we make and run our pipeline, we now have a column `high_dollar_volume` with a boolean value corresponding to the result of the expression for each security.
###Code
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
result
###Output
_____no_output_____
###Markdown
Applying a Screen. By default, a pipeline produces computed values each day for every asset in the Quantopian database. Very often, however, we only care about a subset of securities that meet specific criteria (for example, we might only care about securities that have enough daily trading volume to fill our orders quickly). We can tell our Pipeline to ignore securities for which a filter produces `False` by passing that filter to our Pipeline via the `screen` keyword. To screen our pipeline output for securities with a 30-day average dollar volume greater than $10,000,000, we can simply pass our `high_dollar_volume` filter as the `screen` argument. This is what our `make_pipeline` function now looks like:
###Code
def make_pipeline():
mean_close_10 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=10)
mean_close_30 = SimpleMovingAverage(inputs=[USEquityPricing.close], window_length=30)
percent_difference = (mean_close_10 - mean_close_30) / mean_close_30
dollar_volume = AverageDollarVolume(window_length=30)
high_dollar_volume = dollar_volume > 10000000
return Pipeline(
columns={
'percent_difference': percent_difference
},
screen=high_dollar_volume
)
###Output
_____no_output_____
###Markdown
When we run this, the pipeline output only includes securities that pass the `high_dollar_volume` filter on a given day. For example, running this pipeline on May 5th, 2015 results in an output of roughly 2,500 securities.
###Code
result = run_pipeline(make_pipeline(), '2015-05-05', '2015-05-05')
print('Number of securities that passed the filter: %d' % len(result))
result
###Output
Number of securities that passed the filter: 2511
###Markdown
Inverting a Filter. The `~` operator is used to invert a filter, swapping all `True` values with `False` values and vice-versa. For example, we can write the following to filter for low dollar volume securities:
###Code
low_dollar_volume = ~high_dollar_volume
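# Like any other filter, the inverted filter can be passed to a pipeline as a screen.
# (Illustrative sketch reusing the dollar_volume factor defined above; not part of the original lesson.)
low_dollar_pipe = Pipeline(
    columns={'dollar_volume': dollar_volume},
    screen=low_dollar_volume
)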
###Output
_____no_output_____ |
hmwk4/HW4-k-means-plus-plus-s.ipynb | ###Markdown
K-means++. In this notebook, we are going to implement the [k-means++](https://en.wikipedia.org/wiki/K-means%2B%2B) algorithm with multiple initial sets. The original k-means++ algorithm samples one set of initial centroid points and iterates until the result converges. The only difference in this implementation is that we sample `RUNS` sets of initial centroid points and update them in parallel. The procedure finishes when all centroid sets have converged.
###Code
### Definition of some global parameters.
K = 5 # Number of centroids
RUNS = 25 # Number of K-means runs that are executed in parallel. Equivalently, number of sets of initial points
RANDOM_SEED = 60295531
converge_dist = 0.1 # The K-means algorithm is terminated when the change in the location
# of the centroids is smaller than 0.1
import numpy as np
import pickle
import sys
from numpy.linalg import norm
from matplotlib import pyplot as plt
def print_log(s):
sys.stdout.write(s + "\n")
sys.stdout.flush()
def parse_data(row):
'''
    Parse each pandas row into a tuple of (station_name, feature_vec),
where feature_vec is the concatenation of the projection vectors
of TAVG, TRANGE, and SNWD.
'''
return (row[0],
np.concatenate([row[1], row[2], row[3]]))
def compute_entropy(d):
'''
Compute the entropy given the frequency vector `d`
'''
d = np.array(d)
d = 1.0 * d / d.sum()
return -np.sum(d * np.log2(d))
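# Example, for reference: compute_entropy([1, 1, 2]) == 1.5, i.e. the entropy in bits
# of the normalized frequency vector [0.25, 0.25, 0.5].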
def choice(p):
'''
Generates a random sample from [0, len(p)),
where p[i] is the probability associated with i.
'''
random = np.random.random()
r = 0.0
for idx in range(len(p)):
r = r + p[idx]
if r > random:
return idx
assert(False)
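# Example, for reference: choice([0.2, 0.3, 0.5]) returns 0, 1 or 2 with
# probabilities 0.2, 0.3 and 0.5 respectively.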
def kmeans_init(rdd, K, RUNS, seed):
'''
Select `RUNS` sets of initial points for `K`-means++
'''
# the `centers` variable is what we want to return
n_data = rdd.count()
shape = rdd.take(1)[0][1].shape[0]
centers = np.zeros((RUNS, K, shape))
def update_dist(vec, dist, k):
new_dist = norm(vec - centers[:, k], axis=1)**2
return np.min([dist, new_dist], axis=0)
# The second element `dist` in the tuple below is the closest distance from
# each data point to the selected points in the initial set, where `dist[i]`
# is the closest distance to the points in the i-th initial set.
data = rdd.map(lambda p: (p, [np.inf] * RUNS)) \
.cache()
# Collect the feature vectors of all data points beforehand, might be
# useful in the following for-loop
local_data = rdd.map(lambda (name, vec): vec).collect()
# Randomly select the first point for every run of k-means++,
# i.e. randomly select `RUNS` points and add it to the `centers` variable
sample = [local_data[k] for k in np.random.randint(0, len(local_data), RUNS)]
centers[:, 0] = sample
for idx in range(K - 1):
##############################################################################
# Insert your code here:
##############################################################################
# In each iteration, you need to select one point for each set
# of initial points (so select `RUNS` points in total).
# For each data point x, let D_i(x) be the distance between x and
# the nearest center that has already been added to the i-th set.
# Choose a new data point for i-th set using a weighted probability
# where point x is chosen with probability proportional to D_i(x)^2
##############################################################################
#Repeat each data point by 25 times (for each RUN) to get 12140x25
#Update distance
data = data.map(lambda ((name,vec),dist): ((name,vec),update_dist(vec,dist,idx))).cache()
#Calculate sum of D_i(x)^2
d1 = data.map(lambda ((name,vec),dist): (1,dist))
d2 = d1.reduceByKey(lambda x,y: np.sum([x,y], axis=0))
total = d2.collect()[0][1]
#Normalize each distance to get the probabilities and reshapte to 12140x25
prob = data.map(lambda ((name,vec),dist): np.divide(dist,total)).collect()
prob = np.reshape(prob,(len(local_data), RUNS))
#K'th centroid for each run
data_id = [choice(prob[:,i]) for i in xrange(RUNS)]
sample = [local_data[i] for i in data_id]
centers[:, idx+1] = sample
return centers
def get_closest(p, centers):
'''
Return the indices the nearest centroids of `p`.
`centers` contains sets of centroids, where `centers[i]` is
the i-th set of centroids.
'''
best = [0] * len(centers)
closest = [np.inf] * len(centers)
for idx in range(len(centers)):
for j in range(len(centers[0])):
temp_dist = norm(p - centers[idx][j])
if temp_dist < closest[idx]:
closest[idx] = temp_dist
best[idx] = j
return best
def kmeans(rdd, K, RUNS, converge_dist, seed):
'''
Run K-means++ algorithm on `rdd`, where `RUNS` is the number of
initial sets to use.
'''
k_points = kmeans_init(rdd, K, RUNS, seed)
print_log("Initialized.")
temp_dist = 1.0
iters = 0
st = time.time()
while temp_dist > converge_dist:
##############################################################################
# INSERT YOUR CODE HERE
##############################################################################
# Update all `RUNS` sets of centroids using standard k-means algorithm
# Outline:
# - For each point x, select its nearest centroid in i-th centroids set
# - Average all points that are assigned to the same centroid
# - Update the centroid with the average of all points that are assigned to it
# Insert your code here
#For each point x, select its nearest centroid in i-th centroids set
#Format: ((RUN, nearest_centroid), point_coord)
cen_rdd1 = rdd.flatMap(lambda p:
[((indx,j),p[1]) for (indx,j) in enumerate(get_closest(p[1], k_points))])
#Introduce 1 for count
#Format: ((RUN, nearest_centroid), point, 1)
cen_rdd2 = cen_rdd1.map(lambda ((run, k), pt):
((run, k), np.array([pt, 1])))
#Add all the distance and add 1's (count)
#Format: ((RUN, nearest_centroid), sum_points, count)
cen_rdd3 = cen_rdd2.reduceByKey(lambda x,y: np.sum([x,y], axis=0))
#Calculate mean distance for each run
#Format: ((RUN, nearest_centroid), mean_distance)
cen_rdd4 = cen_rdd3.map(lambda ((run, k), p):
((run, k), p[0]/p[1]))
#Get dictionary of new_points
new_points = cen_rdd4.collectAsMap()
# You can modify this statement as long as `temp_dist` equals to
# max( sum( l2_norm of the movement of j-th centroid in each centroids set ))
##############################################################################
# temp_dist = np.max([
# np.sum([norm(k_points[idx][j] - new_points[(idx, j)]) for j in range(K)])
# for idx in range(RUNS)])
temp_dist = np.max([
np.sum([norm(k_points[idx][j] - new_points[(idx, j)]) for idx,j in new_points.keys()])
])
iters = iters + 1
if iters % 5 == 0:
print_log("Iteration %d max shift: %.2f (time: %.2f)" %
(iters, temp_dist, time.time() - st))
st = time.time()
# update old centroids
# You modify this for-loop to meet your need
for ((idx, j), p) in new_points.items():
k_points[idx][j] = p
return k_points
## Read data
data = pickle.load(open("/home/sadat/Desktop/UCSD_BigData_2016/Data/Weather/stations_projections.pickle", "rb"))
rdd = sc.parallelize([parse_data(row[1]) for row in data.iterrows()])
rdd.take(1)
# main code
import time
st = time.time()
np.random.seed(RANDOM_SEED)
centroids = kmeans(rdd, K, RUNS, converge_dist, np.random.randint(1000))
group = rdd.mapValues(lambda p: get_closest(p, centroids)) \
.collect()
print "Time takes to converge:", time.time() - st
###Output
Initialized.
Iteration 5 max shift: 18811.76 (time: 21.72)
Iteration 10 max shift: 6234.56 (time: 21.37)
Iteration 15 max shift: 2291.96 (time: 21.44)
Iteration 20 max shift: 1285.12 (time: 21.35)
Iteration 25 max shift: 700.73 (time: 21.57)
Iteration 30 max shift: 455.04 (time: 21.34)
Iteration 35 max shift: 148.68 (time: 22.01)
Iteration 40 max shift: 101.45 (time: 27.06)
Iteration 45 max shift: 40.88 (time: 37.93)
Iteration 50 max shift: 16.89 (time: 21.13)
Iteration 55 max shift: 0.85 (time: 21.22)
Time takes to converge: 267.533921957
###Markdown
Verify your results by computing the objective function of the k-means clustering problem.
###Code
def get_cost(rdd, centers):
'''
Compute the square of l2 norm from each data point in `rdd`
to the centroids in `centers`
'''
def _get_cost(p, centers):
best = [0] * len(centers)
closest = [np.inf] * len(centers)
for idx in range(len(centers)):
for j in range(len(centers[0])):
temp_dist = norm(p - centers[idx][j])
if temp_dist < closest[idx]:
closest[idx] = temp_dist
best[idx] = j
return np.array(closest)**2
cost = rdd.map(lambda (name, v): _get_cost(v, centroids)).collect()
return np.array(cost).sum(axis=0)
cost = get_cost(rdd, centroids)
log2 = np.log2
print log2(np.max(cost)), log2(np.min(cost)), log2(np.mean(cost))
###Output
33.8254902123 33.7575332525 33.7790236109
###Markdown
Plot the increase of entropy after multiple runs of k-means++
###Code
entropy = []
for i in range(RUNS):
count = {}
for g, sig in group:
_s = ','.join(map(str, sig[:(i + 1)]))
count[_s] = count.get(_s, 0) + 1
entropy.append(compute_entropy(count.values()))
###Output
_____no_output_____
###Markdown
**Note:** Remove this cell before submitting to PyBolt (PyBolt does not fully support matplotlib)
###Code
%matplotlib inline
plt.xlabel("Iteration")
plt.ylabel("Entropy")
plt.plot(range(1, RUNS + 1), entropy)
2**entropy[-1]  # perplexity: the effective number of distinct cluster assignments across the RUNS runs
###Output
_____no_output_____
###Markdown
Print the final results
###Code
print 'entropy=',entropy
best = np.argmin(cost)
print 'best_centers=',list(centroids[best])
###Output
entropy= [1.6445469704935676, 2.0800064512748428, 2.080006451274842, 2.0800064512748424, 2.1906681946052755, 2.2570115065383876, 2.2786597860645408, 2.2786597860645408, 2.2786597860645408, 2.2786597860645408, 2.2786597860645403, 2.2786597860645408, 2.2786597860645408, 2.2786597860645408, 2.2849509629282276, 2.2849509629282276, 2.2849509629282276, 2.2849509629282272, 2.286874405497795, 2.2868744054977945, 2.2868744054977945, 2.286874405497795, 2.2868744054977945, 2.286874405497795, 2.286874405497795]
best_centers= [array([ 2952.76608 , 1933.02980077, 92.424188 , -2547.74851278,
144.84123959, 154.0172669 , 18.40817384, 7.84926361,
5.11113863]), array([ 428.4738994 , 1807.58033164, 35.14799298, -2574.43476306,
-180.39839191, 263.09089521, 6048.90511888, -743.20856056,
256.68319372]), array([ 1492.0570036 , 1954.30230067, 94.48584365, -2567.99675086,
-112.2682711 , 152.28015089, 395.84574671, 131.09390181,
73.10315542]), array([ 750.10763916, 2067.97627806, 35.34601332, -2398.58742321,
-138.36631381, 233.32209536, 2268.85311051, 245.99611499,
125.46432194]), array([ 408.29696084, 1353.92836359, 56.37619358, -2206.17029272,
-221.37785013, 183.25193705, 18757.57406286, -5513.4828535 ,
1476.58182765])]
|
quchem_examples/2.1 Ansatz_Generator_Functions.ipynb | ###Markdown
Theory The HF technique does not take into account electron correlation effects, other than the Pauli exclusion principle which is imposed by the Slater determinant. To correct for this, the wavefunction can be expanded as a superposition of all the determinants in the N electron Fock Space.The total number of determinants is given by the binomial:$$\binom{M}{N} = \frac{M!}{N!(M-N)!}$$- $M=$ number of spin orbitals- $N=$ number of e- for example in H2 in an STO-3G basis there are 4 orbitals and 2 electronstherefore there are $\binom{4}{2} = 6$ determinants. The superposition over all determinants is: $\alpha |{1100}\rangle + \beta |{1010}\rangle + \gamma |{1001}\rangle, \delta |{0110}\rangle + \epsilon |{0101}\rangle + \zeta |{0011}\rangle$. The importance of this is that as all possible determinants are present, the exact wavefunction of any state in the system can be written in this form!Including all the determinants will yield the full configuration interaction (FCI) wavefunction, if all the excitations above the HF wavefunction are considered. This can be formalized with the definition of the excitation operators: $$T = \sum_{i=1}^{N} T_{i}$$where:$$T_{1} = \sum_{\substack{i\in occ \\ \alpha \in virt}} t_{\alpha}^{i}a_{\alpha}^{\dagger}a_{i}$$$$T_{2} = \sum_{\substack{i>j\in occ, \\ \alpha > \beta \in virt}} t_{\alpha \beta}^{ij}a_{\alpha}^{\dagger}a_{\beta}^{\dagger}a_{i}a_{j}$$$$\dots$$ note:- occ = occupied spin orbitals of **reference state**- virt = unoccupied spin orbitals of **reference state**Single excitations from the reference stateare generated by $T_{1}$, $T_{2}$ double excitations and higher order terms follow as expected. The expansion coefficients are denoted as $t_{\alpha}^{i}$ and $t_{\alpha \beta}^{ij}$ and determine the amplitude values! The unitary coupled cluster wavefunction is defined as: $$|\psi_{UCC}\rangle = e^{T-T^{\dagger}}|\psi_{HF}\rangle$$where$U_{U C C}(\vec{t})=e^{T-T^{\dagger}}=e^{\sum_{i} t_{i}\left(T_{i}-T_{i}^{\dagger}\right)}$ UCCSDUCCSD only takes into account **single** and **double** electron excitations:$$U_{CCSD} =e^{\left(T_{1}-T_{1}^{\dagger}\right)+\left(T_{2}-T_{2}^{\dagger}\right)}$$$$T_{1} = \sum_{\substack{i\in occ \\ \alpha \in virt}} t_{\alpha}^{i}a_{\alpha}^{\dagger}a_{i}$$$$T_{2} = \sum_{\substack{i>j\in occ, \\ \alpha > \beta \in virt}} t_{\alpha \beta}^{ij}a_{\alpha}^{\dagger}a_{\beta}^{\dagger}a_{i}a_{j}$$Leads to more TRACTABLE calculation
###Code
from quchem.Ansatz.Ansatz_Generator_Functions import Ansatz
num_electrons = 2
num_orbitals = 4
ansatz = Ansatz(num_electrons, num_orbitals)
ansatz.Get_ia_and_ijab_terms()
print(ansatz.Sec_Quant_CC_ia_Fermi_ops)
print('')
print(ansatz.Sec_Quant_CC_ijab_Fermi_ops)
###Output
[-1.0 [0^ 2] +
1.0 [2^ 0], -1.0 [1^ 3] +
1.0 [3^ 1]]
[-1.0 [0^ 1^ 2 3] +
1.0 [3^ 2^ 1 0]]
###Markdown
Suzuki-Trotter Decomposition Often we are given an operator as a sum of operstors (Just the excitation operators of the UCC operator $T = \sum_{i=1}^{N} T_{i}$)... Here we are considering a Hamiltonian:$$H = H_{1} + H_{2}$$from the **second postulate of QM** we want to perform the time evolution of the quantum system (using the Hamiltonian). When $H$ is constant in time the evolution operator $U(t)$ has a very simple form:$$U(t)=e^{-i \frac{H}{\hbar}t}$$**IMPORTANT note about exponentiated operators**Exponentiated operators DO NOT SHARE ALL PROPERTIES FOR EXPONENTIATED NUMBERSe.g.- for numbers we have: $e^{a+b} = e^{a}e^{b}$- for operators in general $e^{\hat{A}+\hat{B}} \neq e^{\hat{A}}e^{\hat{B}}$Only obeys this relationship if and only if ($\iff$) the operators commute:$$e^{\hat{A}+\hat{B}} = e^{\hat{A}}e^{\hat{B}} \iff [\hat{A}, \hat{B}]=0$$ Therefore if $H_{1}$ and $H_{2}$ do not commute, we could simulate them individually and add the results... BUT this will NOT calculate the full $H$.In such cases one can use the Suzuki-Trotter decomposition, where for any operators $\hat{A}$ and $\hat{B}$:$$e^{\hat{A}+\hat{B}}=\lim _{n \rightarrow \infty}\left(e^{\hat{A} / n} e^{\hat{B} / n}\right)^{n}=\lim _{n \rightarrow \infty}\left(e^{\hat{B} / n} e^{\hat{A} / n}\right)^{n}$$therefore for QUANTUM evolution operators:$$U(t)=e^{\frac{-i t}{\hbar}\left(H_{1}+H_{2}\right)}=\lim _{n \rightarrow \infty}\left(U_{1}(t / n) U_{2}(t / n)\right)^{n}$$This can be interpretted as the full evolution of $H_{1}$ and $H_{2}$ for successive slices and as the slices become infinitessimal the approximation becomes exact Single trotter step $$U_{U C C}(\vec{t}) \approx U_{U C C}^{(T r o t)}=\left(\prod_{j} \exp{ \big( \frac{t_{j}}{\rho}\left(T_{j}-T_{j}^{\dagger}\right)\big)}\right)^{\rho}$$- $\rho =$ trotter step for single step ($\rho =1 $)$$U_{U C C}^{(T r o t)}=\prod_{j} \exp{ \big( t_{j}\left(T_{j}-T_{j}^{\dagger}\right)\big)}$$therefore for the SINGLE - DOUBLE version:$$U_{U C C SD}^{(T r o t)}=e^{t_{1}\left(T_{1}-T_{1}^{\dagger}\right)} \times e^{t_{2}\left(T_{2}-T_{2}^{\dagger}\right)}$$ Therefore using H2 as an example:$$U_{UCCSD} = e^{t_{02}(a_{2}^{\dagger}a_{0} - a_{0}^{\dagger}a_{2}) + t_{13}(a_{3}^{\dagger}a_{1} - a_{1}^{\dagger}a_{3}) + t_{0123}(a_{3}^{\dagger}a_{2}^{\dagger}a_{1}a_{0} - a_{0}^{\dagger}a_{1}^{\dagger}a_{2}a_{3})} = exp{\big( t_{02}(a_{2}^{\dagger}a_{0} - a_{0}^{\dagger}a_{2}) + t_{13}(a_{3}^{\dagger}a_{1} - a_{1}^{\dagger}a_{3}) + t_{0123}(a_{3}^{\dagger}a_{2}^{\dagger}a_{1}a_{0} - a_{0}^{\dagger}a_{1}^{\dagger}a_{2}a_{3}) \big)}$$$$U_{UCCSD}^{Trot} = e^{t_{02}(a_{2}^{\dagger}a_{0} - a_{0}^{\dagger}a_{2})} \times e^{ t_{13}(a_{3}^{\dagger}a_{1} - a_{1}^{\dagger}a_{3})} \times e^{t_{0123}(a_{3}^{\dagger}a_{2}^{\dagger}a_{1}a_{0} - a_{0}^{\dagger}a_{1}^{\dagger}a_{2}a_{3})}$$
###Code
num_electrons = 2
num_orbitals = 4
ansatz = Ansatz(num_electrons, num_orbitals)
ansatz.Get_ia_and_ijab_terms()
ansatz.UCCSD_single_trotter_step(transformation='JW',
List_FermiOps_ia= ansatz.Sec_Quant_CC_ia_Fermi_ops,
List_FermiOps_ijab=ansatz.Sec_Quant_CC_ijab_Fermi_ops,
)
for index, ia_term in enumerate(ansatz.Sec_Quant_CC_ia_Fermi_ops):
print('ia_term:\n', ia_term)
print('becomes:\n', ansatz.Second_Quant_CC_single_Trot_list_ia[index])
print('')
print('')
print('####')
print('')
for index, ijab_term in enumerate(ansatz.Sec_Quant_CC_ijab_Fermi_ops):
print('ijab_term:\n', ansatz.Sec_Quant_CC_ijab_Fermi_ops)
print('becomes:\n', ansatz.Second_Quant_CC_single_Trot_list_ijab[index])
print('')
###Output
ia_term:
-1.0 [0^ 2] +
1.0 [2^ 0]
becomes:
-0.5j [X0 Z1 Y2] +
0.5j [Y0 Z1 X2]
ia_term:
-1.0 [1^ 3] +
1.0 [3^ 1]
becomes:
-0.5j [X1 Z2 Y3] +
0.5j [Y1 Z2 X3]
####
ijab_term:
[-1.0 [0^ 1^ 2 3] +
1.0 [3^ 2^ 1 0]]
becomes:
0.125j [X0 X1 X2 Y3] +
0.125j [X0 X1 Y2 X3] +
-0.125j [X0 Y1 X2 X3] +
0.125j [X0 Y1 Y2 Y3] +
-0.125j [Y0 X1 X2 X3] +
0.125j [Y0 X1 Y2 Y3] +
-0.125j [Y0 Y1 X2 Y3] +
-0.125j [Y0 Y1 Y2 X3]
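###Markdown
Before moving on, the Suzuki-Trotter limit above can be illustrated numerically with plain NumPy/SciPy on two non-commuting single-qubit Paulis; this is a generic sketch (independent of the quchem operators used in this notebook) showing the product-formula error shrinking as the number of slices $n$ grows.
###Code
import numpy as np
from scipy.linalg import expm

# two non-commuting generators: A = -i t X, B = -i t Z
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
t = 1.0
A, B = -1j * t * X, -1j * t * Z

exact = expm(A + B)  # exact evolution e^{A+B}
for n in [1, 2, 4, 8, 16]:
    # n-slice product formula (e^{A/n} e^{B/n})^n
    trotter = np.linalg.matrix_power(expm(A / n) @ expm(B / n), n)
    print('n =', n, ' ||exact - trotter|| =', round(np.linalg.norm(exact - trotter), 4))
###Output
_____no_output_____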
###Markdown
**note** the Pauli strings within each subterm commute with one another, so the standard simplification for exponentials of commuting operators applies: $$e^{\hat{A}+\hat{B}} = e^{\hat{A}}e^{\hat{B}} \iff [\hat{A}, \hat{B}]=0$$ Overall, using a single trotter step we get:$$U_{UCC}^{Trot} = \\ e^{-i\frac{\theta_{20}}{2}(X_{0} Z_{1} Y_{2})} \times e^{i\frac{\theta_{20}}{2}(Y_{0} Z_{1} X_{2})} \\ \times e^{-i\frac{\theta_{31}}{2}(X_{1} Z_{2} Y_{3})} \times e^{i\frac{\theta_{31}}{2}(Y_{1} Z_{2} X_{3})} \times \\ e^{i\frac{\theta_{3210}}{8}(X_{0} X_{1} X_{2} Y_{3})} \times e^{i\frac{\theta_{3210}}{8}(X_{0} X_{1} Y_{2} X_{3})}\times e^{-i\frac{\theta_{3210}}{8}(X_{0} Y_{1} X_{2} X_{3})}\times e^{i\frac{\theta_{3210}}{8}(X_{0} Y_{1} Y_{2} Y_{3})}\times e^{-i\frac{\theta_{3210}}{8}(Y_{0} X_{1} X_{2} X_{3})}\times e^{i\frac{\theta_{3210}}{8}(Y_{0} X_{1} Y_{2} Y_{3})}\times e^{-i\frac{\theta_{3210}}{8}(Y_{0} Y_{1} X_{2} Y_{3})}\times e^{-i\frac{\theta_{3210}}{8}(Y_{0} Y_{1} Y_{2} X_{3})}$$ Get Hartree-Fock states Jordan-Wigner Transformation
###Code
n_electrons=2
n_qubits=4
ansatz_obj = Ansatz(n_electrons, n_qubits)
print('full state = ',ansatz_obj.Get_JW_HF_state())
print('')
# note: this reuses the Bravyi-Kitaev occupation-basis helper; a JW-specific equivalent (if quchem exposes one) is not shown here
print('state in occ number basis = ',ansatz_obj.Get_BK_HF_state_in_OCC_basis())
###Output
full state = [0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
state in occ number basis = [1. 0. 0. 0.]
###Markdown
Bravyi-Kitaev Transformation
###Code
print('full state = ', ansatz_obj.Get_BK_HF_state())
print('state in occ number basis = ',ansatz_obj.Get_BK_HF_state_in_OCC_basis())
###Output
full state = [0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
state in occ number basis = [1. 0. 0. 0.]
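###Markdown
The single non-zero amplitude in each full state vector above sits at the index whose binary representation matches the printed occupation string, assuming the basis states are indexed by the bitstring read as a big-endian binary number (an assumption about the ordering convention, but consistent with the outputs above: index 12 = 0b1100 for JW and index 8 = 0b1000 for BK). A small illustrative check:
###Code
import numpy as np

jw_index = int(np.argmax(ansatz_obj.Get_JW_HF_state()))
bk_index = int(np.argmax(ansatz_obj.Get_BK_HF_state()))
# big-endian binary representation of the basis-state index
print('JW non-zero index:', jw_index, '->', np.binary_repr(jw_index, width=4))
print('BK non-zero index:', bk_index, '->', np.binary_repr(bk_index, width=4))
###Output
_____no_output_____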
###Markdown
Get initial theta values using coupled cluster calculation
###Code
from quchem.Hamiltonian.Hamiltonian_Generator_Functions import Hamiltonian_PySCF
### CLASSICAL QUANTUM CALCULATION
Molecule = 'H2'  # or 'LiH'
geometry = [('H', (0., 0., 0.)), ('H', (0., 0., 0.74))]
basis = 'sto-3g'
### Get Hamiltonian
Hamilt = Hamiltonian_PySCF(Molecule,
run_scf=1, run_mp2=1, run_cisd=1, run_ccsd=1, run_fci=1,
basis=basis,
multiplicity=1,
geometry=geometry) # normally None!
# Hamilt.Get_Molecular_Hamiltonian(Get_H_matrix=False)
QubitHam = Hamilt.Get_Qubit_Hamiltonian(transformation='JW')
Ham_matrix_JW = Hamilt.Get_sparse_Qubit_Hamiltonian_matrix(QubitHam)
### Get classical CC amplitudes
Hamilt.Get_CCSD_Amplitudes()
ansatz_obj = Ansatz(Hamilt.molecule.n_electrons, Hamilt.molecule.n_qubits)
ansatz_obj.Get_ia_and_ijab_terms(single_cc_amplitudes=Hamilt.molecule.single_cc_amplitudes,
double_cc_amplitudes=Hamilt.molecule.double_cc_amplitudes,
singles_hamiltonian=Hamilt.singles_hamiltonian,
doubles_hamiltonian=Hamilt.doubles_hamiltonian,
tol_filter_small_terms = None)
print(ansatz_obj.Sec_Quant_CC_ia_Fermi_ops)
ansatz_obj.theta_ia
print(ansatz_obj.Sec_Quant_CC_ijab_Fermi_ops)
ansatz_obj.theta_ijab
###Output
[-1.0 [0^ 1^ 2 3] +
1.0 [3^ 2^ 1 0]]
###Markdown
Use NOON to remove terms for UCCSD operators
###Code
### HAMILTONIAN start
Molecule = 'LiH'
geometry = None # [('H', (0., 0., 0.)), ('H', (0., 0., 0.74))]
basis = 'sto-3g'
transf= 'BK'
### Get Hamiltonian
Hamilt = Hamiltonian_PySCF(Molecule,
run_scf=1, run_mp2=1, run_cisd=1, run_ccsd=1, run_fci=1,
basis=basis,
multiplicity=1,
geometry=geometry) # normally None!
QubitHamiltonian = Hamilt.Get_Qubit_Hamiltonian(threshold=None, transformation=transf)
### HAMILTONIAN end
## Get CC ampltidues
Hamilt.Get_CCSD_Amplitudes()
##
NOON_spins_combined, NMO_basis, new_ROTATED_Qubit_Hamiltonian = Hamilt.Get_NOON(transf)
##
NOON_spins_combined
ansatz_obj = Ansatz(Hamilt.molecule.n_electrons, Hamilt.molecule.n_qubits)
Removed_indices, reduced_CC_ops_ia , reduced_CC_ops_ijab, theta_ia, theta_ijab = ansatz_obj.Remove_NOON_terms(
NOON=NOON_spins_combined,
occ_threshold=1.999,
unocc_threshold=1e-4,
indices_to_remove_list_manual=None,
single_cc_amplitudes=Hamilt.molecule.single_cc_amplitudes,
double_cc_amplitudes=Hamilt.molecule.double_cc_amplitudes,
singles_hamiltonian=None,
doubles_hamiltonian=None,
tol_filter_small_terms=None)
print('REDUCTION')
print('ia_terms', len(ansatz_obj.Sec_Quant_CC_ia_Fermi_ops), 'TO', len(reduced_CC_ops_ia))
print('ijab_terms', len(ansatz_obj.Sec_Quant_CC_ijab_Fermi_ops), 'TO', len(reduced_CC_ops_ijab))
print('indices removed:', Removed_indices)
###Output
REDUCTION
ia_terms 16 TO 6
ijab_terms 42 TO 6
indices removed: [0, 1, 6, 7]
|
Training/beat_shaper.ipynb | ###Markdown
Functions for loading/processing training data
###Code
# Load a JSON beatmap file and extract and format the note data for use in the neural net
def loadbeatmap(beatmap, beats, bpm, chunks_per_beat=8):
if beatmap[len(beatmap)-5:len(beatmap)] != ".json":
print("Beatmap file " + audio + " is not of type .json")
return -1
with open(beatmap) as f:
data = json.load(f)
notes = "_notes"
time = "_time"
line_index = "_lineIndex" #column number
line_layer = "_lineLayer" #row number
note_color = "_type" #0 is one color and 1 is the other
cut_direction = "_cutDirection"#9 cut directions
bpm_scale = bpm / data['_beatsPerMinute']
dim_0 = beats * chunks_per_beat
# number of rows and columns in the playfield
# number of cells in the playfield (each cell can hold at most 1 note)
playfield_rows = 3
playfield_cols = 4
playfield_cell_count = playfield_rows * playfield_cols
# number of colors (2): red, blue (order unknown)
# number of directions notes can face (9):
# up, down, left, right, up-left, up-right, down-left, down-right, dot (order unknown)
note_color_count = 2
note_direction_count = 9
# dimensions for a 'one-hot' representation of a single time unit (chunk)
dim_1 = playfield_rows
dim_2 = playfield_cols
dim_3 = (note_color_count + 1) + (note_direction_count + 1)
# initialize matrix to zeros, then set the "no note" bit for each block at each timestep to 1
outMatrix = np.zeros(shape=(dim_0, dim_1, dim_2, dim_3))
outMatrix[:,:,:,0] = 1
outMatrix[:,:,:,3] = 1
block_power = 1#(int)(beats *
#chunks_per_beat *
#playfield_rows *
#playfield_cols /
#len(data['_notes']) )
# for every note in the beatmap, set the color and direction bits for the proper cell to 1
for n in range(len(data[notes])):
entry = int(np.round(data[notes][n][time]*bpm_scale*chunks_per_beat)) #convert beat time to a row index by rounding to the nearest 1/chunks_per_beat of a beat
if data[notes][n][note_color] < 2:
outMatrix[entry] \
[data[notes][n][line_layer]] \
[data[notes][n][line_index]] \
[data[notes][n][note_color]+1] = block_power
outMatrix[entry] \
[data[notes][n][line_layer]] \
[data[notes][n][line_index]] \
[0] = 0
outMatrix[entry] \
[data[notes][n][line_layer]] \
[data[notes][n][line_index]] \
[data[notes][n][cut_direction]+4] = block_power
outMatrix[entry] \
[data[notes][n][line_layer]] \
[data[notes][n][line_index]] \
[3] = 0
return outMatrix
# mean center an array
def mean_center(x):
return (x - np.apply_along_axis(np.mean, 0, x) )
def mean_center_norm(x):
return (x - np.apply_along_axis(np.mean, 0, x) )/ np.apply_along_axis(np.std, 0, x)
# Load an audio file of type .ogg and resample it to the correct number of
# samples per chunk (chunk size defaults to 1/8th note). Then resize the waveform
# to be evenly divisible by the number of beats
def loadsong(audio, samples_per_chunk=300, chunks_per_beat=8):
# verify extension is .ogg
if audio[len(audio)-4:len(audio)] != ".ogg":
print("Audio file " + audio + " is not of type .ogg")
return -1
# use librosa to load the audio and find the duration (in seconds) and beats per minute
y, sr = librosa.load(audio)
song_length = librosa.get_duration(y=y,sr=sr) / 60.0
beats_per_minute = int(np.round(librosa.beat.tempo(y, sr=sr)))
# find new sample rate (samples/sec)
beats_per_second = beats_per_minute / 60.0
samples_per_beat = samples_per_chunk * chunks_per_beat
new_sample_rate = samples_per_beat * beats_per_second
# resample audio so that it has the right number of samples per chunk
# then slice off any extra samples at the end, so that its length is an even number of beats
y = librosa.resample(y, sr, new_sample_rate)
y = y[0:(len(y)//samples_per_beat) * samples_per_beat]
# reshape the song into a list of samples_per_chunk sized pieces
y = y.reshape(len(y)//samples_per_chunk, samples_per_chunk)
# then perform the Fourier transform on each piece, take the magnitude of the complex coefficients
# and slice in half to remove the duplicated information
y = np.abs(np.apply_along_axis(np.fft.fft, 1, y))[:,1:(int)(samples_per_chunk/2)+1]
# then mean center the data so that each frequency's amplitude is expressed relative to the
# mean amplitude of that frequency across all pieces (optionally change this to mean_center_norm)
y = np.apply_along_axis(mean_center, 0, y)
# find the number of beats in the whole song
number_of_beats = int(beats_per_minute * song_length)
return y, number_of_beats, beats_per_minute
# given two lists of previously loaded song sequences and their corresponding beatmap sequences,
# append all possible sequences of the same size from a new song/beatmap pair
def append_song(init_song, init_beatmap, song, beatmap,
samples_per_chunk=300,
chunks_per_beat=8,
beats_per_sequence=32,
dropout=0.0):
# load the song and beatmap
song, beats, bpm = loadsong(song,
samples_per_chunk=samples_per_chunk,
chunks_per_beat=chunks_per_beat)
beatmap = loadbeatmap(beatmap, beats, bpm,
chunks_per_beat=chunks_per_beat)
# calculate the total number of chunks in the song
# the number of chunks each sequence will contain
# and the number of sequences of that size that exist in the song
num_chunks = beats * chunks_per_beat
chunks_per_sequence = chunks_per_beat * beats_per_sequence
num_sequences = num_chunks - chunks_per_sequence
# with (1 - dropout) probability, append each song sequence to the song
# list and its corresponding beatmap sequence to the beatmap list
for i in range(num_sequences):
if random.random() > dropout:
init_song.append(song[i:i+chunks_per_sequence])
init_beatmap.append(beatmap[i:i+chunks_per_sequence])
return init_song, init_beatmap
# given two numpy arrays of corresponding song and beatmap sequences
# if a beatmap sequence has no note blocks in its center 1/divisionth section
# (e.g. the center 1/4th of the sequence) remove the song sequence and beatmap sequence
# from the array
def remove_empty_sections(X, Y, division=1):
X_2 = []
Y_2 = []
# calculate the start and end indices of the middle slices
sequence_size = Y.shape[1]
slice_start = (sequence_size * (division - 1) // (division * 2))
slice_end = sequence_size * (division + 1) // (division * 2)
# for all sequences, if any notes exist in the middle slice, keep it
for i in range(Y.shape[0]):
if any(Y[i,j,k,l,0] == 0 for j in range(slice_start, slice_end)
for k in range(Y.shape[2])
for l in range(Y.shape[3])):
X_2.append(X[i])
Y_2.append(Y[i])
return np.array(X_2), np.array(Y_2)
# randomly split X and Y into train and test data
def train_test_split(X, Y, test_split_ratio = .2):
split_index = (int)((1-test_split_ratio) * X.shape[0])
indices = np.random.permutation(X.shape[0])
train_indices, test_indices = indices[:split_index], indices[split_index:]
x_train, x_test = X[train_indices], X[test_indices]
y_train, y_test = Y[train_indices], Y[test_indices]
return (x_train, y_train), (x_test, y_test)
"""This is the function that should be called to create the lists used by the get_training_data function"""
# search the current directory and its sub directories for .ogg files with Expert beatmaps and
# create a list of the song and json files (if an Expert beatmap does not exist, but an
# ExpertPlus beatmap does, use the ExpertPlus beatmap)
def get_song_lists(songpath="."):
slist = [ join(dirpath, filename) for dirpath, dirnames, filenames in walk(songpath)
for filename in filenames
if filename.endswith(".ogg") and isfile(join(dirpath,'Expert.json')) ]
slist.extend([ join(dirpath, filename) for dirpath, dirnames, filenames in walk(songpath)
for filename in filenames
if filename.endswith(".ogg") and not isfile(join(dirpath,'Expert.json')) and isfile(join(dirpath,'ExpertPlus.json')) ])
blist = [ join(dirpath, 'Expert.json') for dirpath, dirnames, filenames in walk(songpath)
for filename in filenames
if filename.endswith(".ogg") and isfile(join(dirpath,'Expert.json')) ]
blist.extend([ join(dirpath, 'ExpertPlus.json') for dirpath, dirnames, filenames in walk(songpath)
for filename in filenames
if filename.endswith(".ogg") and not isfile(join(dirpath,'Expert.json')) and isfile(join(dirpath,'ExpertPlus.json')) ])
return slist, blist
"""This is the function that should be called to create training data"""
# given a list of song filenames and a list of beatmap filenames,
# create a numpy array representing data from all of the songs and
# a corresponding numpy array representing data from all of the beatmaps
#
# If the output arrays are too large, consider using dropout to thin the data
# Usage: X, Y = get_training_data(song_list, beatmap_list)
def get_training_data(song_list, beatmap_list, samples_per_chunk=300, chunks_per_beat=8, beats_per_sequence=32,
drop_empty_sections=False, division=1, provide_test_split=False, test_split_ratio=.2,
dropout=0.0):
X, Y = append_song([], [], song_list[0], beatmap_list[0],
samples_per_chunk=samples_per_chunk,
chunks_per_beat=chunks_per_beat,
beats_per_sequence=beats_per_sequence,
dropout=dropout)
print("After appending", str("'" + song_list[0] + "':\n"), "X_length:", len(X), "Y_length:", len(Y))
for x, y in zip(song_list[1:], beatmap_list[1:]):
X, Y = append_song(X, Y, x, y,
samples_per_chunk=samples_per_chunk,
chunks_per_beat=chunks_per_beat,
beats_per_sequence=beats_per_sequence,
dropout=dropout)
print("After appending", str("'" + x + "':\n"), "X_length:", len(X), "Y_length:", len(Y))
X = np.array(X)
Y = np.array(Y)
# the flag is named differently from the helper so it does not shadow the remove_empty_sections function
if drop_empty_sections:
X, Y = remove_empty_sections(X, Y, division=division)
if provide_test_split:
return train_test_split(X, Y, test_split_ratio=test_split_ratio)
return X, Y
print(end='')
###Output
_____no_output_____
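###Markdown
The audio pre-processing inside `loadsong` (chunking, FFT magnitude, mean-centering) can be exercised on a synthetic waveform without any .ogg file; the cell below is a standalone sketch of that pipeline with made-up numbers, not a replacement for the functions above.
###Code
import numpy as np

samples_per_chunk_demo = 300
n_chunks_demo = 8

# synthetic "audio": a sine wave plus noise, pretending it is already resampled
y_demo = np.sin(np.linspace(0, 200 * np.pi, samples_per_chunk_demo * n_chunks_demo))
y_demo += 0.1 * np.random.randn(y_demo.size)

# reshape into chunks, FFT each chunk and keep half the spectrum (as in loadsong)
chunks_demo = y_demo.reshape(n_chunks_demo, samples_per_chunk_demo)
spectra_demo = np.abs(np.apply_along_axis(np.fft.fft, 1, chunks_demo))[:, 1:samples_per_chunk_demo // 2 + 1]

# mean-center each frequency bin across chunks using the helper defined above
centered_demo = np.apply_along_axis(mean_center, 0, spectra_demo)

print('chunked audio shape:', chunks_demo.shape)    # (8, 300)
print('spectra shape:      ', spectra_demo.shape)   # (8, 150)
print('per-bin means ~ 0:  ', np.allclose(centered_demo.mean(axis=0), 0))
###Output
_____no_output_____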
###Markdown
Functions for loading/processing prediction data
###Code
# convert a list containing a softmax from 0-2 and a softmax from 3-12
# to a list of size 2 containing the indices of the max args of each softmax
# e.g. [.1, .1, .8, .05, .05, .1, .1, .1, .2, .1, .1, .1, .1]
# yields [2, 5] (argmax of the first 3 entries, then argmax within the last 10 entries)
def softmax_to_max(note_cell):
output = []
output.append(np.argmax(note_cell[:3]))
output.append(np.argmax(note_cell[3:]))
return output
# Thanks to @shouldsee from https://github.com/mpld3/mpld3/issues/434#issuecomment-340255689
# helper class to encode numpy arrays into json
class NumpyEncoder(json.JSONEncoder):
""" Special json encoder for numpy types """
def default(self, obj):
if isinstance(obj, (np.int_, np.intc, np.intp, np.int8,
np.int16, np.int32, np.int64, np.uint8,
np.uint16, np.uint32, np.uint64)):
return int(obj)
elif isinstance(obj, (np.float_, np.float16, np.float32,
np.float64)):
return float(obj)
elif isinstance(obj,(np.ndarray,)): #### This is the fix
return obj.tolist()
return json.JSONEncoder.default(self, obj)
# given the y_predicted from the neural net, and a division factor,
# stride along y_predicted and take the 1/division_factor'th slice
# of the section (axis 0 of y_predicted) and copy it into a new
# np array, effectively removing axis 0 from the input data resulting
# in an array with the same number of time steps as the original
# e.g. taking the middle 1/4th slice of size 40 sequences, the *'s are the
# middle 10 elements, which are copied to the output array. The stride between
# the sequences is equal to the size of the slice taken (10)
# seq i : |---------------**********---------------|
# seq i+1 : |----------------------------------------|
# seq i+10 : |---------------**********---------------|
# seq i+20 : |---------------**********---------------|
def convert_beatmap_array(nn_out,
division = 1):
# number of sequences in nn_out
num_sequences = nn_out.shape[0]
# number of time steps in each sequence
sequence_size = nn_out.shape[1]
# number of time steps in the original song
target_size = nn_out.shape[1] + nn_out.shape[0]
# indices of the start and end of the middle 1/division'th slice of a sequence
slice_start = (sequence_size * (division - 1) // (division * 2))
slice_end = sequence_size * (division + 1) // (division * 2)
stride = slice_end - slice_start
beatmap = np.zeros(shape=(target_size,3,4,13), dtype = float)
# first section of the beatmap array
beatmap[0:slice_end] = nn_out[0, 0:slice_end,:,:,:]
# middle section
i = stride
while i < num_sequences:
beatmap[i+slice_start:i+slice_end] = nn_out[i, slice_start:slice_end,:,:,:]
i+=stride
# final section
final_start_index = -(num_sequences - (i - stride))
beatmap[final_start_index:] = nn_out[num_sequences-1, final_start_index:,:,:,:]
# convert the softmax representations to indices
return np.apply_along_axis(softmax_to_max, -1, beatmap)
# given a numpy array representing each timestep of a whole song (i.e. the output from
# the convert_beatmap_array function) find all of the notes and add them to a dictionary
# in json format with all additional information beatsaber requires
def convert_to_json(original_beatmap, bpm,
chunks_per_beat = 8,
offset = 0.0,
beats_per_bar=16,
note_jump_speed=10,
shuffle=0,
shuffle_period=0.5,
version="1.5.0"):
new_beatmap = {}
new_beatmap['_version'] = version
new_beatmap['_beatsPerMinute'] = bpm
new_beatmap['_beatsPerBar'] = beats_per_bar
new_beatmap['_noteJumpSpeed'] = note_jump_speed
new_beatmap['_shuffle'] = shuffle
new_beatmap['_shufflePeriod'] = shuffle_period
new_beatmap['_events'] = []
new_beatmap['_notes'] = []
new_beatmap['_obstacles'] = []
new_beatmap['_notes'] = [
{
"_time" : (i / chunks_per_beat) + offset,
"_lineIndex" : k,
"_lineLayer" : j,
"_type" : original_beatmap[i][j][k][0] - 1,
"_cutDirection" : original_beatmap[i][j][k][1] - 1
} for i in range(original_beatmap.shape[0])
for j in range(original_beatmap.shape[1])
for k in range(original_beatmap.shape[2]) if original_beatmap[i][j][k][0] != 0]
return new_beatmap
"""This is the function that should be called to convert the NN's predicted output into a playable json beatmap"""
def export_beatmap(filename, nn_output, chunks_per_beat, bpm,
division = 1,
offset = 0.0,
beats_per_bar=16,
note_jump_speed=10,
shuffle=0,
shuffle_period=0.5,
version="1.5.0",
song_name="none",
song_sub_name="none",
difficulty="Expert",
difficulty_rank=4):
converted_map = convert_beatmap_array(nn_output,
division=division)
json_beatmap = convert_to_json(converted_map, bpm,
chunks_per_beat=chunks_per_beat,
offset=offset,
beats_per_bar=beats_per_bar,
note_jump_speed=note_jump_speed,
shuffle=shuffle,
shuffle_period=shuffle_period,
version=version)
with open(filename, 'w') as outfile:
outfile.write(json.dumps(json_beatmap, cls=NumpyEncoder))
info_json = {}
info_json['songName'] = song_name
info_json['songSubName'] = song_sub_name
info_json['authorName'] = "Beat Shaper"
info_json['beatsPerMinute'] = bpm
info_json['previewStartTime'] = 12
info_json['previewDuration'] = 10
info_json['coverImagePath'] = ""
info_json['environmentName'] = "DefaultEnvironment"
info_json['oneSaber'] = False
info_json['difficultyLevels'] = [{"difficulty": difficulty,
"difficultyRank": difficulty_rank,
"audioPath": "song.ogg",
"jsonPath": filename,
"offset": 0,
"oldOffset": 0,
"chromaToggle": "Off"}]
with open('info.json', 'w') as infofile:
infofile.write(json.dumps(info_json))
"""This is the function that should be called to prepare a song for the NN's predict function"""
# Similar functionality to the append_song function, but can be used without a corresponding beatmap
# making it useful for getting a predicted beatmap from a song that does not have a beatmap
def get_predict_song(song, samples_per_chunk, chunks_per_beat, beats_per_sequence):
init_song = []
# load the song
song, beats, bpm = loadsong(song,
samples_per_chunk=samples_per_chunk,
chunks_per_beat=chunks_per_beat)
# calculate the total number of chunks in the song
# the number of chunks each sequence will contain
# and the number of sequences of that size that exist in the song
num_chunks = beats * chunks_per_beat
chunks_per_sequence = chunks_per_beat * beats_per_sequence
num_sequences = num_chunks - chunks_per_sequence
# append every song sequence to the song list
for i in range(num_sequences):
init_song.append(song[i:i+chunks_per_sequence])
init_song = np.array(init_song)
return init_song, bpm
###Output
_____no_output_____
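###Markdown
A small synthetic check of two helpers above (`softmax_to_max` and `NumpyEncoder`), using made-up arrays rather than real network output.
###Code
import json
import numpy as np

# first 3 entries are the colour softmax, the remaining 10 the direction softmax
fake_cell = np.array([.1, .1, .8, .05, .05, .1, .1, .1, .2, .1, .1, .1, .1])
print('softmax_to_max ->', softmax_to_max(fake_cell))  # expected [2, 5]

# NumpyEncoder lets json.dumps handle numpy scalars and arrays
payload = {'time': np.float32(1.5), 'cell': np.arange(3)}
print(json.dumps(payload, cls=NumpyEncoder))
###Output
_____no_output_____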
###Markdown
Neural Nets Training
###Code
samples_per_chunk = 300
chunks_per_beat = 2
beats_per_sequence = 64
# changing the data_version variable only affects which data files are saved and loaded
# change it as you see fit
data_version = 1
song_list, beatmap_list = get_song_lists(songpath="songs")
(X, Y), (x_test, y_test) = get_training_data(song_list, beatmap_list,
samples_per_chunk=samples_per_chunk,
chunks_per_beat=chunks_per_beat,
beats_per_sequence=beats_per_sequence,
provide_test_split=True,
test_split_ratio=.3,
dropout=0.8)
print("X.shape:", X.shape)
print("Y.shape:", Y.shape)
print("x_test.shape:", x_test.shape)
print("y_test.shape:", y_test.shape)
# save the data so that it may optionally be loaded again later
# filename will be "data_v<version>_<numberofsongs>_save_<samplesperchunk>_<chunksperbeat>_<beatspersequence>.npz"
np.savez('data_v' + str(data_version) +
'_n' + str(len(song_list)) +
'_save_' + str(samples_per_chunk) +
'_' + str(chunks_per_beat) +
'_' + str(beats_per_sequence) +
'.npz', X=X, Y=Y, x_test=x_test, y_test=y_test)
npzfile = np.load('data_v' + str(data_version) +
'_n' + str(len(song_list)) +
'_save_' + str(samples_per_chunk) +
'_' + str(chunks_per_beat) +
'_' + str(beats_per_sequence) + '.npz')
X = npzfile['X']
Y = npzfile['Y']
x_test = npzfile['x_test']
y_test = npzfile['y_test']
#Neural Net Model
#v1 recurrent dropout .1
#v2 changed recurrent dropout to .4
# changing the model_version variable only affects which model output files are
# saved and loaded. It can be changed it as you see fit for model saving purposes
model_version = 4
conv_filters = 128
conv_kernel_size = (int)(X.shape[1]/1.5)
lstm_size = 128
input_layer = keras.layers.Input(shape=(X.shape[1], X.shape[2]))
conv_layer = keras.layers.Conv1D(conv_filters,
kernel_size=conv_kernel_size,
activation='relu',
padding='same')
lstm_layer = keras.layers.LSTM(lstm_size,
return_sequences=True,
dropout=0.2,
recurrent_dropout=.4)
dense_col_layer = keras.layers.Dense(Y.shape[2]*Y.shape[3]*3)
dense_dir_layer = keras.layers.Dense(Y.shape[2]*Y.shape[3]*10)
reshape_col_layer = keras.layers.Reshape((Y.shape[1], Y.shape[2], Y.shape[3], 3))
reshape_dir_layer = keras.layers.Reshape((Y.shape[1], Y.shape[2], Y.shape[3], 10))
soft_col_layer = keras.layers.Softmax(axis=-1)
soft_dir_layer = keras.layers.Softmax(axis=-1)
conv_out = conv_layer(input_layer)
lstm_out = lstm_layer(conv_out)
dense_col_out = dense_col_layer(lstm_out)
dense_dir_out = dense_dir_layer(lstm_out)
reshape_col_out = reshape_col_layer(dense_col_out)
reshape_dir_out = reshape_dir_layer(dense_dir_out)
soft_col_out = soft_col_layer(reshape_col_out)
soft_dir_out = soft_dir_layer(reshape_dir_out)
new_model = keras.Model(input_layer, [soft_col_out, soft_dir_out])
new_model.compile(loss = [keras.losses.categorical_crossentropy, keras.losses.categorical_crossentropy], optimizer=keras.optimizers.Adam(), metrics=['accuracy'])
new_model.summary()
# Optionally load a preexisting model
i_want_to_load_a_model = False
if i_want_to_load_a_model:
new_model = keras.models.load_model("model_v3_n31_save_300_2_64_e40.hdf5")
# Optionally print the SVG
i_want_to_print_the_SVG = False
if i_want_to_print_the_SVG:
SVG(model_to_dot(new_model).create(prog='dot',format='svg'))
histories = {}
batch_percent = .2
batch_size = int(X.shape[0] * batch_percent / 100)
epochs = 2
history = new_model.fit(X, [Y[:,:,:,:,:3], Y[:,:,:,:,3:]],
batch_size=batch_size,
epochs=epochs,
verbose=0,
validation_data=(x_test, [y_test[:,:,:,:,:3], y_test[:,:,:,:,3:]]),
callbacks=[TQDMNotebookCallback()])
for data_type in history.history:
if data_type not in histories:
histories[data_type] = []
histories[data_type] += history.history[data_type]
# plot the accuracy and loss and save the model
plt.figure(1)
i = 1 #because I can't for the life of me figure out how to avoid it
for data_type in histories:
if not data_type.startswith('val_'):
val_data_type = str('val_' + data_type)
plt.subplot(len(histories)//2, 1, i)
plt.plot(histories[data_type])
plt.plot(histories[val_data_type])
if i == 1:
plt.title('model loss')
if i == 2:
plt.title('model color loss')
if i == 3:
plt.title('model direction loss')
if i == 4:
plt.title('model color accuracy')
if i == 5:
plt.title('model direction accuracy')
plt.ylabel(data_type)
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
i = i + 1
plt.figure(1).set_figheight(10)
plt.tight_layout()
# save the model so that it can be loaded later
# file name will be "model_v<version>_<numberofsongs>_save_<samplesperchunk>_<chunksperbeat>_<beatspersequence>_e<numberofepochs>.hdf5"
new_model.save('model_v' + str(model_version) +
'_n' + str(len(song_list)) +
'_save_' + str(samples_per_chunk) +
'_' + str(chunks_per_beat) +
'_' + str(beats_per_sequence) +
'_e' + str(len(histories['val_loss'])) + '.hdf5')
plt.show()
###Output
_____no_output_____
###Markdown
Create a beatmap
###Code
nn_song, bpm = get_predict_song("song.ogg", samples_per_chunk, chunks_per_beat, beats_per_sequence)
nn_out = np.concatenate(new_model.predict(nn_song), axis=-1)
export_beatmap("Expert.json", nn_out, chunks_per_beat, bpm,
division=nn_out.shape[1])
###Output
_____no_output_____
###Markdown
Utility functions and testing
###Code
# evaluate() needs the targets split per output head, just like fit() above
print('Train metrics:', dict(zip(new_model.metrics_names, new_model.evaluate(X, [Y[:,:,:,:,:3], Y[:,:,:,:,3:]], verbose=0))))
print('Test metrics:', dict(zip(new_model.metrics_names, new_model.evaluate(x_test, [y_test[:,:,:,:,:3], y_test[:,:,:,:,3:]], verbose=0))))
data_version = 1
model_version = 1
###Output
_____no_output_____ |
notebooks/miscellaneous/longest_common_substring_clustering.ipynb | ###Markdown
Imports
###Code
import pandas as pd
from waad.heuristics.H2.machines_processing import MachinesProcessing
from waad.heuristics.H3.select_valid_accounts import SelectValidAccounts
from waad.utils.clustering import LongestCommonSubstringClustering
from waad.utils.postgreSQL_utils import Database, Table
###Output
_____no_output_____
###Markdown
Database
###Code
HOST = '127.0.0.1'
PORT = '5432'
USER = '' #To fill
PASSWORD = '' #To fill
DB_NAME = '' #To fill
TABLE_NAME = '' #To fill
db = Database(host=HOST, port=PORT, user=USER, password=PASSWORD, db_name=DB_NAME)
table = Table(db, table_name=TABLE_NAME)
###Output
_____no_output_____
###Markdown
Example on Accounts Load account names and pre-process them
###Code
esssttt = table.get_command(f"SELECT DISTINCT eventid, subjectusersid, subjectdomainname, subjectusername, targetusersid, targetdomainname, targetusername FROM {table.table_name}")
sva = SelectValidAccounts(esssttt)
sva.run()
valid_accounts = sva.valid_accounts
###Output
_____no_output_____
###Markdown
Apply clustering on accounts name
###Code
input_strings = list(set([account.name for account in valid_accounts if (account is not None and not account.name.endswith('$'))]))
lcsc = LongestCommonSubstringClustering(input_strings, min_samples=4, min_prefix_suffix_size=3, min_inside_size=4)
lcsc.run()
lcsc.plot_clusters()
###Output
_____no_output_____ |
tensorflow models/Eager.ipynb | ###Markdown
Copyright 2018 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Eager Execution View on TensorFlow.org Run in Google Colab View source on GitHub TensorFlow's eager execution is an imperative programming environment thatevaluates operations immediately, without building graphs: operations returnconcrete values instead of constructing a computational graph to run later. Thismakes it easy to get started with TensorFlow and debug models, and itreduces boilerplate as well. To follow along with this guide, run the codesamples below in an interactive `python` interpreter.Eager execution is a flexible machine learning platform for research andexperimentation, providing:* *An intuitive interface*—Structure your code naturally and use Python data structures. Quickly iterate on small models and small data.* *Easier debugging*—Call ops directly to inspect running models and test changes. Use standard Python debugging tools for immediate error reporting.* *Natural control flow*—Use Python control flow instead of graph control flow, simplifying the specification of dynamic models.Eager execution supports most TensorFlow operations and GPU acceleration. For acollection of examples running in eager execution, see:[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples).Note: Some models may experience increased overhead with eager executionenabled. Performance improvements are ongoing, but please[file a bug](https://github.com/tensorflow/tensorflow/issues) if you find aproblem and share your benchmarks. Setup and basic usage To start eager execution, add `tf.enable_eager_execution()` to the beginning ofthe program or console session. Do not add this operation to other modules thatthe program calls.
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
import tensorflow as tf
tf.enable_eager_execution()
###Output
_____no_output_____
###Markdown
Now you can run TensorFlow operations and the results will return immediately:
###Code
tf.executing_eagerly()
x = [[2.]]
m = tf.matmul(x, x)
print("hello, {}".format(m))
###Output
_____no_output_____
###Markdown
Enabling eager execution changes how TensorFlow operations behave: now they immediately evaluate and return their values to Python. `tf.Tensor` objects reference concrete values instead of symbolic handles to nodes in a computational graph. Since there isn't a computational graph to build and run later in a session, it's easy to inspect results using `print()` or a debugger. Evaluating, printing, and checking tensor values does not break the flow for computing gradients. Eager execution works nicely with [NumPy](http://www.numpy.org/). NumPy operations accept `tf.Tensor` arguments. TensorFlow [math operations](https://www.tensorflow.org/api_guides/python/math_ops) convert Python objects and NumPy arrays to `tf.Tensor` objects. The `tf.Tensor.numpy` method returns the object's value as a NumPy `ndarray`.
###Code
a = tf.constant([[1, 2],
[3, 4]])
print(a)
# Broadcasting support
b = tf.add(a, 1)
print(b)
# Operator overloading is supported
print(a * b)
# Use NumPy values
import numpy as np
c = np.multiply(a, b)
print(c)
# Obtain numpy value from a tensor:
print(a.numpy())
# => [[1 2]
# [3 4]]
###Output
_____no_output_____
###Markdown
The `tf.contrib.eager` module contains symbols available to both eager and graph execution environments and is useful for writing code to [work with graphs](work_with_graphs):
###Code
tfe = tf.contrib.eager
###Output
_____no_output_____
###Markdown
Dynamic control flow A major benefit of eager execution is that all the functionality of the host language is available while your model is executing. So, for example, it is easy to write [fizzbuzz](https://en.wikipedia.org/wiki/Fizz_buzz):
###Code
def fizzbuzz(max_num):
counter = tf.constant(0)
max_num = tf.convert_to_tensor(max_num)
for num in range(1, max_num.numpy()+1):
num = tf.constant(num)
if int(num % 3) == 0 and int(num % 5) == 0:
print('FizzBuzz')
elif int(num % 3) == 0:
print('Fizz')
elif int(num % 5) == 0:
print('Buzz')
else:
print(num.numpy())
counter += 1
fizzbuzz(15)
###Output
_____no_output_____
###Markdown
This has conditionals that depend on tensor values and it prints these values at runtime. Build a model Many machine learning models are represented by composing layers. When using TensorFlow with eager execution you can either write your own layers or use a layer provided in the `tf.keras.layers` package. While you can use any Python object to represent a layer, TensorFlow has `tf.keras.layers.Layer` as a convenient base class. Inherit from it to implement your own layer:
###Code
class MySimpleLayer(tf.keras.layers.Layer):
def __init__(self, output_units):
super(MySimpleLayer, self).__init__()
self.output_units = output_units
def build(self, input_shape):
# The build method gets called the first time your layer is used.
# Creating variables on build() allows you to make their shape depend
# on the input shape and hence removes the need for the user to specify
# full shapes. It is possible to create variables during __init__() if
# you already know their full shapes.
self.kernel = self.add_variable(
"kernel", [input_shape[-1], self.output_units])
def call(self, input):
# Override call() instead of __call__ so we can perform some bookkeeping.
return tf.matmul(input, self.kernel)
###Output
_____no_output_____
###Markdown
Use `tf.keras.layers.Dense` layer instead of `MySimpleLayer` above as it has a superset of its functionality (it can also add a bias). When composing layers into models you can use `tf.keras.Sequential` to represent models which are a linear stack of layers. It is easy to use for basic models:
###Code
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, input_shape=(784,)), # must declare input shape
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Alternatively, organize models in classes by inheriting from `tf.keras.Model`. This is a container for layers that is a layer itself, allowing `tf.keras.Model` objects to contain other `tf.keras.Model` objects.
###Code
class MNISTModel(tf.keras.Model):
def __init__(self):
super(MNISTModel, self).__init__()
self.dense1 = tf.keras.layers.Dense(units=10)
self.dense2 = tf.keras.layers.Dense(units=10)
def call(self, input):
"""Run the model."""
result = self.dense1(input)
result = self.dense2(result)
result = self.dense2(result) # reuse variables from dense2 layer
return result
model = MNISTModel()
###Output
_____no_output_____
###Markdown
It's not required to set an input shape for the `tf.keras.Model` class since the parameters are set the first time input is passed to the layer. `tf.keras.layers` classes create and contain their own model variables that are tied to the lifetime of their layer objects. To share layer variables, share their objects. Eager training Computing gradients [Automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation) is useful for implementing machine learning algorithms such as [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) for training neural networks. During eager execution, use `tf.GradientTape` to trace operations for computing gradients later. `tf.GradientTape` is an opt-in feature to provide maximal performance when not tracing. Since different operations can occur during each call, all forward-pass operations get recorded to a "tape". To compute the gradient, play the tape backwards and then discard. A particular `tf.GradientTape` can only compute one gradient; subsequent calls throw a runtime error.
###Code
w = tf.Variable([[1.0]])
with tf.GradientTape() as tape:
loss = w * w
grad = tape.gradient(loss, w)
print(grad) # => tf.Tensor([[ 2.]], shape=(1, 1), dtype=float32)
###Output
_____no_output_____
###Markdown
Train a model The following example creates a multi-layer model that classifies the standard MNIST handwritten digits. It demonstrates the optimizer and layer APIs to build trainable graphs in an eager execution environment.
###Code
# Fetch and format the mnist data
(mnist_images, mnist_labels), _ = tf.keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(tf.cast(mnist_images[...,tf.newaxis]/255, tf.float32),
tf.cast(mnist_labels,tf.int64)))
dataset = dataset.shuffle(1000).batch(32)
# Build the model
mnist_model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
###Output
_____no_output_____
###Markdown
Even without training, call the model and inspect the output in eager execution:
###Code
for images,labels in dataset.take(1):
print("Logits: ", mnist_model(images[0:1]).numpy())
###Output
_____no_output_____
###Markdown
While keras models have a builtin training loop (using the `fit` method), sometimes you need more customization. Here's an example of a training loop implemented with eager:
###Code
optimizer = tf.train.AdamOptimizer()
loss_history = []
for (batch, (images, labels)) in enumerate(dataset.take(400)):
if batch % 10 == 0:
print('.', end='')
with tf.GradientTape() as tape:
logits = mnist_model(images, training=True)
loss_value = tf.losses.sparse_softmax_cross_entropy(labels, logits)
loss_history.append(loss_value.numpy())
grads = tape.gradient(loss_value, mnist_model.trainable_variables)
optimizer.apply_gradients(zip(grads, mnist_model.trainable_variables),
global_step=tf.train.get_or_create_global_step())
import matplotlib.pyplot as plt
plt.plot(loss_history)
plt.xlabel('Batch #')
plt.ylabel('Loss [entropy]')
###Output
_____no_output_____
###Markdown
Variables and optimizers `tf.Variable` objects store mutable `tf.Tensor` values accessed during training to make automatic differentiation easier. The parameters of a model can be encapsulated in classes as variables. Better encapsulate model parameters by using `tf.Variable` with `tf.GradientTape`. For example, the automatic differentiation example above can be rewritten:
###Code
class Model(tf.keras.Model):
def __init__(self):
super(Model, self).__init__()
self.W = tf.Variable(5., name='weight')
self.B = tf.Variable(10., name='bias')
def call(self, inputs):
return inputs * self.W + self.B
# A toy dataset of points around 3 * x + 2
NUM_EXAMPLES = 2000
training_inputs = tf.random_normal([NUM_EXAMPLES])
noise = tf.random_normal([NUM_EXAMPLES])
training_outputs = training_inputs * 3 + 2 + noise
# The loss function to be optimized
def loss(model, inputs, targets):
error = model(inputs) - targets
return tf.reduce_mean(tf.square(error))
def grad(model, inputs, targets):
with tf.GradientTape() as tape:
loss_value = loss(model, inputs, targets)
return tape.gradient(loss_value, [model.W, model.B])
# Define:
# 1. A model.
# 2. Derivatives of a loss function with respect to model parameters.
# 3. A strategy for updating the variables based on the derivatives.
model = Model()
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.01)
print("Initial loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
# Training loop
for i in range(300):
grads = grad(model, training_inputs, training_outputs)
optimizer.apply_gradients(zip(grads, [model.W, model.B]),
global_step=tf.train.get_or_create_global_step())
if i % 20 == 0:
print("Loss at step {:03d}: {:.3f}".format(i, loss(model, training_inputs, training_outputs)))
print("Final loss: {:.3f}".format(loss(model, training_inputs, training_outputs)))
print("W = {}, B = {}".format(model.W.numpy(), model.B.numpy()))
###Output
_____no_output_____
###Markdown
Use objects for state during eager execution With graph execution, program state (such as the variables) is stored in global collections and their lifetime is managed by the `tf.Session` object. In contrast, during eager execution the lifetime of state objects is determined by the lifetime of their corresponding Python object. Variables are objects During eager execution, variables persist until the last reference to the object is removed, and the variable is then deleted.
###Code
if tf.test.is_gpu_available():
with tf.device("gpu:0"):
v = tf.Variable(tf.random_normal([1000, 1000]))
v = None # v no longer takes up GPU memory
###Output
_____no_output_____
###Markdown
Object-based saving `tf.train.Checkpoint` can save and restore `tf.Variable`s to and from checkpoints:
###Code
x = tf.Variable(10.)
checkpoint = tf.train.Checkpoint(x=x)
x.assign(2.) # Assign a new value to the variables and save.
checkpoint_path = './ckpt/'
checkpoint.save('./ckpt/')
x.assign(11.) # Change the variable after saving.
# Restore values from the checkpoint
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_path))
print(x) # => 2.0
###Output
_____no_output_____
###Markdown
To save and load models, `tf.train.Checkpoint` stores the internal state of objects, without requiring hidden variables. To record the state of a `model`, an `optimizer`, and a global step, pass them to a `tf.train.Checkpoint`:
###Code
import os
import tempfile
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(16,[3,3], activation='relu'),
tf.keras.layers.GlobalAveragePooling2D(),
tf.keras.layers.Dense(10)
])
optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
checkpoint_dir = tempfile.mkdtemp()
checkpoint_prefix = os.path.join(checkpoint_dir, "ckpt")
root = tf.train.Checkpoint(optimizer=optimizer,
model=model,
optimizer_step=tf.train.get_or_create_global_step())
root.save(checkpoint_prefix)
root.restore(tf.train.latest_checkpoint(checkpoint_dir))
###Output
_____no_output_____
###Markdown
Object-oriented metrics `tfe.metrics` are stored as objects. Update a metric by passing the new data to the callable, and retrieve the result using the `tfe.metrics.result` method, for example:
###Code
m = tfe.metrics.Mean("loss")
m(0)
m(5)
m.result() # => 2.5
m([8, 9])
m.result() # => 5.5
###Output
_____no_output_____
###Markdown
Summaries and TensorBoard [TensorBoard](../guide/summaries_and_tensorboard.md) is a visualization tool for understanding, debugging and optimizing the model training process. It uses summary events that are written while executing the program. `tf.contrib.summary` is compatible with both eager and graph execution environments. Summary operations, such as `tf.contrib.summary.scalar`, are inserted during model construction. For example, to record summaries once every 100 global steps:
###Code
global_step = tf.train.get_or_create_global_step()
logdir = "./tb/"
writer = tf.contrib.summary.create_file_writer(logdir)
writer.set_as_default()
for _ in range(10):
global_step.assign_add(1)
# Must include a record_summaries method
with tf.contrib.summary.record_summaries_every_n_global_steps(100):
# your model code goes here
tf.contrib.summary.scalar('global_step', global_step)
!ls tb/
###Output
_____no_output_____
###Markdown
Advanced automatic differentiation topics Dynamic models `tf.GradientTape` can also be used in dynamic models. This example for a [backtracking line search](https://wikipedia.org/wiki/Backtracking_line_search) algorithm looks like normal NumPy code, except it has gradients and is differentiable, despite the complex control flow:
###Code
def line_search_step(fn, init_x, rate=1.0):
with tf.GradientTape() as tape:
# Variables are automatically recorded, but manually watch a tensor
tape.watch(init_x)
value = fn(init_x)
grad = tape.gradient(value, init_x)
grad_norm = tf.reduce_sum(grad * grad)
init_value = value
while value > init_value - rate * grad_norm:
x = init_x - rate * grad
value = fn(x)
rate /= 2.0
return x, value
###Output
_____no_output_____
###Markdown
Additional functions to compute gradients `tf.GradientTape` is a powerful interface for computing gradients, but there is another [Autograd](https://github.com/HIPS/autograd)-style API available for automatic differentiation. These functions are useful if writing math code with only tensors and gradient functions, and without `tf.variables`:* `tfe.gradients_function` —Returns a function that computes the derivatives of its input function parameter with respect to its arguments. The input function parameter must return a scalar value. When the returned function is invoked, it returns a list of `tf.Tensor` objects: one element for each argument of the input function. Since anything of interest must be passed as a function parameter, this becomes unwieldy if there's a dependency on many trainable parameters.* `tfe.value_and_gradients_function` —Similar to `tfe.gradients_function`, but when the returned function is invoked, it returns the value from the input function in addition to the list of derivatives of the input function with respect to its arguments. In the following example, `tfe.gradients_function` takes the `square` function as an argument and returns a function that computes the partial derivatives of `square` with respect to its inputs. To calculate the derivative of `square` at `3`, `grad(3.0)` returns `6`.
###Code
def square(x):
return tf.multiply(x, x)
grad = tfe.gradients_function(square)
square(3.).numpy()
grad(3.)[0].numpy()
# The second-order derivative of square:
gradgrad = tfe.gradients_function(lambda x: grad(x)[0])
gradgrad(3.)[0].numpy()
# The third-order derivative is None:
gradgradgrad = tfe.gradients_function(lambda x: gradgrad(x)[0])
gradgradgrad(3.)
# With flow control:
def abs(x):
return x if x > 0. else -x
grad = tfe.gradients_function(abs)
grad(3.)[0].numpy()
grad(-3.)[0].numpy()
###Output
_____no_output_____
###Markdown
Custom gradients Custom gradients are an easy way to override gradients in eager and graph execution. Within the forward function, define the gradient with respect to the inputs, outputs, or intermediate results. For example, here's an easy way to clip the norm of the gradients in the backward pass:
###Code
@tf.custom_gradient
def clip_gradient_by_norm(x, norm):
y = tf.identity(x)
def grad_fn(dresult):
return [tf.clip_by_norm(dresult, norm), None]
return y, grad_fn
###Output
_____no_output_____
###Markdown
Custom gradients are commonly used to provide a numerically stable gradient for a sequence of operations:
###Code
def log1pexp(x):
return tf.log(1 + tf.exp(x))
grad_log1pexp = tfe.gradients_function(log1pexp)
# The gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# However, x = 100 fails because of numerical instability.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Here, the `log1pexp` function can be analytically simplified with a custom gradient. The implementation below reuses the value for `tf.exp(x)` that is computed during the forward pass, making it more efficient by eliminating redundant calculations:
###Code
@tf.custom_gradient
def log1pexp(x):
e = tf.exp(x)
def grad(dy):
return dy * (1 - 1 / (1 + e))
return tf.log(1 + e), grad
grad_log1pexp = tfe.gradients_function(log1pexp)
# As before, the gradient computation works fine at x = 0.
grad_log1pexp(0.)[0].numpy()
# And the gradient computation also works at x = 100.
grad_log1pexp(100.)[0].numpy()
###Output
_____no_output_____
###Markdown
Performance Computation is automatically offloaded to GPUs during eager execution. If you want control over where a computation runs you can enclose it in a `tf.device('/gpu:0')` block (or the CPU equivalent):
###Code
import time
def measure(x, steps):
# TensorFlow initializes a GPU the first time it's used, exclude from timing.
tf.matmul(x, x)
start = time.time()
for i in range(steps):
x = tf.matmul(x, x)
# tf.matmul can return before completing the matrix multiplication
# (e.g., can return after enqueing the operation on a CUDA stream).
# The x.numpy() call below will ensure that all enqueued operations
# have completed (and will also copy the result to host memory,
# so we're including a little more than just the matmul operation
# time).
_ = x.numpy()
end = time.time()
return end - start
shape = (1000, 1000)
steps = 200
print("Time to multiply a {} matrix by itself {} times:".format(shape, steps))
# Run on CPU:
with tf.device("/cpu:0"):
print("CPU: {} secs".format(measure(tf.random_normal(shape), steps)))
# Run on GPU, if available:
if tfe.num_gpus() > 0:
with tf.device("/gpu:0"):
print("GPU: {} secs".format(measure(tf.random_normal(shape), steps)))
else:
print("GPU: not found")
###Output
_____no_output_____
###Markdown
A `tf.Tensor` object can be copied to a different device to execute its operations:
###Code
if tf.test.is_gpu_available():
x = tf.random_normal([10, 10])
x_gpu0 = x.gpu()
x_cpu = x.cpu()
_ = tf.matmul(x_cpu, x_cpu) # Runs on CPU
_ = tf.matmul(x_gpu0, x_gpu0) # Runs on GPU:0
if tfe.num_gpus() > 1:
x_gpu1 = x.gpu(1)
_ = tf.matmul(x_gpu1, x_gpu1) # Runs on GPU:1
###Output
_____no_output_____
###Markdown
BenchmarksFor compute-heavy models, such as[ResNet50](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples/resnet50)training on a GPU, eager execution performance is comparable to graph execution.But this gap grows larger for models with less computation and there is work tobe done for optimizing hot code paths for models with lots of small operations. Work with graphsWhile eager execution makes development and debugging more interactive,TensorFlow graph execution has advantages for distributed training, performanceoptimizations, and production deployment. However, writing graph code can feeldifferent than writing regular Python code and more difficult to debug.For building and training graph-constructed models, the Python program firstbuilds a graph representing the computation, then invokes `Session.run` to sendthe graph for execution on the C++-based runtime. This provides:* Automatic differentiation using static autodiff.* Simple deployment to a platform independent server.* Graph-based optimizations (common subexpression elimination, constant-folding, etc.).* Compilation and kernel fusion.* Automatic distribution and replication (placing nodes on the distributed system).Deploying code written for eager execution is more difficult: either generate agraph from the model, or run the Python runtime and code directly on the server. Write compatible codeThe same code written for eager execution will also build a graph during graphexecution. Do this by simply running the same code in a new Python session whereeager execution is not enabled.Most TensorFlow operations work during eager execution, but there are some thingsto keep in mind:* Use `tf.data` for input processing instead of queues. It's faster and easier.* Use object-oriented layer APIs—like `tf.keras.layers` and `tf.keras.Model`—since they have explicit storage for variables.* Most model code works the same during eager and graph execution, but there are exceptions. (For example, dynamic models using Python control flow to change the computation based on inputs.)* Once eager execution is enabled with `tf.enable_eager_execution`, it cannot be turned off. Start a new Python session to return to graph execution.It's best to write code for both eager execution *and* graph execution. Thisgives you eager's interactive experimentation and debuggability with thedistributed performance benefits of graph execution.Write, debug, and iterate in eager execution, then import the model graph forproduction deployment. Use `tf.train.Checkpoint` to save and restore modelvariables, this allows movement between eager and graph execution environments.See the examples in:[tensorflow/contrib/eager/python/examples](https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/eager/python/examples). Use eager execution in a graph environmentSelectively enable eager execution in a TensorFlow graph environment using`tfe.py_func`. This is used when `tf.enable_eager_execution()` has *not*been called.
###Code
def my_py_func(x):
x = tf.matmul(x, x) # You can use tf ops
print(x) # but it's eager!
return x
with tf.Session() as sess:
x = tf.placeholder(dtype=tf.float32)
# Call eager function in graph!
pf = tfe.py_func(my_py_func, [x], tf.float32)
sess.run(pf, feed_dict={x: [[2.0]]}) # [[4.0]]
###Output
_____no_output_____ |
src/spacy_examples_visualizations.ipynb | ###Markdown
**1. Import statements**
###Code
import spacy
from spacy import displacy
###Output
_____no_output_____
###Markdown
**2. Load English model**
###Code
nlp = spacy.load('en')
doc = nlp('5:44 Mumbai slow local train cancel today (sent from Thane Stn.)')
###Output
_____no_output_____
###Markdown
**3. View word level attributes**
###Code
for token in doc:
print("{0}\t{1}\t{2}\t{3}\t\t{4}\t{5}\t{6}\t{7}".format(
token.text,
token.idx,
token.lemma_,
token.is_punct,
token.is_space,
token.shape_,
token.pos_,
token.tag_
))
###Output
5:44 0 5:44 False False d:dd NUM CD
Mumbai 5 mumbai False False Xxxxx PROPN NNP
slow 12 slow False False xxxx ADJ JJ
local 17 local False False xxxx ADJ JJ
train 23 train False False xxxx NOUN NN
cancel 29 cancel False False xxxx NOUN NN
today 36 today False False xxxx NOUN NN
42 False True SPACE
( 43 ( True False ( PUNCT -LRB-
sent 44 send False False xxxx VERB VBN
from 49 from False False xxxx ADP IN
Thane 54 thane False False Xxxxx PROPN NNP
Stn 60 stn False False Xxx PROPN NNP
. 63 . True False . PUNCT .
) 64 ) True False ) PUNCT -RRB-
###Markdown
**4. Named Entity Recognition (NER)**
###Code
for ent in doc.ents:
print(ent.text, ent.label_)
###Output
5:44 CARDINAL
Mumbai GPE
today DATE
Thane Stn FAC
###Markdown
**5. Visualize NER annotated entities**
###Code
displacy.render(doc, style='ent', jupyter=True)
###Output
_____no_output_____
###Markdown
**6. Chunking - automatically detect Noun Phrases**
###Code
for chunk in doc.noun_chunks:
print(chunk.text, chunk.label_, chunk.root.text)
###Output
5:44 Mumbai NP Mumbai
local train NP train
Thane Stn NP Stn
###Markdown
**7. Dependency Parsing**
###Code
for token in doc:
print("{0}/{1} <--{2}-- {3}/{4}".format(token.text, token.tag_, token.dep_, token.head.text, token.head.tag_))
###Output
5:44/CD <--nummod-- Mumbai/NNP
Mumbai/NNP <--nsubj-- slow/JJ
slow/JJ <--ROOT-- slow/JJ
local/JJ <--amod-- train/NN
train/NN <--nsubj-- cancel/NN
cancel/NN <--ccomp-- slow/JJ
today/NN <--npadvmod-- cancel/NN
/ <---- today/NN
(/-LRB- <--punct-- cancel/NN
sent/VBN <--advcl-- slow/JJ
from/IN <--prep-- sent/VBN
Thane/NNP <--compound-- Stn/NNP
Stn/NNP <--pobj-- from/IN
./. <--punct-- slow/JJ
)/-RRB- <--punct-- slow/JJ
###Markdown
**8. Visualize Dependency Tree**
###Code
displacy.render(doc, style='dep', jupyter=True, options={'distance': 90})
###Output
_____no_output_____ |
course_urls.ipynb | ###Markdown
Course URLs In this notebook I do the following: 1) Identify the highest scoring pages, for each base URL, from the Markov Crawler 2) Identify common features of URLs with high scores 3) Use PCA to identify combinations of URL features that lead to high scores 4) Identify compound phrases from text on the pages of these URLs 5) TODO: Cluster words to find topics 6) Plan the next step
###Code
%load_ext autoreload
%matplotlib inline
%autoreload 2
from sqlalchemy import create_engine
import pandas as pd
from urllib.parse import urlparse
from collections import Counter
import re
from nltk.corpus import stopwords
from sklearn.decomposition import PCA
from nltk.stem import WordNetLemmatizer
import numpy as np
import requests
from bs4 import BeautifulSoup
from graph_scorer import GraphScorer
from matplotlib import pyplot as plt
from compounder import Compounder
from fuzzywuzzy import process as fuzzy_process
from fuzzywuzzy import fuzz
import wikipedia
from collections import Counter
from sklearn.cluster import AffinityPropagation
from sklearn.cluster import KMeans
import gensim
lemmatizer = WordNetLemmatizer()
_stops = set(stopwords.words("english"))
'''
Split a string into lowercased, lemmatised alphabetic tokens (digits and punctuation act as separators). Useful for splitting URLs.
'''
def tokenize_alphanum(text):
words = list(filter(('').__ne__, re.split('[^a-zA-Z]',text)))
_words = []
for w in words:
w = w.lower()
w = lemmatizer.lemmatize(w)
_words.append(w)
return _words
'''
Split words according to a Viterbi Segmentation algorithm.
E.g. degreecourse -> degree course
Needs to be trained by a relevant corpus. Note: don't include words in the
corpus that you don't want to be found!
'''
class WordSplitter:
'''Define the word corpus'''
def __init__(self,words):
self.corpus = Counter(words)
self.max_word_length = max(map(len,self.corpus))
self.total = float(sum(self.corpus.values()))
'''Get the fraction of vocab corresponding to this word'''
def word_prob(self,word):
# A small workaround for plurals, could be extended to fuzzy matching
if word not in self.corpus:
if word[:-1] in self.corpus:
word = word[:-1]
return self.corpus[word] / self.total
'''Split word according to Viterbi algorithm'''
def split(self,text):
probs, lasts = [1.0], [0]
for i in range(1, len(text) + 1):
prob_k, k = max((probs[j] * self.word_prob(text[j:i]), j)
for j in range(max(0, i - self.max_word_length), i))
probs.append(prob_k)
lasts.append(k)
words = []
i = len(text)
while 0 < i:
words.append(text[lasts[i]:i])
i = lasts[i]
words.reverse()
score = probs[-1]
# If score is zero, return the original string
if score == 0.:
return [text],0
return words,score
###Output
_____no_output_____
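###Markdown
As a quick sanity check of the helpers above, a hypothetical toy run (the corpus below is made up for illustration, and the exact score depends on the corpus passed in):
```python
# Toy corpus: word frequencies act as the segmentation prior
toy_splitter = WordSplitter(["degree", "degree", "course", "courses"])
words, score = toy_splitter.split("degreecourse")
# should split into ['degree', 'course'] with a non-zero score
print(words, score)

# roughly: ['study', 'undergraduate', 'course'] after lemmatisation
print(tokenize_alphanum("/study/undergraduate/courses/2018"))
```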
###Markdown
Identify the highest scoring pages, for each base URL, from the Markov Crawler. Here I print out the top 5 highest-scoring pages for each base URL. The results are ok, although a bit mixed, as one might expect.
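For reference, the normalisation applied in the cell below is simply

$$\text{norm\_score}(p) = \frac{\text{score}(p)}{\text{score}(\text{base URL of } p)}$$

so a value above 1 means the page scored higher than its base URL.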
###Code
# Connect to database where data is found
with open("db.config") as f:
engine_url = f.read()
engine = create_engine(engine_url)
conn = engine.connect()
# Load data into dataframes
df_pages = pd.read_sql_table("page_scores",con=conn)
df_tops = pd.read_sql_table("top_urls",con=conn)
# Assign a base URL to each page, by iterating through
# each base URL
df_pages["top_url"] = None
for _,url in df_tops.url.items():
# Normalise the page to the score of the base URL, so extract
# the base URL's score
condition = df_pages.page == url
topurl_score = df_pages.loc[condition,"score"].values[0]
# Split the top URL, since only expect to find the netloc in a given page
# e.g. www.manchester.ac.uk and www.research.manchester.ac.uk
_parsed = urlparse(url)
_joined = _parsed.scheme +"://" + _parsed.netloc
condition = df_pages.page.str.contains(_parsed.netloc.replace("www.",""))
# Assign a normalised score, and the base URL
df_pages.loc[condition,"norm_score"] = df_pages.loc[condition,"score"]/topurl_score
df_pages.loc[condition,"top_url"] = _joined
# Print out "top 5" results
_df = df_pages.loc[condition].sort_values("norm_score",ascending=False).head(5)
print(_joined)
for _,row in _df.iterrows():
print(row["page"].replace(_joined,""),row["norm_score"])
print("-------------------")
###Output
http://www.manchester.ac.uk
/study/postgraduate-research/programmes/research-areas 1.4651873148963208
https://www.research.manchester.ac.uk/portal/en/publications/search.html 1.23936874236101
/study/undergraduate/courses/2018 1.2114514661583506
/study/postgraduate-research/contact 1.2099949126173424
/study/undergraduate/courses/2017 1.1546036591158086
-------------------
https://www.masdar.ac.ae
/visitor 1.4048698227547218
/research-education/partners-and-resources 1.346091716725332
/research-education/innovation-centers/iwater 1.3005225309011408
/research-education/degree-offerings 1.2972706155017852
/aboutus/useful-info/printed-material 1.2590632576989362
-------------------
http://www.birmingham.ac.uk
/university/colleges/artslaw/staff.aspx 2.129924767088331
/libraries/index.aspx 2.128522411821165
/schools/engineering/index.aspx 2.0381854442420098
/libraries/subject/index.aspx 2.03070238457869
/research/activity/clinical-sciences/index.aspx 1.8902829424172236
-------------------
https://www.ajman.ac.ae
http://education.ajman.ac.ae 4.486324047393929
/en/request-information 3.6183909493288784
/en/admission-and-registration/request-information.html 1.7809599080812295
http://engineering.ajman.ac.ae 1.2984538174508504
http://ajman.ac.ae 1.2690209234373755
-------------------
https://www.sheffield.ac.uk
/undergraduate/finance/fees-calculator 10.388672055566499
http://www.sheffield.ac.uk/prospectus/subjectList.do?prospectusYear=2018 4.641977144740597
http://www.sheffield.ac.uk/prospectus/courses-az.do?prospectusYear=2018 3.266722404665914
http://www.sheffield.ac.uk/prospectus/courses-az.do?prospectusYear=2017 2.6843273053223093
/alumni/keepintouch/alumni-keepintouch-update 2.642704686413461
-------------------
https://www.adu.ac.ae
/en-us/employment.aspx 1.8564880633817846
http://www.adu.ac.ae/en-us/programdetail.aspx?ProgramId=169 1.6722264723361364
http://www.adu.ac.ae/en-us/programdetail.aspx?ProgramId=164 1.6506548641185685
http://www.adu.ac.ae/en-us/aboutadu.aspx 1.6178328577948369
http://www.adu.ac.ae/en-us/departmentdetail.aspx?DepartmentId=5&CollegeId=224 1.487964881290488
-------------------
https://aau.ac.ae
http://regonline.aau.ac.ae/ain 6.450934717359815
/en/centers/elc/tests-calendar 2.069864276405709
/en/centers/elc/extracurricular-activities 1.8208636240485196
/en/academics/undergraduate-programs 1.7598508205926917
/en/admission/undergraduate/transcripts 1.6707760628805732
-------------------
https://www.aus.edu
http://www.aus.edu/info/200132/accreditation 6.5362885367714
http://www.aus.edu/info/200124/about_aus 3.6789391777181804
http://www.aus.edu/info/200135/undergraduate_programs/87/minors 2.2402535491966638
http://www.aus.edu/info/200135/undergraduate_programs 2.060794601776444
/info/200136/graduate_programs 1.7226674950175924
-------------------
https://gmu.ac.ae
http://gmu.ac.ae/college-dentistry 4.389155104499251
/aboutgmu/career 2.2542228784333376
http://mail.gmu.ac.ae 2.1529883960790794
http://gmu.ac.ae/college-medicine/bachelor-of-medicine-and-bachelor-of-surgery-mbbs 1.8543886487322037
/cash/gulfsim-medsim-2016 1.5655565564551808
-------------------
http://www.buid.ac.ae
/apply-sept-2017 1.311779120561331
/publications 1.3091733763565243
/academic-support-and-resources 1.267887855588195
/discover 1.2288016925160956
/PG_Diploma_and_Certificate 1.2164670248550251
-------------------
http://www.kustar.ac.ae
/pages/press-kit 3.8600644729378946
/pages/mazaya- 1.5534624773922587
/pages/%20http:/www.meduni-graz.at/DK_MCD/Call_Application.htm 1.407726526297515
/pages/academics 1.3552159225088083
/pages/graduate--programs 1.3349417876673806
-------------------
http://nyuad.nyu.edu
/en/academics/undergraduate-programs/minors.html 3.0576543174409503
/en/academics/global-education/academic-regional-travel.html 1.6439482147051716
/en/campus-life/public-safety.html 1.6234750765102213
/en/academics/academic-divisions/engineering.html 1.6027281745193502
/en/academics/executive-education.html 1.5887172138281134
-------------------
http://www.pi.ac.ae
/en/Academics 1.7279552060906147
/en/Pages/Contacts.aspx 1.2631383061419668
/en/Highlights/Pages/Details.aspx?ItemId=9 1.1558835584492677
/en/Research/GRC/Pages/default.aspx 1.0225594904386581
/en 1.0
-------------------
http://www.sharjah.ac.ae
/en/academics/colleges/ahss 2.750956057157518
/en/Libraries/Pages/default.aspx 2.656875408106748
/en/academics/degree-program/Pages/default.aspx 1.5249455765908317
/en/Life_on_Campus/Pages/scs.aspx 1.517964404833085
/en/about/Senior-Admin/DASS/Pages/default.aspx 1.49275951184652
-------------------
http://www.uowdubai.ac.ae
https://staff.uowdubai.ac.ae/feedback 2.2500191492908757
/events/health-activities-english-language-centre-students 2.174142900268991
/bachelor-engineering-electrical 2.042215550425715
/contact-us 1.8687304846197401
/about-uowd/our-professional-staff/office-of-institutional-research 1.7423601838932685
-------------------
http://www.zu.ac.ae
/main/en/graduate_programs/Graduate_Programs_Folder/index.aspx 3.7336753808939154
/main/en/colleges/index.aspx 3.622110646396689
http://library.zu.ac.ae/ftlist 3.556196617455738
/main/en/SAHIM/index.aspx 3.332867509807249
/main/en/_ice/index.aspx 2.828147719294743
-------------------
###Markdown
Identify common features of URLs with high scores. In order to generate more meaningful results, I opt to analyse common features of URLs with high scores so as to filter out spurious results. First I plot the distribution of scores, which is useful if only to show that the distribution is at least approximately symmetric. I originally attempted to sample more uniformly across the distribution, but it turns out that this isn't really necessary. The first step, however, is to exclude very bad pages by requiring non-zero scores.
###Code
# Only use non-zero scores
df_pages = df_pages.loc[df_pages.score > 0]
# Plot the log of the scores
log_scores = df_pages["norm_score"]
bins = np.arange(min(log_scores),max(log_scores),0.1)
log_scores.hist(bins=bins)
# freqs,lims = np.histogram(log_scores,bins)
# def get_freq(value):
# if np.fabs(value) > 4:
# return 0
# for i,x in enumerate(lims):
# if value <= x:
# if freqs[i-1] == 0:
# return 0
# return 1/freqs[i-1]**3
# return 0
# df_pages["weights"] = log_scores.apply(get_freq)
###Output
_____no_output_____
###Markdown
URL contents. Before analysing URL contents, it is necessary to remove any common "junk" associated with university URLs. I additionally include "html" and "aspx" as these don't provide any predictive power. I therefore compile a list of stop words, in the context of academic URLs, and also include the standard NLTK stop words. I then make a count of the common non-stop terms found in URLs.
###Code
# Generate a list of stops from base URLs
stop_urls = ["html","aspx"]
for url in df_tops.url.values:
stop_urls += tokenize_alphanum(url)
stop_urls = set(stop_urls).union(_stops)
# Count common non-stops
common_words = []
for url in df_pages.page.values:
for w in tokenize_alphanum(url):
if not w in stop_urls:
common_words.append(w)
c = Counter(common_words)
###Output
_____no_output_____
###Markdown
One caveat with the above procedure is that I will miss URLs with wordscompoundedtogether. Therefore, I train a word splitter (see near the top of the notebook for WordSplitter) with the 200 most common words found, so that 'hidden' words like "degreecourse" should be found as "degree" "course".
###Code
# Train a word splitter with the most common words,
# but first reweight each word by it's occurence
_common_words = []
for w,s in c.most_common(200):
# This way earch word, w, will appear s times
_common_words += [w]*s
wsplitter = WordSplitter(_common_words)
_common_words = set(_common_words)
# Try to find any hidden words
common_words = []
for url in df_pages.page.values:
# Split the URL into words
for w in tokenize_alphanum(url):
# If not a stop word
if w in stop_urls:
continue
# Get the split word
split_word,score = wsplitter.split(w)
# If the word can't be split
if score == 0 or len(split_word) == 1:
common_words.append(w)
# Otherwise append the components
else:
for s in split_word:
common_words.append(s)
# Recount the words, ignore short words (en,u etc)
c = Counter(common_words)
common_words = [w for w,_ in c.most_common(200) if len(w) > 2]
###Output
_____no_output_____
###Markdown
Finally, the contents of each page's URL can be formalised by creating a binary variable for each common word.
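A more compact, equivalent way to express the check performed in the next cell (a sketch reusing `tokenize_alphanum` and `wsplitter` defined above; `contains_common_word` is a hypothetical helper name):
```python
def contains_common_word(url, w):
    # True if the common word w appears in the URL tokens, either directly
    # or inside a compounded token that the word splitter can break apart
    tokens = tokenize_alphanum(url)
    if w in tokens:
        return True
    for token in tokens:
        split_word, score = wsplitter.split(token)
        if score > 0 and len(split_word) > 1 and w in split_word:
            return True
    return False

# e.g. df_pages["_" + w] = df_pages.page.apply(lambda u: contains_common_word(u, w))
```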
###Code
# Create a binary variable for each common word
for w in common_words:
# Generate one bool per URL in df_pages
condition = []
for url in df_pages.page.values:
# Check whether the tokens contain the common word
url_tokens = tokenize_alphanum(url)
# Firstly, is the word in the URL?
found_word = w in url_tokens
# Otherwise, check whether any tokens can be split to
# find the common word
if not found_word:
for token in url_tokens:
split_word,score = wsplitter.split(token)
if score > 0 and len(split_word) > 1:
found_word = w in split_word
# Append this flag
condition.append(found_word)
# Append this word's condition (Note: I use an underscore to avoid column conflicts)
df_pages["_"+w] = condition
###Output
_____no_output_____
###Markdown
Distribution of URL terms. Now I look at the distribution of URL terms. Firstly I identify words that are definitely unimportant or detrimental to a high score by considering whether the mean score of URLs containing each word is *significantly* higher than the overall mean score. Note that significance is defined in terms of standard deviations. Note: this is where I previously mentioned that I could sample a uniform distribution with weights.
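Concretely, the significance computed in the next cell is

$$s_w = \frac{\mu_w - \mu}{\sqrt{\sigma^2 + \sigma_w^2}}$$

where $\mu_w$ is the mean `norm_score` of pages whose URL contains $w$, $\mu$ and $\sigma$ are the mean and standard deviation of `norm_score` over all pages, and $\sigma_w$ is the standard deviation of the indicator column for $w$.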
###Code
_df = df_pages #.sample(n=1000,weights="weights")
# Columns not related to words
drop_cols = ["page","score","norm_score","top_url"]
# Decide which of the words are actually important by comparing their effect on the mean
d = {}
mean = _df["norm_score"].mean()
std = _df["norm_score"].std()
for col in _df.drop(drop_cols,axis=1).columns:
_mean = _df.loc[_df[col],"norm_score"].mean()
_std = _df[col].std()
d[col] = (_mean - mean)/np.sqrt(std*std + _std*_std)
# Plot the results
fig,ax = plt.subplots(figsize=(10,6))
_ = ax.hist(list(d.values()),bins=50)
ax.set_ylabel("Frequency")
ax.set_xlabel("Significance of score, relative to the mean score")
###Output
_____no_output_____
###Markdown
The above is enough to motivate cutting words with a significance below 0.1.
###Code
# Only keep the most significant words
for w,s in Counter(d).most_common():
if s < 0.1:
_df.drop(w,axis=1,inplace=True)
len(_df.columns)
###Output
_____no_output_____
###Markdown
Combinations of URL terms. Now I perform PCA on the words identified above, fitted with respect to the normalised score.
###Code
pca = PCA()
_pca = pca.fit(y=_df["norm_score"],X=_df.drop(drop_cols,axis=1))
###Output
_____no_output_____
###Markdown
It is found that the first nine or so components describe 50% of the variance, which at first seems pretty bad. However, I believe this is obscured by the fact that the distribution of `norm_score` is very noisy. The principal N components could therefore identify combinations of key words that rise above this noise and are good indicators of pages containing courses. For example, the highest PC explains nearly 10% of the variance, which is actually pretty huge.
###Code
total = 0
for i,x in enumerate(_pca.explained_variance_ratio_):
total += x
#print(total)
if total > 0.5:
print(i)
break
print(_pca.explained_variance_ratio_)
###Output
9
[ 0.11017301 0.08518736 0.05252114 0.04905744 0.0438438 0.04217498
0.04085477 0.03841977 0.02997759 0.02884661 0.02691523 0.02663878
0.02624788 0.02587565 0.02304958 0.02204769 0.02124936 0.02049994
0.01617408 0.01585362 0.01523768 0.01481933 0.01455394 0.01421435
0.01370439 0.0132476 0.01230321 0.0120076 0.01010983 0.00870186
0.0086139 0.00847353 0.00827081 0.00771417 0.00760734 0.00746846
0.00739634 0.00720747 0.0070589 0.00675965 0.0062417 0.00618848
0.00611109 0.00597404 0.00564725 0.00543089 0.00521255 0.0035512
0.00327683 0.00128733]
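###Markdown
The loop above can be written more compactly with NumPy; a small equivalent sketch:
```python
import numpy as np
cumulative = np.cumsum(_pca.explained_variance_ratio_)
# First index at which the cumulative explained variance exceeds 0.5
print(np.argmax(cumulative > 0.5))
```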
###Markdown
To motivate this further, look at the five highest-weighted words in each PC that describes more than 5% of the variance (below). They look very sensible.
###Code
for i,(comp,var) in enumerate(zip(_pca.components_,_pca.explained_variance_ratio_)):
if var < 0.05:
break
d = {w:c for w,c in zip(_df.drop(drop_cols,axis=1).columns,comp)}
print(i,var)
for w,c in Counter(d).most_common(5):
print("\t",w,c)
print("")
###Output
0 0.110173010725
_program 0.789462543071
_undergraduate 0.543052020129
_graduate 0.114352417181
_requirement 0.108690108532
_postgraduate 0.0637612399296
1 0.0851873574167
_college 0.957047466055
_program 0.154267280266
_graduate 0.122755340674
_science 0.0957872707366
_art 0.0763989750428
2 0.0525211355292
_program 0.444767430156
_degree 0.244966274832
_graduate 0.230994775341
_engineering 0.170119680704
_postgraduate 0.136421248974
###Markdown
Therefore, I use the first 3 PCs to rank the URLs from which to extract course information. For each base URL, I take the top 5 (`DataFrame.head`) URLs ranked by each of these 3 PCs and take the set of these (therefore a maximum of 15 URLs per base URL).
###Code
urls_to_try = {}
for i in range(0,3):
# Calculate the i'th PC
pc = "pca"+str(i)
_cols = drop_cols
_df[pc] = [x[i] for x in _pca.transform(_df.drop(_cols,axis=1))]
# Get the top values of the i'th PC per top_url
for top_url,grouped in _df.groupby("top_url"):
# If not already added this top_url to the list
if top_url not in urls_to_try:
urls_to_try[top_url] = set()
for v in grouped.sort_values(pc,ascending=False).head()["page"].values:
urls_to_try[top_url].add(v)
# Make sure this new column is dropped in future iterations of this forloop
if pc not in drop_cols:
_cols.append(pc)
urls_to_try["https://www.masdar.ac.ae"].add("https://www.masdar.ac.ae/research-education/degree-programs/undergraduate-programs")
urls_to_try["https://www.masdar.ac.ae"].add("https://www.masdar.ac.ae/research-education/degree-programs/doctorate-programs")
urls_to_try["https://www.masdar.ac.ae"].add("https://www.masdar.ac.ae/research-education/degree-programs/master-s-programs")
urls_to_try["http://www.kustar.ac.ae"].add("http://www.kustar.ac.ae/pages/undergraduate-programs")
urls_to_try["http://www.zu.ac.ae"].add("https://www.zu.ac.ae/main//en/admission/undergraduate-programs/programs.aspx")
urls_to_try["https://www.aus.edu"].add("https://www.aus.edu/info/200135/undergraduate_programs")
urls_to_try["https://www.aus.edu"].add("https://www.aus.edu/info/200136/graduate_programs")
urls_to_try["https://aau.ac.ae"].add("https://aau.ac.ae/en/academics/undergraduate-programs/")
urls_to_try["https://aau.ac.ae"].add("https://aau.ac.ae/en/academics/graduate-programs/")
urls_to_try["https://www.ajman.ac.ae"].add("https://www.ajman.ac.ae/en/admission-and-registration/undergraduate/programs-offered.html")
urls_to_try["https://www.ajman.ac.ae"].add("https://www.ajman.ac.ae/en/admission-and-registration/graduate-admissions/graduate-programs-offered.html")
urls_to_try["http://nyuad.nyu.edu"].add("http://nyuad.nyu.edu/en/academics/undergraduate-programs/majors.html")
urls_to_try["http://www.kustar.ac.ae"].add("http://www.kustar.ac.ae/pages/undergraduate-admissions")
urls_to_try["http://www.kustar.ac.ae"].add("http://www.kustar.ac.ae/pages/graduate-admissions")
###Output
_____no_output_____
###Markdown
The URLs I find are given below (Note: these aren't ordered by preference):
###Code
for top_url,urls in urls_to_try.items():
print(top_url,len(urls))
for url in urls:
print("\t",url)
print()
###Output
https://www.sheffield.ac.uk 12
https://www.sheffield.ac.uk/postgraduate/taught/courses/arts/biblical/religion-leadership-society-ma
https://www.sheffield.ac.uk/postgraduate/taught
https://www.sheffield.ac.uk/undergraduate/finance
https://www.sheffield.ac.uk/postgraduate/accommodation
http://www.sheffield.ac.uk/undergraduate/finance/fees/2017
https://www.sheffield.ac.uk/undergraduate/finance/fees-calculator
https://www.sheffield.ac.uk/postgraduate/taught/apply
https://www.sheffield.ac.uk/postgraduate/taught/courses/arts-humanities
https://www.sheffield.ac.uk/postgraduate/taught/courses/pure-science
https://www.sheffield.ac.uk/news/nr/best-universities-arts-humanities-rankings-world-1.730131
https://www.sheffield.ac.uk/undergraduate/why/facilities
https://www.sheffield.ac.uk/postgraduate/taught/distance-learning
http://www.sharjah.ac.ae 14
http://www.sharjah.ac.ae/en/academics/degree-program/Pages/default.aspx
http://www.sharjah.ac.ae/en/about/contact_us/Pages/default.aspx
http://www.sharjah.ac.ae/en/academics/colleges/dentistry
http://www.sharjah.ac.ae/en/Admissions/fees-scholar/Pages/pt.aspx
http://www.sharjah.ac.ae/en/Admissions/fees-scholar/Pages/default.aspx
http://www.sharjah.ac.ae/en/academics/Colleges/dentistry/Pages/Tes.aspx
http://www.sharjah.ac.ae/en/academics/Colleges/fa/Pages/dw.aspx
http://www.sharjah.ac.ae/en/academics/colleges/comcol
http://www.sharjah.ac.ae/en/Administration/Pages/default.aspx
http://www.sharjah.ac.ae/en/academics/Colleges/ahss/Pages/dw.aspx
http://www.sharjah.ac.ae/en/Admissions/fees-scholar/Pages/cal.aspx
http://www.sharjah.ac.ae/en/Media/Pages/UOS20Y.aspx
http://www.sharjah.ac.ae/en/academics/Pages/default.aspx
http://www.sharjah.ac.ae/en/Research/spu/Pages/default.aspx
http://www.manchester.ac.uk 10
http://www.manchester.ac.uk/study/undergraduate/courses/2017
http://www.manchester.ac.uk/study/postgraduate-certificate-diploma
http://www.manchester.ac.uk/study/undergraduate/mature-students
http://www.manchester.ac.uk/study/postgraduate-research/open-days-fairs
http://www.manchester.ac.uk/study/undergraduate/courses
http://www.manchester.ac.uk/study/postgraduate-research/programmes/research-areas
http://www.manchester.ac.uk/study/undergraduate/courses/2018
http://www.manchester.ac.uk/study/postgraduate-research/contact
http://www.manchester.ac.uk/study/undergraduate/prospectus
http://www.manchester.ac.uk/study/postgraduate-research/admissions
https://aau.ac.ae 11
https://aau.ac.ae/en/news/2017/al-ain-university-of-science-and-technology-starts-to-develop-its-campus-in-al-ain
https://aau.ac.ae/en/admission/undergraduate/general-education-requirements
https://aau.ac.ae/en/academics/graduate-programs/
https://aau.ac.ae/en/deanships/sci-research&graduate-studies/vision-and-mission
https://aau.ac.ae/en/admission/undergraduate/course-grading-system
https://aau.ac.ae/en/academics/graduate-programs
https://aau.ac.ae/en/deanships/sci-research&graduate-studies/community-engagement
https://aau.ac.ae/en/admission/undergraduate/general-admission-requirements
https://aau.ac.ae/en/academics/undergraduate-programs/
https://aau.ac.ae/en/academics/undergraduate-programs
https://aau.ac.ae/en/deanships/sci-research&graduate-studies/deans-message
https://www.ajman.ac.ae 14
https://www.ajman.ac.ae/en/Colleges.html
https://www.ajman.ac.ae/en/academics/academics/colleges/unit-of-general-studies
https://www.ajman.ac.ae/en/admission-and-registration/graduate-admissions/documents-required.html
https://www.ajman.ac.ae/en/admission-and-registration/undergraduate/programs-offered.html
https://www.ajman.ac.ae/en/admission-and-registration/undergraduate/admission-requirements.html
https://www.ajman.ac.ae/en/admission-and-registration/undergraduate-admissions.html
https://www.ajman.ac.ae/en/academics/colleges.html
https://engineering.ajman.ac.ae/en/mission-of-college-of-engineering.html
https://www.ajman.ac.ae/en/admission-and-registration/graduate-admissions/transfer-from-other-universities-150611-125719.html
https://www.ajman.ac.ae/en/admission-and-registration/undergraduate/application-process.html
https://www.ajman.ac.ae/en/admission-and-registration/financial-information/graduate-financial-information.html
https://pharmacy.ajman.ac.ae/en/news/2017/college-of-pharmacy-celebrates-outstanding-would-be-graduates.html
https://www.ajman.ac.ae/en/community-service-department-csd/community-programs-and-activities.html
https://www.ajman.ac.ae/en/admission-and-registration/graduate-admissions/graduate-programs-offered.html
https://www.masdar.ac.ae 9
https://www.masdar.ac.ae/research-education/outreach-programs/summer-research-internships
https://www.masdar.ac.ae/research-education/degree-programs/undergraduate-programs
https://www.masdar.ac.ae/research-education/degree-offerings/phd-program
https://www.masdar.ac.ae/research-education/degree-offerings/msc-in-materials-science-and-engineering
https://www.masdar.ac.ae/research-education/degree-programs/master-s-programs
https://www.masdar.ac.ae/research-education/outreach-programs/yfel
https://www.masdar.ac.ae/programs
https://www.masdar.ac.ae/research-education/degree-offerings/practicing-professionals-program
https://www.masdar.ac.ae/research-education/degree-programs/doctorate-programs
http://www.birmingham.ac.uk 12
http://www.birmingham.ac.uk/research/activity/inflammation-ageing/courses/index.aspx
http://www.birmingham.ac.uk/dubai/register/index.aspx
http://www.birmingham.ac.uk/undergraduate/visit/opendays/index.aspx
http://www.birmingham.ac.uk/university/colleges/artslaw/staff.aspx
http://www.birmingham.ac.uk/university/colleges/mds/news/2017/07/karim-raza-aruk.aspx
http://www.birmingham.ac.uk/university/colleges/artslaw/index.aspx
http://www.birmingham.ac.uk/dubai/index.aspx
http://www.birmingham.ac.uk/schools/engineering/index.aspx
http://www.birmingham.ac.uk/research/activity/clinical-sciences/index.aspx
http://www.birmingham.ac.uk/undergraduate/visit/index.aspx
http://www.birmingham.ac.uk/contact/directions/index.aspx
http://www.birmingham.ac.uk/undergraduate/prospectus/index.aspx
http://www.pi.ac.ae 7
http://www.pi.ac.ae/en/Highlights/Pages/Details.aspx?ItemId=16
http://www.pi.ac.ae/en/Students
http://www.pi.ac.ae/en/UpcomingEvents/Pages/Details.aspx?ItemId=9
http://www.pi.ac.ae/en/Admissions/Undergraduate/Pages/default.aspx
http://www.pi.ac.ae/en/Pages/Contacts.aspx
http://www.pi.ac.ae/en/the-merge/pages/BoardOfTrustees.aspx
http://www.pi.ac.ae/en/Academics
http://www.uowdubai.ac.ae 10
http://www.uowdubai.ac.ae/undergraduate-programs/scholarships-and-tuition-grants
http://www.uowdubai.ac.ae/undergraduate-programs/subject-descriptions
http://www.uowdubai.ac.ae/postgraduate-programs/subject-descriptions
http://www.uowdubai.ac.ae/undergraduate-programs/fees-and-payment-information
http://www.uowdubai.ac.ae/undergraduate-programs/admission-requirements
http://www.uowdubai.ac.ae/computer-science-and-engineering-programs/bachelor-of-computer-science-multimedia-and-game-development-bcompsc-mmgd-degree
http://www.uowdubai.ac.ae/postgraduate-research-programs/fees-and-payment-information
http://www.uowdubai.ac.ae/computer-science-and-engineering-programs/bachelor-of-computer-science-bcompsc-degree
http://www.uowdubai.ac.ae/undergraduate-programs/hear-from-our-students-and-graduates
https://www.uowdubai.ac.ae/finance-and-accounting-programs/bachelor-of-commerce-accountancy-bcom-acc-degree
http://www.buid.ac.ae 11
http://buid.ac.ae/admission-requirements
http://www.buid.ac.ae/Programmes-Fees
http://buid.ac.ae/bdrc-2015-gallery
http://buid.ac.ae/PhD-Training-Courses
http://www.buid.ac.ae/phd-in-business-management
http://buid.ac.ae/CLDR-Finance-Scholarships
http://www.buid.ac.ae/Student-Visa-Information
http://www.buid.ac.ae/about-uae-dubai
http://www.buid.ac.ae/contact-us
http://www.buid.ac.ae/faculty-of-business
http://www.buid.ac.ae/welcome-message-chancellor
https://www.aus.edu 16
http://www.aus.edu/info/200135/undergraduate_programs/411/deadlines
http://www.aus.edu/info/200135/undergraduate_programs/504/apply
http://www.aus.edu/info/200170/college_of_architecture_art_and_design/168/departments
https://www.aus.edu/info/200194/new_student_information/350/achievement_academy_outreach_program
http://www.aus.edu/info/200135/undergraduate_programs/158/residential_halls/1
http://www.aus.edu/info/200136/graduate_programs/409/deadlines
http://www.aus.edu/info/200135/undergraduate_programs/414/faq_and_infodesk
http://www.aus.edu/info/200168/college_of_arts_and_sciences/175/departments
http://www.aus.edu/galleries/gallery/31/college_of_arts_and_sciences
http://www.aus.edu/info/200136/graduate_programs/392/admission_requirements
http://www.aus.edu/info/200135/undergraduate_programs/87/minors
http://www.aus.edu/info/200170/college_of_architecture_art_and_design
https://www.aus.edu/info/200136/graduate_programs
https://www.aus.edu/info/200135/undergraduate_programs
http://www.aus.edu/info/200198/college_of_architecture_art_and_design
http://www.aus.edu/info/200136/graduate_programs/158/residential_halls
https://gmu.ac.ae 10
https://gmu.ac.ae/college-of-graduate-studies/course-counselling
https://gmu.ac.ae/admissions/annual-intake-fee-structure
https://gmu.ac.ae/college-of-graduate-studies/master-of-physical-therapy
https://gmu.ac.ae/college-dentistry/course-counselling
https://gmu.ac.ae/college-of-graduate-studies
https://gmu.ac.ae/admissions/undergraduate-admission-requirements-2
https://gmu.ac.ae/college-of-graduate-studies/master-of-science-in-clinical-pathology-ms-cp
https://gmu.ac.ae
https://gmu.ac.ae/admissions/graduate-admission-requirements
https://gmu.ac.ae/the-campus
http://www.kustar.ac.ae 13
http://www.kustar.ac.ae/pages/undergraduate-admissions
http://www.kustar.ac.ae/pages/civil-infrastructure-and-environment-engineering/14158
http://www.kustar.ac.ae/pages/department-of-electrical-and-computer-engineering/13396
http://www.kustar.ac.ae/pages/graduate--programs
http://www.kustar.ac.ae/pages/undergraduate-admissions-documents
http://www.kustar.ac.ae/pages/college-of-engineering
http://www.kustar.ac.ae/pages/graduate-admissions
http://www.kustar.ac.ae/pages/news/news-details/khalifa-university-of-science-and-technology-achieves-top-university-ranking-in-the-uae-by-
http://www.kustar.ac.ae/pages/undergraduate-programs
http://blog.kustar.ac.ae/category/art
http://www.kustar.ac.ae/pages/graduate-students/14158
http://www.kustar.ac.ae/pages/office-of-information-technology-it
http://www.kustar.ac.ae/pages/department-of-electrical-and-computer-engineering/14158
https://www.adu.ac.ae 10
https://www.adu.ac.ae/en-us/academicsresearch/programs.aspx
http://www.adu.ac.ae/en-us/programdetail.aspx?ProgramId=169
http://www.adu.ac.ae/en-us/collegedetail.aspx?CollegeId=223
http://www.adu.ac.ae/en-us/academicsresearch/programs.aspx
http://www.adu.ac.ae/en-us/collegedetail.aspx?CollegeId=225
http://www.adu.ac.ae/en-us/programdetail.aspx?ProgramId=167
http://www.adu.ac.ae/en-us/academicsresearch/colleges.aspx
http://www.adu.ac.ae/en-us/programdetail.aspx?ProgramId=614
http://www.adu.ac.ae/en-us/collegedetail.aspx?CollegeId=705
http://www.adu.ac.ae/en-us/collegedetail.aspx?CollegeId=224
http://nyuad.nyu.edu 12
http://nyuad.nyu.edu/en/academics/undergraduate-programs/grad-requirements.html
http://nyuad.nyu.edu/en/academics/undergraduate-programs/grad-requirements/capstone-project-bak.html
http://nyuad.nyu.edu/en/academics/community-programs/primary-and-secondary-educational-engagement.html
http://nyuad.nyu.edu/en/academics/academic-divisions/social-science/undergraduate-programs/economics.html
http://nyuad.nyu.edu/en/academics/community-programs/smsp.html
http://nyuad.nyu.edu/en/academics/undergraduate-programs/majors.html
http://nyuad.nyu.edu/en/academics/undergraduate-programs/pre-professional-courses/media--culture-and-communication.html
http://nyuad.nyu.edu/en/academics/undergraduate-programs/language.html
http://nyuad.nyu.edu/en/academics/undergraduate-programs/majors/physics.html
http://nyuad.nyu.edu/en/academics/executive-education/custom-programs.html
http://nyuad.nyu.edu/en/academics/community-programs/smsp/contact-us.html
http://nyuad.nyu.edu/en/academics/community-programs/summer-academy.html
http://www.zu.ac.ae 8
http://www.zu.ac.ae/main/en/ocd/campus_access/index.aspx
https://www.zu.ac.ae/main//en/admission/undergraduate-programs/programs.aspx
http://www.zu.ac.ae/main/en/_ice/language.aspx
http://www.zu.ac.ae/main/en/graduate_programs/FAQs.aspx
http://www.zu.ac.ae/main/en/contact_us/index.aspx
http://www.zu.ac.ae/main/en/graduate_programs/Graduate_Programs_Folder/index.aspx
http://www.zu.ac.ae/main/en/colleges/index.aspx
http://www.zu.ac.ae/main/en/graduate_programs/key_dates.aspx
###Markdown
Topic modelling of courses
1) Iterate through course pages
2) Record sections of text with high graph score
3) Create n-grams according to the previous method
4) Count the most frequent n-grams (after stop word removal)
5) Select courses from the sentences containing the most frequent n-grams
###Code
from bs4.element import Comment
def tag_visible(element):
if element.parent.name in ['style', 'script', 'head', 'title', 'meta', '[document]']:
return False
if isinstance(element, Comment):
return False
return True
def len_sentence(x):
try:
return len(tokenize_alphanum(x))
except TypeError as err:
print("Error with",x)
raise err
def get_sentences(url,return_url=False):
r = requests.get(url)
if r.status_code != 200:
print(url,"not found")
return []
soup = BeautifulSoup(r.text,"lxml")
texts = soup.findAll(text=True)
visible_texts = filter(tag_visible, texts)
if not return_url:
return [t.strip() for t in visible_texts
if len_sentence(t) > 0]
# Otherwise, find corresponding URL
sentences = []
anchors = [a for a in soup.findAll("a")
if tag_visible(a) and
len_sentence(a.text) > 0]
anchor_text = [a.text for a in anchors]
for t in visible_texts:
if len_sentence(t) == 0:
continue
_url = None
if t in anchor_text:
a = list(filter(lambda a : t==a.text,anchors))[0]
if "href" in a.attrs:
_url = a.attrs["href"]
sentences.append(dict(go_to_url=_url,text=t,found_on_url=url))
return sentences
# Loop over URLs, collect all sentences
sentences = []
for top_url,urls in urls_to_try.items():
for url in urls:
sentences += get_sentences(url)
# print(url)
# print(get_sentences(url,True))
break
break
print("Got",len(sentences),"sentences")
def superlog(x):
if x <= 0:
return 0
return np.log10(x)
# fig,ax = plt.subplots(figsize=(10,6))
# weights,bins,patches = ax.hist(list(map(len_sentence,sentences)),bins=np.arange(0,250,1))
# max_height = 0
# for p in patches:
# new_height = superlog(p.get_height())
# p.set_height(new_height)
# max_height = max((max_height,new_height))
# ax.set_ylim(0,1.1*max_height)
# ax.set_ylabel("log$_{10}$(count)")
# ax.set_xlabel("sentence length")
# # Only use short sentences
# _sentences = [s for s in sentences if len_sentence(s) < 20]
# # Perform N-gram extraction
# comper = Compounder(max_context=3,threshold=7)
# comper.process_sentences(_sentences)
# comper.print_sorted_compounds()
# # Iterate with the previous chunk to set the threshold
# comp_data = pd.DataFrame(comper.data)
# fig,ax = plt.subplots(figsize=(10,10))
# for context,grouped in comp_data.groupby("context"):
# ax.plot(grouped["threshold"],grouped["total"].apply(superlog),label=context)
# ax.legend()
###Output
_____no_output_____
###Markdown
Different approach: get sentences with high similarity to any UCAS word
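The matching primitive used below is `fuzzywuzzy`'s `process.extractOne`, which returns the best-matching choice together with a 0–100 similarity score. A tiny illustrative sketch (the score shown in the comment is indicative only):
```python
from fuzzywuzzy import process as fuzzy_process

match, score = fuzzy_process.extractOne("engineering", ["Engineering", "Mathematics", "History"])
# match == "Engineering", score close to 100
print(match, score)
```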
###Code
r = requests.get("http://search.ucas.com/subject/fulllist")
ucas = []
soup = BeautifulSoup(r.text,"lxml")
for a in soup.find_all("a"):
if "href" not in a.attrs:
continue
if not a.attrs["href"].startswith("/subject/"):
continue
words = [w for w in tokenize_alphanum(a.text)
if w not in _stops]
ucas.append(" ".join(words))
def get_wiki_quals(url,tokenize=True):
r = requests.get(url)
soup = BeautifulSoup(r.text,"lxml")
ucas = []
for a in soup.find_all("a"):
if "href" not in a.attrs:
continue
href = a.attrs["href"]
if not href.startswith("/wiki/"):
continue
text_elements = a.text.split()
if len(text_elements) < 1:
continue
if text_elements[0].lower() not in href.lower():
continue
if tokenize:
words = [w for w in tokenize_alphanum(a.text)
if w not in _stops]
else:
words = a.text.split()
ucas.append(" ".join(words))
return set(ucas)
ucas_1 = get_wiki_quals("https://en.wikipedia.org/wiki/Category:Bachelor%27s_degrees")
ucas_2 = get_wiki_quals("https://en.wikipedia.org/wiki/List_of_master%27s_degrees")
ucas_3 = get_wiki_quals("https://en.wikipedia.org/wiki/Category:Doctoral_degrees")
ucas += list(ucas_1.symmetric_difference(ucas_2).symmetric_difference(ucas_3))
# Get all UCAS words
all_ucas = ["msc","ba","master","bachelor","phd"]
for u in ucas:
all_ucas += tokenize_alphanum(u)
all_ucas = set(all_ucas)
print(all_ucas)
class MatchScore(dict):
def match_score(self,sentence):
scores = []
for w in tokenize_alphanum(sentence):
if w not in self:
match,_score = fuzzy_process.extractOne(w,all_ucas)
length_ratio = len(match)/len(w)
# Normalise to length ratio
if length_ratio > 1:
length_ratio = 1/length_ratio
self[w] = (_score/100)*(length_ratio**2)
scores.append(self[w])
# Score = sqrt( (sum_i score_i^2) / sum_i )
if len(scores) == 0:
return 0
score = np.sqrt(sum(map(lambda x:x**2,scores)) / len(scores))
return score
matchScore = MatchScore()
def level_checker(query,levels=["bachelor","master","phd","diploma","doctor"]):
# Check if the qualification type is in the query
for lvl in levels:
if lvl in query:
return lvl
return None
def get_qual_type(query):
_q = tokenize_alphanum(query)
lvl = level_checker(_q)
if lvl is not None:
return lvl
# Otherwise, search for the acronym
for word in _q:
for qual,acronyms in qual_map.items():
if word in acronyms:
return level_checker(qual)
return None
matchScore.match_score("B.Sc. in Aerospace Engineering")
get_qual_type("B.Sc. in Aerospace Engineering")
def clean_text(t):
# Replace bad chars
bad_chars = ["\n","\r"]
for c in bad_chars:
t = t.replace(c,"")
# Strip space
while " " in t:
t = t.replace(" "," ")
t = t.lstrip()
t = t.rstrip()
words = t.split()
# Remove long words
if len(words) > 11:
return None
# Standardise qualifications
for w in t.split():
if w in qual_map:
t = t.replace(w,min(qual_map[w], key=len))
if "program" in q.lower():
return None
return t
url_courses = {}
for top_url,urls in urls_to_try.items():
# if "ajman" not in top_url:
# continue
print(top_url)
url_courses[top_url] = []
for url in set(urls):
# if "undergraduate/programs-offered" not in url:
# continue
# print("\t",url)
for data in get_sentences(url,return_url=True):
#print(data["text"])
data["text"] = clean_text(data["text"])
if data["text"] is None:
continue
score = matchScore.match_score(data["text"]) #fuzzy_process.extractOne(s,ucas,scorer=fuzz.token_sort_ratio)
if score > 0.6:
if data not in url_courses[top_url]:
url_courses[top_url].append(data)
#print("\t\t",data["text"],score)
#print(url_courses[top_url])
#print()
def tidy(x,except_=""):
return re.sub(r'[^a-zA-Z0-9'+except_+']','',x)
def gen_guess(name,n):
guess = []
for i,w in enumerate(name.split()):
if w in _stops:
continue
if i == 0:
guess.append(w[0])
else:
guess.append(w[0:n])
guess = ".".join(guess)
return guess
def find_qualification(name):
p = wikipedia.page(name)
words = {}
for i in [1,2]:
guess = gen_guess(name,i)
if len(guess) == 0:
continue
_guess = tidy(guess)
for word in p.summary.split():
_word = tidy(word)
if _word in _stops:
continue
if len(_word) == 0:
continue
if _word[0] != _guess[0]:
continue
#
score = fuzz.ratio(_guess,_word)
norm = len(_guess)/len(_word)
if norm > 1:
norm = 1/norm
_w = tidy(word,except_=".")
if score*norm > 0:
if _w in words:
if words[_w] > score*norm:
continue
words[_w] = score*norm
#
output = []
for qual,score in Counter(words).most_common():
if score > 50:
output.append(qual)
return output
wiki_1 = get_wiki_quals("https://en.wikipedia.org/wiki/Category:Bachelor%27s_degrees",False)
wiki_2 = get_wiki_quals("https://en.wikipedia.org/wiki/List_of_master%27s_degrees",False)
wiki_3 = get_wiki_quals("https://en.wikipedia.org/wiki/Category:Doctoral_degrees",False)
wiki = list(wiki_1.symmetric_difference(wiki_2).symmetric_difference(wiki_3))
qual_map = {}
for w in wiki:
if not any(w.startswith(x) for x in ("Ma","Ba","Dip","Doc","PhD")):
continue
if w.strip() == "":
continue
quals = find_qualification(w)
print(w)
print("\t",quals)
for q in quals:
if q not in qual_map:
qual_map[q] = set()
qual_map[q].add(w)
output = []
for top_url,data in url_courses.items():
if ".ac.uk" in top_url:
continue
for row in data:
qual = get_qual_type(row["text"])
if qual is None and row["go_to_url"] is not None:
qual = get_qual_type(row["go_to_url"])
if qual is not None:
if not row["text"].lower().startswith(qual):
qual = None
else:
row["qualification"] = qual
row["home_url"] = top_url
output.append(row)
output[2900]
with open("courses-tier0.json","w") as f:
f.write(str(output))
from ast import literal_eval
with open("courses-tier0.json") as f:
js = literal_eval(f.read())
df = pd.DataFrame(js)
df.tail()
qual_lens = []
for home,grouped in df.groupby("home_url"):
qs = set(grouped.loc[grouped.qualification=="doctor","text"].values)
print(home,len(qs))
for q in qs:
qual_lens.append(len(q.split()))
if len(q.split()) > 11:
continue
#if not q.lower().startswith("master"):
# continue
if "program" in q.lower():
continue
print("\t",q)
print()
print(min(qual_lens),
np.percentile(qual_lens,10),
np.percentile(qual_lens,50),
np.percentile(qual_lens,90),
max(qual_lens))
###Output
_____no_output_____
###Markdown
Now analyse basic features:
a) What qualifications are on offer? (count by unique text-home-qual)
b) Can qualifications be mapped into key topics from Wikipedia? (Engineering, Maths, Chemistry, Physics, Biology, Computing, Other)
###Code
def filter_stops(t):
return " ".join(filter(lambda x : x not in _stops,t.split()))
def unique_text(texts,fuzzy_req=0.95):
unique_vals = []
for t in texts:
_t = filter_stops(t)
found_match = False
for v in unique_vals:
_v = filter_stops(v)
score_1 = fuzz.ratio(t,v)/100
score_2 = fuzz.token_sort_ratio(t,v)/100
score = np.sqrt((score_1**2 + score_2**2)/2)
if score >= fuzzy_req:
found_match = True
#if score < 0.98:
#print(t,"too similar to",v,score)
break
if not found_match:
unique_vals.append(t)
return unique_vals
unique_text(["joel klinger","joel klinger","joel a klinger"])
valid_values = []
unique_quals = set(df.qualification.values)
qual_table = []
for home,grouped in df.groupby("home_url"):
print(home)
condition = grouped.text.apply(lambda x : (x not in ["Bachelors Degree","Bachelor Degree:",
"Bachelor of Arts","Bachelor of Science 302",
"Bachelors Degree","Diploma",
"Bachelor’s Programs","Master’s Programs",
"Master’s thesis (6 credit hours)",
"Masters Degree",
'PhD','PhD Alumni Testimonials',
'PhD BM Programme Structure'])
and len(x.split()) < 11)
grouped = grouped.loc[condition]
_grouped = grouped.loc[~pd.isnull(grouped.qualification)]
allowed_values = unique_text(_grouped.text)
valid_values += allowed_values
grouped = grouped.loc[grouped.text.apply(lambda t:t in allowed_values)]
too_long = grouped.text.apply(lambda t: len(t.split()) > 11) # 11 = 90%ile
grouped = grouped.loc[~too_long]
grouped.drop_duplicates("text",inplace=True)
_data = dict(home=home)
for _qual in unique_quals:
if pd.isnull(_qual):
condition = pd.isnull(grouped.qualification)
else:
condition = grouped.qualification == _qual
_data[_qual] = (condition).sum()
_data["not_null"] = (~pd.isnull(grouped.qualification)).sum()
qual_table.append(_data)
qual_table = pd.DataFrame(qual_table)#,columns=["home","bachelor","master","phd","nan"])
qual_table["phd"] = qual_table["phd"] + qual_table["doctor"]
qual_table = qual_table[["home","bachelor","master","phd",np.nan,"not_null","doctor","diploma"]]
qual_table = qual_table.sort_values("not_null",ascending=False)
qual_table = qual_table.drop(["not_null","doctor",np.nan,"diploma"],axis=1)
qual_table.columns = ["Home","Bachelor","Master","Doctoral"] #,"Other (including bad results)"]
qual_table
condition = df.text.apply(lambda x: x in valid_values)
_df = df.loc[condition].drop_duplicates("text")
print(len(_df))
n = 0
for _,grouped in _df.groupby("home_url"):
n += len(grouped.drop_duplicates("text"))
print(n)
sorted(list(_df["text"].values))
def get_cats(t):
s = wikipedia.search(t)
if len(s) < 1:
return []
s = s[0]
try:
p = wikipedia.page(s)
return [c for c in p.categories
if len(c.split()) <= 3]
except (wikipedia.DisambiguationError,wikipedia.PageError):
return []
def subject(query,by):
for result in reversed(query.partition(" "+by+" ")):
if result != '':
return result.lstrip(" ")
results = {}
i = 0
for _,row in _df.iterrows():
i += 1
q = row.qualification
_t = re.sub(r'\([^)]*\)', '', row.text)
if _t in results:
continue
# First get categories for the qualification
t = _t
if (not pd.isnull(q)) and (q.lower() not in t.lower()):
t = _t+" "+q
cats = get_cats(t)
# Then attempt to get categories for the subject
for by in ["of","in"]:
t = subject(_t,by)
if t == _t:
continue
cats += get_cats(t)
results[_t] = cats
print(i,(~pd.isnull(df.qualification)).sum())
for k,v in results.items():
print(k)
print(v)
print()
# Collect categories on random pages to filter out common junk categories
random_cats = []
for s in wikipedia.random(pages=100):
try:
p = wikipedia.page(s)
except (wikipedia.DisambiguationError,wikipedia.PageError):
continue
random_cats += p.categories
random_words = []
for cat in random_cats:
random_words += [w for w in tokenize_alphanum(cat)]
count_rcats = [c for c,_ in Counter(random_cats).most_common(20)]
words_rcats = [w for w,_ in Counter(random_words).most_common(20)]
# The collect the categories together, ommitting junk cats and words
cats = []
words = []
for _,r in results.items():
cats += [c for c in r if c not in count_rcats]
for cat in r:
words += [w for w in tokenize_alphanum(cat)
if w not in words_rcats and len(w) > 1]
cats_count = Counter(cats)
words_count = Counter(words)
cats_count.most_common()
words_count.most_common()
# Assign a list of keywords to each row of data
# Calculate total fraction of ug/pg
# Then, by subject, calculate fractions of ug/pg
# Then d3 bars of subjects (on hover change fraction, but static) (top) and fractions (non-clickable, but dynamic) (bottom)
from matplotlib.patches import Rectangle
_df.loc[_df.qualification == 'doctor','qualification'] = 'phd'
fig,ax = plt.subplots(figsize=(10,1))
color = ['skyblue','darkslategrey','coral']
x0 = 0
for qual,c in zip(['bachelor','master','phd'],color):
n = (_df.qualification == qual).sum()
if qual == 'phd':
qual = 'Doctoral'
rect = Rectangle((x0,0),x0+n,1,facecolor=c,label=qual.title())
ax.add_patch(rect)
x0 += n
ax.axis('off')
ax.set_ylim(0,1)
ax.set_xlim(0,x0)
leg = ax.legend(loc='center left', bbox_to_anchor=(1, 0.5))
def cat_plot(qual_type):
fig,ax = plt.subplots(figsize=(10,1))
color = ['skyblue','darkslategrey','forestgreen','orange','y','coral','gray']
labels = ['Comp/IT','Engineering','Medical','Education','Business','Science','Other']
x0 = 0
_kw = []
for kw,c,L in zip([('comp',' it ','info','network','software'),
('eng',),
('health','medic','nursing','pharma'),('education','teaching'),
('business','manag','admin','financ','commerce'),('science',),
('',)],color,labels):
condition = _df.qualification == qual_type
condition = _df.loc[condition,"text"].apply(lambda x :
any(k in x.lower() for k in kw)
and not any(k in x.lower() for k in _kw))
n = condition.sum()
if kw == ('',):
kw = "Other"
rect = Rectangle((x0,0),x0+n,1,facecolor=c,label=L)
ax.add_patch(rect)
x0 += n
_kw += list(kw)
ax.axis('off')
ax.set_ylim(0,1)
ax.set_xlim(0,x0)
if qual_type == 'phd':
qual_type = 'Doctoral'
leg = ax.legend(loc='center left', bbox_to_anchor=(1, 0.5),title=qual_type.title())
cat_plot('bachelor')
cat_plot('master')
cat_plot('phd')
###Output
_____no_output_____ |
code/Tutorial.ipynb | ###Markdown
Tutorial Notebook for IBoat - PMCTS *Please be aware that this tutorial shall not replace the project's thorough documentation, which you can find at:* https://pbarde.github.io/IBoat-PMCTS/ **Objectives:** After completing this tutorial you should be able to use all the tools developed during the IBoat - PMCTS project. Using these tools you can investigate different algorithm tunings in order to find an optimal strategy and compare it to the isochrones method. **Finally, all the code of this notebook can be found in the [Tutorial script](Tutorial.py), which will run more smoothly than this Jupyter notebook (mostly because of the multiprocessing involved in the interactive plots).**
###Code
import os
os.chdir(os.getcwd() + '/solver/') # just to make sure you're working in the proper directory
###Output
_____no_output_____
###Markdown
2. Weather forecasts and Simulators In this section you'll learn how to download, load, process and visualize weather forecasts. You will also create the simulators that are used by the workers in the PMCTS. A. Downloading weather forecasts Let's start by downloading some forecasts. *You might want to change the* `mydate` *variable. Note that if it's not working you might have chosen a date whose weather forecasts are not available anymore on the server or your proxy doesn't allow you to download with the* `netcdf4` *package. Or, you just got the date format wrong.*
###Code
import forest as ft
import time
# The starting day of the forecast. If it's too ancient, the forecast might not be available anymore
#mydate = '20180108' #time.strftime("%Y%m%d") # mydate = '20180228' # for February 2, 2018
mydate = '20180307'
# We will download the mean scenario (id=0) and the first 2 perturbed scenarios
scenario_ids = range(3)
#ft.download_scenarios(mydate, latBound=[40, 50], lonBound=[-15 + 360, 360], scenario_ids=scenario_ids)
###Output
_____no_output_____
###Markdown
B. Loading weather objects Now that we have downloaded some forecasts, let's load them in order to create simulators with them.
###Code
Weathers = ft.load_scenarios(mydate, latBound=[40, 50], lonBound=[-15 + 360, 360], scenario_ids=scenario_ids)
###Output
Loaded : ../data/20180307_0000z.obj
Loaded : ../data/20180307_0100z.obj
Loaded : ../data/20180307_0200z.obj
###Markdown
C. Creating simulators and displaying wind conditions We define the main parameters of our simulators, we create the simulators and we visualize their wind conditions. The plots are interactive (use the upper-left icons to navigate through the weather forecast). *Our code is not meant to be executed in a Jupyter notebook, which is a bit capricious and does not handle multiprocessing for animations properly. So you can only animate one scenario at a time. To visualize multiple scenarios simultaneously you should use* `ft.play_multiple_scenarios(Sims)` *in a script.*
###Code
%%capture
% matplotlib qt
import matplotlib.pyplot as plt
from simulatorTLKT import Boat
Boat.UNCERTAINTY_COEFF = 0.2 # Characterizes the uncertainty on the boat's dynamics
NUMBER_OF_SIM = 3 # <=20
SIM_TIME_STEP = 6 # in hours
STATE_INIT = [0, 44, 355]
N_DAYS_SIM = 3 # time horizon in days
Sims = ft.create_simulators(Weathers, numberofsim=NUMBER_OF_SIM, simtimestep=SIM_TIME_STEP,
stateinit=STATE_INIT, ndaysim=N_DAYS_SIM)
# in the notebook we can visualize scenarios only one by one.
#Sims[0].play_scenario()
## /!\ if executing from a .py script, you better use this to have multiple interactive plots:
#ft.play_multiple_scenarios(Sims)
###Output
_____no_output_____
###Markdown
*If you feel frustrated by the interactive plot, you are invited to copy-paste the code below into a .py file and replace*
```python
for ii, sim in enumerate(Sims):
    sim.play_scenario(ii)
```
*with*
```python
ft.play_multiple_scenarios(Sims)
```
*then execute your new .py file and enjoy (by the way, if you are lazy, you can also give the [Tutorial script](Tutorial.py) a run).* 3. The Parallel Monte-Carlo Tree Search In this section we will see how to launch a PMCTS, and how to process and visualize the results. A. Initializing the search First of all we define a departure point and a mission heading, and we compute the corresponding destination point and reference travel time. Then, we visualize the mean trajectories per scenario during the two initialization phases.
###Code
%%capture
% matplotlib qt
import matplotlib
matplotlib.rcParams.update({'font.size': 10})
missionheading = 0 # direction wrt. true North we want to go the furthest.
ntra = 50 # number of trajectories used during the initialization
destination, timemin = ft.initialize_simulators(Sims, ntra, STATE_INIT, missionheading, plot=True)
print("destination : {} & timemin : {}".format(destination, timemin))
Sims = ft.create_simulators(Weathers, numberofsim=NUMBER_OF_SIM, simtimestep=SIM_TIME_STEP,
stateinit=STATE_INIT, ndaysim=N_DAYS_SIM+2)
###Output
destination : [46.503901437615376, 355.0] & timemin : 2.4144127151444956
###Markdown
B. Creating a Forest and launching a PMCTS Now we create a Forest (the object managing the worker trees and the master tree) and we launch a search. To do this we must define the exploration parameters of the search.
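For orientation, `worker.RHO` plays the role of the exploration constant in a UCT-style selection rule. As a rough reference (this is the generic UCT form, not necessarily the exact expression implemented in `worker.py`):

$$UCT(i) = \bar{r}_i + \rho \sqrt{\frac{\ln N_{parent}}{N_i}}$$

where $\bar{r}_i$ is the average reward of node $i$ and the $N$ are visit counts; `worker.UCT_COEFF` then weights how master-tree and worker-tree utilities are combined into a node's utility.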
###Code
import worker
##Exploration Parameters##
worker.RHO = 0.5 #Exploration coefficient in the UCT formula.
worker.UCT_COEFF = 1 / 2 ** 0.5 #Proportion between master utility and worker utility of node utility.
budget = 1000 # number of nodes we want to expand in each worker
frequency = 10 # number of steps performed by a worker before writing the results into the master
forest = ft.Forest(listsimulators=Sims, destination=destination, timemin=timemin, budget=budget)
if __name__ == '__main__':
master_nodes = forest.launch_search(STATE_INIT, frequency)
###Output
Iteration 50 on 1000 for workers 1
Iteration 50 on 1000 for workers 2
Iteration 50 on 1000 for workers 0
Iteration 100 on 1000 for workers 1
Iteration 100 on 1000 for workers 0
Iteration 100 on 1000 for workers 2
Iteration 150 on 1000 for workers 1
Iteration 150 on 1000 for workers 0
Iteration 150 on 1000 for workers 2
Iteration 200 on 1000 for workers 0
Iteration 200 on 1000 for workers 1
Iteration 200 on 1000 for workers 2
Iteration 250 on 1000 for workers 0
Iteration 250 on 1000 for workers 1
Iteration 250 on 1000 for workers 2
Iteration 300 on 1000 for workers 0
Iteration 300 on 1000 for workers 1
Iteration 300 on 1000 for workers 2
Iteration 350 on 1000 for workers 1
Iteration 350 on 1000 for workers 0
Iteration 350 on 1000 for workers 2
Iteration 400 on 1000 for workers 1
Iteration 400 on 1000 for workers 0
Iteration 400 on 1000 for workers 2
Iteration 450 on 1000 for workers 1
Iteration 450 on 1000 for workers 0
Iteration 450 on 1000 for workers 2
Iteration 500 on 1000 for workers 1
Iteration 500 on 1000 for workers 0
Iteration 500 on 1000 for workers 2
Iteration 550 on 1000 for workers 1
Iteration 550 on 1000 for workers 0
Iteration 550 on 1000 for workers 2
Iteration 600 on 1000 for workers 1
Iteration 600 on 1000 for workers 0
Iteration 600 on 1000 for workers 2
Iteration 650 on 1000 for workers 1
Iteration 650 on 1000 for workers 0
Iteration 650 on 1000 for workers 2
Iteration 700 on 1000 for workers 1
Iteration 700 on 1000 for workers 0
Iteration 700 on 1000 for workers 2
Iteration 750 on 1000 for workers 1
Iteration 750 on 1000 for workers 0
Iteration 750 on 1000 for workers 2
Iteration 800 on 1000 for workers 1
Iteration 800 on 1000 for workers 0
Iteration 800 on 1000 for workers 2
Iteration 850 on 1000 for workers 1
Iteration 850 on 1000 for workers 0
Iteration 850 on 1000 for workers 2
Iteration 900 on 1000 for workers 1
Iteration 900 on 1000 for workers 0
Iteration 900 on 1000 for workers 2
Iteration 950 on 1000 for workers 1
Iteration 950 on 1000 for workers 0
Iteration 950 on 1000 for workers 2
Iteration 1000 on 1000 for workers 1
Iteration 1000 on 1000 for workers 0
Iteration 1000 on 1000 for workers 2
###Markdown
Since the `master_nodes` object was created as a memory shared by multiple processes we need to do a deep copy of it before processing it.
###Code
from master_node import deepcopy_dict
new_dict = deepcopy_dict(master_nodes)
###Output
_____no_output_____
###Markdown
C. Processing and saving results At this point we can create a `MasterTree` object to process the results and get the optimal policies. We usually add it to the forest object.
###Code
from master import MasterTree
forest.master = MasterTree(Sims, destination, nodes=new_dict)
forest.master.get_best_policy()
###Output
Global policy
Depth 1: best reward = 0.8087301587301587 for action = 0
Depth 2: best reward = 0.8958333333333333 for action = 225
Depth 3: best reward = 0.6877752531986262 for action = 270
Policy for scenario 0
Depth 1: best reward = 0.8303571428571429 for action = 0
Depth 2: best reward = 0.9375 for action = 270
Depth 3: best reward = 0.9375 for action = 270
Policy for scenario 1
Depth 1: best reward = 0.828125 for action = 0
Depth 2: best reward = 0.9375 for action = 225
Depth 3: best reward = 0.9375 for action = 135
Policy for scenario 2
Depth 1: best reward = 0.8333333333333334 for action = 0
Depth 2: best reward = 0.9375 for action = 270
Depth 3: best reward = 0.9375 for action = 135
###Markdown
It is also possible to plot the master tree (or the worker tree corresponding to scenario 1) with nodes colored by utility, exploration and exploitation values.
###Code
%%capture
% matplotlib qt
forest.master.plot_tree_uct();
forest.master.plot_tree_uct(1);
###Output
/home/jean-mi/anaconda3/envs/mcts/lib/python3.6/site-packages/matplotlib/font_manager.py:1320: UserWarning: findfont: Font family ['normal'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
We can also follow the global best policy and look at the reward distribution along this path.
###Code
%%capture
% matplotlib qt
forest.master.plot_hist_best_policy(interactive = True)
###Output
/home/jean-mi/anaconda3/envs/mcts/lib/python3.6/site-packages/matplotlib/font_manager.py:1320: UserWarning: findfont: Font family ['normal'] not found. Falling back to DejaVu Sans
(prop.get_family(), self.defaultFamily[fontext]))
###Markdown
Finally we can save the results.
###Code
forest.master.save_tree("my_tuto_results")
###Output
_____no_output_____
###Markdown
4. Isochrones and Validation In this section we will see how to run an isochrone search and compare its performance. A. Setting up the search To set up an isochrone search we need the mean scenario and a simulator without noise on the boat dynamics.
###Code
%%capture
% matplotlib qt
import sys
sys.path.append('../isochrones/')
import isochrones as IC
Boat.UNCERTAINTY_COEFF = 0
#sim = ft.create_simulators(Weathers, numberofsim=NUMBER_OF_SIM, simtimestep=SIM_TIME_STEP,
# stateinit=STATE_INIT, ndaysim=4)[0]
sim = Sims[0]
solver_iso = IC.Isochrone(sim, STATE_INIT, destination, delta_cap=5, increment_cap=18, nb_secteur=200,
resolution=200)
temps_estime, plan_iso, plan_iso_ligne_droite, trajectoire = solver_iso.isochrone_methode()
IC.plot_trajectory(sim, trajectoire, quiv=True)
###Output
_____no_output_____
###Markdown
B. Comparing the results Finally, we can compare how the two strategies perform on the different scenarios, on average across the scenarios, and with a boat with uncertain dynamics, for example.
###Code
%%capture
% matplotlib qt
plan_PMCTS = forest.master.best_policy[-1]
n = len(plan_iso) - 3
plan_iso = plan_iso[:n]
plan_PMCTS.pop()
#plan_PMCTS = []
Boat.UNCERTAINTY_COEFF = 0.2
mean_PMCTS, var_PMCTS = IC.estimate_perfomance_plan(Sims, ntra, STATE_INIT, destination, plan_PMCTS, plot=True, verbose=False)
mean_iso, var_iso = IC.estimate_perfomance_plan(Sims, ntra, STATE_INIT, destination, plan_iso, plot=True, verbose=False)
mean_line, var_line = IC.estimate_perfomance_plan(Sims, ntra, STATE_INIT, destination, plot=True, verbose=False)
IC.plot_comparision(mean_PMCTS, var_PMCTS, mean_iso, var_iso, mean_line, var_line, ["PCMTS", "Isochrones", "Straight line"])
IC.plot_comparision_percent(mean_PMCTS, var_PMCTS, mean_iso, var_iso, mean_line, var_line, ["PCMTS", "Isochrones"])
IC.plot_mean_squared_error(mean_PMCTS, var_PMCTS, mean_iso, var_iso, mean_line, var_line, ["PCMTS", "Isochrones", "Straight line"])
IC.plot_risk_probability(mean_PMCTS, var_PMCTS, mean_iso, var_iso, ["PCMTS", "Isochrones"], t_lim=0)
###Output
_____no_output_____ |
contrib/action_recognition/r2p1d/02_video_transformation.ipynb | ###Markdown
Video Dataset Transformation In this notebook, we show examples of video dataset transformations.
###Code
%load_ext autoreload
%autoreload 2
import os
import time
import sys
import decord
import matplotlib.pyplot as plt
import numpy as np
from sklearn.metrics import accuracy_score
import torch
import torch.cuda as cuda
import torch.nn as nn
import torchvision
from vu.data import show_batch, VideoDataset
from vu.models.r2plus1d import DEFAULT_MEAN, DEFAULT_STD
from vu.utils import system_info
from vu.utils.functional_video import denormalize
from vu.utils.transforms_video import (
CenterCropVideo,
NormalizeVideo,
RandomCropVideo,
RandomHorizontalFlipVideo,
RandomResizedCropVideo,
ResizeVideo,
ToTensorVideo,
)
system_info()
def show_clip(clip, size_factor=600):
"""Show frames in a clip"""
if isinstance(clip, torch.Tensor):
# Convert [C, T, H, W] tensor to [T, H, W, C] numpy array
clip = np.moveaxis(clip.numpy(), 0, -1)
figsize = np.array([clip[0].shape[1]*len(clip), clip[0].shape[0]]) / size_factor
plt.tight_layout()
fig, axs = plt.subplots(1, len(clip), figsize=figsize)
for i, f in enumerate(clip):
axs[i].axis("off")
axs[i].imshow(f)
###Output
_____no_output_____
###Markdown
Prepare a Sample Video A sample video and label paths:
###Code
VIDEO_PATH = os.path.join("data", "samples", "drinking.mp4")
LABEL_PATH = os.path.join("data", "samples", "label.txt")
video_reader = decord.VideoReader(VIDEO_PATH)
video_length = len(video_reader)
print("Video length = {} frames".format(video_length))
###Output
Video length = 152 frames
###Markdown
We use three frames (the first, middle, and the last) to quickly visualize video transformations.
###Code
clip = [
video_reader[0].asnumpy(),
video_reader[video_length//2].asnumpy(),
video_reader[video_length-1].asnumpy(),
]
show_clip(clip)
# [T, H, W, C] numpy array to [C, T, H, W] tensor
t_clip = ToTensorVideo()(torch.from_numpy(np.array(clip)))
t_clip.shape
###Output
_____no_output_____
###Markdown
Video Transformations Resizing with the original ratio
###Code
show_clip(ResizeVideo(size=800)(t_clip))
###Output
_____no_output_____
###Markdown
Resizing
###Code
show_clip(ResizeVideo(size=800, keep_ratio=False)(t_clip))
###Output
_____no_output_____
###Markdown
Center cropping
###Code
show_clip(CenterCropVideo(size=800)(t_clip))
###Output
_____no_output_____
###Markdown
Random cropping
###Code
random_crop = RandomCropVideo(size=800)
show_clip(random_crop(t_clip))
show_clip(random_crop(t_clip))
###Output
_____no_output_____
###Markdown
Random resized cropping
###Code
random_resized_crop = RandomResizedCropVideo(size=800)
show_clip(random_resized_crop(t_clip))
show_clip(random_resized_crop(t_clip))
###Output
_____no_output_____
###Markdown
Normalizing (and denormalizing to verify)
###Code
norm_t_clip = NormalizeVideo(mean=DEFAULT_MEAN, std=DEFAULT_STD)(t_clip)
show_clip(norm_t_clip)
show_clip(denormalize(norm_t_clip, mean=DEFAULT_MEAN, std=DEFAULT_STD))
###Output
_____no_output_____
###Markdown
Horizontal flipping
###Code
show_clip(RandomHorizontalFlipVideo(p=1.0)(t_clip))
###Output
_____no_output_____ |
notebooks/014_ajustePolinomios.ipynb | ###Markdown
$$ y = ax + b $$
###Code
coef = np.polyfit(x, y, deg=1)
coef[0]
coef[1]
plt.plot(x, y)
plt.plot(coef[0]*x+coef[1])
###Output
_____no_output_____
###Markdown
$$ y = ax^2 + bx + c $$
###Code
coef = np.polyfit(x, y, deg=2)
coef
plt.plot(x, y)
plt.plot(coef[0]*x**2+coef[1]*x+coef[2])
coef = np.polyfit(x, y, deg=2)
p = np.poly1d(coef)
print(p)
coef = np.polyfit(x, y, deg=3)
p = np.poly1d(coef)
print(p)
coef = np.polyfit(x, y, deg=5)
p = np.poly1d(coef)
plt.plot(x,y)
plt.plot(x, p(x))
def ajuste(grado):
coef = np.polyfit(x,y,deg=grado)
p = np.poly1d(coef)
plt.plot(x,y)
plt.plot(x, p(x))
plt.ylim(-200,1000)
#plt.title(p)
ajuste(4)
interact(ajuste, grado=(0,10))
###Output
_____no_output_____ |
code/final_code_model/Jupyter_example/NER_v2.1_SampleNotebook.ipynb | ###Markdown
Neural Network for Spanish Named Entity Recognition Jupyter Notebook based on **Kamal Raj**'s NER with Bidirectional LSTM-CNNs implementation available on GitHub: https://github.com/kamalkraj/Named-Entity-Recognition-with-Bidirectional-LSTM-CNNs. **Version: -v_1.2-** Release notes: - Spanish word embeddings are used: GloVe embeddings for SBWC; dimensions=300, vectors=855380. - The preprocessing of the input data for prediction was modified and improved. It can now predict the I-(PER/LOC/ORG/MISC) tags. For 50 epochs: - Accuracy: ~80 - No Spanish-specific preprocessing is applied; it was found to be harmful with these embeddings. - Time: approx. 38 min. For 100 epochs: - Accuracy: ~84 - No Spanish-specific preprocessing is applied; it was found to be harmful with these embeddings. - Time: approx. 1 hr 20 min. Training performed on: DESKTOP-0UQLV13 Processor: Intel Core i7-6700HQ CPU 2.6GHz RAM: 16GB OS: Windows 10 Home Single x64 Storage type: SSD Requires: unidecode numpy (pip install --upgrade numpy) nltk (pip install --upgrade nltk) * Download nltk punkt and nltk stopwords: * >> import nltk * >> nltk.download('stopwords') * >> nltk.download('punkt') * For more information: https://www.nltk.org/data.html random tensorflow 1.13.1 (pip install --upgrade tensorflow) * Currently (10/April/19) it does not work with python 3.7. * For more information: https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class01_intro_python.ipynb keras (pip install --upgrade keras) NER task can be formulated as: _Given a sequence of tokens (words, and possibly punctuation symbols), provide a tag from a predefined set of tags for each token in the sequence._ For the NER task there are some common types of entities which essentially are tags: - Persons - Locations - Organizations - Expressions of time - Quantities - Monetary values Furthermore, to distinguish consecutive entities with the same tag, the BIO tagging scheme is used. "B" stands for the beginning of an entity, "I" stands for its continuation and "O" means the absence of an entity. Example with dropped punctuation: Bernhard B-PER Riemann I-PER Carl B-PER Friedrich I-PER Gauss I-PER and O Leonhard B-PER Euler I-PER In the example above, PER means the person tag, and "B-" and "I-" are prefixes identifying beginnings and continuations of the entities. Without such prefixes, it is impossible to separate Bernhard Riemann from Carl Friedrich Gauss.
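To make the role of the "B-"/"I-" prefixes concrete, here is a minimal, self-contained sketch (an illustration only, not part of this notebook's pipeline); the `bio_to_spans` helper and the example lists are hypothetical:

```python
# A hypothetical helper (not part of this notebook) that groups BIO tags into entity spans.
def bio_to_spans(tokens, tags):
    """Group consecutive B-X / I-X tags into (entity_text, entity_type) spans."""
    spans, current, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith('B-'):                 # a new entity starts here
            if current:
                spans.append((' '.join(current), current_type))
            current, current_type = [token], tag[2:]
        elif tag.startswith('I-') and current:   # continuation of the open entity
            current.append(token)
        else:                                    # 'O' (or a stray I-) closes any open entity
            if current:
                spans.append((' '.join(current), current_type))
            current, current_type = [], None
    if current:
        spans.append((' '.join(current), current_type))
    return spans

tokens = ['Bernhard', 'Riemann', 'Carl', 'Friedrich', 'Gauss', 'and', 'Leonhard', 'Euler']
tags   = ['B-PER', 'I-PER', 'B-PER', 'I-PER', 'I-PER', 'O', 'B-PER', 'I-PER']
print(bio_to_spans(tokens, tags))
# [('Bernhard Riemann', 'PER'), ('Carl Friedrich Gauss', 'PER'), ('Leonhard Euler', 'PER')]
```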
###Code
# Np for math
# Keras for models, layers 4 NN layers
import numpy as np
from keras.models import Model
from keras.layers import TimeDistributed,Conv1D,Dense,Embedding,Input,Dropout,LSTM,Bidirectional,MaxPooling1D,Flatten,concatenate
from keras.utils import Progbar
from keras.preprocessing.sequence import pad_sequences
from keras.initializers import RandomUniform
import unidecode
import string
# Read file (txt) and divide the sentences into character bins (word, tag).
def readfile(filename):
'''
read file
return format :
[ ['EU', 'B-ORG'], ['rejects', 'O'], ['German', 'B-MISC'], ['call', 'O'], ['to', 'O'], ['boycott', 'O'], ['British', 'B-MISC'], ['lamb', 'O'], ['.', 'O'] ]
'''
f = open(filename, encoding='utf-8-sig') # open the file. Update to fix ''
sentences = []
sentence = []
for line in f:
if len(line)==0 or line.startswith('-DOCSTART') or line[0]=="\n":
if len(sentence) > 0:
sentences.append(sentence)
sentence = []
continue
splits = line.split(' ')
#splits[0] = unidecode.unidecode(splits[0]) # Remove special characters from spanish
#splits[0] = splits[0].lower() # Lowercase the words
#splits[0] = splits[0].translate(str.maketrans('', '', string.punctuation)) # remove puntuation
splits[-1] = splits[-1].replace('\n', '').replace('\r', '') #Remove all line breaks from a long string of text
if splits[0] != '':
sentence.append([splits[0],splits[-1]])
if len(sentence) >0:
sentences.append(sentence)
sentence = []
return sentences
# Read the 3 sets ************************************************* PATH ************************************************
# Dataset CoNLL 2002 for Spanish, which is divided into train, test, valid (dev) sets. Each row contains a word and its tag
# https://github.com/teropa/nlp/tree/master/resources/corpora/conll2002
trainSentences = readfile("tidy_data/train.txt")
devSentences = readfile("tidy_data/valid.txt")
testSentences = readfile("tidy_data/test.txt")
print(len(trainSentences))
trainSentences[0]
devSentences[0]
testSentences[0]
# Create new attribute in the character bins for padding
def addCharInformatioin(Sentences):
for i,sentence in enumerate(Sentences):
for j,data in enumerate(sentence):
chars = [c for c in data[0]]
Sentences[i][j] = [data[0],chars,data[1]]
return Sentences
trainSentences = addCharInformatioin(trainSentences)
devSentences = addCharInformatioin(devSentences)
testSentences = addCharInformatioin(testSentences)
trainSentences[0]
devSentences[0]
testSentences[0]
# 1.Creates the label set ( tag's set)
# 2.Creates a set with the lowercased words contained in the train,dev,test sets
labelSet = set()
words = {}
for dataset in [trainSentences, devSentences, testSentences]:
for sentence in dataset:
for token,char,label in sentence:
labelSet.add(label)
words[token.lower()] = True
labelSet
words
# Gives the labels a numerical id.
# :: Create a mapping for the labels ::
label2Idx = {}
for label in labelSet:
label2Idx[label] = len(label2Idx)
label2Idx
# Look up table
# :: Hard coded case lookup ::
case2Idx = {'numeric': 0, 'allLower':1, 'allUpper':2, 'initialUpper':3, 'other':4, 'mainly_numeric':5, 'contains_digit': 6, 'PADDING_TOKEN':7}
caseEmbeddings = np.identity(len(case2Idx), dtype='float32')
case2Idx
caseEmbeddings
# :: Read in word embeddings ::
word2Idx = {}
wordEmbeddings = []
# *********************************************************************************************** PATH *************************************************
# GloVe embeddings from SBWC
# https://github.com/uchile-nlp/spanish-word-embeddings
#* Build the word embeddings from the embeddings list and check whether each embedding word is contained in the list
# of words. ** Note: Remember that the words are seen as vectors.
with open("word_embeddings/SBW-vectors-300-min5.txt", encoding="utf-8") as fEmbeddings: ## change to skip first line (headings)
next(fEmbeddings)
for line in fEmbeddings:
split = line.strip().split(' ')
word = split[0]
if len(word2Idx) == 0: #Add padding+unknown
word2Idx["PADDING_TOKEN"] = len(word2Idx)
vector = np.zeros(len(split)-1) #Zero vector vor 'PADDING' word
wordEmbeddings.append(vector)
word2Idx["UNKNOWN_TOKEN"] = len(word2Idx)
vector = np.random.uniform(-0.25, 0.25, len(split)-1)
wordEmbeddings.append(vector)
if split[0].lower() in words:
vector = np.array([float(num) for num in split[1:]])
wordEmbeddings.append(vector)
word2Idx[split[0]] = len(word2Idx)
wordEmbeddings = np.array(wordEmbeddings)
wordEmbeddings
wordEmbeddings.shape[0]
wordEmbeddings.shape[1]
char2Idx = {"PADDING":0, "UNKNOWN":1}
for c in " 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZáéíóúñäëïöüÁÉÍÓÚÄËÏÖÜÃÂñÑàèìòùÀÈÌÒÙ.,-_()[]{}¡!¿?:;#'\"/\\%$`&=*+@^~|‘´·»³©\xadº±¼\xa0":
char2Idx[c] = len(char2Idx)
# characters and position (val)
char2Idx
# words and possition (val)
word2Idx
### Padding the sentences
def padding(Sentences):
maxlen = 52
for sentence in Sentences:
char = sentence[2]
for x in char:
maxlen = max(maxlen,len(x))
for i,sentence in enumerate(Sentences):
Sentences[i][2] = pad_sequences(Sentences[i][2],52,padding='post')
return Sentences
# classify the word according to the caseLookup ( numeric, mainly_numeric, allLower, allUpper, initialUpper, contains_digit)
def getCasing(word, caseLookup):
casing = 'other'
#Get number of digits in word
numDigits = 0
for char in word:
if char.isdigit():
numDigits += 1
digitFraction = numDigits / float(len(word))
if word.isdigit(): #Is a digit
casing = 'numeric'
elif digitFraction > 0.5:
casing = 'mainly_numeric'
elif word.islower(): #All lower case
casing = 'allLower'
elif word.isupper(): #All upper case
casing = 'allUpper'
elif word[0].isupper(): #is a title, initial char upper, then all lower
casing = 'initialUpper'
elif numDigits > 0:
casing = 'contains_digit'
return caseLookup[casing]
# Create word embedding matrices for padding
def createMatrices(sentences, word2Idx, label2Idx, case2Idx,char2Idx):
unknownIdx = word2Idx['UNKNOWN_TOKEN']
paddingIdx = word2Idx['PADDING_TOKEN']
dataset = []
wordCount = 0
unknownWordCount = 0
for sentence in sentences:
wordIndices = []
caseIndices = []
charIndices = []
labelIndices = []
for word,char,label in sentence:
wordCount += 1
# if the word is in the list of words to index, then index it (verify with the lower cased word)
if word in word2Idx:
wordIdx = word2Idx[word]
elif word.lower() in word2Idx:
wordIdx = word2Idx[word.lower()]
else: # else tag it as unknown
wordIdx = unknownIdx
unknownWordCount += 1
charIdx = []
for x in char:
charIdx.append(char2Idx[x])
#Get the label and map to int
wordIndices.append(wordIdx)
caseIndices.append(getCasing(word, case2Idx)) #Call getCasing
charIndices.append(charIdx)
labelIndices.append(label2Idx[label])
dataset.append([wordIndices, caseIndices, charIndices, labelIndices])
return dataset
# Pad the train/dev/test sets and convert them to embeddings
train_set = padding(createMatrices(trainSentences,word2Idx, label2Idx, case2Idx,char2Idx))
dev_set = padding(createMatrices(devSentences,word2Idx, label2Idx, case2Idx,char2Idx))
test_set = padding(createMatrices(testSentences, word2Idx, label2Idx, case2Idx,char2Idx))
trainSentences[0]
train_set[0]
# save the words and labels to index as dict types
idx2Label = {v: k for k, v in label2Idx.items()}
#***************************************PATH***********************
np.save("model_data/idx2Label.npy",idx2Label)
np.save("model_data/word2Idx.npy",word2Idx)
# Create a batch for each set (later we will create mini-batch)
def createBatches(data):
l = []
for i in data:
l.append(len(i[0]))
l = set(l)
batches = []
batch_len = []
z = 0
for i in l:
for batch in data:
if len(batch[0]) == i:
batches.append(batch)
z += 1
batch_len.append(z)
return batches,batch_len
train_batch,train_batch_len = createBatches(train_set)
dev_batch,dev_batch_len = createBatches(dev_set)
test_batch,test_batch_len = createBatches(test_set)
#train_batch_len
###Output
_____no_output_____
###Markdown
Start with TensorFlow. Remember that TensorFlow first constructs a graph and then runs it. TensorFlow automatically determines the best construction, taking into consideration each node's requirements.
###Code
# Create a tensor for the inputs
words_input = Input(shape=(None,),dtype='int32',name='words_input')
# Create a tensor of the embeddings using the words embeddings and feeding with the words_input tensor
words = Embedding(input_dim=wordEmbeddings.shape[0], output_dim=wordEmbeddings.shape[1], weights=[wordEmbeddings], trainable=False)(words_input)
# Create a tensor of casing input
casing_input = Input(shape=(None,), dtype='int32', name='casing_input')
#Create a tensor of the casing using the words embeddings and feeding with the casing_input tensor
casing = Embedding(output_dim=caseEmbeddings.shape[1], input_dim=caseEmbeddings.shape[0], weights=[caseEmbeddings], trainable=False)(casing_input)
###Output
_____no_output_____
###Markdown
More tensors for the model....
###Code
character_input=Input(shape=(None,52,),name='char_input')
embed_char_out=TimeDistributed(Embedding(len(char2Idx),30,embeddings_initializer=RandomUniform(minval=-0.5, maxval=0.5)), name='char_embedding')(character_input)
# Establish the dropout (neurons?)
dropout= Dropout(0.5)(embed_char_out)
conv1d_out= TimeDistributed(Conv1D(kernel_size=3, filters=30, padding='same',activation='tanh', strides=1))(dropout)
# max pool of the convolutional
maxpool_out=TimeDistributed(MaxPooling1D(52))(conv1d_out)
# Flatten layer for the CNN; the output is required to be flattened
char = TimeDistributed(Flatten())(maxpool_out)
char = Dropout(0.5)(char)
output = concatenate([words, casing,char])
output = Bidirectional(LSTM(200, return_sequences=True, dropout=0.50, recurrent_dropout=0.25))(output)
output = TimeDistributed(Dense(len(label2Idx), activation='softmax'))(output)
###Output
_____no_output_____
###Markdown
Model. Includes a Summary of the Model.
###Code
model = Model(inputs=[words_input, casing_input,character_input], outputs=[output])
model.compile(loss='sparse_categorical_crossentropy', optimizer='nadam')
model.summary()
#Number of epochs
epochs = 150
# Minibatches
def iterate_minibatches(dataset,batch_len):
start = 0
for i in batch_len:
tokens = []
caseing = []
char = []
labels = []
data = dataset[start:i]
start = i
for dt in data:
t,c,ch,l = dt
l = np.expand_dims(l,-1)
tokens.append(t)
caseing.append(c)
char.append(ch)
labels.append(l)
yield np.asarray(labels),np.asarray(tokens),np.asarray(caseing),np.asarray(char)
#Training of n epochs
for epoch in range(epochs):
print("Epoch %d/%d"%(epoch,epochs))
a = Progbar(len(train_batch_len))
for i,batch in enumerate(iterate_minibatches(train_batch,train_batch_len)):
labels, tokens, casing,char = batch
model.train_on_batch([tokens, casing,char], labels)
a.update(i)
a.update(i+1)
print(' ')
# Saving the model
model.save("model_data/model.h5")
###Output
_____no_output_____
###Markdown
Evaluating model accuracy. Using F1, precision and recall for Dev and Test sets.
###Code
def tag_dataset(dataset):
correctLabels = []
predLabels = []
b = Progbar(len(dataset))
for i,data in enumerate(dataset):
tokens, casing,char, labels = data
tokens = np.asarray([tokens])
casing = np.asarray([casing])
char = np.asarray([char])
pred = model.predict([tokens, casing,char], verbose=False)[0]
pred = pred.argmax(axis=-1) #Predict the classes
correctLabels.append(labels)
predLabels.append(pred)
b.update(i)
b.update(i+1)
return predLabels, correctLabels
#Method to compute the accuracy. Call predict_labels to get the labels for the dataset
def compute_f1(predictions, correct, idx2Label):
label_pred = []
for sentence in predictions:
label_pred.append([idx2Label[element] for element in sentence])
label_correct = []
for sentence in correct:
label_correct.append([idx2Label[element] for element in sentence])
#print label_pred
#print label_correct
prec = compute_precision(label_pred, label_correct)
rec = compute_precision(label_correct, label_pred)
f1 = 0
if (rec+prec) > 0:
f1 = 2.0 * prec * rec / (prec + rec);
return prec, rec, f1
def compute_precision(guessed_sentences, correct_sentences):
assert(len(guessed_sentences) == len(correct_sentences))
correctCount = 0
count = 0
for sentenceIdx in range(len(guessed_sentences)):
guessed = guessed_sentences[sentenceIdx]
correct = correct_sentences[sentenceIdx]
assert(len(guessed) == len(correct))
idx = 0
while idx < len(guessed):
if guessed[idx][0] == 'B': #A new chunk starts
count += 1
if guessed[idx] == correct[idx]:
idx += 1
correctlyFound = True
while idx < len(guessed) and guessed[idx][0] == 'I': #Scan until it no longer starts with I
if guessed[idx] != correct[idx]:
correctlyFound = False
idx += 1
if idx < len(guessed):
if correct[idx][0] == 'I': #The chunk in correct was longer
correctlyFound = False
if correctlyFound:
correctCount += 1
else:
idx += 1
else:
idx += 1
precision = 0
if count > 0:
precision = float(correctCount) / count
return precision
# Performance on dev dataset
predLabels, correctLabels = tag_dataset(dev_batch)
pre_dev, rec_dev, f1_dev = compute_f1(predLabels, correctLabels, idx2Label)
print("Dev-Data: Prec: %.3f, Rec: %.3f, F1: %.3f" % (pre_dev, rec_dev, f1_dev))
# Performance on test dataset
predLabels, correctLabels = tag_dataset(test_batch)
pre_test, rec_test, f1_test= compute_f1(predLabels, correctLabels, idx2Label)
print("Test-Data: Prec: %.3f, Rec: %.3f, F1: %.3f" % (pre_test, rec_test, f1_test))
###Output
1517/1517 [==============================] - 8s 5ms/step
Test-Data: Prec: 0.849, Rec: 0.853, F1: 0.851
###Markdown
Test with data
###Code
from nltk.tokenize import RegexpTokenizer
from nltk.corpus import stopwords
###Output
_____no_output_____
###Markdown
Defining class for testing.
###Code
import numpy as np
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
from keras.models import load_model
from keras.preprocessing.sequence import pad_sequences
from nltk import word_tokenize
class Parser:
def __init__(self):
# ::Hard coded char lookup ::
self.char2Idx = {"PADDING":0, "UNKNOWN":1}
for c in " 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ.,-_()[]{}!?:;#'\"/\\%$`&=*+@^~|":
self.char2Idx[c] = len(self.char2Idx)
# :: Hard coded case lookup ::
self.case2Idx = {'numeric': 0, 'allLower':1, 'allUpper':2, 'initialUpper':3, 'other':4, 'mainly_numeric':5, 'contains_digit': 6, 'PADDING_TOKEN':7}
def load_models(self, loc=None):
if not loc:
loc = os.path.join(os.path.expanduser('~'), '.ner_model')
self.model = load_model(os.path.join(loc,"model.h5"))
# loading word2Idx
self.word2Idx = np.load(os.path.join(loc,"word2Idx.npy")).item()
# loading idx2Label
self.idx2Label = np.load(os.path.join(loc,"idx2Label.npy")).item()
def getCasing(self,word, caseLookup):
casing = 'other'
numDigits = 0
for char in word:
if char.isdigit():
numDigits += 1
digitFraction = numDigits / float(len(word))
if word.isdigit(): #Is a digit
casing = 'numeric'
elif digitFraction > 0.5:
casing = 'mainly_numeric'
elif word.islower(): #All lower case
casing = 'allLower'
elif word.isupper(): #All upper case
casing = 'allUpper'
elif word[0].isupper(): #is a title, initial char upper, then all lower
casing = 'initialUpper'
elif numDigits > 0:
casing = 'contains_digit'
return caseLookup[casing]
def createTensor(self,sentence, word2Idx,case2Idx,char2Idx):
unknownIdx = word2Idx['UNKNOWN_TOKEN']
wordIndices = []
caseIndices = []
charIndices = []
for word,char in sentence:
word = str(word)
if word in word2Idx:
wordIdx = word2Idx[word]
elif word.lower() in word2Idx:
wordIdx = word2Idx[word.lower()]
else:
wordIdx = unknownIdx
charIdx = []
for x in char:
if x in char2Idx.keys():
charIdx.append(char2Idx[x])
else:
charIdx.append(char2Idx['UNKNOWN'])
wordIndices.append(wordIdx)
caseIndices.append(self.getCasing(word, case2Idx))
charIndices.append(charIdx)
return [wordIndices, caseIndices, charIndices]
def addCharInformation(self, sentence):
return [[word, list(str(word))] for word in sentence]
def padding(self,Sentence):
Sentence[2] = pad_sequences(Sentence[2],52,padding='post')
return Sentence
def predict(self,Sentence):
Sentence = words = word_tokenize(Sentence)
Sentence = self.addCharInformation(Sentence)
Sentence = self.padding(self.createTensor(Sentence,self.word2Idx,self.case2Idx,self.char2Idx))
tokens, casing,char = Sentence
tokens = np.asarray([tokens])
casing = np.asarray([casing])
char = np.asarray([char])
pred = self.model.predict([tokens, casing,char], verbose=False)[0]
pred = pred.argmax(axis=-1)
pred = [self.idx2Label[x].strip() for x in pred]
return list(zip(words,pred))
p = Parser()
p.load_models("model_data/")
from nltk import sent_tokenize
text_file = open("Input_sample.txt").read()
token_sent = sent_tokenize(text_file)
print(token_sent)
###Output
['MÉXICO.—Marcelo Ebrard va a renunciar a la Secretaría de Relaciones Exteriores, según versiones periodísticas que hoy fueron desmentidas por la SRE.', '“En el ámbito de la fantasía”\n\nRoberto Velasco Álvarez, vocero de la Cancillería, aseguró a través de Twitter que es totalmente falsa la versión que circuló sobre el tema.', 'El evento referido, señaló, sólo ocurrió en el ámbito de la fantasía, según lo publicado por El Universal.', 'Primero Gertz Moreno\n\nAyer lunes también circuló la versión sobre una supuesta renuncia del Fiscal General de la República, Alejandro Gertz Manero, por cuestiones de salud.', 'También fue desmentido por la dependencia.', 'Hoy en importante firma\n\nEsta mañana, en Palacio Nacional, con el presidente Andrés Manuel López Obrador como testigo de honor, Michelle Bachelet, alta comisionada de Naciones Unidas para los Derechos Humanos, y el canciller Marcelo Ebrard firmaron el Acuerdo para la formación en materia de derechos humanos y operación de acuerdo a estándares internacionales de derechos humanos a la Guardia Nacional.', 'En el acto, Ebrard dijo que México se perfila hacia un nuevo paradigma de respeto, promoción y protección de los derechos y las libertades fundamentales, que coloca el respeto irrestricto a los derechos humanos en el centro de la política exterior del Gobierno de la República.']
###Markdown
Input:
###Code
print(text_file)
outlist =[]
for t in token_sent:
t= unidecode.unidecode(t)
outlist.append(p.predict(t))
to_out=[]
for s in outlist:
for w in s:
if ('O') not in w:
print(w)
to_out.append(w)
with open('Output_sample.txt', 'w') as f:
for item in to_out:
f.write("\n")
for x in item:
f.write("%s " %x)
###Output
('MEXICO.', 'B-ORG')
('Marcelo', 'B-PER')
('Ebrard', 'I-PER')
('Secretaria', 'B-ORG')
('de', 'I-ORG')
('Relaciones', 'I-MISC')
('Exteriores', 'I-MISC')
('SRE', 'B-MISC')
('Roberto', 'B-PER')
('Velasco', 'I-PER')
('Alvarez', 'I-PER')
('Cancilleria', 'B-PER')
('Twitter', 'B-LOC')
('senalo', 'I-MISC')
('El', 'B-ORG')
('Universal', 'I-ORG')
('Gertz', 'B-PER')
('Moreno', 'I-PER')
('Ayer', 'I-PER')
('Fiscal', 'B-MISC')
('General', 'I-MISC')
('de', 'I-ORG')
('la', 'I-ORG')
('Republica', 'I-ORG')
('Alejandro', 'B-PER')
('Gertz', 'I-PER')
('Manero', 'I-PER')
('Tambien', 'B-PER')
('Esta', 'B-MISC')
('Palacio', 'B-LOC')
('Nacional', 'I-LOC')
('Andres', 'B-PER')
('Manuel', 'I-PER')
('Lopez', 'I-PER')
('Obrador', 'I-PER')
('Michelle', 'I-PER')
('Bachelet', 'I-PER')
('Naciones', 'B-MISC')
('Unidas', 'I-MISC')
('para', 'I-ORG')
('los', 'I-MISC')
('Derechos', 'I-MISC')
('Humanos', 'I-MISC')
('Marcelo', 'B-PER')
('Ebrard', 'I-PER')
('Acuerdo', 'B-MISC')
('Guardia', 'B-ORG')
('Nacional', 'I-ORG')
('Ebrard', 'B-MISC')
('dijo', 'I-MISC')
('Mexico', 'B-PER')
('Gobierno', 'B-ORG')
('de', 'I-ORG')
('la', 'I-ORG')
('Republica', 'I-ORG')
|
2_Plagiarism_Feature_Engineering(1).ipynb | ###Markdown
Plagiarism Detection, Feature EngineeringIn this project, you will be tasked with building a plagiarism detector that examines an answer text file and performs binary classification; labeling that file as either plagiarized or not, depending on how similar that text file is to a provided, source text. Your first task will be to create some features that can then be used to train a classification model. This task will be broken down into a few discrete steps:* Clean and pre-process the data.* Define features for comparing the similarity of an answer text and a source text, and extract similarity features.* Select "good" features, by analyzing the correlations between different features.* Create train/test `.csv` files that hold the relevant features and class labels for train/test data points.In the _next_ notebook, Notebook 3, you'll use the features and `.csv` files you create in _this_ notebook to train a binary classification model in a SageMaker notebook instance.You'll be defining a few different similarity features, as outlined in [this paper](https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c412841_developing-a-corpus-of-plagiarised-short-answers/developing-a-corpus-of-plagiarised-short-answers.pdf), which should help you build a robust plagiarism detector!To complete this notebook, you'll have to complete all given exercises and answer all the questions in this notebook.> All your tasks will be clearly labeled **EXERCISE** and questions as **QUESTION**.It will be up to you to decide on the features to include in your final training and test data.--- Read in the DataThe cell below will download the necessary, project data and extract the files into the folder `data/`.This data is a slightly modified version of a dataset created by Paul Clough (Information Studies) and Mark Stevenson (Computer Science), at the University of Sheffield. You can read all about the data collection and corpus, at [their university webpage](https://ir.shef.ac.uk/cloughie/resources/plagiarism_corpus.html). > **Citation for data**: Clough, P. and Stevenson, M. Developing A Corpus of Plagiarised Short Answers, Language Resources and Evaluation: Special Issue on Plagiarism and Authorship Analysis, In Press. [Download]
###Code
# NOTE:
# you only need to run this cell if you have not yet downloaded the data
# otherwise you may skip this cell or comment it out
!wget https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c4147f9_data/data.zip
!unzip data
# import libraries
import pandas as pd
import numpy as np
import os
###Output
_____no_output_____
###Markdown
This plagiarism dataset is made of multiple text files; each of these files has characteristics that are summarized in a `.csv` file named `file_information.csv`, which we can read in using `pandas`.
###Code
csv_file = 'data/file_information.csv'
plagiarism_df = pd.read_csv(csv_file)
# print out the first few rows of data info
plagiarism_df.head()
###Output
_____no_output_____
###Markdown
Types of PlagiarismEach text file is associated with one **Task** (task A-E) and one **Category** of plagiarism, which you can see in the above DataFrame. Tasks, A-EEach text file contains an answer to one short question; these questions are labeled as tasks A-E. For example, Task A asks the question: "What is inheritance in object oriented programming?" Categories of plagiarism Each text file has an associated plagiarism label/category:**1. Plagiarized categories: `cut`, `light`, and `heavy`.*** These categories represent different levels of plagiarized answer texts. `cut` answers copy directly from a source text, `light` answers are based on the source text but include some light rephrasing, and `heavy` answers are based on the source text, but *heavily* rephrased (and will likely be the most challenging kind of plagiarism to detect). **2. Non-plagiarized category: `non`.** * `non` indicates that an answer is not plagiarized; the Wikipedia source text is not used to create this answer. **3. Special, source text category: `orig`.*** This is a specific category for the original, Wikipedia source text. We will use these files only for comparison purposes. --- Pre-Process the DataIn the next few cells, you'll be tasked with creating a new DataFrame of desired information about all of the files in the `data/` directory. This will prepare the data for feature extraction and for training a binary, plagiarism classifier. EXERCISE: Convert categorical to numerical dataYou'll notice that the `Category` column in the data, contains string or categorical values, and to prepare these for feature extraction, we'll want to convert these into numerical values. Additionally, our goal is to create a binary classifier and so we'll need a binary class label that indicates whether an answer text is plagiarized (1) or not (0). Complete the below function `numerical_dataframe` that reads in a `file_information.csv` file by name, and returns a *new* DataFrame with a numerical `Category` column and a new `Class` column that labels each answer as plagiarized or not. Your function should return a new DataFrame with the following properties:* 4 columns: `File`, `Task`, `Category`, `Class`. The `File` and `Task` columns can remain unchanged from the original `.csv` file.* Convert all `Category` labels to numerical labels according to the following rules (a higher value indicates a higher degree of plagiarism): * 0 = `non` * 1 = `heavy` * 2 = `light` * 3 = `cut` * -1 = `orig`, this is a special value that indicates an original file.* For the new `Class` column * Any answer text that is not plagiarized (`non`) should have the class label `0`. * Any plagiarized answer texts should have the class label `1`. * And any `orig` texts will have a special label `-1`. Expected outputAfter running your function, you should get a DataFrame with rows that looks like the following: ``` File Task Category Class0 g0pA_taska.txt a 0 01 g0pA_taskb.txt b 3 12 g0pA_taskc.txt c 2 13 g0pA_taskd.txt d 1 14 g0pA_taske.txt e 0 0......99 orig_taske.txt e -1 -1```
###Code
# Read in a csv file and return a transformed dataframe
def numerical_dataframe(csv_file='data/file_information.csv'):
'''Reads in a csv file which is assumed to have `File`, `Category` and `Task` columns.
This function does two things:
1) converts `Category` column values to numerical values
2) Adds a new, numerical `Class` label column.
The `Class` column will label plagiarized answers as 1 and non-plagiarized as 0.
Source texts have a special label, -1.
:param csv_file: The directory for the file_information.csv file
:return: A dataframe with numerical categories and a new `Class` label column'''
df=pd.read_csv(csv_file)
df['Category']=[0 if df.loc[i,'Category']=='non'
else 1 if df.loc[i,'Category']=='heavy'
else 2 if df.loc[i,'Category']=='light'
else 3 if df.loc[i,'Category']=='cut'
else -1
for i in df.index]
df['Class']=[-1 if df.loc[i,'Category']==-1
else 0 if df.loc[i,'Category']==0
else 1
for i in df.index]
return df
###Output
_____no_output_____
###Markdown
Test cellsBelow are a couple of test cells. The first is an informal test where you can check that your code is working as expected by calling your function and printing out the returned result.The **second** cell below is a more rigorous test cell. The goal of a cell like this is to ensure that your code is working as expected, and to form any variables that might be used in _later_ tests/code, in this case, the data frame, `transformed_df`.> The cells in this notebook should be run in chronological order (the order they appear in the notebook). This is especially important for test cells.Often, later cells rely on the functions, imports, or variables defined in earlier cells. For example, some tests rely on previous tests to work.These tests do not test all cases, but they are a great way to check that you are on the right track!
###Code
# informal testing, print out the results of a called function
# create new `transformed_df`
transformed_df = numerical_dataframe(csv_file ='data/file_information.csv')
# check work
# check that all categories of plagiarism have a class label = 1
transformed_df.head(10)
transformed_df.tail(10)
# test cell that creates `transformed_df`, if tests are passed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# importing tests
import problem_unittests as tests
# test numerical_dataframe function
tests.test_numerical_df(numerical_dataframe)
# if above test is passed, create NEW `transformed_df`
transformed_df = numerical_dataframe(csv_file ='data/file_information.csv')
# check work
print('\nExample data: ')
transformed_df.head()
###Output
Tests Passed!
Example data:
###Markdown
Text Processing & Splitting Data Recall that the goal of this project is to build a plagiarism classifier. At its heart, this is a comparison task: one that looks at a given answer and a source text, compares them, and predicts whether the answer has plagiarized from the source. To effectively do this comparison and train a classifier, we'll need to do a few more things: pre-process all of our text data and prepare the text files (in this case, the 95 answer files and 5 original source files) to be easily compared, and split our data into a `train` and `test` set that can be used to train a classifier and evaluate it, respectively. To this end, you've been provided code that adds additional information to your `transformed_df` from above. The next two cells need not be changed; they add two additional columns to the `transformed_df`: 1. A `Text` column; this holds all the lowercase text for a `File`, with extraneous punctuation removed. 2. A `Datatype` column; this is a string value `train`, `test`, or `orig` that labels a data point as part of our train or test set. The details of how these additional columns are created can be found in the `helpers.py` file in the project directory. You're encouraged to read through that file to see exactly how text is processed and how data is split. Run the cells below to get a `complete_df` that has all the information you need to proceed with plagiarism detection and feature engineering.
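For intuition only, a cleaning step of this kind (lowercasing and removing extraneous punctuation) might look roughly like the sketch below; the function name `process_text` is hypothetical and the exact rules used by `helpers.py` may differ:

```python
import re

def process_text(text):
    # hypothetical cleaning: lowercase, drop punctuation, collapse whitespace
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

print(process_text("Inheritance is a BASIC concept, of object-oriented programming!"))
# 'inheritance is a basic concept of object oriented programming'
```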
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import helpers
# create a text column
text_df = helpers.create_text_column(transformed_df)
text_df.head()
# after running the cell above
# check out the processed text for a single file, by row index
row_idx = 0 # feel free to change this index
sample_text = text_df.iloc[0]['Text']
print('Sample processed text:\n\n', sample_text)
###Output
Sample processed text:
inheritance is a basic concept of object oriented programming where the basic idea is to create new classes that add extra detail to existing classes this is done by allowing the new classes to reuse the methods and variables of the existing classes and new methods and classes are added to specialise the new class inheritance models the is kind of relationship between entities or objects for example postgraduates and undergraduates are both kinds of student this kind of relationship can be visualised as a tree structure where student would be the more general root node and both postgraduate and undergraduate would be more specialised extensions of the student node or the child nodes in this relationship student would be known as the superclass or parent class whereas postgraduate would be known as the subclass or child class because the postgraduate class extends the student class inheritance can occur on several layers where if visualised would display a larger tree structure for example we could further extend the postgraduate node by adding two extra extended classes to it called msc student and phd student as both these types of student are kinds of postgraduate student this would mean that both the msc student and phd student classes would inherit methods and variables from both the postgraduate and student classes
###Markdown
Split data into training and test sets The next cell will add a `Datatype` column to a given DataFrame to indicate if the record is: * `train` - Training data, for model training.* `test` - Testing data, for model evaluation.* `orig` - The task's original answer from Wikipedia. Stratified sampling The given code uses a helper function which you can view in the `helpers.py` file in the main project directory. This implements [stratified random sampling](https://en.wikipedia.org/wiki/Stratified_sampling) to randomly split data by task & plagiarism amount. Stratified sampling ensures that we get training and test data that is fairly evenly distributed across task & plagiarism combinations. Approximately 26% of the data is held out for testing and 74% of the data is used for training. The function **train_test_dataframe** takes in a DataFrame that it assumes has `Task` and `Category` columns, and returns a modified frame that indicates which `Datatype` (train, test, or orig) a file falls into. This sampling will change slightly based on a passed-in *random_seed*. Due to a small sample size, this stratified random sampling will provide more stable results for a binary plagiarism classifier. Stability here means smaller *variance* in the accuracy of the classifier, given a random seed.
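As a rough illustration of the idea (not the actual `train_test_dataframe` implementation, which lives in `helpers.py`), a stratified split by `Task` and `Category` could be sketched as follows; `stratified_split` and its parameters are hypothetical:

```python
import numpy as np

def stratified_split(df, test_fraction=0.26, random_seed=1):
    """Hypothetical sketch: mark ~26% of each (Task, Category) answer group as 'test'."""
    df = df.copy()                      # expects columns 'Task' and 'Category', like text_df
    df['Datatype'] = 'train'
    rng = np.random.RandomState(random_seed)
    answers = df[df['Category'] != -1]  # keep the original source texts out of the split
    for _, group in answers.groupby(['Task', 'Category']):
        n_test = max(1, int(round(test_fraction * len(group))))
        test_idx = rng.choice(group.index, size=n_test, replace=False)
        df.loc[test_idx, 'Datatype'] = 'test'
    df.loc[df['Category'] == -1, 'Datatype'] = 'orig'
    return df
```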
###Code
random_seed = 1 # can change; set for reproducibility
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
import helpers
# create new df with Datatype (train, test, orig) column
# pass in `text_df` from above to create a complete dataframe, with all the information you need
complete_df = helpers.train_test_dataframe(text_df, random_seed=random_seed)
# check results
complete_df.head(10)
###Output
_____no_output_____
###Markdown
Determining Plagiarism Now that you've prepared this data and created a `complete_df` of information, including the text and class associated with each file, you can move on to the task of extracting similarity features that will be useful for plagiarism classification. > Note: The following code exercises assume that the `complete_df`, as it exists now, will **not** have its existing columns modified. The `complete_df` should always include the columns: `['File', 'Task', 'Category', 'Class', 'Text', 'Datatype']`. You can add additional columns, and you can create any new DataFrames you need by copying the parts of the `complete_df` as long as you do not modify the existing values directly. --- Similarity Features One of the ways we might go about detecting plagiarism is by computing **similarity features** that measure how similar a given answer text is to the original Wikipedia source text (for a specific task, a-e). The similarity features you will use are informed by [this paper on plagiarism detection](https://s3.amazonaws.com/video.udacity-data.com/topher/2019/January/5c412841_developing-a-corpus-of-plagiarised-short-answers/developing-a-corpus-of-plagiarised-short-answers.pdf). > In this paper, researchers created features called **containment** and **longest common subsequence**. Using these features as input, you will train a model to distinguish between plagiarized and not-plagiarized text files. Feature Engineering Let's talk a bit more about the features we want to include in a plagiarism detection model and how to calculate such features. In the following explanations, I'll refer to a submitted text file as a **Student Answer Text (A)** and the original Wikipedia source file (that we want to compare that answer to) as the **Wikipedia Source Text (S)**. Containment Your first task will be to create **containment features**. To understand containment, let's first revisit a definition of [n-grams](https://en.wikipedia.org/wiki/N-gram). An *n-gram* is a sequential word grouping. For example, in a line like "bayes rule gives us a way to combine prior knowledge with new information," a 1-gram is just one word, like "bayes." A 2-gram might be "bayes rule" and a 3-gram might be "combine prior knowledge."> Containment is defined as the **intersection** of the n-gram word count of the Wikipedia Source Text (S) with the n-gram word count of the Student Answer Text (A), *divided* by the n-gram word count of the Student Answer Text.$$ \frac{\sum{count(\text{ngram}_{A}) \cap count(\text{ngram}_{S})}}{\sum{count(\text{ngram}_{A})}} $$If the two texts have no n-grams in common, the containment will be 0, but if _all_ their n-grams intersect then the containment will be 1. Intuitively, you can see how having longer n-grams in common might be an indication of cut-and-paste plagiarism. In this project, it will be up to you to decide on the appropriate `n` or several `n`'s to use in your final model. EXERCISE: Create containment features Given the `complete_df` that you've created, you should have all the information you need to compare any Student Answer Text (A) with its appropriate Wikipedia Source Text (S). 
An answer for task A should be compared to the source text for task A, just as answers to tasks B, C, D, and E should be compared to the corresponding original source text.In this exercise, you'll complete the function, `calculate_containment` which calculates containment based upon the following parameters:* A given DataFrame, `df` (which is assumed to be the `complete_df` from above)* An `answer_filename`, such as 'g0pB_taskd.txt' * An n-gram length, `n` Containment calculationThe general steps to complete this function are as follows:1. From *all* of the text files in a given `df`, create an array of n-gram counts; it is suggested that you use a [CountVectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.CountVectorizer.html) for this purpose.2. Get the processed answer and source texts for the given `answer_filename`.3. Calculate the containment between an answer and source text according to the following equation. >$$ \frac{\sum{count(\text{ngram}_{A}) \cap count(\text{ngram}_{S})}}{\sum{count(\text{ngram}_{A})}} $$ 4. Return that containment value.You are encouraged to write any helper functions that you need to complete the function below.
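Before writing the full function in the next cell, it may help to see the containment equation applied to a toy pair of sentences; this is only an illustration (the texts below are made up), not the exercise solution:

```python
from sklearn.feature_extraction.text import CountVectorizer
import numpy as np

a_text = "this is an answer text about containment"        # made-up answer
s_text = "this is a source text about n gram containment"  # made-up source

# count 1-grams in both texts over a shared vocabulary
counts = CountVectorizer(analyzer='word', ngram_range=(1, 1))
ngram_array = counts.fit_transform([a_text, s_text]).toarray()

# intersection of counts, normalized by the answer's n-gram count
intersection = np.minimum(ngram_array[0], ngram_array[1]).sum()
containment = intersection / ngram_array[0].sum()
print(containment)  # ~0.71 here: 5 of the answer's 7 counted words also appear in the source
```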
###Code
# Calculate the ngram containment for one answer file/source file pair in a df
from sklearn.feature_extraction.text import CountVectorizer
def calculate_containment(df, n, answer_filename):
'''Calculates the containment between a given answer text and its associated source text.
This function creates a count of ngrams (of a size, n) for each text file in our data.
Then calculates the containment by finding the ngram count for a given answer text,
and its associated source text, and calculating the normalized intersection of those counts.
:param df: A dataframe with columns,
'File', 'Task', 'Category', 'Class', 'Text', and 'Datatype'
:param n: An integer that defines the ngram size
:param answer_filename: A filename for an answer text in the df, ex. 'g0pB_taskd.txt'
:return: A single containment value that represents the similarity
between an answer text and its source text.
'''
counts = CountVectorizer(analyzer='word', ngram_range=(n,n))
a_text = df.loc[df["File"] == answer_filename]["Text"].values[0]
a_task = df.loc[df["File"] == answer_filename]["Task"].values[0]
s_text = df.loc[(df["Task"] == a_task) & (df["Datatype"] == 'orig')]["Text"].values[0]
ngrams = counts.fit_transform([a_text, s_text])
ngram_array = ngrams.toarray()
result=np.amin(ngram_array,axis=0).sum()/ngram_array[0].sum()
return result
###Output
_____no_output_____
###Markdown
Test cells After you've implemented the containment function, you can test out its behavior. The cell below iterates through the first few files, and calculates the original category _and_ containment values for a specified n and file. > If you've implemented this correctly, you should see that non-plagiarized examples have low or close to 0 containment values and that plagiarized examples have higher containment values, closer to 1. Note what happens when you change the value of n. I recommend applying your code to multiple files and comparing the resultant containment values. You should see that the highest containment values correspond to files with the highest category (`cut`) of plagiarism level.
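If you want to see how the value of `n` affects the result for a single file, a quick optional check (assuming the `complete_df` and `calculate_containment` defined above) could look like this:

```python
# Optional check: containment for one answer file at several n-gram sizes
# (uses the complete_df and calculate_containment defined above).
filename = complete_df.loc[0, 'File']   # any answer file will do
for n in range(1, 6):
    c = calculate_containment(complete_df, n, filename)
    print('n = {}, containment = {:.4f}'.format(n, c))
```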
###Code
# select a value for n
n = 3
# indices for first few files
test_indices = range(5)
# iterate through files and calculate containment
category_vals = []
containment_vals = []
for i in test_indices:
# get level of plagiarism for a given file index
category_vals.append(complete_df.loc[i, 'Category'])
# calculate containment for given file and n
filename = complete_df.loc[i, 'File']
c = calculate_containment(complete_df, n, filename)
containment_vals.append(c)
# print out result, does it make sense?
print('Original category values: \n', category_vals)
print()
print(str(n)+'-gram containment values: \n', containment_vals)
# run this test cell
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test containment calculation
# params: complete_df from before, and containment function
tests.test_containment(complete_df, calculate_containment)
###Output
Tests Passed!
###Markdown
QUESTION 1: Why can we calculate containment features across *all* data (training & test), prior to splitting the DataFrame for modeling? That is, what about the containment calculation means that the test and training data do not influence each other? **Answer:** The source texts do not change across the train and test data, and each answer's containment is computed only against its own source text. The test and train splits are therefore independent of each other and do not affect each other's containment results. --- Longest Common Subsequence Containment is a good way to find overlap in word usage between two documents; it may help identify cases of cut-and-paste as well as paraphrased levels of plagiarism. Since plagiarism is a fairly complex task with varying levels, it's often useful to include other measures of similarity. The paper also discusses a feature called **longest common subsequence**. > The longest common subsequence is the longest string of words (or letters) that are *the same* between the Wikipedia Source Text (S) and the Student Answer Text (A). This value is also normalized by dividing by the total number of words (or letters) in the Student Answer Text. In this exercise, we'll ask you to calculate the longest common subsequence of words between two texts. EXERCISE: Calculate the longest common subsequence Complete the function `lcs_norm_word`; this should calculate the *longest common subsequence* of words between a Student Answer Text and corresponding Wikipedia Source Text. It may be helpful to think of this in a concrete example. A Longest Common Subsequence (LCS) problem may look as follows:* Given two texts: text A (answer text) of length n, and string S (original source text) of length m. Our goal is to produce their longest common subsequence of words: the longest sequence of words that appear left-to-right in both texts (though the words don't have to be in continuous order).* Consider: * A = "i think pagerank is a link analysis algorithm used by google that uses a system of weights attached to each element of a hyperlinked set of documents" * S = "pagerank is a link analysis algorithm used by the google internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents"* In this case, we can see that the start of each sentence is fairly similar, having overlap in the sequence of words, "pagerank is a link analysis algorithm used by" before diverging slightly. Then we **continue moving left-to-right along both texts** until we see the next common sequence; in this case it is only one word, "google". Next we find "that" and "a" and finally the same ending "to each element of a hyperlinked set of documents".* Below is a clear visual of how these sequences were found, sequentially, in each text.* Now, those words appear in left-to-right order in each document, sequentially, and even though there are some words in between, we count this as the longest common subsequence between the two texts. * If I count up each word that I found in common I get the value 20. **So, LCS has length 20**. * Next, to normalize this value, divide by the total length of the student answer; in this example that length is only 27. **So, the function `lcs_norm_word` should return the value `20/27` or about `0.7408`.** In this way, LCS is a great indicator of cut-and-paste plagiarism or if someone has referenced the same source text multiple times in an answer. LCS, dynamic programming If you read through the scenario above, you can see that this algorithm depends on looking at two texts and comparing them word by word.
You can solve this problem in multiple ways. First, it may be useful to `.split()` each text into lists of words to compare. Then, you can iterate through each word in the texts and compare them, adding to your value for LCS as you go. The method I recommend for implementing an efficient LCS algorithm is using a matrix and dynamic programming. **Dynamic programming** is all about breaking a larger problem into a smaller set of subproblems, and building up a complete result without having to repeat any subproblems. This approach assumes that you can split up a large LCS task into a combination of smaller LCS tasks. Let's look at a simple example that compares letters:* A = "ABCD"* S = "BD" We can see right away that the longest subsequence of _letters_ here is 2 (B and D are in sequence in both strings). And we can calculate this by looking at relationships between each letter in the two strings, A and S. Here, I have a matrix with the letters of A on top and the letters of S on the left side: This starts out as a matrix that has as many columns and rows as letters in the strings A and S **+1** additional row and column, filled with zeros on the top and left sides. So, in this case, instead of a 2x4 matrix it is a 3x5. Now, we can fill this matrix up by breaking it into smaller LCS problems. For example, let's first look at the shortest substrings: the starting letter of A and S. We'll first ask, what is the Longest Common Subsequence between these two letters "A" and "B"? **Here, the answer is zero and we fill in the corresponding grid cell with that value.** Then, we ask the next question, what is the LCS between "AB" and "B"? **Here, we have a match, and can fill in the appropriate value 1**. If we continue, we get to a final matrix that looks as follows, with a **2** in the bottom right corner. The final LCS will be that value **2** *normalized* by the number of n-grams in A. So, our normalized value is 2/4 = **0.5**. The matrix rules One thing to notice here is that you can efficiently fill up this matrix one cell at a time. Each grid cell only depends on the values in the grid cells that are directly on top and to the left of it, or on the diagonal/top-left. The rules are as follows:* Start with a matrix that has one extra row and column of zeros.* As you traverse your string: * If there is a match, fill that grid cell with the value to the top-left of that cell *plus* one. So, in our case, when we found a matching B-B, we added +1 to the value in the top-left of the matching cell, 0. * If there is not a match, take the *maximum* value from either directly to the left or the top cell, and carry that value over to the non-match cell. After completely filling the matrix, **the bottom-right cell will hold the non-normalized LCS value**. This matrix treatment can be applied to a set of words instead of letters. Your function should apply this to the words in two texts and return the normalized LCS value.
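As a quick, minimal illustration of these matrix rules on the letter-level example above ("ABCD" vs. "BD"), separate from the word-level function you'll implement below:

```python
# Letter-level version of the "ABCD" vs "BD" example, using the matrix rules above.
A = "ABCD"   # answer string (columns)
S = "BD"     # source string (rows)

matrix = [[0] * (len(A) + 1) for _ in range(len(S) + 1)]
for i in range(1, len(S) + 1):
    for j in range(1, len(A) + 1):
        if S[i - 1] == A[j - 1]:
            matrix[i][j] = matrix[i - 1][j - 1] + 1                  # match: diagonal + 1
        else:
            matrix[i][j] = max(matrix[i - 1][j], matrix[i][j - 1])   # carry over the max

print(matrix[-1][-1])            # 2, the non-normalized LCS
print(matrix[-1][-1] / len(A))   # 0.5, normalized by the answer length
```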
###Code
# Compute the normalized LCS given an answer text and a source text
def lcs_norm_word(answer_text, source_text):
'''Computes the longest common subsequence of words in two texts; returns a normalized value.
:param answer_text: The pre-processed text for an answer text
:param source_text: The pre-processed text for an answer's associated source text
:return: A normalized LCS value'''
    # split both texts into lists of words
    answer_words = answer_text.split()
    source_words = source_text.split()
    # matrix dimensions: one extra row and column of zeros
    n_a = len(answer_words) + 1
    n_s = len(source_words) + 1
    matrix = [[0] * n_a for _ in range(n_s)]
    # rows follow the source words, columns follow the answer words
    for i in range(1, n_s):
        for j in range(1, n_a):
            if answer_words[j - 1] == source_words[i - 1]:
                # match: top-left value plus one
                matrix[i][j] = 1 + matrix[i - 1][j - 1]
            else:
                # no match: carry over the max of the top and left cells
                matrix[i][j] = max(matrix[i - 1][j], matrix[i][j - 1])
    # normalize by the number of words in the answer text
    return matrix[-1][-1] / (n_a - 1)
###Output
_____no_output_____
###Markdown
Test cellsLet's start by testing out your code on the example given in the initial description.In the below cell, we have specified strings A (answer text) and S (original source text). We know that these texts have 20 words in common and the submitted answer is 27 words long, so the normalized, longest common subsequence should be 20/27.
###Code
# Run the test scenario from above
# does your function return the expected value?
A = "i think pagerank is a link analysis algorithm used by google that uses a system of weights attached to each element of a hyperlinked set of documents"
S = "pagerank is a link analysis algorithm used by the google internet search engine that assigns a numerical weighting to each element of a hyperlinked set of documents"
# calculate LCS
lcs = lcs_norm_word(A, S)
print('LCS = ', lcs)
# expected value test
assert lcs==20/27., "Incorrect LCS value, expected about 0.7408, got "+str(lcs)
print('Test passed!')
###Output
LCS = 0.7407407407407407
Test passed!
###Markdown
This next cell runs a more rigorous test.
###Code
# run test cell
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# test lcs implementation
# params: complete_df from before, and lcs_norm_word function
tests.test_lcs(complete_df, lcs_norm_word)
###Output
Tests Passed!
###Markdown
Finally, take a look at a few resultant values for `lcs_norm_word`. Just like before, you should see that higher values correspond to higher levels of plagiarism.
###Code
# test on your own
test_indices = range(5) # look at first few files
category_vals = []
lcs_norm_vals = []
# iterate through first few docs and calculate LCS
for i in test_indices:
category_vals.append(complete_df.loc[i, 'Category'])
# get texts to compare
answer_text = complete_df.loc[i, 'Text']
task = complete_df.loc[i, 'Task']
# we know that source texts have Class = -1
orig_rows = complete_df[(complete_df['Class'] == -1)]
orig_row = orig_rows[(orig_rows['Task'] == task)]
source_text = orig_row['Text'].values[0]
# calculate lcs
lcs_val = lcs_norm_word(answer_text, source_text)
lcs_norm_vals.append(lcs_val)
# print out result, does it make sense?
print('Original category values: \n', category_vals)
print()
print('Normalized LCS values: \n', lcs_norm_vals)
###Output
Original category values:
[0, 3, 2, 1, 0]
Normalized LCS values:
[0.1917808219178082, 0.8207547169811321, 0.8464912280701754, 0.3160621761658031, 0.24257425742574257]
###Markdown
--- Create All FeaturesNow that you've completed the feature calculation functions, it's time to actually create multiple features and decide on which ones to use in your final model! In the below cells, you're provided two helper functions to help you create multiple features and store those in a DataFrame, `features_df`. Creating multiple containment featuresYour completed `calculate_containment` function will be called in the next cell, which defines the helper function `create_containment_features`. > This function returns a list of containment features, calculated for a given `n` and for *all* files in a df (assumed to be the `complete_df`).For our original files, the containment value is set to a special value, -1.This function gives you the ability to easily create several containment features, of different n-gram lengths, for each of our text files.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Function returns a list of containment features, calculated for a given n
# Should return a list of length 100 for all files in a complete_df
def create_containment_features(df, n, column_name=None):
containment_values = []
if(column_name==None):
column_name = 'c_'+str(n) # c_1, c_2, .. c_n
# iterates through dataframe rows
for i in df.index:
file = df.loc[i, 'File']
# Computes features using calculate_containment function
if df.loc[i,'Category'] > -1:
c = calculate_containment(df, n, file)
containment_values.append(c)
# Sets value to -1 for original tasks
else:
containment_values.append(-1)
print(str(n)+'-gram containment features created!')
return containment_values
###Output
_____no_output_____
###Markdown
Creating LCS featuresBelow, your completed `lcs_norm_word` function is used to create a list of LCS features for all the answer files in a given DataFrame (again, this assumes you are passing in the `complete_df`). It assigns a special value, -1, for our original, source files.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Function creates lcs feature and add it to the dataframe
def create_lcs_features(df, column_name='lcs_word'):
lcs_values = []
# iterate through files in dataframe
for i in df.index:
# Computes LCS_norm words feature using function above for answer tasks
if df.loc[i,'Category'] > -1:
# get texts to compare
answer_text = df.loc[i, 'Text']
task = df.loc[i, 'Task']
# we know that source texts have Class = -1
orig_rows = df[(df['Class'] == -1)]
orig_row = orig_rows[(orig_rows['Task'] == task)]
source_text = orig_row['Text'].values[0]
# calculate lcs
lcs = lcs_norm_word(answer_text, source_text)
lcs_values.append(lcs)
# Sets to -1 for original tasks
else:
lcs_values.append(-1)
print('LCS features created!')
return lcs_values
###Output
_____no_output_____
###Markdown
EXERCISE: Create a features DataFrame by selecting an `ngram_range`The paper suggests calculating the following features: containment *1-gram to 5-gram* and *longest common subsequence*. > In this exercise, you can choose to create even more features, for example from *1-gram to 7-gram* containment features and *longest common subsequence*. You'll want to create at least 6 features to choose from as you think about which to give to your final, classification model. Defining and comparing at least 6 different features allows you to discard any features that seem redundant, and choose to use the best features for your final model!In the below cell **define an n-gram range**; these will be the n's you use to create n-gram containment features. The rest of the feature creation code is provided.
###Code
# Define an ngram range
ngram_range = range(1,7)
# The following code may take a minute to run, depending on your ngram_range
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
features_list = []
# Create features in a features_df
all_features = np.zeros((len(ngram_range)+1, len(complete_df)))
# Calculate features for containment for ngrams in range
i=0
for n in ngram_range:
column_name = 'c_'+str(n)
features_list.append(column_name)
# create containment features
all_features[i]=np.squeeze(create_containment_features(complete_df, n))
i+=1
# Calculate features for LCS_Norm Words
features_list.append('lcs_word')
all_features[i]= np.squeeze(create_lcs_features(complete_df))
# create a features dataframe
features_df = pd.DataFrame(np.transpose(all_features), columns=features_list)
# Print all features/columns
print()
print('Features: ', features_list)
print()
# print some results
features_df.head(10)
###Output
_____no_output_____
###Markdown
Correlated FeaturesYou should use feature correlation across the *entire* dataset to determine which features are ***too*** **highly-correlated** with each other to include both features in a single model. For this analysis, you can use the *entire* dataset due to the small sample size we have. All of our features try to measure the similarity between two texts. Since our features are designed to measure similarity, it is expected that these features will be highly-correlated. Many classification models, for example a Naive Bayes classifier, rely on the assumption that features are *not* highly correlated; highly-correlated features may over-inflate the importance of a single feature. So, you'll want to choose your features based on which pairings have the lowest correlation. These correlation values range between 0 and 1; from low to high correlation, and are displayed in a [correlation matrix](https://www.displayr.com/what-is-a-correlation-matrix/), below.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Create correlation matrix for just Features to determine different models to test
corr_matrix = features_df.corr().abs().round(2)
# display shows all of a dataframe
display(corr_matrix)
###Output
_____no_output_____
###Markdown
EXERCISE: Create selected train/test dataComplete the `train_test_data` function below. This function should take in the following parameters:* `complete_df`: A DataFrame that contains all of our processed text data, file info, datatypes, and class labels* `features_df`: A DataFrame of all calculated features, such as containment for ngrams, n = 1-5, and lcs values for each text file listed in the `complete_df` (this was created in the above cells)* `selected_features`: A list of feature column names, ex. `['c_1', 'lcs_word']`, which will be used to select the final features in creating train/test sets of data.It should return two tuples:* `(train_x, train_y)`, selected training features and their corresponding class labels (0/1)* `(test_x, test_y)`, selected test features and their corresponding class labels (0/1)** Note: x and y should be arrays of feature values and numerical class labels, respectively; not DataFrames.**Looking at the above correlation matrix, you should decide on a **cutoff** correlation value, less than 1.0, to determine which sets of features are *too* highly-correlated to be included in the final training and test data. If you cannot find features that are less correlated than some cutoff value, it is suggested that you increase the number of features (longer n-grams) to choose from or use *only one or two* features in your final model to avoid introducing highly-correlated features.Recall that the `complete_df` has a `Datatype` column that indicates whether data should be `train` or `test` data; this should help you split the data appropriately.
###Code
# Takes in dataframes and a list of selected features (column names)
# and returns (train_x, train_y), (test_x, test_y)
def train_test_data(complete_df, features_df, selected_features):
'''Gets selected training and test features from given dataframes, and
returns tuples for training and test features and their corresponding class labels.
:param complete_df: A dataframe with all of our processed text data, datatypes, and labels
:param features_df: A dataframe of all computed, similarity features
:param selected_features: An array of selected features that correspond to certain columns in `features_df`
:return: training and test features and labels: (train_x, train_y), (test_x, test_y)'''
# get the training features
train_x = (features_df[selected_features].iloc[complete_df.index[complete_df["Datatype"] == 'train']]).values
# And training class labels (0 or 1)
train_y = ((complete_df[complete_df["Datatype"] == 'train'])['Class']).values
# get the test features and labels
test_x = (features_df[selected_features].iloc[complete_df.index[complete_df["Datatype"] == 'test']]).values
test_y = ((complete_df[complete_df["Datatype"] == 'test'])['Class']).values
return (train_x, train_y), (test_x, test_y)
###Output
_____no_output_____
###Markdown
Test cellsBelow, test out your implementation and create the final train/test data.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
test_selection = list(features_df)[:2] # first couple columns as a test
# test that the correct train/test data is created
(train_x, train_y), (test_x, test_y) = train_test_data(complete_df, features_df, test_selection)
# params: generated train/test data
tests.test_data_split(train_x, train_y, test_x, test_y)
###Output
Tests Passed!
###Markdown
EXERCISE: Select "good" featuresIf you passed the test above, you can create your own train/test data, below. Define a list of features you'd like to include in your final model, `selected_features`; this is a list of the feature names you want to include.
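If you prefer to select features programmatically, here is a minimal, illustrative sketch of one possible approach: greedily keep a feature only if its correlation with every already-kept feature is below some cutoff. The cutoff of 0.97 and the greedy loop are assumptions for this sketch, not a required method.
###Code
# Illustrative sketch: greedy selection under an assumed correlation cutoff
cutoff = 0.97
kept_features = []
for col in corr_matrix.columns:
    # keep this feature only if it is not too correlated with anything already kept
    if all(corr_matrix.loc[col, kept] < cutoff for kept in kept_features):
        kept_features.append(col)
print('Features under the cutoff: ', kept_features)
###Output
_____no_output_____
###Markdown
Below, the features are selected by hand; `c_1`, `c_5`, and `lcs_word` are consistent with this kind of cutoff reasoning.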
###Code
# Select your list of features, this should be column names from features_df
# ex. ['c_1', 'lcs_word']
selected_features = ['c_1', 'c_5', 'lcs_word']
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
(train_x, train_y), (test_x, test_y) = train_test_data(complete_df, features_df, selected_features)
# check that division of samples seems correct
# these should add up to 95 (100 - 5 original files)
print('Training size: ', len(train_x))
print('Test size: ', len(test_x))
print()
print('Training df sample: \n', train_x[:10])
###Output
Training size: 70
Test size: 25
Training df sample:
[[0.39814815 0. 0.19178082]
[0.86936937 0.44954128 0.84649123]
[0.59358289 0.08196721 0.31606218]
[0.54450262 0. 0.24257426]
[0.32950192 0. 0.16117216]
[0.59030837 0. 0.30165289]
[0.75977654 0.24571429 0.48430493]
[0.51612903 0. 0.27083333]
[0.44086022 0. 0.22395833]
[0.97945205 0.78873239 0.9 ]]
###Markdown
Question 2: How did you decide on which features to include in your final model?
###Code
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(10,6))
s = sns.heatmap(corr_matrix, annot = True, cmap = 'RdBu', vmin = 0.8, vmax=1)
s.set_yticklabels(s.get_yticklabels(), rotation = 0, fontsize =12)
s.set_xticklabels(s.get_xticklabels(), rotation = 90, fontsize =12)
plt.title('Correlation Heatmap', fontsize =18)
plt.show()
###Output
_____no_output_____
###Markdown
**Answer:** The selected features are `c_1`, `c_5`, and `lcs_word`. Looking at the heatmap, the pairs c_3/c_4, c_4/c_5, and c_5/c_6 are the most highly correlated features. We drop c_2 because its correlation is above the chosen threshold (0.97), and we keep c_1 and c_5, which have comparatively low correlation with each other, plus lcs_word because it is a different measurement of similarity. --- Creating Final Data FilesNow, you are almost ready to move on to training a model in SageMaker!You'll want to access your train and test data in SageMaker and upload it to S3. In this project, SageMaker will expect the following format for your train/test data:* Training and test data should be saved in one `.csv` file each, e.g. `train.csv` and `test.csv`* These files should have class labels in the first column and features in the rest of the columnsThis format follows the practice outlined in the [SageMaker documentation](https://docs.aws.amazon.com/sagemaker/latest/dg/cdf-training.html), which reads: "Amazon SageMaker requires that a CSV file doesn't have a header record and that the target variable [class label] is in the first column." EXERCISE: Create csv filesDefine a function that takes in x (features) and y (labels) and saves them to one `.csv` file at the path `data_dir/filename`.It may be useful to use pandas to merge your features and labels into one DataFrame and then convert that into a csv file. You can make sure to get rid of any incomplete rows, in a DataFrame, by using `dropna`.
###Code
def make_csv(x, y, filename, data_dir):
'''Merges features and labels and converts them into one csv file with labels in the first column.
:param x: Data features
:param y: Data labels
:param file_name: Name of csv file, ex. 'train.csv'
:param data_dir: The directory where files will be saved
'''
# make data dir, if it does not exist
if not os.path.exists(data_dir):
os.makedirs(data_dir)
df = pd.concat([pd.DataFrame(y), pd.DataFrame(x)], axis=1).dropna()
df.to_csv(os.path.join(data_dir, filename), index=False, header=False)
# nothing is returned, but a print statement indicates that the function has run
print('Path created: '+str(data_dir)+'/'+str(filename))
###Output
_____no_output_____
###Markdown
Test cellsTest that your code produces the correct format for a `.csv` file, given some text features and labels.
###Code
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
fake_x = [ [0.39814815, 0.0001, 0.19178082],
[0.86936937, 0.44954128, 0.84649123],
[0.44086022, 0., 0.22395833] ]
fake_y = [0, 1, 1]
make_csv(fake_x, fake_y, filename='to_delete.csv', data_dir='test_csv')
# read in and test dimensions
fake_df = pd.read_csv('test_csv/to_delete.csv', header=None)
# check shape
assert fake_df.shape==(3, 4), \
'The file should have as many rows as data_points and as many columns as features+1 (for indices).'
# check that first column = labels
assert np.all(fake_df.iloc[:,0].values==fake_y), 'First column is not equal to the labels, fake_y.'
print('Tests passed!')
# delete the test csv file, generated above
! rm -rf test_csv
###Output
_____no_output_____
###Markdown
If you've passed the tests above, run the following cell to create `train.csv` and `test.csv` files in a directory that you specify! This will save the data in a local directory. Remember the name of this directory because you will reference it again when uploading this data to S3.
###Code
# can change directory, if you want
data_dir = 'plagiarism_data'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
make_csv(train_x, train_y, filename='train.csv', data_dir=data_dir)
make_csv(test_x, test_y, filename='test.csv', data_dir=data_dir)
###Output
Path created: plagiarism_data/train.csv
Path created: plagiarism_data/test.csv
|
rebar_toy.ipynb | ###Markdown
Implementation of REBAR (https://arxiv.org/abs/1703.07370), a low-variance, unbiased gradient estimator for discrete latent variable models. This notebook is focused solely on the toy problem from Section 5.1 to help illuminate how REBAR works. The objective of the toy problem is to estimate the parameter for a single Bernoulli random variable. Recall that the problem being solved is $\text{max} \hspace{5px} \mathbb{E} [f(b, \theta) | p(b) ]$, $b$ ~ Bernoulli($\theta$).For the toy problem, we want to estimate $\theta$ that minimizes the mean square error $ \mathbb{E} [(b - t)^2 \hspace{5px}|\hspace{5px} p(b) ]$, where $t = 0.45$.
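As a quick sanity check on what the optimizers should find: for $b \sim \text{Bernoulli}(\theta)$ and $t = 0.45$, we have $\mathbb{E}[(b - t)^2] = \theta (1 - t)^2 + (1 - \theta) t^2 = 0.2025 + 0.1\,\theta$, so the expected loss is minimized by driving $\theta$ toward 0, with an optimal value of $0.2025$ (which is why the final plot is titled "Optimal loss is 0.2025").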
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torch.autograd import Variable
from torch.autograd import grad
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Let's first create our simple model. It's just a single number $\theta$, $0 < \theta < 1$, whose value we want to estimate.
###Code
class SimpleBernoulli(nn.Module):
def __init__(self, init_value=0.5):
super(SimpleBernoulli, self).__init__()
self.theta = nn.Parameter(torch.FloatTensor([init_value]))
def forward(self):
return F.sigmoid(self.theta)
###Output
_____no_output_____
###Markdown
Now, the REBAR gradient estimator is a single-sample Monte-Carlo estimate that attempts to remove the bias introduced by a continuous relaxation of the discrete latent variable. It does this by combining gradients from the continuous relaxation (a.k.a. the Concrete relaxation, https://arxiv.org/abs/1611.00712) with REINFORCE gradients. I won't go into the details of REBAR's derivation (see Section 3 of the paper), but I will break down the components of the REBAR estimator and explain the control variates as clearly as possible. Please let me know if you notice that I misinterpreted anything or made any errors (this is a fairly challenging paper for non-statisticians!)First, let's carefully implement some utility functions.
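In the notation used in the code below, with a single control-variate scale $\eta$ and temperature $\lambda$, the single-sample REBAR estimate of $\partial \, \mathbb{E}[f(b)] / \partial \theta$ takes the form $\left[f(H(z)) - \eta f(\sigma_\lambda(\tilde{z}))\right] \frac{\partial}{\partial \theta} \log p(b \,|\, \theta) + \eta \frac{\partial}{\partial \theta} f(\sigma_\lambda(z)) - \eta \frac{\partial}{\partial \theta} f(\sigma_\lambda(\tilde{z}))$, where $z \sim p(z)$, $\tilde{z} \sim p(z \,|\, b)$, and $\sigma_\lambda$ is a sigmoid with temperature $\lambda$. Keep this form in mind: the gradient line in the training loop is a direct transcription of it.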
###Code
def binary_log_likelihood(y, log_y_hat):
# standard LL for vectors of binary labels y and log predictions log_y_hat
return (y * -F.softplus(-log_y_hat)) + (1 - y) * (-log_y_hat - F.softplus(-log_y_hat))
def H(x):
# Heaviside function, 0 if x < 0 else 1
return torch.div(F.threshold(x, 0, 0), x)
def reparam_pz(u, theta):
# From Appendix C of the paper, this is the reparameterization of p(z).
# Returns z, a Gumbel RV.
return (torch.log(theta) - torch.log(1 - theta)) + (torch.log(u) - torch.log(1 - u))
def reparam_pz_b(v, b, theta):
# From Appendix C of the paper, this is the reparameterization of p(z|b) for the
# case where b ~ Bernoulli($\theta$). Returns z_squiggle, a Gumbel RV
return(b * F.softplus(torch.log(v) - torch.log((1 - v) * (1 - theta)))) \
+ ((1 - b) * (-F.softplus(torch.log(v) - torch.log(v * (1 - theta)))))
###Output
_____no_output_____
###Markdown
One of the main concepts to understand is the fact that the Concrete relaxation is applied to the discrete RV b ~ Bernoulli($\theta$), s.t. b = H(z) where H is the heaviside function and z ~ Gumbel.Now, we can initialize a few things.
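First, though, a quick and purely illustrative check that this reparameterization really does behave like Bernoulli($\theta$): draw uniform samples, push them through `reparam_pz` and `H`, and compare the empirical mean to $\theta$ (the value 0.3 and the sample size of 10,000 are arbitrary choices made just for this check).
###Code
# Illustrative check: H(z), with z = reparam_pz(u, theta), should be ~ Bernoulli(theta)
check_theta = Variable(torch.FloatTensor([0.3]).repeat(10000))
check_u = Variable(torch.FloatTensor(10000).uniform_(0, 1)) + 1e-9
check_b = H(reparam_pz(check_u, check_theta))
print(check_b.mean()) # should be close to 0.3
###Output
_____no_output_____
###Markdown
With that established, we set up the hyperparameters, the three models being compared, and their optimizers.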
###Code
random_seed = 1337
torch.manual_seed(random_seed)
# hyperparams
rebar_eta_z = 0.1
rebar_eta_zb = 0.1
rebar_lamda=0.5
concrete_lamda = 0.5
batch_size = 128
train_steps = 8000
# Initialize three models to compare the REINFORCE, Concrete(0.5), and REBAR estimators
reinforce = SimpleBernoulli()
concrete = SimpleBernoulli()
rebar = SimpleBernoulli()
reinforce_opt = optim.Adam(reinforce.parameters(), lr=1e-3)
concrete_opt = optim.Adam(concrete.parameters(), lr=1e-3)
rebar_opt = optim.Adam(rebar.parameters(), lr=1e-3)
mse = nn.MSELoss()
# labels
targets = Variable(torch.FloatTensor([0.45]).repeat(batch_size), requires_grad=False)
###Output
_____no_output_____
###Markdown
Now for the main training loop, where most of the REBAR magic happens:
###Code
reinforce_loss = []
concrete_loss = []
rebar_loss = []
for i in range(train_steps):
# For each iteration of the loop, we will compute a
# single-sample MC estimate of the gradient
# Get the latest estimate of $\theta$ copy it to form a minibatch
reinforce_theta = reinforce.forward().repeat(batch_size)
concrete_theta = concrete.forward().repeat(batch_size)
rebar_theta = rebar.forward().repeat(batch_size)
# sample batch_size pairs of Unif(0,1). You're supposed to couple u,v
# to do the reparameterizations, but we omit that for this toy problem
uv = Variable(torch.FloatTensor(2, batch_size).uniform_(0, 1), requires_grad=False)
u = uv[0] + 1e-9 # for numerical stability
v = uv[1] + 1e-9 # for numerical stability
########## First, we'll compute the REINFORCE estimator ##########
# Lets record where the loss is at currently
discrete_reinforce_preds = torch.bernoulli(reinforce_theta.detach())
reinforce_loss.append(mse(discrete_reinforce_preds, targets).data.numpy())
# Now, the REINFORCE estimator (Eq. 2 of the paper, beg. of Section 3)
reinforce_z = reparam_pz(u, reinforce_theta)
reinforce_Hz = H(reinforce_z) # this is the non-differentiable reparameterization
# evaluate f
reinforce_f_Hz = (reinforce_Hz - targets) ** 2
# This is d_log_P(b) / d_$\theta$
grad_logP = grad(binary_log_likelihood(reinforce_Hz, \
torch.log(reinforce_theta)).split(1), reinforce_theta)[0]
# Apply the Monte-carlo REINFORCE gradient estimator
reinforce_grad_est = (reinforce_f_Hz * grad_logP).mean()
reinforce_opt.zero_grad()
reinforce.theta.grad = reinforce_grad_est
reinforce_opt.step()
########## Next up, the Concrete(0.5) estimator ##########
discrete_concrete_preds = torch.bernoulli(concrete_theta.detach())
concrete_loss.append(mse(discrete_concrete_preds, targets).data.numpy())
# Now, the Concrete(0.5) estimator. We compute the continuous relaxation of
# the reparameterization and use that.. (end of Section 2 of the paper)
concrete_z = reparam_pz(u, concrete_theta)
soft_concrete_z = F.sigmoid(concrete_z / concrete_lamda) + 1e-9
# evaluate f
f_soft_concrete_z = (soft_concrete_z - targets) ** 2
grad_f = grad(f_soft_concrete_z.split(1), concrete_theta)[0]
# Apply the Monte-carlo Concrete gradient estimator
concrete_grad_est = grad_f.mean()
concrete_opt.zero_grad()
concrete.theta.grad = concrete_grad_est
concrete_opt.step()
########## Finally, we tie it all together with REBAR ##########
discrete_rebar_preds = torch.bernoulli(rebar_theta.detach())
rebar_loss.append(mse(discrete_rebar_preds, targets).data.numpy())
# We compute the continuous relaxation of the reparameterization
# as well as the REINFORCE estimator and combine them.
rebar_z = reparam_pz(u, rebar_theta)
# "hard" bc this is non-differentiable
hard_concrete_rebar_z = H(rebar_z)
# We also need to compute the reparam for p(z|b) - see the paper
# for explanation of this conditional marginalization as control variate
rebar_zb = reparam_pz_b(v, hard_concrete_rebar_z, rebar_theta)
# "soft" relaxations
soft_concrete_rebar_z = F.sigmoid(rebar_z / rebar_lamda) + 1e-9
soft_concrete_rebar_zb = F.sigmoid(rebar_zb / rebar_lamda) + 1e-9
# evaluate f
f_hard_concrete_rebar_z = (hard_concrete_rebar_z - targets) ** 2
f_soft_concrete_rebar_z = (soft_concrete_rebar_z - targets) ** 2
f_soft_concrete_rebar_zb = (soft_concrete_rebar_zb - targets) ** 2
# compute the necessary derivatives
grad_logP = grad(binary_log_likelihood(hard_concrete_rebar_z, \
torch.log(rebar_theta)).split(1), rebar_theta, retain_graph=True)[0]
grad_sc_z = grad(f_soft_concrete_rebar_z.split(1), rebar_theta, retain_graph=True)[0]
grad_sc_zb = grad(f_soft_concrete_rebar_zb.split(1), rebar_theta)[0]
# Notice how we combine the REINFORCE and concrete estimators
rebar_grad_est = (((f_hard_concrete_rebar_z - rebar_eta_zb * f_soft_concrete_rebar_zb) \
* grad_logP) + rebar_eta_zb * grad_sc_z - rebar_eta_zb * grad_sc_zb).mean()
# Apply the Monte-carlo REBAR gradient estimator
rebar_opt.zero_grad()
rebar.theta.grad = rebar_grad_est
rebar_opt.step()
if (i+1) % 1000 == 0:
print("step: {}".format(i+1))
print("reinforce_loss {}".format(reinforce_loss[-1]))
print("concrete(0.5)_loss {}".format(concrete_loss[-1]))
print("rebar_loss {}\n".format(rebar_loss[-1]))
###Output
step: 1000
reinforce_loss [ 0.24390602]
concrete(0.5)_loss [ 0.24624977]
rebar_loss [ 0.24156231]
step: 2000
reinforce_loss [ 0.22281231]
concrete(0.5)_loss [ 0.23921856]
rebar_loss [ 0.22515607]
step: 3000
reinforce_loss [ 0.21109354]
concrete(0.5)_loss [ 0.23453106]
rebar_loss [ 0.21187481]
step: 4000
reinforce_loss [ 0.20249985]
concrete(0.5)_loss [ 0.23531231]
rebar_loss [ 0.20718734]
step: 5000
reinforce_loss [ 0.20562485]
concrete(0.5)_loss [ 0.23765603]
rebar_loss [ 0.20484361]
step: 6000
reinforce_loss [ 0.20328109]
concrete(0.5)_loss [ 0.23921858]
rebar_loss [ 0.20562483]
step: 7000
reinforce_loss [ 0.20328109]
concrete(0.5)_loss [ 0.24078104]
rebar_loss [ 0.20406234]
step: 8000
reinforce_loss [ 0.2032811]
concrete(0.5)_loss [ 0.23374978]
rebar_loss [ 0.20406234]
###Markdown
We can plot the loss per train step to see if we can replicate the results from the paper
###Code
# @hidden_cell
fig = plt.figure(figsize=(12, 9))
plt.plot(reinforce_loss, 'm', label="REINFORCE", alpha=0.7)
plt.plot(concrete_loss, 'r', label="Concrete(0.5)", alpha=0.7)
plt.plot(rebar_loss, 'b', label="REBAR", alpha=0.7)
plt.title("Optimal loss is 0.2025")
plt.xlabel("train_steps")
plt.ylabel("loss")
plt.ylim(0.2, 0.32)
plt.grid(True)
plt.legend()
plt.show()
###Output
_____no_output_____ |
scripts/d21-en/mxnet/chapter_natural-language-processing-pretraining/bert-dataset.ipynb | ###Markdown
The Dataset for Pretraining BERT:label:`sec_bert-dataset`To pretrain the BERT model as implemented in :numref:`sec_bert`,we need to generate the dataset in the ideal format to facilitatethe two pretraining tasks:masked language modeling and next sentence prediction.On one hand,the original BERT model is pretrained on the concatenation oftwo huge corpora BookCorpus and English Wikipedia (see :numref:`subsec_bert_pretraining_tasks`),making it hard to run for most readers of this book.On the other hand,the off-the-shelf pretrained BERT modelmay not fit for applications from specific domains like medicine.Thus, it is getting popular to pretrain BERT on a customized dataset.To facilitate the demonstration of BERT pretraining,we use a smaller corpus WikiText-2 :cite:`Merity.Xiong.Bradbury.ea.2016`.Comparing with the PTB dataset used for pretraining word2vec in :numref:`sec_word2vec_data`,WikiText-2 i) retains the original punctuation, making it suitable for next sentence prediction; ii) retains the original case and numbers; iii) is over twice larger.
###Code
import os
import random
from mxnet import gluon, np, npx
from d2l import mxnet as d2l
npx.set_np()
###Output
_____no_output_____
###Markdown
In the WikiText-2 dataset,each line represents a paragraph wherespace is inserted between any punctuation and its preceding token.Paragraphs with at least two sentences are retained.To split sentences, we only use the period as the delimiter for simplicity.We leave discussions of more complex sentence splitting techniques in the exercisesat the end of this section.
###Code
#@save
d2l.DATA_HUB['wikitext-2'] = (
'https://s3.amazonaws.com/research.metamind.io/wikitext/'
'wikitext-2-v1.zip', '3c914d17d80b1459be871a5039ac23e752a53cbe')
#@save
def _read_wiki(data_dir):
file_name = os.path.join(data_dir, 'wiki.train.tokens')
with open(file_name, 'r') as f:
lines = f.readlines()
# Uppercase letters are converted to lowercase ones
paragraphs = [
line.strip().lower().split(' . ') for line in lines
if len(line.split(' . ')) >= 2]
random.shuffle(paragraphs)
return paragraphs
###Output
_____no_output_____
###Markdown
Defining Helper Functions for Pretraining TasksIn the following,we begin by implementing helper functions for the two BERT pretraining tasks:next sentence prediction and masked language modeling.These helper functions will be invoked laterwhen transforming the raw text corpusinto the dataset of the ideal format to pretrain BERT. Generating the Next Sentence Prediction TaskAccording to descriptions of :numref:`subsec_nsp`,the `_get_next_sentence` function generates a training examplefor the binary classification task.
###Code
#@save
def _get_next_sentence(sentence, next_sentence, paragraphs):
if random.random() < 0.5:
is_next = True
else:
# `paragraphs` is a list of lists of lists
next_sentence = random.choice(random.choice(paragraphs))
is_next = False
return sentence, next_sentence, is_next
###Output
_____no_output_____
###Markdown
The following function generates training examples for next sentence predictionfrom the input `paragraph` by invoking the `_get_next_sentence` function.Here `paragraph` is a list of sentences, where each sentence is a list of tokens.The argument `max_len` specifies the maximum length of a BERT input sequence during pretraining.
###Code
#@save
def _get_nsp_data_from_paragraph(paragraph, paragraphs, vocab, max_len):
nsp_data_from_paragraph = []
for i in range(len(paragraph) - 1):
tokens_a, tokens_b, is_next = _get_next_sentence(
paragraph[i], paragraph[i + 1], paragraphs)
# Consider 1 '<cls>' token and 2 '<sep>' tokens
if len(tokens_a) + len(tokens_b) + 3 > max_len:
continue
tokens, segments = d2l.get_tokens_and_segments(tokens_a, tokens_b)
nsp_data_from_paragraph.append((tokens, segments, is_next))
return nsp_data_from_paragraph
###Output
_____no_output_____
###Markdown
Generating the Masked Language Modeling Task:label:`subsec_prepare_mlm_data`In order to generate training examplesfor the masked language modeling taskfrom a BERT input sequence,we define the following `_replace_mlm_tokens` function.In its inputs, `tokens` is a list of tokens representing a BERT input sequence,`candidate_pred_positions` is a list of token indices of the BERT input sequenceexcluding those of special tokens (special tokens are not predicted in the masked language modeling task),and `num_mlm_preds` indicates the number of predictions (recall 15% random tokens to predict).Following the definition of the masked language modeling task in :numref:`subsec_mlm`,at each prediction position, the input may be replaced bya special “<mask>” token or a random token, or remain unchanged.In the end, the function returns the input tokens after possible replacement,the token indices where predictions take place and labels for these predictions.
###Code
#@save
def _replace_mlm_tokens(tokens, candidate_pred_positions, num_mlm_preds,
vocab):
# Make a new copy of tokens for the input of a masked language model,
# where the input may contain replaced '<mask>' or random tokens
mlm_input_tokens = [token for token in tokens]
pred_positions_and_labels = []
# Shuffle for getting 15% random tokens for prediction in the masked
# language modeling task
random.shuffle(candidate_pred_positions)
for mlm_pred_position in candidate_pred_positions:
if len(pred_positions_and_labels) >= num_mlm_preds:
break
masked_token = None
# 80% of the time: replace the word with the '<mask>' token
if random.random() < 0.8:
masked_token = '<mask>'
else:
# 10% of the time: keep the word unchanged
if random.random() < 0.5:
masked_token = tokens[mlm_pred_position]
# 10% of the time: replace the word with a random word
else:
masked_token = random.randint(0, len(vocab) - 1)
mlm_input_tokens[mlm_pred_position] = masked_token
pred_positions_and_labels.append(
(mlm_pred_position, tokens[mlm_pred_position]))
return mlm_input_tokens, pred_positions_and_labels
###Output
_____no_output_____
###Markdown
By invoking the aforementioned `_replace_mlm_tokens` function,the following function takes a BERT input sequence (`tokens`)as an input and returns indices of the input tokens(after possible token replacement as described in :numref:`subsec_mlm`),the token indices where predictions take place,and label indices for these predictions.
###Code
#@save
def _get_mlm_data_from_tokens(tokens, vocab):
candidate_pred_positions = []
# `tokens` is a list of strings
for i, token in enumerate(tokens):
# Special tokens are not predicted in the masked language modeling
# task
if token in ['<cls>', '<sep>']:
continue
candidate_pred_positions.append(i)
# 15% of random tokens are predicted in the masked language modeling task
num_mlm_preds = max(1, round(len(tokens) * 0.15))
mlm_input_tokens, pred_positions_and_labels = _replace_mlm_tokens(
tokens, candidate_pred_positions, num_mlm_preds, vocab)
pred_positions_and_labels = sorted(pred_positions_and_labels,
key=lambda x: x[0])
pred_positions = [v[0] for v in pred_positions_and_labels]
mlm_pred_labels = [v[1] for v in pred_positions_and_labels]
return vocab[mlm_input_tokens], pred_positions, vocab[mlm_pred_labels]
###Output
_____no_output_____
###Markdown
Transforming Text into the Pretraining DatasetNow we are almost ready to customize a `Dataset` class for pretraining BERT.Before that, we still need to define a helper function `_pad_bert_inputs`to append the special “<pad>” tokens to the inputs.Its argument `examples` contains the outputs from the helper functions `_get_nsp_data_from_paragraph` and `_get_mlm_data_from_tokens` for the two pretraining tasks.
###Code
#@save
def _pad_bert_inputs(examples, max_len, vocab):
max_num_mlm_preds = round(max_len * 0.15)
all_token_ids, all_segments, valid_lens, = [], [], []
all_pred_positions, all_mlm_weights, all_mlm_labels = [], [], []
nsp_labels = []
for (token_ids, pred_positions, mlm_pred_label_ids, segments,
is_next) in examples:
all_token_ids.append(
np.array(
token_ids + [vocab['<pad>']] * (max_len - len(token_ids)),
dtype='int32'))
all_segments.append(
np.array(segments + [0] * (max_len - len(segments)),
dtype='int32'))
# `valid_lens` excludes count of '<pad>' tokens
valid_lens.append(np.array(len(token_ids), dtype='float32'))
all_pred_positions.append(
np.array(
pred_positions + [0] *
(max_num_mlm_preds - len(pred_positions)), dtype='int32'))
# Predictions of padded tokens will be filtered out in the loss via
# multiplication of 0 weights
all_mlm_weights.append(
np.array([1.0] * len(mlm_pred_label_ids) + [0.0] *
(max_num_mlm_preds - len(pred_positions)),
dtype='float32'))
all_mlm_labels.append(
np.array(
mlm_pred_label_ids + [0] *
(max_num_mlm_preds - len(mlm_pred_label_ids)), dtype='int32'))
nsp_labels.append(np.array(is_next))
return (all_token_ids, all_segments, valid_lens, all_pred_positions,
all_mlm_weights, all_mlm_labels, nsp_labels)
###Output
_____no_output_____
###Markdown
Putting the helper functions forgenerating training examples of the two pretraining tasks,and the helper function for padding inputs together,we customize the following `_WikiTextDataset` class as the WikiText-2 dataset for pretraining BERT.By implementing the `__getitem__ `function,we can arbitrarily access the pretraining (masked language modeling and next sentence prediction) examples generated from a pair of sentences from the WikiText-2 corpus.The original BERT model uses WordPiece embeddings whose vocabulary size is 30,000 :cite:`Wu.Schuster.Chen.ea.2016`.The tokenization method of WordPiece is a slight modification ofthe original byte pair encoding algorithm in :numref:`subsec_Byte_Pair_Encoding`.For simplicity, we use the `d2l.tokenize` function for tokenization.Infrequent tokens that appear less than five times are filtered out.
###Code
#@save
class _WikiTextDataset(gluon.data.Dataset):
def __init__(self, paragraphs, max_len):
# Input `paragraphs[i]` is a list of sentence strings representing a
# paragraph; while output `paragraphs[i]` is a list of sentences
# representing a paragraph, where each sentence is a list of tokens
paragraphs = [
d2l.tokenize(paragraph, token='word') for paragraph in paragraphs]
sentences = [
sentence for paragraph in paragraphs for sentence in paragraph]
self.vocab = d2l.Vocab(
sentences, min_freq=5,
reserved_tokens=['<pad>', '<mask>', '<cls>', '<sep>'])
# Get data for the next sentence prediction task
examples = []
for paragraph in paragraphs:
examples.extend(
_get_nsp_data_from_paragraph(paragraph, paragraphs,
self.vocab, max_len))
# Get data for the masked language model task
examples = [(_get_mlm_data_from_tokens(tokens, self.vocab) +
(segments, is_next))
for tokens, segments, is_next in examples]
# Pad inputs
(self.all_token_ids, self.all_segments, self.valid_lens,
self.all_pred_positions, self.all_mlm_weights, self.all_mlm_labels,
self.nsp_labels) = _pad_bert_inputs(examples, max_len, self.vocab)
def __getitem__(self, idx):
return (self.all_token_ids[idx], self.all_segments[idx],
self.valid_lens[idx], self.all_pred_positions[idx],
self.all_mlm_weights[idx], self.all_mlm_labels[idx],
self.nsp_labels[idx])
def __len__(self):
return len(self.all_token_ids)
###Output
_____no_output_____
###Markdown
By using the `_read_wiki` function and the `_WikiTextDataset` class,we define the following `load_data_wiki` to download the WikiText-2 dataset and generate pretraining examples from it.
###Code
#@save
def load_data_wiki(batch_size, max_len):
num_workers = d2l.get_dataloader_workers()
data_dir = d2l.download_extract('wikitext-2', 'wikitext-2')
paragraphs = _read_wiki(data_dir)
train_set = _WikiTextDataset(paragraphs, max_len)
train_iter = gluon.data.DataLoader(train_set, batch_size, shuffle=True,
num_workers=num_workers)
return train_iter, train_set.vocab
###Output
_____no_output_____
###Markdown
Setting the batch size to 512 and the maximum length of a BERT input sequence to be 64,we print out the shapes of a minibatch of BERT pretraining examples.Note that in each BERT input sequence,$10$ ($64 \times 0.15$) positions are predicted for the masked language modeling task.
###Code
batch_size, max_len = 512, 64
train_iter, vocab = load_data_wiki(batch_size, max_len)
for (tokens_X, segments_X, valid_lens_x, pred_positions_X, mlm_weights_X,
mlm_Y, nsp_y) in train_iter:
print(tokens_X.shape, segments_X.shape, valid_lens_x.shape,
pred_positions_X.shape, mlm_weights_X.shape, mlm_Y.shape,
nsp_y.shape)
break
###Output
(512, 64) (512, 64) (512,) (512, 10) (512, 10) (512, 10) (512,)
###Markdown
In the end, let us take a look at the vocabulary size.Even after filtering out infrequent tokens,it is still more than twice as large as that of the PTB dataset.
###Code
len(vocab)
###Output
_____no_output_____ |
book-notes/3.5-classifying-newswires.ipynb | ###Markdown
Classifying newswires: a multi-class classification exampleThis notebook contains the code samples found in Chapter 3, Section 5 of [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python?a_aid=keras&a_bid=76564dff). Note that the original text features far more content, in particular further explanations and figures: in this notebook, you will only find source code and related comments.----In the previous section we saw how to classify vector inputs into two mutually exclusive classes using a densely-connected neural network. But what happens when you have more than two classes? In this section, we will build a network to classify Reuters newswires into 46 different mutually-exclusive topics. Since we have many classes, this problem is an instance of "multi-class classification", and since each data point should be classified into only one category, the problem is more specifically an instance of "single-label, multi-class classification". If each data point could have belonged to multiple categories (in our case, topics) then we would be facing a "multi-label, multi-class classification" problem. The Reuters datasetWe will be working with the _Reuters dataset_, a set of short newswires and their topics, published by Reuters in 1986. It's a very simple, widely used toy dataset for text classification. There are 46 different topics; some topics are more represented than others, but each topic has at least 10 examples in the training set.Like IMDB and MNIST, the Reuters dataset comes packaged as part of Keras. Let's take a look right away:
###Code
from keras.datasets import reuters
(train_data, train_labels), (test_data, test_labels) = reuters.load_data(num_words=10000)
###Output
_____no_output_____
###Markdown
Like with the IMDB dataset, the argument `num_words=10000` restricts the data to the 10,000 most frequently occurring words found in the data.We have 8,982 training examples and 2,246 test examples:
###Code
len(train_data)
len(test_data)
###Output
_____no_output_____
###Markdown
As with the IMDB reviews, each example is a list of integers (word indices):
###Code
train_data[10]
###Output
_____no_output_____
###Markdown
Here's how you can decode it back to words, in case you are curious:
###Code
word_index = reuters.get_word_index()
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
# Note that our indices were offset by 3
# because 0, 1 and 2 are reserved indices for "padding", "start of sequence", and "unknown".
decoded_newswire = ' '.join([reverse_word_index.get(i - 3, '?') for i in train_data[0]])
decoded_newswire
###Output
_____no_output_____
###Markdown
The label associated with an example is an integer between 0 and 45: a topic index.
###Code
train_labels[10]
###Output
_____no_output_____
###Markdown
Preparing the dataWe can vectorize the data with the exact same code as in our previous example:
###Code
import numpy as np
def vectorize_sequences(sequences, dimension=10000):
results = np.zeros((len(sequences), dimension))
for i, sequence in enumerate(sequences):
results[i, sequence] = 1.
return results
# Our vectorized training data
x_train = vectorize_sequences(train_data)
# Our vectorized test data
x_test = vectorize_sequences(test_data)
###Output
_____no_output_____
###Markdown
To vectorize the labels, there are two possibilities: we could just cast the label list as an integer tensor, or we could use a "one-hot" encoding. One-hot encoding is a widely used format for categorical data, also called "categorical encoding". For a more detailed explanation of one-hot encoding, you can refer to Chapter 6, Section 1. In our case, one-hot encoding of our labels consists in embedding each label as an all-zero vector with a 1 in the place of the label index, e.g.:
###Code
def to_one_hot(labels, dimension=46):
results = np.zeros((len(labels), dimension))
for i, label in enumerate(labels):
results[i, label] = 1.
return results
# Our vectorized training labels
one_hot_train_labels = to_one_hot(train_labels)
# Our vectorized test labels
one_hot_test_labels = to_one_hot(test_labels)
###Output
_____no_output_____
###Markdown
Note that there is a built-in way to do this in Keras, which you have already seen in action in our MNIST example:
###Code
from keras.utils.np_utils import to_categorical
one_hot_train_labels = to_categorical(train_labels)
one_hot_test_labels = to_categorical(test_labels)
###Output
_____no_output_____
###Markdown
Building our networkThis topic classification problem looks very similar to our previous movie review classification problem: in both cases, we are trying to classify short snippets of text. There is however a new constraint here: the number of output classes has gone from 2 to 46, i.e. the dimensionality of the output space is much larger. In a stack of `Dense` layers like what we were using, each layer can only access information present in the output of the previous layer. If one layer drops some information relevant to the classification problem, this information can never be recovered by later layers: each layer can potentially become an "information bottleneck". In our previous example, we were using 16-dimensional intermediate layers, but a 16-dimensional space may be too limited to learn to separate 46 different classes: such small layers may act as information bottlenecks, permanently dropping relevant information.For this reason we will use larger layers. Let's go with 64 units:
###Code
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
###Output
_____no_output_____
###Markdown
There are two other things you should note about this architecture:* We are ending the network with a `Dense` layer of size 46. This means that for each input sample, our network will output a 46-dimensional vector. Each entry in this vector (each dimension) will encode a different output class.* The last layer uses a `softmax` activation. You have already seen this pattern in the MNIST example. It means that the network will output a _probability distribution_ over the 46 different output classes, i.e. for every input sample, the network will produce a 46-dimensional output vector where `output[i]` is the probability that the sample belongs to class `i`. The 46 scores will sum to 1.The best loss function to use in this case is `categorical_crossentropy`. It measures the distance between two probability distributions: in our case, between the probability distribution output by our network, and the true distribution of the labels. By minimizing the distance between these two distributions, we train our network to output something as close as possible to the true labels.
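Concretely, for a one-hot ground-truth distribution $p$ and a predicted distribution $q$ over the 46 classes, the categorical crossentropy of a single sample is $-\sum_{i=1}^{46} p_i \log q_i$, which reduces to $-\log q_{c}$, where $c$ is the index of the true class.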
###Code
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Validating our approachLet's set apart 1,000 samples in our training data to use as a validation set:
###Code
x_val = x_train[:1000]
partial_x_train = x_train[1000:]
y_val = one_hot_train_labels[:1000]
partial_y_train = one_hot_train_labels[1000:]
###Output
_____no_output_____
###Markdown
Now let's train our network for 20 epochs:
###Code
history = model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=512,
validation_data=(x_val, y_val))
###Output
Train on 7982 samples, validate on 1000 samples
Epoch 1/20
7982/7982 [==============================] - 2s 253us/step - loss: 2.4910 - accuracy: 0.5381 - val_loss: 1.6529 - val_accuracy: 0.6470
Epoch 2/20
7982/7982 [==============================] - 1s 154us/step - loss: 1.3807 - accuracy: 0.7066 - val_loss: 1.3030 - val_accuracy: 0.7160
Epoch 3/20
7982/7982 [==============================] - 1s 159us/step - loss: 1.0456 - accuracy: 0.7716 - val_loss: 1.1619 - val_accuracy: 0.7480
Epoch 4/20
7982/7982 [==============================] - 1s 152us/step - loss: 0.8284 - accuracy: 0.8249 - val_loss: 1.0314 - val_accuracy: 0.7770
Epoch 5/20
7982/7982 [==============================] - 1s 144us/step - loss: 0.6597 - accuracy: 0.8656 - val_loss: 0.9772 - val_accuracy: 0.7940
Epoch 6/20
7982/7982 [==============================] - 1s 147us/step - loss: 0.5267 - accuracy: 0.8930 - val_loss: 0.9226 - val_accuracy: 0.8010
Epoch 7/20
7982/7982 [==============================] - 1s 147us/step - loss: 0.4200 - accuracy: 0.9133 - val_loss: 0.9041 - val_accuracy: 0.8010
Epoch 8/20
7982/7982 [==============================] - 1s 145us/step - loss: 0.3418 - accuracy: 0.9287 - val_loss: 0.8723 - val_accuracy: 0.8230
Epoch 9/20
7982/7982 [==============================] - 1s 152us/step - loss: 0.2841 - accuracy: 0.9372 - val_loss: 0.8817 - val_accuracy: 0.8210
Epoch 10/20
7982/7982 [==============================] - 1s 155us/step - loss: 0.2383 - accuracy: 0.9441 - val_loss: 0.9074 - val_accuracy: 0.8140
Epoch 11/20
7982/7982 [==============================] - 1s 146us/step - loss: 0.2079 - accuracy: 0.9481 - val_loss: 0.9076 - val_accuracy: 0.8230
Epoch 12/20
7982/7982 [==============================] - 1s 144us/step - loss: 0.1794 - accuracy: 0.9519 - val_loss: 0.9351 - val_accuracy: 0.8110
Epoch 13/20
7982/7982 [==============================] - 1s 144us/step - loss: 0.1657 - accuracy: 0.9516 - val_loss: 0.9114 - val_accuracy: 0.8240
Epoch 14/20
7982/7982 [==============================] - 1s 147us/step - loss: 0.1493 - accuracy: 0.9550 - val_loss: 0.9470 - val_accuracy: 0.8160
Epoch 15/20
7982/7982 [==============================] - 1s 155us/step - loss: 0.1392 - accuracy: 0.9567 - val_loss: 0.9764 - val_accuracy: 0.8100
Epoch 16/20
7982/7982 [==============================] - 1s 152us/step - loss: 0.1323 - accuracy: 0.9560 - val_loss: 1.0110 - val_accuracy: 0.8090
Epoch 17/20
7982/7982 [==============================] - 1s 144us/step - loss: 0.1230 - accuracy: 0.9568 - val_loss: 1.0354 - val_accuracy: 0.8120
Epoch 18/20
7982/7982 [==============================] - 1s 144us/step - loss: 0.1210 - accuracy: 0.9580 - val_loss: 1.0226 - val_accuracy: 0.8020
Epoch 19/20
7982/7982 [==============================] - 1s 146us/step - loss: 0.1131 - accuracy: 0.9579 - val_loss: 1.0285 - val_accuracy: 0.8090
Epoch 20/20
7982/7982 [==============================] - 1s 146us/step - loss: 0.1142 - accuracy: 0.9572 - val_loss: 1.0519 - val_accuracy: 0.7980
###Markdown
Let's display its loss and accuracy curves:
###Code
import matplotlib.pyplot as plt
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(1, len(loss) + 1)
plt.plot(epochs, loss, 'bo', label='Training loss')
plt.plot(epochs, val_loss, 'b', label='Validation loss')
plt.title('Training and validation loss')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
plt.clf() # clear figure
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.title('Training and validation accuracy')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
It seems that the network starts overfitting after 9 epochs. Let's train a new network from scratch for 9 epochs, then let's evaluate it on the test set:
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=9,
batch_size=512,
validation_data=(x_val, y_val))
results = model.evaluate(x_test, one_hot_test_labels)
results
###Output
_____no_output_____
###Markdown
Our approach reaches an accuracy of ~78%. With a balanced binary classification problem, the accuracy reached by a purely random classifier would be 50%, but in our case it is closer to 19%, so our results seem pretty good, at least when compared to a random baseline:
###Code
import copy
test_labels_copy = copy.copy(test_labels)
np.random.shuffle(test_labels_copy)
float(np.sum(np.array(test_labels) == np.array(test_labels_copy))) / len(test_labels)
###Output
_____no_output_____
###Markdown
Generating predictions on new dataWe can verify that the `predict` method of our model instance returns a probability distribution over all 46 topics. Let's generate topic predictions for all of the test data:
###Code
predictions = model.predict(x_test)
###Output
_____no_output_____
###Markdown
Each entry in `predictions` is a vector of length 46:
###Code
predictions[0].shape
###Output
_____no_output_____
###Markdown
The coefficients in this vector sum to 1:
###Code
np.sum(predictions[0])
###Output
_____no_output_____
###Markdown
The largest entry is the predicted class, i.e. the class with the highest probability:
###Code
np.argmax(predictions[0])
###Output
_____no_output_____
###Markdown
A different way to handle the labels and the lossWe mentioned earlier that another way to encode the labels would be to cast them as an integer tensor, like such:
###Code
y_train = np.array(train_labels)
y_test = np.array(test_labels)
###Output
_____no_output_____
###Markdown
The only thing it would change is the choice of the loss function. Our previous loss, `categorical_crossentropy`, expects the labels to follow a categorical encoding. With integer labels, we should use `sparse_categorical_crossentropy`:
###Code
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])
###Output
_____no_output_____
###Markdown
This new loss function is still mathematically the same as `categorical_crossentropy`; it just has a different interface. On the importance of having sufficiently large intermediate layersWe mentioned earlier that since our final outputs were 46-dimensional, we should avoid intermediate layers with much less than 46 hidden units. Now let's try to see what happens when we introduce an information bottleneck by having intermediate layers significantly less than 46-dimensional, e.g. 4-dimensional.
###Code
model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(10000,)))
model.add(layers.Dense(4, activation='relu'))
model.add(layers.Dense(46, activation='softmax'))
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
model.fit(partial_x_train,
partial_y_train,
epochs=20,
batch_size=128,
validation_data=(x_val, y_val))
###Output
Train on 7982 samples, validate on 1000 samples
Epoch 1/20
7982/7982 [==============================] - 2s 245us/step - loss: 3.1165 - accuracy: 0.1482 - val_loss: 2.5008 - val_accuracy: 0.4210
Epoch 2/20
7982/7982 [==============================] - 2s 219us/step - loss: 2.0189 - accuracy: 0.5397 - val_loss: 1.7644 - val_accuracy: 0.5500
Epoch 3/20
7982/7982 [==============================] - 2s 221us/step - loss: 1.5814 - accuracy: 0.5485 - val_loss: 1.6006 - val_accuracy: 0.5600
Epoch 4/20
7982/7982 [==============================] - 2s 226us/step - loss: 1.3943 - accuracy: 0.6032 - val_loss: 1.5222 - val_accuracy: 0.6060
Epoch 5/20
7982/7982 [==============================] - 2s 227us/step - loss: 1.2586 - accuracy: 0.6576 - val_loss: 1.4737 - val_accuracy: 0.6270
Epoch 6/20
7982/7982 [==============================] - 2s 222us/step - loss: 1.1553 - accuracy: 0.6812 - val_loss: 1.4628 - val_accuracy: 0.6430
Epoch 7/20
7982/7982 [==============================] - 2s 221us/step - loss: 1.0756 - accuracy: 0.6967 - val_loss: 1.4449 - val_accuracy: 0.6700
Epoch 8/20
7982/7982 [==============================] - 2s 221us/step - loss: 1.0098 - accuracy: 0.7157 - val_loss: 1.4763 - val_accuracy: 0.6560
Epoch 9/20
7982/7982 [==============================] - 2s 220us/step - loss: 0.9577 - accuracy: 0.7249 - val_loss: 1.4851 - val_accuracy: 0.6710
Epoch 10/20
7982/7982 [==============================] - 2s 221us/step - loss: 0.9086 - accuracy: 0.7365 - val_loss: 1.5083 - val_accuracy: 0.6660
Epoch 11/20
7982/7982 [==============================] - 2s 220us/step - loss: 0.8688 - accuracy: 0.7551 - val_loss: 1.5409 - val_accuracy: 0.6810
Epoch 12/20
7982/7982 [==============================] - 2s 222us/step - loss: 0.8298 - accuracy: 0.7707 - val_loss: 1.6126 - val_accuracy: 0.6870
Epoch 13/20
7982/7982 [==============================] - 2s 221us/step - loss: 0.7939 - accuracy: 0.7820 - val_loss: 1.6335 - val_accuracy: 0.6880
Epoch 14/20
7982/7982 [==============================] - 2s 222us/step - loss: 0.7591 - accuracy: 0.7909 - val_loss: 1.6729 - val_accuracy: 0.6900
Epoch 15/20
7982/7982 [==============================] - 2s 224us/step - loss: 0.7271 - accuracy: 0.7993 - val_loss: 1.7405 - val_accuracy: 0.6880
Epoch 16/20
7982/7982 [==============================] - 2s 221us/step - loss: 0.6996 - accuracy: 0.8081 - val_loss: 1.7965 - val_accuracy: 0.6840
Epoch 17/20
7982/7982 [==============================] - 2s 222us/step - loss: 0.6765 - accuracy: 0.8093 - val_loss: 1.8731 - val_accuracy: 0.6810
Epoch 18/20
7982/7982 [==============================] - 2s 226us/step - loss: 0.6573 - accuracy: 0.8138 - val_loss: 1.8925 - val_accuracy: 0.6920
Epoch 19/20
7982/7982 [==============================] - 2s 223us/step - loss: 0.6355 - accuracy: 0.8166 - val_loss: 1.9720 - val_accuracy: 0.6790
Epoch 20/20
7982/7982 [==============================] - 2s 223us/step - loss: 0.6192 - accuracy: 0.8193 - val_loss: 2.0456 - val_accuracy: 0.6820
|
tl_eda.ipynb | ###Markdown
Get Data
###Code
import pandas as pd

df_train = pd.read_csv('balanced_train_segments.csv', sep=', ', skiprows=2, engine='python')
df_eval = pd.read_csv('eval_segments.csv', sep=', ', skiprows=2, engine='python')
#df_unbalanced = pd.read_csv('unbalanced_train_segments.csv', skiprows=2, engine='python')
df = pd.concat([df_train, df_eval])
df.describe()
footsteps = '/m/07pbtc8'
gasp = '/m/07s0dtb'
whistle = '/m/01w250'
sigh = '/m/07plz5l'
bark = '/m/05tny_'
door = '/m/02dgv'
run = '/m/06h7j'
keys = '/m/03v3yw'
scissors = '/m/01lsmm'
keyboard = '/m/01m2v'
sound_classes = [footsteps, gasp, whistle, sigh, bark, door, run, keys, scissors, keyboard]
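# Note: the identifiers above appear to be AudioSet/Freebase "MID" labels; each row's
# positive_labels column holds a comma-separated list of them, which is why
# str.contains() is used below to pull out the rows for each class.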
df_footsteps = df[df.positive_labels.str.contains(footsteps)]
df_gasp = df[df.positive_labels.str.contains(gasp)]
df_whistle = df[df.positive_labels.str.contains(whistle)]
df_sigh = df[df.positive_labels.str.contains(sigh)]
df_bark = df[df.positive_labels.str.contains(bark)]
df_door = df[df.positive_labels.str.contains(door)]
df_run = df[df.positive_labels.str.contains(run)]
df_keys = df[df.positive_labels.str.contains(keys)]
df_scissors = df[df.positive_labels.str.contains(scissors)]
df_keyboard = df[df.positive_labels.str.contains(keyboard)]
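# The next block relabels each filtered subset with a single human-readable class name.
# These subsets are filtered views/copies of df, so pandas may raise a
# SettingWithCopyWarning here; filtering with .copy() would avoid it.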
df_footsteps.loc[:,'positive_labels'] = ' footsteps'
df_gasp.loc[:,'positive_labels'] = ' gasp'
df_whistle.loc[:,'positive_labels'] = ' whistle'
df_sigh.loc[:,'positive_labels'] = ' sigh'
df_bark.loc[:,'positive_labels'] = ' bark'
df_door.loc[:,'positive_labels'] = ' door'
df_run.loc[:,'positive_labels'] = ' run'
df_keys.loc[:,'positive_labels'] = ' keys'
df_scissors.loc[:,'positive_labels'] = ' scissors'
df_keyboard.loc[:,'positive_labels'] = ' keyboard'
df_bark.head()
df_footsteps.loc[:,['start_seconds', 'end_seconds']] = ' ' + df_footsteps[['start_seconds', 'end_seconds']].astype(str)
df_gasp.loc[:,['start_seconds', 'end_seconds']] = ' ' + df_gasp[['start_seconds', 'end_seconds']].astype(str)
df_whistle.loc[:,['start_seconds', 'end_seconds']] = ' ' + df_whistle[['start_seconds', 'end_seconds']].astype(str)
df_sigh.loc[:,['start_seconds', 'end_seconds']] = ' ' + df_sigh[['start_seconds', 'end_seconds']].astype(str)
df_bark.loc[:,['start_seconds', 'end_seconds']] = ' ' + df_bark[['start_seconds', 'end_seconds']].astype(str)
df_door.loc[:,['start_seconds', 'end_seconds']] = ' ' + df_door[['start_seconds', 'end_seconds']].astype(str)
df_run.loc[:,['start_seconds', 'end_seconds']] = ' ' + df_run[['start_seconds', 'end_seconds']].astype(str)
df_keys.loc[:,['start_seconds', 'end_seconds']] = ' ' + df_keys[['start_seconds', 'end_seconds']].astype(str)
df_scissors.loc[:,['start_seconds', 'end_seconds']] = ' ' + df_scissors[['start_seconds', 'end_seconds']].astype(str)
df_keyboard.loc[:,['start_seconds', 'end_seconds']] = ' ' + df_keyboard[['start_seconds', 'end_seconds']].astype(str)
df_bark.describe()
# df_footsteps.columns = ['YTID', ' start_seconds', ' end_seconds', ' positive_labels']
# df_gasp.columns = ['YTID', ' start_seconds', ' end_seconds', ' positive_labels']
# df_whistle.columns = ['YTID', ' start_seconds', ' end_seconds', ' positive_labels']
# df_sigh.columns = ['YTID', ' start_seconds', ' end_seconds', ' positive_labels']
# df_bark.columns = ['YTID', ' start_seconds', ' end_seconds', ' positive_labels']
# df_door.columns = ['YTID', ' start_seconds', ' end_seconds', ' positive_labels']
# df_run.columns = ['YTID', ' start_seconds', ' end_seconds', ' positive_labels']
# df_keys.columns = ['YTID', ' start_seconds', ' end_seconds', ' positive_labels']
# df_scissors.columns = ['YTID', ' start_seconds', ' end_seconds', ' positive_labels']
# df_keyboard.columns = ['YTID', ' start_seconds', ' end_seconds', ' positive_labels']
df_footsteps.to_csv('footsteps.csv', sep=',', index=False)
df_gasp.to_csv('gasp.csv', sep=',', index=False)
df_whistle.to_csv('whistle.csv', sep=',', index=False)
df_sigh.to_csv('sigh.csv', sep=',', index=False)
df_bark.to_csv('bark.csv', sep=',', index=False)
df_door.to_csv('door.csv', sep=',', index=False)
df_run.to_csv('run.csv', sep=',', index=False)
df_keys.to_csv('keys.csv', sep=',', index=False)
df_scissors.to_csv('scissors.csv', sep=',', index=False)
df_keyboard.to_csv('keyboard.csv', sep=',', index=False)
# Note: `barks` and `doors` are not defined above (the per-class frames were already
# written out as 'bark.csv' and 'door.csv'), so these two lines would raise a NameError:
# barks.to_csv('barks.csv', sep=',', index=False)
# doors.to_csv('doors.csv', sep=',', index=False)
###Output
_____no_output_____
###Markdown
Spectrograms
###Code
# Assumed imports for this section (adjust keras vs tensorflow.keras to your install)
import copy
import numpy as np
import matplotlib.pyplot as plt
import librosa
import librosa.display
from keras.applications.vgg16 import VGG16, preprocess_input, decode_predictions

y, sr = librosa.load('2mJbGx5D-zA_150.0.wav')
librosa.feature.melspectrogram(y=y, sr=sr)
D = np.abs(librosa.stft(y))**2
S = librosa.feature.melspectrogram(S=D)
S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128, fmax=8000)
len(S[0])
# plt.figure(figsize=(10, 4))
# librosa.display.specshow(librosa.power_to_db(S, ref=np.max), y_axis='mel', fmax=8000, x_axis='time')
# plt.colorbar(format='%+2.0f dB')
# plt.title('Mel spectrogram')
# plt.tight_layout()
# plt.show()
fig = plt.figure(figsize=(10, 4))
librosa.display.specshow(librosa.power_to_db(S, ref=np.max), y_axis='mel', fmax=8000, x_axis='time')
plt.savefig('test.png')
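# The pretrained VGG16 network expects 3-channel (RGB-like) input, so the
# single-channel mel spectrogram is duplicated across 3 channels below and then
# reshaped to add a leading batch dimension before being passed to the model.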
S_ = copy.deepcopy(S)
S_ = np.repeat(S_[:, :, np.newaxis], 3, axis=2)
print(S_.shape)
image = S_.reshape((1, S_.shape[0], S_.shape[1], S_.shape[2]))
image.shape
image = preprocess_input(image)
inputtensor = image
model = VGG16()
model = VGG16(weights="imagenet",
include_top=False,
input_tensor=inputtensor,
input_shape=(128, 431, 3))
model
print(model.summary())
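# Caveat: with include_top=False the model outputs convolutional feature maps rather
# than the 1000-way ImageNet softmax, so decode_predictions() below (which expects
# class probabilities) is not expected to work on this output.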
yhat = model.predict(image)
label = decode_predictions(yhat)
###Output
_____no_output_____ |
METRO_Fundamentals_of_Python.ipynb | ###Markdown
METRO Library Council - Fundamentals of Python 2020.02.25 Why Python? Python is an incredibly efficient programming language and allows us to do some impressive things with only a few lines of code! Thanks to Python’s syntax, we can write “clean” code that is easier to debug and allows for overall readability. Further, code written in Python is easily extendable and reusable, allowing you and others to build upon existing code. Python is used in a variety of contexts from game design to data analysis. It is also used a lot in academic research, especially in the sciences. Yet, it has utility regardless what discipline you come from or are currently working in. Python EnvironmentIf Python is installed on your computer, you can interact with it on the command line, sometimes referred to as the "terminal", "console", "Python shell" or "REPL" (Read, Eval, Print and Loop). More often, people use a text editor, such as [Sublime](https://www.sublimetext.com/), or more sophisticated IDEs such as [PyCharm](https://www.jetbrains.com/pycharm/) to write and run code. With a lot of setups available, you have many options to choose from!Today, we are using a browser-based Jupyter Notebook, which allows users to selectively run code cells and add rich text elements (paragraph, equations, figures, notes, links) in Markdown. With code, notes, instructions, and comments all in one place, it serves as a powerful resource and learning tool. What does this lesson cover? * Variables* Basic Data Types* Lists * Dictionaries* Loops* Conditional Statements* Functions and Arguments* Python Libraries Basic SyntaxPython is sometimes loosely referred to as 'executable pseudocode' since it often uses easily recognizable words, which can be used to infer what is happening or what will happen. Take for example, the simple line of code below. What do you think will happen when we run it? **to run press `Shift-Enter`**
###Code
print("Hello, World!")
###Output
_____no_output_____
###Markdown
VariablesA variable is assigned a *value*. Once assigned, the variable holds the information associated with that value. Variables are important in programming languages. In Python, the convention is to use descriptive variables to help make clear what you are trying to do in your code. Using clear variables can help you maintain your code and can help others read and understand your code.
###Code
## To use data, we first assign it to a "variable"
NY_state_bird = "Eastern Bluebird"
###Output
_____no_output_____
###Markdown
You can think about it as:> "The variable 'NY_state_bird' *gets* the value 'Eastern Bluebird'."
###Code
## Create a variable called my_name, assign it a value, and run
my_name = "Genevieve"
###Output
_____no_output_____
###Markdown
The value assigned to a variable will remain the same until you alter it. The value can be changed or reassigned within the program. For example, we can change the value of `message` to something new.
###Code
# Assign variable
message = "Hello, World!"
print(message)
# Reassign variable
message = "METRO rocks!"
print(message)
###Output
_____no_output_____
###Markdown
Why can we do this? Because Python interprets one line at a time! Be careful, once a variable has been changed, it will hold that new value. When in doubt, don't re-use variable names unless you are absolutely sure that you don't need the old value any longer. Rules for naming variables* can only contain letters, numbers, and underscores* can start with an underscore, but not a number* no spaces, but an underscore can be used to separate words or you can use CamelCase* cannot use the names of Python [built-in functions](https://docs.python.org/3/library/functions.html) or [keywords](https://www.w3schools.com/python/python_ref_keywords.asp)* should be short, but descriptive (employee_name is better than e_n)* be consistent. If you start with CamelCase or snake_case, try to use it throughout
###Code
# Examples of acceptable variable names
test_1 = 88                               # illustrative value
employee_name = "Ada Lovelace"            # illustrative value
_homework_grades = [90, 85, 77]           # illustrative value
InventoryList = ["scanner", "projector"]  # illustrative value
ISBN = "978-0-00-000000-0"                # illustrative placeholder
# Less helpful variable names
t_1 = 88                          # illustrative value
e_n = "Ada Lovelace"              # illustrative value
_hom_grad = [90, 85, 77]          # illustrative value
inventlist = ["scanner"]          # illustrative value
bknr = "978-0-00-000000-0"        # illustrative placeholder
# keywords, built-in functions, and reserved words cannot be used as variable names
True = 3
# Spelling and syntax must be exact
message = "I'm starting to understand variables!"
print(mesage)
# Create a few helpful variable names relevant to your work
# (illustrative placeholder values added so the cell runs)
GitHub_URLs = ["https://github.com/your-org/your-repo"]
Author_First_Name = "Octavia"
Author_Last_Name = "Butler"
Article_Title = "An Example Article Title"
Article_URLs = ["https://example.org/article"]
# Facilitator note: have participants pair up and read each other's variables as a form of peer code review.
# Are the variables easily understood by the person reading them?
###Output
_____no_output_____
###Markdown
Data Types There are four basic data types in Python. They are:* string: `"Hello, World!"`* integer: `74`* decimal (float): `7.4`* boolean: `True` `False`
###Code
# If you are unsure what the data type is, you can use the built-in type() function,
# which returns the class of the object passed to it.
# the syntax would be type(variable)
message = "Hello, World"
num = 74
# using the type(variable) syntax, check the data types of the variables above
type(message)
type(num)
###Output
_____no_output_____
###Markdown
StringsA *string* is a series of unicode characters surrounded by single or double quotation marks. Strings can be anything from a short word like "is" to combinations of words and numbers like "123 Mulberry Lane," to an entire corpus of texts (all of Jane Eyre). Strings can be stored, printed, and transformed.
###Code
# With this variable, we are printing it's type and its length with Python's built-in functions
favorite_game = "sonic the hedgehog"
print(type(favorite_game))
print(len(favorite_game))
# helpful built-in functions that transform strings
print(favorite_game.title())
print(favorite_game.upper())
print(favorite_game.lower())
# Create a variable and assign it a string. Use two of the built-in functions and print them both.
# You should have three lines of code.
movie = "little women"
year = "(2019)"
director = "greta gerwig"
print(movie.title() + year + " directed by " + director.title() + " was pretty good!")
###Output
Little Women(2019) directed by Greta Gerwig was pretty good!
###Markdown
Concatenating StringsSometimes it's helpful to join strings together. A common method is to use the plus symbol (+) to add multiple strings together. Simply place a + between as many strings as you want to join together.
###Code
# concatenate strings, in this case a first and a last name, using the + operator
first_name = "grace"
last_name = "hopper"
full_name = first_name + " " + last_name
print(full_name)
# adding to this, we can use a built-in function + a longer string.
print(full_name.title() + " was a pioneer of computer programming.")
# Alternatively, we could do something like this:
output = " was a pioneer of computer programming."
print(full_name.title() + output)
###Output
_____no_output_____
###Markdown
Now you try! Create two or more variables and print out a message of your own!
###Code
## Create two or more variables and use the print statement to tell us about something that interests you!
## Favorite author, actor, game, place to visit are all good options!
fav_activity = "hiking"
fav_place = "mountains"
print("My favorite thing to do is " + fav_activity + " in the " + fav_place + ".")
###Output
My favorite thing to do is hiking in the mountains.
###Markdown
Sometimes, we need to escape characters, add spaces, line breaks, and tabs to our text.
###Code
# Here's my example with an escape character.
first_name = "alan"
last_name = "turing"
full_name = first_name + " " + last_name
print(full_name.title() + " once said, \"Machines take me by surprise with great frequency.\"")
# Adding a line break and an empty line with two consecutive "\n"
line_one = "An old silent pond..."
line_two = "A frog jumps into the pond,"
line_three = "Splash! Silence again."
Basho_Haiku = line_one + "\n" + "\n" + line_two + "\n" + "\n" + line_three
print(Basho_Haiku)
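# A tab can be added with "\t" (illustrative addition, not part of the original workshop)
print("Name:\tGrace Hopper")
print("Field:\tComputer science")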
###Output
_____no_output_____
###Markdown
Challenge 1
###Code
# create a quote from a famous person and also create your own haiku (5, 7, 5)
# feel free to partner up!
#see two answers above to solve this challenge
###Output
_____no_output_____
###Markdown
Tidying up Strings Texts are messy; sometimes you need to clean them up! Stripping Whitespace
###Code
# we can remove white space from the left and right
text = " This sentence has too many spaces. "
print(text)
text = text.lstrip()
text = text.rstrip()
print(text)
###Output
_____no_output_____
###Markdown
Using Replace Note: .replace() takes two parameters: the first is what you are looking for in the text and the second is what you want to replace it with.> `string.replace(old, new)`
###Code
# use replace to collapse extra spaces in the middle of the text into a single space
text = text.replace(" ", " ")
print(text)
###Output
_____no_output_____
###Markdown
Real World ExampleWorking with a list of strings may require some tidying up, especially if you want to work with text as data. Often this is a process! Here's a real world example from my research on ingredients in classic recipes.
###Code
# we can clean this text up using some of Python's built-in functions as well as re (a regular expressions library)
import re
# Here is the in ingredient list. How do we isolate just the ingredients by themselves?
crawfish_pasta = "1/4 cup olive oil; 5 scallions, roughly chopped; 2 cloves garlic, minced; 1 medium yellow onion, minced; 1/2 small green bell pepper, seeded and minced; 1/2 cup dry white wine; dried mint; 1 can whole peeled tomatoes; 2 lb. cooked, peeled crawfish tails; 1 cup heavy cream; 1/3 cup roughly chopped parsley, plus more for garnish; Tabasco; Kosher salt and freshly ground black pepper; 1 lb. linguine;Grated parmesan"
print(crawfish_pasta)
# Remove some of the re-occuring text using replace
crawfish_pasta = crawfish_pasta.replace(", minced", "").replace(", roughly chopped", "").replace("/", "").replace(" cup", "").replace(", seeded and minced", "").replace("roughly chopped", "").replace(", plus more for garnish", "").replace("lb. ", "")
print(crawfish_pasta)
# use a simple regular expression to remove numbers and strip white space
crawfish_pasta = re.sub('[0-9]', '', crawfish_pasta).strip()
print(crawfish_pasta)
# replace consecutive spaces and collapse spaces following the semicolons
crawfish_pasta = crawfish_pasta.replace(" ", "").replace("; ", ";")
print(crawfish_pasta)
#split at semicolon (i.e. our delimiter) and break string into a list
crawfish_pasta = crawfish_pasta.split(';')
print(crawfish_pasta)
for ingredient in crawfish_pasta:
print(ingredient)
# use some of the built-in fuctions and re to clean up this messy line of text
messy_text = " He st0pped before the d00r of his own cottage , which was the fourth one from the main building and next to the 1ast. "
tidy_text = messy_text.replace("0", "o").replace("1", "l").replace(" ", " ").replace(" ", "").strip()
print(tidy_text)
###Output
_____no_output_____
###Markdown
Any Questions About Strings? Integers An *integer* in Python is a whole number with no decimal part; it can be positive, negative, or zero.
###Code
3+4 # Addition
4-3 # Subtraction
30*10 # Multiplication
30/10 # Division (in Python 3, / always returns a float)
46//4 # Floor Division
10**6 # An exponent operation
## Order of Operations PEMDAS - Please excuse my dear Aunt Sally!
10 - ((14 / 2) * 3) + 1
# determine if a number is odd or even using modulo, or remainder, operations
even_num = 10%2
odd_num = 15%2
print(even_num)
print(odd_num)
###Output
_____no_output_____
###Markdown
Why? An even number has no remainder when divided by 2. An odd number will have a remainder. FloatsA *float* in Python is any number with a decimal point.
###Code
# volume of a rectangular box (prism)
height = 3.5
length = 7.25
width = 7.75
V = height*length*width
print(V)
# int() truncates the value, keeping only the whole-number part
int(V)
# the round function helps handle floating point numbers (floats).
# the number after the comma (in this case 2) sets the precision of the decimal.
# this example restricts the numbers after the decimal point to 2 (handy when printing sums that refer to money)
round(V, 2)
# If you need only what is after the decimal point, you could do something like this:
dec_portion = V - int(V)
print(dec_portion)
###Output
_____no_output_____
###Markdown
Avoiding TypeError with the str() Function We've talked about strings and we've talked about integers. When using them together, we need to avoid type errors.
###Code
# strings and numbers are different data types, so concatenating them directly raises a TypeError:
fav_num = 3.14
message = "My favorite number is " + fav_num + "."
print(message)
# The str() function is used to convert the specified value, in this case a float, into a string.
message = "My favorite number is " + str(fav_num) + "."
print(message)
###Output
_____no_output_____
###Markdown
Booleans A *boolean* is a binary variable with two possible values: `True` and `False`.
###Code
3>=4
3 == 4
x = 255
y = 33
if x > y:
print("x is greater than y.")
else:
print("x is not greater than y.")
###Output
_____no_output_____
###Markdown
Challenge 2
###Code
# Working with a partner, we are going to create some helpful conversions and print them.
# hint: begin with identifying the information you need and assign it to a variable
# 1: Convert farenheit to celsius:
# formula: celsius = (farenheit - 32) * 5/9
farenheit = 77
celsius = (farenheit - 32) * 5/9
print(celsius)
# 2: Convert celsius to farenheit:
# formula: farenheit = (celsius * 9/5) + 32
celsius = 25
farenheit = (celsius * 9/5) + 32
print(farenheit)
# 3: Convert pounds to kilograms:
# formula: kilograms = pounds/2.2046226218
pounds = 105
kilograms = pounds/2.2046226218
print(round(kilograms))
# 4: Convert kilograms to pounds:
# formula: pounds = kilograms * 2.2046226218
kilograms = 48
pounds = kilograms * 2.2046226218
print(round(pounds))
###Output
_____no_output_____
###Markdown
-------BREAK---------- Lists A *list* is a collection of items in a particular order. Lists can contain strings, integers, and floats; each element is separated by a comma. Lists are always inside square brackets.
###Code
# An example of a list
my_grocery_list = ["bananas", "pears", "apples", "chocolate", "coffee", "cereal", "kale", "juice"]
print(my_grocery_list)
# we can use the len function to see how many items are in a list
len(my_grocery_list)
# remember, Python indexing starts at zero
print(my_grocery_list[2])
# using the end index will give you the last item in the list
print(my_grocery_list[-1])
# If you need a range of items from a list, you can use a slice: list[start_index:end_index] (the end index is not included)
print(my_grocery_list[3:5])
# Sorting will put your list in alphabetical order
print(sorted(my_grocery_list))
# using the sorted fuction is not permanent
print(my_grocery_list)
# to make it so, use the sort function
my_grocery_list.sort()
print(my_grocery_list)
# we can also permanently reverse this list
my_grocery_list.reverse()
print(my_grocery_list)
###Output
_____no_output_____
###Markdown
DictionariesA *dictionary* in Python is a collection of *key-value pairs*. Each key is connected to a value and you can use a key to access the value associated with that key.
###Code
#dictionaries can hold lots of information
#uses a key, value pair
club_member_1 = {
"member_name": "Brian Jones",
"member_since": "2017",
"member_handle": "@bjones_tweets",
}
print(club_member_1)
#to get information from the dictionary
for key, value in club_member_1.items():
print(key, ":", value)
# we can nest dictionaries into a list
club_member_2 = {
"member_name": "Anne Green",
"member_since": "2015",
"member_handle": "@Greenbean",
}
club_members = [club_member_1, club_member_2]
print(club_members)
for member in club_members:
for key, value in member.items():
print(key, ":", value)
print()
###Output
_____no_output_____
###Markdown
Challenge 3
###Code
# Make a list of 7 items
# print the list
# reverse and print the list
# print the middle three elements of the list
# OPTIONAL: use the member information above to create a club_member_3, append it to club_members,
# and print the new list. HINT: remember, code is reusable.
list = ["ALA", "ASIS&T", "MLA", "ARLIS/NA", "ACRL", "PLA", "METRO"]
print(list)
print()
list.reverse()
print(list)
print()
print(list[2:4])
print()
club_member_3 = {
"member_name": "Jane Smith",
"member_since": "2010",
"member_handle": "@JS_Designs",
}
club_members.append(club_member_3)
for member in club_members:
for key, value in member.items():
print(key, ":", value)
print()
###Output
_____no_output_____
###Markdown
List Manipulation: Adding, Editing, and Removing Items in a list
###Code
# this is a list of items in our makerspace
makerspace = ["Arduino", "3D printer", "sewing machine", "sewing machine case", "soldering iron", "micro controlers", "Legos", "Legos case"]
print(makerspace)
# we can add items to our list using append function
# appending adds these items to the end of the list
makerspace.append("camera")
makerspace.append("SD cards")
makerspace.append("batteries")
print(makerspace)
# If we want to insert at a particular place, can can use the insert fuction
makerspace.insert(3, "thread") # insert takes two parameters: list.insert.(index, element)
print(makerspace)
# we can also use an index to replace an existing element with a new element
makerspace[0] = "Raspberry Pi"
print(makerspace)
# the del statement deletes the item at a given index
# del works on lists, but not on strings (strings are immutable)
del makerspace[4]
print(makerspace)
# the pop method also removes (and returns) the element at a specific index
makerspace.pop(7)
print(makerspace)
###Output
_____no_output_____
###Markdown
Challenge 4
###Code
# append "record player" to the end of the list, insert "gramophone" at index 2,
# delete "CD" from the list, reverse and print.
analog_media = ["VCR", "tape player", "16 mm projector", "CD", "reel to reel", "8 track"]
analog_media.append("record player")
analog_media.insert(2, "gramaphone")
del analog_media[4]
print(analog_media)
###Output
_____no_output_____
###Markdown
For Loops Looping allows you to take the same action(s) with every item in a list. This is very useful when you have lists that contain a lot of items.
###Code
# range starts at the first number and goes up to the number BEFORE the stopping number
for number in range(1,6):
print(number)
# range can also include negative numbers
for new_number in range(-3,4):
print(new_number)
# for each number, square the number
for number in range(1,6):
print(number,"squared =", number**2)
# use an index counter to track the index of each name
name_list = ["Denise", "Tammy", "Belinda", "Byron", "Keelie", "Charles", "Alison", "Jamal", "Greta", "Manuel"]
name_index = 0
for name in name_list:
print(name_index, ":", name)
name_index += 1
# incrementing by one
musicians = ['Allen Toussaint', 'Buddy Bolden', 'Danny Barker', 'Dr. John', 'Fats Domino', 'Irma Thomas']
for musician in musicians:
print(musician)
musicians = ['Allen Toussaint', 'Buddy Bolden', 'Danny Barker', 'Dr. John', 'Fats Domino', 'Irma Thomas']
for musician in musicians:
print("I have a record by "+ musician + ".")
###Output
_____no_output_____
###Markdown
Conditional Statements IF Statements An *if statement* allows you to check for one or more conditions and take action based on them.Logically it means:`if conditional_met: do something`
###Code
# test to see if a number is greater than or equal to a particular value
age = 18
if age >= 18:
print("You can vote!")
# loop through list and print each name in it
name_list = ["Kylie", "Allie", "Sherman", "Deborah", "Kyran", "Shawna", "Diane", "Josephine", "Peabody", "Ferb", "Wylie"]
for name in name_list:
print(name)
# loop through list and use if to find a particular name
for name in name_list:
if name == "Diane":
print(name, "found!")
###Output
_____no_output_____
###Markdown
Else Statements
###Code
# Diane has RSVP'd to say that she can't make it and we should take her off the list
counter = 0
for name in name_list:
if name == "Diane":
print(name, "found! Located at index: ", counter)
break
else:
print("not found ...", counter)
counter += 1
# Locating the exact index makes it easier to remove her name
del name_list[6]
print(name_list)
# Python has built-in finding functions that make this very easy
name_list2 = ["Kylie", "Allie", "Sherman", "Deborah", "Kyran", "Shawna", "Diane", "Josephine", "Peabody", "Ferb", "Wylie"]
rsvp_yes = ["Kylie", "Allie", "Shawna","Josephine", "Peabody", "Ferb", "Wylie"]
rsvp_no = ["Kyran","Sherman","Diane"]
# iterate over a copy of the list, since removing items from a list while
# iterating over it can silently skip elements
for name in name_list2[:]:
    if name in rsvp_no:
        print(name, "RSVP'd no, removing from list ...")
        name_list2.remove(name)
    else:
        if name in rsvp_yes:
            print(name, "RSVP'd yes and is coming to the party!")
print()
print(name_list2)
###Output
_____no_output_____
###Markdown
Elif Statement
###Code
# separate the elements into other lists by type
my_list = [9, "Endymion", 1, "Rex", 65.4, "Zulu", 30, 9.87, "Orpheus", 16.45]
my_int_list = []
my_float_list = []
my_string_list = []
for value in my_list:
if(type(value)==int):
my_int_list.append(value)
elif(type(value)==float):
my_float_list.append(value)
elif(type(value)==str):
my_string_list.append(value)
print(my_list)
print(my_int_list)
print(my_float_list)
print(my_string_list)
###Output
_____no_output_____
###Markdown
OPTIONAL: If, Elif, Else Statements
###Code
# counting number ranges
#create a list of 50 random numbers between 1 and 100
import random
number_range_list = []
for entry in range(0,50):
number=random.randint(1,100)
number_range_list.append(number)
# count and categorize numbers by range
first_quarter = []
second_quarter = []
third_quarter = []
fourth_quarter = []
# check ranges
for number in number_range_list:
#print(number)
if number <= 25:
first_quarter.append(number)
elif number <=50:
second_quarter.append(number)
elif number <=75:
third_quarter.append(number)
else:
fourth_quarter.append(number)
# calculate percentage of whole in each quarter
q1_total = round((len(first_quarter)/50)*100)
q2_total = round((len(second_quarter)/50)*100)
q3_total = round((len(third_quarter)/50)*100)
q4_total = round((len(fourth_quarter)/50)*100)
print("data ready ...")
# print out the data to give a basic shape to it visually
print("Quick, General Bar-Chart \n")
print(" 0-25:",first_quarter)
print(" 26-50:",second_quarter)
print(" 51-75:",third_quarter)
print("76-100:",fourth_quarter)
# print out the analysis
print("Distribution of 50 randomly generated numbers \n by percentage per qarter:")
print(" 0-25:",q1_total, "%")
print(" 26-50:",q2_total, "%")
print(" 51-75:",q3_total, "%")
print("76-100:",q4_total, "%")
# break down the information further
print("Minimum Values")
print(" 0-25:",min(first_quarter))
print(" 26-50:",min(second_quarter))
print(" 51-75:",min(third_quarter))
print("76-100:",min(fourth_quarter))
print("Maximum Values")
print(" 0-25:",max(first_quarter))
print(" 26-50:",max(second_quarter))
print(" 51-75:",max(third_quarter))
print("76-100:",max(fourth_quarter))
print("Average Values")
print(" 0-25:",round(sum(first_quarter)/len(first_quarter)))
print(" 26-50:",round(sum(second_quarter)/len(second_quarter)))
print(" 51-75:",round(sum(third_quarter)/len(third_quarter)))
print("76-100:",round(sum(fourth_quarter)/len(fourth_quarter)))
###Output
_____no_output_____
###Markdown
While Statements A *while statement* operates as long as a particular condition is true. Logically it means:`while conditional_true: do something`
###Code
# what do you think this will do?
counter = 10
while counter>0:
print("Counter =", counter)
#print("Counter = " + str(counter))
counter = counter-1
#What do you think would happen if at the end of our code we wrote "counter = counter + 1"?
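# (Answer: the counter would never reach 0, so the loop would run forever.)
# A safer pattern (illustrative) is an explicit exit condition with break:
attempts = 0
while True:
    attempts += 1
    if attempts >= 3:
        break
print("Loop exited after", attempts, "attempts")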
###Output
_____no_output_____
###Markdown
Infinite loops occur when the loop's condition never becomes false. This can cause the program to hang and, if it keeps accumulating data, run out of memory and crash. Functions Function 1
###Code
# data for function 1
famous_programmers = ["augusta ada king (lovelace)", "grace hopper", "limor fried", "evelyn boyd granville",
"parisa tabriz", "sister mary kenneth keller", "carol shaw"]
# create a function by using the keyword 'def'
# set the argument(s) it will accept inside the parentheses
# remember, a function must be defined before it is called
def thank_you(names_list):
for name in names_list:
message = "Thank you " + name.title() + ". Your work and contributions to programming have been amazing!"
print(message)
# call the function and use the list as the argument
thank_you(famous_programmers)
###Output
_____no_output_____
###Markdown
Function 2
###Code
# data for function 2
weekly_high_temp = [44, 56, 47, 50, 53, 57, 61]
# building on the challenge you did previously let's create a function that does a conversion
# instead of a single value, let's take in a list and convert it
def farenheit_to_celsius(temperature_list):
temp_in_celcius = []
for number in temperature_list:
celcius = round((number - 32) * 5/9)
temp_in_celcius.append(celcius)
return temp_in_celcius
print("Weekly High Temps in Celcius:", farenheit_to_celsius(weekly_high_temp))
###Output
_____no_output_____
###Markdown
Challenge 5
###Code
# Create a function to convert this list of feet to inches
measurements_feet = [7, 9, 14, 33, 18.5, 45, 3.25, 10, 1, 2.5]
def feet_to_inches(feet):
measurement_inches = []
for foot in feet:
inches = foot * 12
measurement_inches.append(inches)
return measurement_inches
print(feet_to_inches(measurements_feet))
###Output
_____no_output_____
###Markdown
Python Libraries Python has a huge collection of libraries, also known as packages, which are essentially a collection of modules in a directory. Python is a package itself and comes with many built-in modules. There are, however, many other packages that are useful if you need to complete certain tasks, such as data cleaning, web scraping, or statistical analysis. Since you already have Anaconda, use the [Anaconda package repository](https://anaconda.org/anaconda/repo) to install the Python libraries you need! Web Scraping with BeautifulSoup install BeautifulSoup on the terminal using Anaconda's [repo for bs4](https://anaconda.org/anaconda/beautifulsoup4)`conda install -c anaconda beautifulsoup4`
###Code
from bs4 import BeautifulSoup
import requests
import time
url = "http://shakespeare.mit.edu/Poetry/sonnets.html"
results_page = requests.get(url)
page_html = results_page.text
soup = BeautifulSoup(page_html, "html.parser")
sonnets = soup.find_all('dl')
for sonnet in sonnets:
each_sonnet = sonnet.find_all('a')
for each in each_sonnet:
title = each.text
url = each['href']
url = "http://shakespeare.mit.edu/Poetry/" + url
sonnet_request = requests.get(url)
sonnet_html = sonnet_request.text
sonnet_soup = BeautifulSoup(sonnet_html, "html.parser")
sonnet_text = sonnet_soup.find("blockquote")
sonnet_text = sonnet_text.text
print(title)
print()
print(url)
print()
print(sonnet_text)
print("---------------------")
time.sleep(2)
###Output
_____no_output_____ |
BOOTCAMP/Complete-Python-3-Bootcamp-master/02-Python Statements/(Bootcamp)-02-03-for Loops.ipynb | ###Markdown
______Content Copyright by Pierian Data for LoopsA for loop acts as an iterator in Python; it goes through items that are in a *sequence* or any other iterable item. Objects that we've learned about that we can iterate over include strings, lists, tuples, and even built-in iterables for dictionaries, such as keys or values.We've already seen the for statement a little bit in past lectures but now let's formalize our understanding.Here's the general format for a for loop in Python: for item in object: statements to do stuff The variable name used for the item is completely up to the coder, so use your best judgment for choosing a name that makes sense and you will be able to understand when revisiting your code. This item name can then be referenced inside your loop, for example if you wanted to use if statements to perform checks.Let's go ahead and work through several example of for loops using a variety of data object types. We'll start simple and build more complexity later on. Example 1Iterating through a list
###Code
# We'll learn how to automate this sort of list in the next lecture
list1 = [1,2,3,4,5,6,7,8,9,10]
for num in list1:
print(num)
###Output
1
2
3
4
5
6
7
8
9
10
###Markdown
Great! Hopefully this makes sense. Now let's add an if statement to check for even numbers. We'll first introduce a new concept here--the modulo. ModuloThe modulo allows us to get the remainder in a division and uses the % symbol. For example:
###Code
17 % 5
###Output
_____no_output_____
###Markdown
This makes sense since 17 divided by 5 is 3 remainder 2. Let's see a few more quick examples:
###Code
# 3 Remainder 1
10 % 3
# 2 Remainder 4
18 % 7
# 2 no remainder
4 % 2
###Output
_____no_output_____
###Markdown
Notice that if a number is fully divisible with no remainder, the result of the modulo call is 0. We can use this to test for even numbers, since if a number modulo 2 is equal to 0, that means it is an even number!Back to the for loops! Example 2Let's print only the even numbers from that list!
###Code
for num in list1:
if num % 2 == 0:
print(num)
###Output
2
4
6
8
10
###Markdown
We could have also put an else statement in there:
###Code
for num in list1:
if num % 2 == 0:
print(num)
else:
print('Odd number')
###Output
Odd number
2
Odd number
4
Odd number
6
Odd number
8
Odd number
10
###Markdown
Example 3Another common idea during a for loop is keeping some sort of running tally during multiple loops. For example, let's create a for loop that sums up the list:
###Code
# Start sum at zero
list_sum = 0
for num in list1:
list_sum = list_sum + num
print(list_sum)
###Output
55
###Markdown
Great! Read over the above cell and make sure you understand fully what is going on. Also we could have implemented a += to perform the addition towards the sum. For example:
###Code
# Start sum at zero
list_sum = 0
for num in list1:
list_sum += num
print(list_sum)
###Output
55
###Markdown
Example 4We've used for loops with lists, how about with strings? Remember strings are a sequence so when we iterate through them we will be accessing each item in that string.
###Code
for letter in 'This is a string.':
print(letter)
###Output
T
h
i
s
i
s
a
s
t
r
i
n
g
.
###Markdown
Example 5Let's now look at how a for loop can be used with a tuple:
###Code
tup = (1,2,3,4,5)
for t in tup:
print(t)
###Output
1
2
3
4
5
###Markdown
Example 6Tuples have a special quality when it comes to for loops. If you are iterating through a sequence that contains tuples, the item can actually be the tuple itself, this is an example of *tuple unpacking*. During the for loop we will be unpacking the tuple inside of a sequence and we can access the individual items inside that tuple!
###Code
list2 = [(2,4),(6,8),(10,12)]
for tup in list2:
print(tup)
# Now with unpacking!
for (t1,t2) in list2:
print(t1)
###Output
2
6
10
###Markdown
Cool! With tuples in a sequence we can access the items inside of them through unpacking! The reason this is important is because many objects will deliver their iterables through tuples. Let's start exploring iterating through Dictionaries to explore this further! Example 7
###Code
d = {'k1':1,'k2':2,'k3':3}
for item in d:
print(item)
###Output
k1
k2
k3
###Markdown
Notice how this produces only the keys. So how can we get the values? Or both the keys and the values? We're going to introduce three new Dictionary methods: **.keys()**, **.values()** and **.items()**In Python each of these methods return a *dictionary view object*. It supports operations like membership test and iteration, but its contents are not independent of the original dictionary – it is only a view. Let's see it in action:
###Code
# Create a dictionary view object
d.items()
###Output
_____no_output_____
###Markdown
Since the .items() method supports iteration, we can perform *dictionary unpacking* to separate keys and values just as we did in the previous examples.
###Code
# Dictionary unpacking
for k,v in d.items():
print(k)
print(v)
###Output
k1
1
k2
2
k3
3
###Markdown
If you want to obtain a true list of keys, values, or key/value tuples, you can *cast* the view as a list:
###Code
list(d.keys())
###Output
_____no_output_____
###Markdown
Remember that dictionaries historically made no ordering guarantee (since Python 3.7 they preserve insertion order), so keys and values may come back in arbitrary order on older versions. You can obtain a sorted list using sorted():
###Code
sorted(d.values())
###Output
_____no_output_____ |
src/notebooks/oclc-find-related-issns.ipynb | ###Markdown
Using the xID service from OCLCBelow I grab some records from a marcxml file, grab the ISSNs, and use the xISSN web service (to be discontinued in March 2016) to populate a dataframe with related ISSNs for the given ISSN._Committed by Jack Ammerman (@jwacooks) 2016-01-05_
###Code
import os
import numpy as np
import pandas as pd
#import glob
import os
#import codecs
import json
#import time
from urllib.request import Request, urlopen
from urllib.parse import urlencode, quote_plus
import pymarc
import marcx
import io
os.chdir('/Volumes/jwa_drive1/git/issn')
###Output
_____no_output_____
###Markdown
Let's get some recordsI generated a managed set in Alma and exported the bib records in MARCXML format.Below is a routine to parse the marc records and create a dataframe that holds the relevant fields from the marc records.
###Code
f = '/Volumes/jwa_drive1/git/issn/marcRecords.xml'
records = pymarc.parse_xml_to_array(io.open(f,mode='r',encoding='utf-8'))
columns = ['issn', 'mmsid', 'title','other_issns']
df = pd.DataFrame()
for rec in records:
d = {}
rec = marcx.FatRecord.from_record(rec)
try:
d['issn'] = rec['022']['a']
except Exception as e:
continue
#d['issn'] = ''
d['title'] = rec.title().replace('/','')
d['mmsid'] = rec['001'].data
d['other_issns'] = ''
df = df.append(d,ignore_index=True)
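    # note: DataFrame.append is deprecated (and removed in pandas 2.0); collecting the
    # dicts in a list and building the DataFrame once at the end (or using pd.concat)
    # is the modern equivalent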
## uncomment line below to verify that the records were loaded properly
#df.head()
###Output
_____no_output_____
###Markdown
Using the dataframe (df), iterate through the rows to make calls to the xISSN service* iterate through a slice of the dataframe.* for each row, we grab the issn and use that as part of the query string to send to the xISSN service. * the response is a string (bytes) that we convert to json.* we navigate the json response to grab the list of issn numbers which we call 'ISSNs'* ISSNs includes format information as well. we parse that list to extract only the desired issn number which we add to a newly constructed 'issn_list'* we add the issn_list to the dataframe
###Code
base_url = 'http://xissn.worldcat.org/webservices/xid/issn/'
end_url = '?method=getForms&format=json'
## below we are iterating through the first 20 rows of the dataframe the size of the
## slice can be modified by changing the value in 'df[:20]'
##
for index,row in df[:20].iterrows():
issn = row['issn']
try:
response = json.loads(urlopen(base_url + issn + end_url).read().decode('utf-8'))
except Exception as e:
## trapping for the possibility that there is no issn for the record in the dataframe
print(index,e)
continue
ISSNs = response['group'][0]['list']
issn_list = []
for issn in ISSNs:
issn_list.append(issn['issn'])
#print(issn_list)
    # iterrows() yields copies, so assigning into `row` would not update df;
    # write the list back into the DataFrame cell instead
    df.at[index, 'other_issns'] = issn_list
df.head(20)
###Output
_____no_output_____ |
Homework/Homework 3 (Evaluation of Predictive Models) Revision 1.ipynb | ###Markdown
Homework 3: Evaluation of Predictive ModelsThis exercise should guide you through performing a predictive modeling analysis. You will choose a model type, set critical complexity parameters, and apply it to select prospects for a direct mailing charity campaign.
###Code
# Import the libraries we will be using
import numpy as np
import pandas as pd
import sys
import matplotlib.pylab as plt
import seaborn as sns
%matplotlib inline
sns.set(style='ticks', palette='Set2')
plt.rcParams['figure.figsize'] = 10, 8
sys.path.append("..")
# tools for loading a few toy datasets
from ds_utils.sample_data import *
from sklearn.model_selection import train_test_split
X, y = get_mailing_data()
X_mailing_train, X_mailing_test, y_mailing_train, y_mailing_use_secret = train_test_split(X, y, train_size=.75, test_size=.25, random_state=42)
###Output
_____no_output_____ |
doc/notebooks/MinHash, design and performance.ipynb | ###Markdown
Designing a Python library for building prototypes around MinHash This is very much work-in-progress. The software and/or ideas presented here may become the subject of a peer-reviewed or self-published write-up. For now the URL for this is: https://github.com/lgautier/mashing-pumpkins MinHash in the context of biological sequences was introduced by the Maryland Bioinformatics Lab [add reference here]. Building a MinHash is akin to taking a sample of all k-mers / n-grams found in a sequence and using that sample as a signature or sketch for that sequence. A look at convenience *vs* performance Moving Python code to C leads to performance improvement... sometimes. Test sequence First we need a test sequence. Generating a random one quickly can be achieved as follows, for example. If you already have your own way to generate a sequence, or your own benchmark sequence, the following code cell can be changed so as to end up with a variable `sequence` that is a `bytes` object containing it.
###Code
# we take a DNA sequence as an example, but this is arbitrary and not necessary.
alphabet = b'ATGC'
# create a lookup structure to go from byte to 4-mer
# (a arbitrary byte is a bitpacked 4-mer)
quad = [None, ]*(len(alphabet)**4)
i = 0
for b1 in alphabet:
for b2 in alphabet:
for b3 in alphabet:
for b4 in alphabet:
quad[i] = bytes((b1, b2, b3, b4))
i += 1
# random bytes for a ~2-3 Mb genome (order of magnitude for a bacterial genome)
import ssl
def make_rnd_sequence(size):
sequencebitpacked = ssl.RAND_bytes(int(size/4))
sequence = bytearray(int(size))
for i, b in zip(range(0, len(sequence), 4), sequencebitpacked):
sequence[i:(i+4)] = quad[b]
return bytes(sequence)
size = int(2E6)
sequence = make_rnd_sequence(size)
import time
class timedblock(object):
def __enter__(self):
self.tenter = time.time()
return self
def __exit__(self, type, value, traceback):
self.texit = time.time()
@property
def duration(self):
return self.texit - self.tenter
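# Illustrative aside (not part of the original notebook): the core "bottom-k" MinHash
# idea in a few lines -- hash every k-mer and keep only the `maxsize` smallest hash
# values as the sketch. `mmh3` is assumed to be installed (it is used again below).
import heapq
import mmh3

def toy_minhash(seq, nsize=21, maxsize=1000, seed=42):
    hashes = {mmh3.hash64(seq[i:i + nsize], seed)[0]
              for i in range(len(seq) - nsize + 1)}
    return heapq.nsmallest(maxsize, hashes)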
###Output
_____no_output_____
###Markdown
Kicking the tires with `sourmash` The executable `sourmash` is a nice package from the dib-lab, implemented in Python and including a library [add reference here]. Perfect for quickly trying out what MinHash sketches can do. We will create a MinHash of maximum size 5,000 elements and of k-mer size 21 (all n-grams of length 21 across the input sequence will be considered for inclusion in the MinHash). At the time of writing sourmash's MinHash is implemented in C/C++, and we use it as a reference for speed as we measure the time it takes to process our reference sequence.
###Code
from sourmash_lib._minhash import MinHash
SKETCH_SIZE = 5000
sequence_str = sequence.decode("utf-8")
with timedblock() as tb:
smh = MinHash(SKETCH_SIZE, 21)
smh.add_sequence(sequence_str)
t_sourmash = tb.duration
print("%.2f seconds / sequence" % t_sourmash)
###Output
0.42 seconds / sequence
###Markdown
This is awesome. The sketch for a bacteria-sized DNA sequence can be computed very quickly (about a second on my laptop). Redesigning it all for convenience and flexibility We have redesigned what such a class could look like, and implemented that design in Python, foremost for our own convenience and to match the claim of convenience. Now, how bad is the impact on performance? Our new design allows flexibility with respect to the hash function used; to initially illustrate the point we use `mmh3`, an existing Python package wrapping MurmurHash3, the hashing function used in `MASH` and `sourmash`.
###Code
# make a hashing function to match our design
import mmh3
def hashfun(sequence, nsize, hbuffer, w=100):
n = min(len(hbuffer), len(sequence)-nsize+1)
for i in range(n):
ngram = sequence[i:(i+nsize)]
hbuffer[i] = mmh3.hash64(ngram)[0]
return n
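# The convention used in this notebook: a hashing function fills `hbuffer` with
# 64-bit hash values for successive n-grams and returns how many were written.
# (The `w` parameter is unused in this particular sketch.)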
from mashingpumpkins.minhashsketch import MinSketch
from array import array
with timedblock() as tb:
mhs = MinSketch(21, SKETCH_SIZE, hashfun, 42)
mhs.add(sequence, hashbuffer=array("q", [0,]*200))
t_basic = tb.duration
print("%.2f seconds / sequence" % (t_basic))
print("Our Python implementation is %.2f times slower." % (t_basic / t_sourmash))
###Output
0.91 seconds / sequence
Our Python implementation is 2.17 times slower.
###Markdown
Ah. Our Python implementation only using `mmh3` and the standard library is only a bit slower. There is more to it though. The code in "mashingpumpkins" is doing more by keeping track of the k-mer/n-gram along with the hash value in order to allow the generation of inter-operable sketch [add reference to discussion on GitHub]. Our design in computing batches of hash values each time C is reached for MurmurHash3. We have implemented the small C function require to call MurmurHash for several k-mers, and when using it we have interesting performance gains.
###Code
from mashingpumpkins._murmurhash3 import hasharray
hashfun = hasharray
with timedblock() as tb:
hashbuffer = array('Q', [0, ] * 300)
mhs = MinSketch(21, SKETCH_SIZE, hashfun, 42)
mhs.add(sequence, hashbuffer=hashbuffer)
t_batch = tb.duration
print("%.2f seconds / sequence" % (t_batch))
print("Our Python implementation is %.2f times faster." % (t_sourmash / t_batch))
###Output
0.25 seconds / sequence
Our Python implementation is 1.68 times faster.
###Markdown
Wow! At the time of writing this is between 1.5 and 2.5 times faster than the C-implemented `sourmash`. And we are doing more work (we are keeping the n-grams / k-mers associated with the hash values). We could modify our class to stop storing the associated k-mer (only keep the hash value) to see if it improves performance. However, as was pointed out, sourmash's MinHash is also checking that the sequence only uses letters from the DNA alphabet **and** computes the sketch for both the sequence and its reverse complement. We add these 2 operations (check and reverse complement) in a custom child class:
###Code
from mashingpumpkins._murmurhash3 import hasharray
hashfun = hasharray
from array import array
trans_tbl = bytearray(256)
for x,y in zip(b'ATGC', b'TACG'):
trans_tbl[x] = y
def revcomp(sequence):
ba = bytearray(sequence)
ba.reverse()
ba = ba.translate(trans_tbl)
return ba
class MyMash(MinSketch):
def add(self, seq, hashbuffer=array('Q', [0, ]*300)):
ba = revcomp(sequence)
if ba.find(0) >= 0:
raise ValueError("Input sequence is not DNA")
super().add(sequence, hashbuffer=hashbuffer)
super().add(ba, hashbuffer=hashbuffer)
with timedblock() as tb:
mhs = MyMash(21, SKETCH_SIZE, hashfun, 42)
mhs.add(sequence)
t_batch = tb.duration
print("%.2f seconds / sequence" % (t_batch))
print("Our Python implementation is %.2f times faster." % (t_sourmash / t_batch))
###Output
0.42 seconds / sequence
Our Python implementation is 1.01 times faster.
###Markdown
Still pretty good, the code for the check is not particularly optimal (that's the kind of primitives that would go to C). MASH quirksUnfortunately this is not quite what MASH (sourmash is based on) is doing. Tim highlighted what is happening: for every ngram and its reverse complement, the one with the lowest lexicograph order is picked for inclusion in the sketch.Essentially, picking segment chunks depending on the lexicographic order of the chunk's direct sequence vs its reverse complement is a sampling/filtering strategy at local level before the hash value is considered for inclusion in the MinHash. The only possible reason for this could be the because the hash value is expensive to compute (but this does not seem to be the case).Anyway, writing a slightly modified batch C function that does that extra sampling/filtering is easy and let's use conserve our design. We can then implement a MASH-like sampling in literally one line:
###Code
from mashingpumpkins import _murmurhash3_mash
def hashfun(sequence, nsize, buffer=array('Q', [0,]*300), seed=42):
return _murmurhash3_mash.hasharray_withrc(sequence, revcomp(sequence), nsize, buffer, seed)
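# For intuition (illustrative, much slower): per k-mer, the rule implemented by the C
# helper above amounts to hashing whichever of the k-mer and its reverse complement is
# lexicographically smaller.
def canonical_kmer(ngram):
    rc = bytes(revcomp(ngram))
    return ngram if bytes(ngram) <= rc else rc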
with timedblock() as tb:
hashbuffer = array('Q', [0, ] * 300)
mhs = MinSketch(21, SKETCH_SIZE, hashfun, 42)
mhs.add(sequence)
t_batch = tb.duration
print("%.2f seconds / sequence" % (t_batch))
print("Our Python implementation is %.2f times faster." % (t_sourmash / t_batch))
###Output
0.25 seconds / sequence
Our Python implementation is 1.65 times faster.
###Markdown
So now the claim is that we are just like sourmash/MASH, but mostly in Python and faster. We check that the sketches are identical, and they are:
###Code
len(set(smh.get_mins()) ^ mhs._heapset)
###Output
_____no_output_____
###Markdown
Parallel processing Now, what about parallel processing?
###Code
from mashingpumpkins.sequence import chunkpos_iter
import ctypes
import multiprocessing
from functools import reduce
import time
NSIZE = 21
SEED = 42
def build_mhs(args):
sketch_size, nsize, sequence = args
mhs = MinSketch(nsize, sketch_size, hashfun, SEED)
mhs.add(sequence)
return mhs
res_mp = []
for l_seq in (int(x) for x in (1E6, 5E6, 1E7, 5E7)):
sequence = make_rnd_sequence(l_seq)
for sketch_size in (1000, 5000, 10000):
sequence_str = sequence.decode("utf-8")
with timedblock() as tb:
smh = MinHash(sketch_size, 21)
smh.add_sequence(sequence_str)
t_sourmash = tb.duration
with timedblock() as tb:
ncpu = 2
p = multiprocessing.Pool(ncpu)
# map step (parallel in chunks)
result = p.imap_unordered(build_mhs,
((sketch_size, NSIZE, sequence[begin:end])
for begin, end in chunkpos_iter(NSIZE, l_seq, l_seq//ncpu)))
# reduce step (reducing as chunks are getting ready)
mhs_mp = reduce(lambda x, y: x+y, result, next(result))
p.terminate()
t_pbatch = tb.duration
res_mp.append((l_seq, t_pbatch, sketch_size, t_sourmash))
from rpy2.robjects.lib import dplyr, ggplot2 as ggp
from rpy2.robjects.vectors import IntVector, FloatVector, StrVector, BoolVector
from rpy2.robjects import Formula
dataf = dplyr.DataFrame({'l_seq': IntVector([x[0] for x in res_mp]),
'time': FloatVector([x[1] for x in res_mp]),
'sketch_size': IntVector([x[2] for x in res_mp]),
'ref_time': FloatVector([x[3] for x in res_mp])})
p = (ggp.ggplot(dataf) +
ggp.geom_line(ggp.aes_string(x='l_seq',
y='log2(ref_time/time)',
color='factor(sketch_size, ordered=TRUE)'),
size=3) +
ggp.scale_x_sqrt("sequence length") +
ggp.theme_gray(base_size=18) +
ggp.theme(legend_position="top",
axis_text_x = ggp.element_text(angle = 90, hjust = 1))
)
import rpy2.ipython.ggplot
rpy2.ipython.ggplot.image_png(p, width=1000, height=500)
###Output
_____no_output_____
###Markdown
We have just made sourmash/MASH about 2 times faster... some of the time. Parallelization does not always bring speedups (it depends on the size of the sketch and on the length of the sequence for which the sketch is built). Scaling up Now, how much time should it take to compute signatures for various references? First we quickly check that the time is roughly proportional to the size of the reference:
###Code
SEED = 42
def run_sourmash(sketchsize, sequence, nsize):
sequence_str = sequence.decode("utf-8")
with timedblock() as tb:
smh = MinHash(sketchsize, nsize)
smh.add_sequence(sequence_str)
return {'t': tb.duration,
'what': 'sourmash',
'keepngrams': False,
'l_sequence': len(sequence),
'bufsize': 0,
'nsize': nsize,
'sketchsize': sketchsize}
def run_mashingp(cls, bufsize, sketchsize, sequence, hashfun, nsize):
hashbuffer = array('Q', [0, ] * bufsize)
with timedblock() as tb:
mhs = cls(nsize, sketchsize, hashfun, SEED)
mhs.add(sequence, hashbuffer=hashbuffer)
keepngrams = True
return {'t': tb.duration,
'what': 'mashingpumpkins',
'keepngrams': keepngrams,
'l_sequence': len(sequence),
'bufsize': bufsize,
'nsize': nsize,
'sketchsize': sketchsize}
import gc
def run_mashingmp(cls, bufsize, sketchsize, sequence, hashfun, nsize):
with timedblock() as tb:
ncpu = 2
p = multiprocessing.Pool(ncpu)
l_seq = len(sequence)
result = p.imap_unordered(build_mhs,
((sketchsize, NSIZE, sequence[begin:end])
for begin, end in chunkpos_iter(nsize, l_seq, l_seq//ncpu))
)
# reduce step (reducing as chunks are getting ready)
mhs_mp = reduce(lambda x, y: x+y, result, next(result))
p.terminate()
return {'t': tb.duration,
            'what': 'mashingpumpkins-2p',
'keepngrams': True,
'l_sequence': len(sequence),
'bufsize': bufsize,
'nsize': nsize,
'sketchsize': sketchsize}
from ipywidgets import FloatProgress
from IPython.display import display
res = list()
bufsize = 300
seqsizes = (5E5, 1E6, 5E6, 1E7)
sketchsizes = [int(x) for x in (5E3, 1E4, 5E4, 1E5)]
f = FloatProgress(min=0, max=len(seqsizes)*len(sketchsizes)*2)
display(f)
for seqsize in (int(s) for s in seqsizes):
env = dict()
sequencebitpacked = ssl.RAND_bytes(int(seqsize/4))
sequencen = bytearray(int(seqsize))
for i, b in zip(range(0, len(sequencen), 4), sequencebitpacked):
sequencen[i:(i+4)] = quad[b]
sequencen = bytes(sequencen)
for sketchsize in sketchsizes:
for nsize in (21, 31):
tmp = run_sourmash(sketchsize, sequencen, nsize)
tmp.update([('hashfun', 'murmurhash3')])
res.append(tmp)
for funname, hashfun in (('murmurhash3', hasharray),):
tmp = run_mashingp(MinSketch, bufsize, sketchsize, sequencen, hashfun, nsize)
tmp.update([('hashfun', funname)])
res.append(tmp)
tmp = run_mashingmp(MinSketch, bufsize, sketchsize, sequencen, hashfun, nsize)
tmp.update([('hashfun', funname)])
res.append(tmp)
f.value += 1
from rpy2.robjects.lib import dplyr, ggplot2 as ggp
from rpy2.robjects.vectors import IntVector, FloatVector, StrVector, BoolVector
from rpy2.robjects import Formula
d = dict((n, FloatVector([x[n] for x in res])) for n in ('t',))
d.update((n, StrVector([x[n] for x in res])) for n in ('what', 'hashfun'))
d.update((n, BoolVector([x[n] for x in res])) for n in ('keepngrams', ))
d.update((n, IntVector([x[n] for x in res])) for n in ('l_sequence', 'bufsize', 'sketchsize', 'nsize'))
dataf = dplyr.DataFrame(d)
p = (ggp.ggplot((dataf
.filter("hashfun != 'xxhash'")
.mutate(nsize='paste0("k=", nsize)',
implementation='paste0(what, ifelse(keepngrams, "(w/ kmers)", ""))'))) +
ggp.geom_line(ggp.aes_string(x='l_sequence',
y='l_sequence/t/1E6',
color='implementation',
group='paste(implementation, bufsize, nsize, keepngrams)'),
alpha=1) +
ggp.facet_grid(Formula('nsize~sketchsize')) +
ggp.scale_x_log10('sequence length') +
ggp.scale_y_continuous('MB/s') +
ggp.scale_color_brewer('Implementation', palette="Set1") +
ggp.theme_gray(base_size=18) +
ggp.theme(legend_position="top",
axis_text_x = ggp.element_text(angle = 90, hjust = 1))
)
import rpy2.ipython.ggplot
rpy2.ipython.ggplot.image_png(p, width=1000, height=500)
###Output
_____no_output_____
###Markdown
The rate (MB/s) with which a sequence is processed seems to strongly depend on the size of the input sequence for the `mashingpumpkins` implementation (suggesting a significant setup cost that is amortized as the sequence gets longer), and parallelization achieves a small boost in performance (with the size of the sketch apparently counteracting that small boost). Our implementation also appears to scale better with increasing sequence size (relatively faster as the size increases). Keeping the k-mers comes with a slight cost for the larger `max_size` values (not shown). Our Python implementation is otherwise holding up quite well. XXHash appears to give slightly faster processing rates in the best case, and makes no difference compared with MurmurHash3 in other cases (not shown).
###Code
dataf_plot = (
dataf
.filter("hashfun != 'xxhash'")
.mutate(nsize='paste0("k=", nsize)',
implementation='paste0(what, ifelse(keepngrams, "(w/ kmers)", ""))')
)
dataf_plot2 = (dataf_plot.filter('implementation!="sourmash"')
.inner_join(
dataf_plot.filter('implementation=="sourmash"')
.select('t', 'nsize', 'sketchsize', 'l_sequence'),
by=StrVector(('nsize', 'sketchsize', 'l_sequence'))))
p = (ggp.ggplot(dataf_plot2) +
ggp.geom_line(ggp.aes_string(x='l_sequence',
y='log2(t.y/t.x)',
color='implementation',
group='paste(implementation, bufsize, nsize, keepngrams)'),
alpha=1) +
ggp.facet_grid(Formula('nsize~sketchsize')) +
ggp.scale_x_log10('sequence length') +
ggp.scale_y_continuous('log2(time ratio)') +
ggp.scale_color_brewer('Implementation', palette="Set1") +
ggp.theme_gray(base_size=18) +
ggp.theme(legend_position="top",
axis_text_x = ggp.element_text(angle = 90, hjust = 1))
)
import rpy2.ipython.ggplot
rpy2.ipython.ggplot.image_png(p, width=1000, height=500)
###Output
_____no_output_____
###Markdown
One can also observe that the performance dip for the largest `max_size` value recovers as the input sequence gets longer. We verify this with a 0.1 GB reference sequence and `max_size` equal to 20,000.
###Code
seqsize = int(1E8)
print("generating sequence:")
f = FloatProgress(min=0, max=seqsize)
display(f)
sequencebitpacked = ssl.RAND_bytes(int(seqsize/4))
sequencen = bytearray(int(seqsize))
for i, b in zip(range(0, len(sequencen), 4), sequencebitpacked):
sequencen[i:(i+4)] = quad[b]
if i % int(1E4) == 0:
f.value += int(1E4)
f.value = i+4
sequencen = bytes(sequencen)
sketchsize = 20000
bufsize = 1000
nsize = 21
funname, hashfun = ('murmurhash3', hasharray)
tmp = run_mashingmp(MinSketch, bufsize, sketchsize, sequencen, hashfun, nsize)
print("%.2f seconds" % tmp['t'])
print("%.2f MB / second" % (tmp['l_sequence']/tmp['t']/1E6))
###Output
5.15 seconds
19.40 MB / second
###Markdown
In comparison, this is what `sourmash` manages to achieve:
###Code
tmp_sm = run_sourmash(sketchsize, sequencen, nsize)
print("%.2f seconds" % tmp_sm['t'])
print("%.2f MB / second" % (tmp_sm['l_sequence']/tmp_sm['t']/1E6))
###Output
20.01 seconds
5.00 MB / second
|
varia/test_tds_to_stac_items.ipynb | ###Markdown
THREDDS Catalog to STAC Items
Investigating the data ingestion pipeline. In order to expose TDS in a STAC API, STAC items need to be generated with appropriate metadata. The next step in the pipeline would be to keep the STAC API index DB up to date with the static items generated in this notebook.
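As a rough illustration of the target output (a sketch only; the id, URL, and properties below are placeholders, and the actual item construction is still stubbed out further down), a generated STAC item could look like this:

```python
# Sketch of one STAC item for a crawled THREDDS dataset (placeholder values only).
import pystac
from datetime import datetime

item = pystac.Item(id="BCCAQv2_tx_mean_YS",      # hypothetical dataset id
                   geometry={}, bbox={},         # to be filled from the dataset metadata
                   datetime=datetime.utcnow(),
                   properties={"variable_name": "tx_mean"})
item.add_asset("opendap",
               pystac.Asset(href="https://example.org/thredds/dodsC/tx_mean.nc",  # placeholder URL
                            media_type="application/x-netcdf"))
print(item.to_dict())
```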
###Code
# Variables
# Since os.getcwd() doesn't work as expected in ipynb, we need to manually define the catalog save path, which is absolute
CATALOG_SAVE_PATH = "TO_DEFINE"
from siphon.catalog import TDSCatalog
import json
def parse_datasets(catalog):
"""
Collect all available datasets.
"""
datasets = []
for dataset_name, dataset_obj in catalog.datasets.items():
http_url = dataset_obj.access_urls.get("httpserver", "")
odap_url = dataset_obj.access_urls.get("opendap", "")
ncml_url = dataset_obj.access_urls.get("ncml", "")
uddc_url = dataset_obj.access_urls.get("uddc", "")
iso_url = dataset_obj.access_urls.get("iso", "")
wcs_url = dataset_obj.access_urls.get("wcs", "")
wms_url = dataset_obj.access_urls.get("wms", "")
datasets.append({
"dataset_name" : dataset_name,
"http_url" : http_url,
"odap_url" : odap_url,
"ncml_url" : ncml_url,
"uddc_url" : uddc_url,
"iso_url" : iso_url,
"wcs_url" : wcs_url,
"wms_url" : wms_url
})
for catalog_name, catalog_obj in catalog.catalog_refs.items():
d = parse_datasets(catalog_obj.follow())
datasets.extend(d)
return datasets
def crawl_tds():
"""
Crawl TDS.
"""
top_cat = TDSCatalog("https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/catalog/birdhouse/cccs_portal/indices/Final/BCCAQv2/tx_mean/catalog.xml")
# tds_ds = parse_datasets(top_cat)
tds_ds = [{"dataset_name": "BCCAQv2+ANUSPLIN300_ensemble-percentiles_historical+allrcps_1950-2100_tx_mean_YS.nc", "http_url": "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/fileServer/birdhouse/cccs_portal/indices/Final/BCCAQv2/tx_mean/allrcps_ensemble_stats/YS/BCCAQv2+ANUSPLIN300_ensemble-percentiles_historical+allrcps_1950-2100_tx_mean_YS.nc", "odap_url": "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/dodsC/birdhouse/cccs_portal/indices/Final/BCCAQv2/tx_mean/allrcps_ensemble_stats/YS/BCCAQv2+ANUSPLIN300_ensemble-percentiles_historical+allrcps_1950-2100_tx_mean_YS.nc", "ncml_url": "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/ncml/birdhouse/cccs_portal/indices/Final/BCCAQv2/tx_mean/allrcps_ensemble_stats/YS/BCCAQv2+ANUSPLIN300_ensemble-percentiles_historical+allrcps_1950-2100_tx_mean_YS.nc", "uddc_url": "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/uddc/birdhouse/cccs_portal/indices/Final/BCCAQv2/tx_mean/allrcps_ensemble_stats/YS/BCCAQv2+ANUSPLIN300_ensemble-percentiles_historical+allrcps_1950-2100_tx_mean_YS.nc", "iso_url": "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/iso/birdhouse/cccs_portal/indices/Final/BCCAQv2/tx_mean/allrcps_ensemble_stats/YS/BCCAQv2+ANUSPLIN300_ensemble-percentiles_historical+allrcps_1950-2100_tx_mean_YS.nc", "wcs_url": "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/wcs/birdhouse/cccs_portal/indices/Final/BCCAQv2/tx_mean/allrcps_ensemble_stats/YS/BCCAQv2+ANUSPLIN300_ensemble-percentiles_historical+allrcps_1950-2100_tx_mean_YS.nc", "wms_url": "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/wms/birdhouse/cccs_portal/indices/Final/BCCAQv2/tx_mean/allrcps_ensemble_stats/YS/BCCAQv2+ANUSPLIN300_ensemble-percentiles_historical+allrcps_1950-2100_tx_mean_YS.nc"}]
# cache crawl result data (give option to use it or not)
for i, item in enumerate(tds_ds):
# add metadata attributes to crawl result elements
item = add_tds_ds_metadata(item)
# STACItemFactory call
stac_item = get_stac_item(item)
# write STAC item json to file
write_item(stac_item)
print("finished creating all STAC items")
# json_dump = json.dumps(tds_ds)
# print(json_dump)
def add_tds_ds_metadata(ds):
"""
Add extra metadata to item.
"""
# replace with regexes
extra_meta = {
"model" : "BCCAQv2+ANUSPLIN300",
"experiment" : "ensemble-percentiles",
"frequency" : "YS",
"modeling_realm" : "historical+allrcps",
"mip_table" : "",
"ensemble_member" : "",
"version_number" : "",
"variable_name" : "tx_mean",
"temporal_subset" : "1950-2100"
}
return dict(ds, **extra_meta)
def get_stac_item(item):
"""
"""
return item
def write_item(item):
"""
"""
pass
crawl_tds()
# create STAC collection
import pystac
from datetime import datetime
import json
import os
# items
collection_item = pystac.Item(id='local-image-col-1',
geometry={},
bbox={},
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
collection_item.common_metadata.gsd = 0.3
collection_item.common_metadata.platform = 'Maxar'
collection_item.common_metadata.instruments = ['WorldView3']
# asset = pystac.Asset(href=img_path,
# media_type=pystac.MediaType.GEOTIFF)
# collection_item.add_asset('image', asset)
collection_item2 = pystac.Item(id='local-image-col-2',
geometry={},
bbox={},
datetime=datetime.utcnow(),
properties={},
stac_extensions=[pystac.Extensions.EO])
collection_item2.common_metadata.gsd = 0.3
collection_item2.common_metadata.platform = 'Maxar'
collection_item2.common_metadata.instruments = ['WorldView3']
# asset2 = pystac.Asset(href=img_path,
# media_type=pystac.MediaType.GEOTIFF)
# collection_item2.add_asset('image', asset2)
# extents
sp_extent = pystac.SpatialExtent([None,None,None,None])
capture_date = datetime.strptime('2015-10-22', '%Y-%m-%d')
tmp_extent = pystac.TemporalExtent([(capture_date, None)])
extent = pystac.Extent(sp_extent, tmp_extent)
# collection
catalog = pystac.Catalog(id='bccaqv2', description='BCCAQv2 STAC')
collection = pystac.Collection(id='tx-mean',
description='tx mean',
extent=extent,
license='CC-BY-SA-4.0')
collection.add_items([collection_item, collection_item2])
catalog.clear_items()
catalog.clear_children()
catalog.add_child(collection)
# catalog.describe()
# normalize and save
print("save path : " + CATALOG_SAVE_PATH)
catalog.normalize_hrefs(CATALOG_SAVE_PATH)
# print(catalog.get_self_href())
# print(collection_item2.get_self_href())
catalog.save(catalog_type=pystac.CatalogType.SELF_CONTAINED)
# print(json.dumps(catalog.to_dict(), indent=4))
# label_item = catalog.get_child('tx-mean').get_item('local-image-col-1')
# label_item.to_dict()
with open(catalog.get_self_href()) as f:
print(f.read())
with open(collection.get_self_href()) as f:
print(f.read())
with open(collection_item.get_self_href()) as f:
print(f.read())
###Output
{
"type": "Feature",
"stac_version": "1.0.0-beta.2",
"id": "local-image-col-1",
"properties": {
"gsd": 0.3,
"platform": "Maxar",
"instruments": [
"WorldView3"
],
"datetime": "2021-01-15T16:47:29.020127Z"
},
"geometry": {},
"links": [
{
"rel": "root",
"href": "../collection.json",
"type": "application/json"
},
{
"rel": "collection",
"href": "../collection.json",
"type": "application/json"
},
{
"rel": "parent",
"href": "../collection.json",
"type": "application/json"
}
],
"assets": {},
"bbox": {},
"stac_extensions": [
"eo"
],
"collection": "tx-mean"
}
|
Figure3_lung_meta_analysis/analysis/Cov19_age_sex_poisson_nUMIoffset_noints_GLM_pseudobulk.ipynb | ###Markdown
This notebook contains the code for the meta-analysis of healthy lung data for ACE2, TMPRSS2, and CTSL. It performs the pseudo-bulk analysis for the simple model without interaction terms, run on the patient-level data. This version was run on the full data and does not test for smoking associations.
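For orientation, here is a minimal, self-contained sketch (with made-up numbers, not the real data) of the kind of model fitted below: per-donor pseudo-bulk expression modelled with a Poisson GLM using a log total-counts offset, and a Wald test on the tested coefficient.

```python
# Toy sketch of the pseudo-bulk Poisson GLM used below (all values made up).
import numpy as np
import pandas as pd
import statsmodels.api as sm

toy = pd.DataFrame({
    'expr': [3., 0., 5., 2., 7., 1.],           # pseudo-bulk mean expression of one gene per donor
    'sex_male': [1, 0, 1, 0, 1, 0],             # 1 = male, 0 = female
    'age': [34., 61., 45., 70., 52., 28.],
    'total_counts': [2000., 1500., 2500., 1800., 2200., 1600.],
})
exog = sm.add_constant(toy[['sex_male', 'age']])
offset = np.log(toy['total_counts'] / toy['total_counts'].mean())
fit = sm.GLM(toy['expr'], exog, offset=offset,
             family=sm.families.Poisson()).fit()
print(fit.params)                               # intercept, sex and age coefficients
print(fit.wald_test('sex_male = 0').pvalue)     # Wald test of the sex effect
```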
###Code
import scanpy as sc
import numpy as np
import scipy as sp
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib import rcParams
from matplotlib import colors
from matplotlib import patches
import seaborn as sns
import batchglm
import diffxpy.api as de
import patsy as pat
from statsmodels.stats.multitest import multipletests
import logging, warnings
import statsmodels.api as sm
plt.rcParams['figure.figsize']=(8,8) #rescale figures
sc.settings.verbosity = 3
#sc.set_figure_params(dpi=200, dpi_save=300)
sc.logging.print_versions()
de.__version__
logging.getLogger("tensorflow").setLevel(logging.ERROR)
logging.getLogger("batchglm").setLevel(logging.INFO)
logging.getLogger("diffxpy").setLevel(logging.INFO)
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 35)
warnings.filterwarnings("ignore", category=DeprecationWarning, module="tensorflow")
#User inputs
folder = '/storage/groups/ml01/workspace/malte.luecken/2020_cov19_study'
adata_diffxpy = '/storage/groups/ml01/workspace/malte.luecken/2020_cov19_study/COVID19_lung_atlas_revision_v3.h5ad'
output_folder = 'diffxpy_out/'
de_output_base = 'COVID19_lung_atlas_revision_v3_lung_cov19_poissonglm_pseudo_nUMIoffset_noInts'
###Output
_____no_output_____
###Markdown
Read the data
###Code
adata = sc.read(adata_diffxpy)
adata
adata.obs.age = adata.obs.age.astype(float)
adata.obs.dtypes
adata.obs['dataset'] = adata.obs['last_author/PI']
adata.obs.dataset.value_counts()
###Output
_____no_output_____
###Markdown
Filter the data
Keep only datasets with:
- more than 1 donor
- non-fetal samples
- lung samples
###Code
# Remove fetal datasets
dats_to_remove = set(['Rawlins', 'Spence', 'Linnarsson'])
dat = adata.obs.groupby(['donor']).agg({'sex':'first', 'age':'first', 'dataset':'first'})
# Single donor filter
don_tab = dat['dataset'].value_counts()
dats_to_remove.update(set(don_tab.index[don_tab == 1]))
dats_to_remove = list(dats_to_remove)
dats_to_remove
adata = adata[~adata.obs.dataset.isin(dats_to_remove)].copy()
adata.obs.lung_vs_nasal.value_counts()
# Filter for only lung data
adata = adata[adata.obs.lung_vs_nasal.isin(['lung']),].copy()
adata
# Rename smoking status covariate
adata.obs['smoking_status'] = adata.obs.smoked_boolean
adata.obs.dataset.value_counts()
adata.obs['sample'].nunique()
adata.obs['donor'].nunique()
###Output
_____no_output_____
###Markdown
Check the data
###Code
np.mean(adata.X.astype(int) != adata.X)
# Check if any non-integer data in a particular dataset
for dat in adata.obs.dataset.unique():
val = np.mean(adata[adata.obs.dataset.isin([dat]),:].X.astype(int) != adata[adata.obs.dataset.isin([dat]),:].X)
if val != 0:
print(f'dataset= {dat}; value= {val}')
adata[adata.obs.dataset.isin([dat]),:].X[:20,:20].A
###Output
_____no_output_____
###Markdown
All counts are integers
###Code
adata.obs.age.value_counts()
adata.obs.sex.value_counts()
###Output
_____no_output_____
###Markdown
Fit models and perform DE
###Code
cluster_key = 'ann_level_2'
clust_tbl = adata.obs[cluster_key].value_counts()
clusters = clust_tbl.index[clust_tbl > 1000]
ct_to_rm = clusters[[ct.startswith('1') for ct in clusters]]
clusters = clusters.drop(ct_to_rm.tolist()).tolist()
clusters
###Output
_____no_output_____
###Markdown
Calculate DE genes per cluster.
###Code
adata
###Output
_____no_output_____
###Markdown
Generate pseudobulks
###Code
for gene in adata.var_names:
adata.obs[gene] = adata[:,gene].X.A
adata
dat_pseudo = adata.obs.groupby(['donor', 'ann_level_2']).agg({'ACE2':'mean', 'TMPRSS2':'mean', 'CTSL':'mean', 'total_counts':'mean', 'age':'first', 'smoking_status':'first', 'sex':'first', 'dataset':'first'}).dropna().reset_index(level=[0,1])
adata_pseudo = sc.AnnData(dat_pseudo[['ACE2', 'TMPRSS2', 'CTSL']], obs=dat_pseudo.drop(columns=['ACE2', 'TMPRSS2', 'CTSL']))
adata_pseudo.obs.head()
adata_pseudo.obs['total_counts_scaled'] = adata_pseudo.obs['total_counts']/adata_pseudo.obs['total_counts'].mean()
formula = "1 + sex + age + dataset"
tested_coef = ["sex[T.male]", "age"]
dmat = de.utils.design_matrix(
data=adata_pseudo,
formula="~" + formula,
as_numeric=["age"],
return_type="patsy"
)
dmat[1]
###Output
_____no_output_____
###Markdown
Poisson GLM
###Code
# Poisson GLM loop
de_results_lvl2_glm = dict()
# Test over clusters
for clust in clusters:
adata_pseudo_tmp = adata_pseudo[adata_pseudo.obs[cluster_key] == clust,:].copy()
print(f'In cluster {clust}:')
print(adata_pseudo_tmp.obs['smoking_status'].value_counts())
print(adata_pseudo_tmp.obs['sex'].value_counts())
# Filter out genes to reduce multiple testing burden
sc.pp.filter_genes(adata_pseudo_tmp, min_cells=4)
if adata_pseudo_tmp.n_vars == 0:
print('No genes expressed in more than 10 cells!')
continue
if len(adata_pseudo_tmp.obs.sex.value_counts())==1:
        print(f'{clust} only has samples of one sex.')
continue
print(f'Testing {adata_pseudo_tmp.n_vars} genes...')
print(f'Testing in {adata_pseudo_tmp.n_obs} donors...')
print("")
# List to store results
de_results_list = []
# Set up design matrix
dmat = de.utils.design_matrix(
data=adata_pseudo_tmp, #[idx_train],
formula="~" + formula,
as_numeric=["age"],
return_type="patsy"
)
# Test if model is full rank
if np.linalg.matrix_rank(np.asarray(dmat[0])) < np.min(dmat[0].shape):
print(f'Cannot test {clust} as design matrix is not full rank.')
continue
for i, gene in enumerate(adata_pseudo_tmp.var_names):
# Specify model
pois_model = sm.GLM(
endog=adata_pseudo_tmp.X[:, i], #[idx_train, :],
exog=dmat[0],
offset=np.log(adata_pseudo_tmp.obs['total_counts_scaled'].values),
family=sm.families.Poisson()
)
# Fit the model
pois_results = pois_model.fit()
# Test over coefs
for coef in tested_coef:
de_results_temp = pois_results.wald_test(
[x for i, x in enumerate(pois_model.exog_names) if dmat[1][i] in [coef]]
)
# Output the results nicely
de_results_temp = pd.DataFrame({
"gene": gene,
"cell_identity": clust,
"covariate": coef,
"coef": pois_results.params[[y == coef for y in dmat[1]]],
"coef_sd": pois_results.bse[[y == coef for y in dmat[1]]],
"pval": de_results_temp.pvalue
}, index= [clust+"_"+gene+"_"+coef])
de_results_list.append(de_results_temp)
de_results = pd.concat(de_results_list)
de_results['adj_pvals'] = multipletests(de_results['pval'].tolist(), method='fdr_bh')[1]
# Store the results
de_results_lvl2_glm[clust] = de_results
# Join the dataframes:
full_res_lvl2_glm = pd.concat([de_results_lvl2_glm[i] for i in de_results_lvl2_glm.keys()], ignore_index=True)
###Output
In cluster Myeloid:
False 66
True 55
nan 21
Name: smoking_status, dtype: int64
male 76
female 66
Name: sex, dtype: int64
Testing 3 genes...
Testing in 142 donors...
In cluster Airway epithelium:
False 90
True 71
nan 20
Name: smoking_status, dtype: int64
male 97
female 84
Name: sex, dtype: int64
Testing 3 genes...
Testing in 181 donors...
In cluster Alveolar epithelium:
False 67
True 39
nan 14
Name: smoking_status, dtype: int64
male 64
female 56
Name: sex, dtype: int64
Testing 3 genes...
Testing in 120 donors...
In cluster Lymphoid:
False 72
True 62
nan 21
Name: smoking_status, dtype: int64
male 82
female 73
Name: sex, dtype: int64
Testing 3 genes...
Testing in 155 donors...
In cluster Fibroblast lineage:
False 54
True 40
nan 14
Name: smoking_status, dtype: int64
male 59
female 49
Name: sex, dtype: int64
Testing 3 genes...
Testing in 108 donors...
In cluster Blood vessels:
False 48
True 31
nan 13
Name: smoking_status, dtype: int64
male 54
female 38
Name: sex, dtype: int64
Testing 3 genes...
Testing in 92 donors...
In cluster Submucosal Gland:
False 28
True 17
Name: smoking_status, dtype: int64
male 25
female 20
Name: sex, dtype: int64
Testing 3 genes...
Testing in 45 donors...
In cluster Smooth Muscle:
False 35
True 26
nan 5
Name: smoking_status, dtype: int64
male 34
female 32
Name: sex, dtype: int64
Testing 3 genes...
Testing in 66 donors...
In cluster Lymphatics:
False 45
True 31
nan 13
Name: smoking_status, dtype: int64
male 48
female 41
Name: sex, dtype: int64
Testing 3 genes...
Testing in 89 donors...
In cluster Mesothelium:
True 15
False 14
nan 2
Name: smoking_status, dtype: int64
male 16
female 15
Name: sex, dtype: int64
filtered out 1 genes that are detected in less than 4 cells
Testing 2 genes...
Testing in 31 donors...
In cluster Endothelial-like:
nan 3
Name: smoking_status, dtype: int64
male 3
Name: sex, dtype: int64
filtered out 3 genes that are detected in less than 4 cells
No genes expressed in more than 10 cells!
In cluster Granulocytes:
nan 13
True 1
Name: smoking_status, dtype: int64
male 7
female 7
Name: sex, dtype: int64
filtered out 2 genes that are detected in less than 4 cells
Testing 1 genes...
Testing in 14 donors...
###Markdown
Inspect some results
###Code
de_results_lvl2_glm.keys()
full_res_lvl2_glm
full_res_lvl2_glm.loc[full_res_lvl2_glm['gene'] == 'ACE2',]
full_res_lvl2_glm.loc[full_res_lvl2_glm['gene'] == 'TMPRSS2',]
###Output
_____no_output_____
###Markdown
Level 3 annotation
###Code
cluster_key = 'ann_level_3'
clust_tbl = adata.obs[cluster_key].value_counts()
clusters = clust_tbl.index[clust_tbl > 1000]
ct_to_rm = clusters[[ct.startswith('1') or ct.startswith('2') for ct in clusters]]
clusters = clusters.drop(ct_to_rm.tolist()).tolist()
clusters
adata_sub = adata[adata.obs.ann_level_3.isin(clusters),:]
adata_sub
adata_sub.obs.donor.nunique()
adata_sub.obs['sample'].nunique()
###Output
_____no_output_____
###Markdown
Generate pseudobulk
###Code
for gene in adata_sub.var_names:
adata_sub.obs[gene] = adata_sub[:,gene].X.A
dat_pseudo_sub = adata_sub.obs.groupby(['donor', 'ann_level_3']).agg({'ACE2':'mean', 'TMPRSS2':'mean', 'CTSL':'mean', 'total_counts':'mean', 'age':'first', 'smoking_status':'first', 'sex':'first', 'dataset':'first'}).dropna().reset_index(level=[0,1])
adata_pseudo_sub = sc.AnnData(dat_pseudo_sub[['ACE2', 'TMPRSS2', 'CTSL']], obs=dat_pseudo_sub.drop(columns=['ACE2', 'TMPRSS2', 'CTSL']))
adata_pseudo_sub.obs.head()
adata_pseudo_sub.obs['total_counts_scaled'] = adata_pseudo_sub.obs['total_counts']/adata_pseudo_sub.obs['total_counts'].mean()
###Output
_____no_output_____
###Markdown
Poisson GLM
First, check whether there are any datasets with only 1 sex or 1 smoking status, which would make the model overparameterized (the design matrix would not be of full rank).
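A toy illustration (assumed data, not the real object) of why this matters: if sex is completely confounded with dataset, e.g. one dataset contributes only female donors and another only male donors, the sex and dataset dummy columns coincide and the design matrix loses full column rank.

```python
# Toy example of a rank-deficient design matrix (assumed data):
# dataset "A" has only female donors and dataset "B" only male donors,
# so the sex[T.male] and dataset[T.B] columns are identical.
import numpy as np
import pandas as pd
import patsy

toy = pd.DataFrame({'sex': ['female', 'female', 'male', 'male'],
                    'dataset': ['A', 'A', 'B', 'B'],
                    'age': [40., 55., 60., 72.]})
dm = patsy.dmatrix('~ 1 + sex + age + dataset', toy)
print(np.linalg.matrix_rank(np.asarray(dm)), 'of', dm.shape[1], 'columns')  # 3 of 4
```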
###Code
np.any(adata_pseudo_tmp.obs.smoking_status.value_counts() == 1)
np.any(pd.crosstab(adata_pseudo_tmp.obs.smoking_status, adata_pseudo_tmp.obs.sex) == 1)
clusters
# Poisson GLM loop
de_results_lvl3_glm = dict()
# Test over clusters
for clust in clusters:
adata_pseudo_tmp = adata_pseudo_sub[adata_pseudo_sub.obs[cluster_key] == clust,:].copy()
print(f'In cluster {clust}:')
print(adata_pseudo_tmp.obs['sex'].value_counts())
# Filter out genes to reduce multiple testing burden
sc.pp.filter_genes(adata_pseudo_tmp, min_cells=4)
if adata_pseudo_tmp.n_vars == 0:
print('No genes expressed in more than 10 cells!')
continue
if np.any(adata_pseudo_tmp.obs.sex.value_counts()==1):
print(f'{clust} only has 1 male or 1 female sample.')
continue
print(f'Testing {adata_pseudo_tmp.n_vars} genes...')
print(f'Testing in {adata_pseudo_tmp.n_obs} donors...')
print("")
# List to store results
de_results_list = []
# Set up design matrix
dmat = de.utils.design_matrix(
data=adata_pseudo_tmp,
formula="~" + formula,
as_numeric=["age"],
return_type="patsy"
)
# Test if model is full rank
if np.linalg.matrix_rank(np.asarray(dmat[0])) < np.min(dmat[0].shape):
print(f'Cannot test {clust} as design matrix is not full rank.')
continue
for i, gene in enumerate(adata_pseudo_tmp.var_names):
# Specify model
pois_model = sm.GLM(
endog=adata_pseudo_tmp.X[:, i],
exog=dmat[0],
offset=np.log(adata_pseudo_tmp.obs['total_counts_scaled'].values),
family=sm.families.Poisson()
)
# Fit the model
pois_results = pois_model.fit()
# Test over coefs
for coef in tested_coef:
de_results_temp = pois_results.wald_test(
[x for i, x in enumerate(pois_model.exog_names) if dmat[1][i] in [coef]]
)
# Output the results nicely
de_results_temp = pd.DataFrame({
"gene": gene,
"cell_identity": clust,
"covariate": coef,
"coef": pois_results.params[[y == coef for y in dmat[1]]],
"coef_sd": pois_results.bse[[y == coef for y in dmat[1]]],
"pval": de_results_temp.pvalue
}, index= [clust+"_"+gene+"_"+coef])
de_results_list.append(de_results_temp)
de_results = pd.concat(de_results_list)
de_results['adj_pvals'] = multipletests(de_results['pval'].tolist(), method='fdr_bh')[1]
# Store the results
de_results_lvl3_glm[clust] = de_results
# Join the dataframes:
full_res_lvl3_glm = pd.concat([de_results_lvl3_glm[i] for i in de_results_lvl3_glm.keys()], ignore_index=True)
de_results_lvl3_glm.keys()
full_res_lvl3_glm
full_res_lvl3_glm.loc[full_res_lvl3_glm['gene'] == 'ACE2',]
full_res_lvl3_glm.loc[full_res_lvl3_glm['gene'] == 'TMPRSS2',]
###Output
_____no_output_____
###Markdown
Store results
###Code
full_res_lvl2_glm.to_csv(folder+'/'+output_folder+de_output_base+'_lvl2_full.csv')
full_res_lvl3_glm.to_csv(folder+'/'+output_folder+de_output_base+'_lvl3_full.csv')
###Output
_____no_output_____ |
vehicle_insurance.ipynb | ###Markdown
Vehicle Insurance Claims Prediction
The task is to predict the vehicle's condition and the insurance claim amount from an image and other metadata. Based on the dataset from HackerEarth's "Fast, Furious and Insured" Machine Learning Challenge.
###Code
# Imports
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder
import cv2
import os
import matplotlib.pyplot as plt
from PIL import Image
# Suppress warnings
import warnings
warnings.filterwarnings('ignore')
%cd './drive/My Drive/'
###Output
/content/drive/My Drive
###Markdown
EDA
###Code
train = pd.read_csv('./dataset/train.csv')
test = pd.read_csv('./dataset/test.csv')
train.T
test.T
combined = pd.concat([train[['Image_path','Insurance_company','Cost_of_vehicle','Min_coverage','Expiry_date','Max_coverage']],test])
train.isna().sum()
test.isna().sum()
###Output
_____no_output_____
###Markdown
Dropping NA values for amount, since it is a label.
###Code
train = train[~train.Amount.isna()]
train.shape
train.dtypes
###Output
_____no_output_____
###Markdown
Categorizing
###Code
le_insurance = LabelEncoder()
combined['Insurance_company'] = le_insurance.fit_transform(combined['Insurance_company'])
train['Insurance_company'] = le_insurance.transform(train['Insurance_company'])
test['Insurance_company'] = le_insurance.transform(test['Insurance_company'])
train.describe()
combined.describe()
###Output
_____no_output_____
###Markdown
Outlier Removal
Negative Amount?
###Code
(train.Amount<0).sum()
train[train.Amount<0]
train = train[train.Amount>=0]
train.shape
###Output
_____no_output_____
###Markdown
Measuring Correlation and Redundant Features
###Code
corr = train.corr()
corr.style.background_gradient(cmap='PiYG')
###Output
_____no_output_____
###Markdown
The Condition label depends solely on the vehicle image. The Amount label, on the other hand, depends on the other dataframe fields when Condition is 1; otherwise Amount is 0, i.e. the vehicle is not damaged. So we drop the no-damage rows to inspect the damaged rows further.
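A quick sanity check of that claim (a small sketch using the columns above):

```python
# Sanity check (sketch): the claim implies every Condition == 0 row has Amount == 0.
print((train.loc[train.Condition == 0, 'Amount'] == 0).all())
print(train.groupby('Condition')['Amount'].describe())
```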
###Code
train1 = train[train.Condition==1]
train1 = train1.drop(columns='Condition')
train1.T
corr = train1.corr()
corr.style.background_gradient(cmap='PiYG')
corr = combined.corr()
corr.style.background_gradient(cmap='PiYG')
###Output
_____no_output_____
###Markdown
The Max_coverage correlation doesn't seem to hold in the test distribution. Hence, we look for the deviant rows in the test set.
###Code
mean_val = np.around((train1.Max_coverage/train1.Cost_of_vehicle).mean(),decimals=2)
mean_val
test_df_corr = np.around((test.Max_coverage/test.Cost_of_vehicle),decimals=2)
(test_df_corr==mean_val).sum()
###Output
_____no_output_____
###Markdown
The difference in correlation could be due to the removed `Condition` column. Most probably this is data leakage, where the correlations for the damaged and non-damaged conditions differ.
###Code
train0 = train[train.Condition==0]
train0 = train0.drop(columns='Condition')
mean_val = np.around((train0.Max_coverage/train0.Cost_of_vehicle).mean(),decimals=2)
mean_val
test_df_corr = np.around((test.Max_coverage/test.Cost_of_vehicle),decimals=2)
(test_df_corr==mean_val).sum()
###Output
_____no_output_____
###Markdown
Comparing the distribution between `Condition` 0 and 1, just to be sure!
###Code
print(len(train1)/len(train0),558/42)
###Output
13.01010101010101 13.285714285714286
###Markdown
Approximately the same, so it can be assumed that the `Condition` column is settled. `Amount` remains to be seen.
###Code
mean_val = np.around((train1.Max_coverage/train1.Cost_of_vehicle).mean(),decimals=2)
test['Condition'] = (test_df_corr==mean_val).astype(int)
test1 = test[test.Condition==1]
test1 = test1.drop(columns='Condition')
combined1 = pd.concat([train1[['Image_path','Insurance_company','Cost_of_vehicle','Min_coverage','Expiry_date','Max_coverage']],test1])
test0 = test[test.Condition==0]
test0 = test0.drop(columns='Condition')
corr = combined1.corr()
corr.style.background_gradient(cmap='PiYG')
###Output
_____no_output_____
###Markdown
Hence, we keep only `Cost_of_vehicle`, since the rest correlate perfectly.
###Code
train1 = train1.drop(columns=['Min_coverage','Max_coverage'])
combined1 = combined1.drop(columns=['Min_coverage','Max_coverage'])
train.plot(y='Amount',kind='kde')
train.plot(y='Cost_of_vehicle',kind='kde')
%matplotlib inline
#qt
train.plot(y='Amount',kind='box')
###Output
_____no_output_____
###Markdown
Eliminating outliers, i.e. Amount > 1.26e+04
###Code
(train.Amount>1.26e+04).sum()
train = train[train.Amount<=1.26e+04]
train.shape
train.plot(x='Expiry_date',y='Amount',c='Insurance_company',kind='scatter',cmap='hsv')
train.plot(x='Cost_of_vehicle',y='Amount',c='Insurance_company',kind='scatter',cmap='hsv')
print(min(combined.Expiry_date),max(combined.Expiry_date))
# DataFrame for usage in the Convolution Network Training
conv_df = {'Image_paths':[],'condition':[]}
for _, rows in train.iterrows():
conv_df['Image_paths'].append(rows.Image_path)
conv_df['condition'].append(rows.Condition)
for _, rows in test.iterrows():
conv_df['Image_paths'].append(rows.Image_path)
conv_df['condition'].append(rows.Condition)
conv_df = pd.DataFrame(conv_df)
conv_df.head()
###Output
_____no_output_____
###Markdown
EDA - Images
A combined folder, consisting of both train and test images, exists.
###Code
root = './dataset/combined'
height, width = 0, 0
for f in os.listdir(root):
if os.path.isdir(os.path.join(root,f)):
continue
img = Image.open(os.path.join(root,f))
w, h = img.size
width+=w
height+=h
width/=len(os.listdir(root))
height/=len(os.listdir(root))
print(width, height)
combined_imgs = []
for f in os.listdir(root):
if os.path.isdir(os.path.join(root,f)):
continue
img = Image.open(os.path.join(root,f))
combined_imgs.append(np.array(img.resize((599,396))))
combined_imgs = np.asarray(combined_imgs)
combined_imgs.shape
combined_map = combined_imgs.mean(axis=0).astype(np.uint8)
plt.imshow(combined_map)
del combined_imgs
combined_map.shape
img1 = combined_map[:,:,0]
_, thresh1 = cv2.threshold(img1, 125, 255, cv2.THRESH_BINARY)
plt.imshow(thresh1, cmap='gray')
img2 = combined_map[:,:,1]
_, thresh2 = cv2.threshold(img2, 125, 255, cv2.THRESH_BINARY)
plt.imshow(thresh2, cmap='gray')
img3 = combined_map[:,:,2]
_, thresh3 = cv2.threshold(img3, 125, 255, cv2.THRESH_BINARY)
plt.imshow(thresh3, cmap='gray')
plt.imshow(np.dstack((thresh1,thresh2,thresh3)))
###Output
_____no_output_____
###Markdown
Conv Training
Creating a model that can capture the trends in `Amount` solely from the images. Its predictions can then be used as an additional feature alongside the remaining ones; in effect, model stacking.
###Code
# Utilizing Fastai for the ConvNet implementation -> Using a Pretrained ResNet34
from fastai.vision import *
from fastai.metrics import error_rate
from torch import nn
# Normalizing Amount
train_conv = train1[['Image_path','Amount']]
amt = np.asarray(train_conv.Amount.values)
amt = (amt - amt.min())/(amt.max() - amt.min())
train_conv.Amount = amt
train_conv.head()
tfms = get_transforms(do_flip=True # random flip with prob 0.5
, flip_vert=True # flip verticle or rotated by 90
, max_rotate=3 # rotation with prob p_affine
, max_zoom=1.2 # zoom between 1 and value
, max_lighting=0.4 # lighting and contrast
, max_warp=0.5 # symmetric wrap with prob p_affine
, p_affine=0.5 # prob
, p_lighting=0.3 # prob of lighting
)
# DataBunch API with continuous values as labels, using label_cls=FloatList
data = (ImageList
.from_df(path='./dataset/combined',df=train_conv)
.split_by_rand_pct(0.2)
.label_from_df(cols='Amount',label_cls=FloatList)
.transform(tfms, size=(256,256))
.databunch()
.normalize(imagenet_stats))
data.show_batch(rows=3, figsize=(7,6))
# Creating a ConvNet Regression learner based on ResNet34 backbone
model = models.resnet34
learn = create_cnn(data, model, metrics=error_rate)
learn.model
# learning rate finder
learn.freeze()
learn.lr_find()
# plot the lr_find to get better learning rate
# learning rate vs loss
learn.recorder.plot(suggestion=True)
# 5 epochs with freeze
learn.freeze()
learn.fit_one_cycle(5, max_lr=slice(5e-2))
# learning rate finder
learn.unfreeze()
learn.lr_find()
# plot the lr_find to get better learning rate
# learning rate vs loss
learn.recorder.plot(suggestion=True)
# 20 epochs with un-freeze
learn.unfreeze()
learn.fit_one_cycle(20, max_lr=slice(1e-4,1e-3))
learn.recorder.plot_losses()
learn.save('resnet34_pass2_ver2')
# Uncomment for reuse without training
#learn.load('resnet34_pass2_ver2')
# Predicted Value -> to be used as an additional feature
predicted_val = {'Image_path':[],'AmtPred':[]}
for f in os.listdir('./dataset/combined'):
path = os.path.join('./dataset/combined',f)
if os.path.isdir(path):
continue
cat, _, _ = learn.predict(open_image(path))
predicted_val['Image_path'].append(f)
predicted_val['AmtPred'].append(cat.data[0])
predicted_val = pd.DataFrame(predicted_val)
predicted_val.head()
###Output
_____no_output_____
###Markdown
Final Model Training
Utilizing the predicted amount from the convolutional learner as an additional feature.
###Code
from datetime import datetime
import pydot
from PIL import Image
from sklearn.impute import KNNImputer
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import export_graphviz
# Selecting the required features, and labels and adding the Predicted Amount feature
train_df = train1[['Image_path','Insurance_company','Cost_of_vehicle','Expiry_date']]
img = train_df.Image_path.values
amt = []
for _,row in predicted_val.iterrows():
if row.Image_path in img:
amt.append(row.AmtPred)
train_df['AmtPred'] = amt
train_df['Amount'] = train1['Amount']
# Converting from datetime object to ordinal representation
train_df['Expiry_date'] = train_df['Expiry_date'].apply(lambda x: datetime.strptime(x,'%Y-%m-%d').toordinal())
train_df.pop('Image_path')
train_df.head()
###Output
_____no_output_____
###Markdown
Imputing missing values.
###Code
imputer = KNNImputer(n_neighbors=4)
train_df = imputer.fit_transform(train_df)
train_df
###Output
_____no_output_____
###Markdown
Shuffling and splitting into train and validation sets.
###Code
np.random.shuffle(train_df)
X = train_df[:,:-1]
y = train_df[:,-1]
len(X)*0.9
X_train = X[:1159]
X_val = X[1159:]
y_train = y[:1159]
y_val = y[1159:]
# List of features being used
feature_list = ['Insurance_company','Cost_of_vehicle','Expiry_date','AmtPred']
###Output
_____no_output_____
###Markdown
Sample Random Forest to measure feature importances.
###Code
rf_small = RandomForestRegressor(n_estimators=10, max_depth = 3)
rf_small.fit(X, y)
tree_small = rf_small.estimators_[5]
export_graphviz(tree_small, out_file = 'small_tree.dot', feature_names = feature_list, rounded = True, precision = 1)
(graph, ) = pydot.graph_from_dot_file('small_tree.dot')
graph.write_png('small_tree.png');
plt.imshow(Image.open('small_tree.png'))
importances = list(rf_small.feature_importances_)# List of tuples with variable and importance
feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(feature_list, importances)]# Sort the feature importances by most important first
feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True)# Print out the feature and importances
[print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances];
###Output
Variable: Expiry_date Importance: 0.55
Variable: AmtPred Importance: 0.26
Variable: Cost_of_vehicle Importance: 0.16
Variable: Insurance_company Importance: 0.02
###Markdown
Measuring the error when using different numbers of features, selected by their importance as seen above.
###Code
# Random forest with only the two most important variables
rf_most_important1 = RandomForestRegressor(n_estimators= 1000, random_state=42)
important_indices = [feature_list.index('Expiry_date'), feature_list.index('AmtPred')]
train_important = X_train[:, important_indices]
test_important = X_val[:, important_indices]
rf_most_important1.fit(train_important, y_train)
predictions = rf_most_important1.predict(test_important)
errors = abs(predictions - y_val)
print('Mean Absolute Error:', round(np.mean(errors), 2), 'degrees.')
# Random forest with only the three most important variables
rf_most_important2 = RandomForestRegressor(n_estimators= 1000, random_state=42)
important_indices = [feature_list.index('Expiry_date'), feature_list.index('AmtPred'), feature_list.index('Cost_of_vehicle')]
train_important = X_train[:, important_indices]
test_important = X_val[:, important_indices]
rf_most_important2.fit(train_important, y_train)
predictions = rf_most_important2.predict(test_important)
errors = abs(predictions - y_val)
print('Mean Absolute Error:', round(np.mean(errors), 2), 'degrees.')
# Random forest with only the all present variables
rf_most_important3 = RandomForestRegressor(n_estimators= 1000, random_state=42)
rf_most_important3.fit(X_train, y_train)
predictions = rf_most_important3.predict(X_val)
errors = abs(predictions - y_val)
print('Mean Absolute Error:', round(np.mean(errors), 2), 'degrees.')
###Output
Mean Absolute Error: 2524.14 degrees.
Accuracy: -63.36 %.
###Markdown
Using only the 2 most important variables, since that model has the lowest MAE.
###Code
import pickle
filehandler = open(b"./dataset/combined/models/RandomForestRegressor.pickle","wb")
pickle.dump(rf_most_important1, filehandler)
###Output
_____no_output_____
###Markdown
Test Submission
###Code
test_df = test1[['Image_path','Expiry_date']]
img = test_df.Image_path.values
amt = []
for _,row in predicted_val.iterrows():
if row.Image_path in img:
amt.append(row.AmtPred)
test_df['AmtPred'] = amt
from datetime import datetime
test_df['Expiry_date'] = test_df['Expiry_date'].apply(lambda x: datetime.strptime(x,'%Y-%m-%d').toordinal())
test_df.pop('Image_path')
test_df.head()
test_df.values
predictions = rf_most_important1.predict(test_df)
predictions = predictions.astype(int)
print(len(predictions))
predictions
print(len(test1))
submission = {'Image_path':[],'Condition':[],'Amount':[]}
for _, row in test0.iterrows():
submission['Image_path'].append(row.Image_path)
submission['Condition'].append(0)
submission['Amount'].append(0)
i = 0
for _, row in test1.iterrows():
submission['Image_path'].append(row.Image_path)
submission['Condition'].append(1)
submission['Amount'].append(predictions[i])
i += 1
submission = pd.DataFrame(submission)
submission.head()
# Final submission file
submission.to_csv('./dataset/submission.csv', index=False)
###Output
_____no_output_____ |
notebooks/WV8 - 2x spectrogram network-1.ipynb | ###Markdown
0. Introduction
Given a signal of length $L$ and the STFT parameters:
1. Window length, $M$
2. Shift/stride, $R$ ($1 \le R \le M$, for no loss of information)
3. FFT size, $N$ ($N \ge M$; for our purpose, $N = M$)
the number of segments will be $K = \lfloor (L-M)/R \rfloor + 1$.
In our problem, the data samples have $L = 128$, which limits our options for the window length. If we choose a large $M$, as needed for finer resolution of the frequency components (say $M \ge L$ with zero-padding or over-sampling), we would lose the temporal information of when the frequency peaks occur. So we make the following tradeoff: $N = M = 32$, $R = 2$. Furthermore, we prefix and suffix $M/2$ samples to the signal frame to also fully capture the spectral behavior at the edges (when using tapered windows), thus $L' = 128 + N = 160$. With these parameters and adjustments, $K = 65$. Thus the inputs to our CNN classifier will be $(M/2+1) \times K = 17 \times 65$ spectrogram images.
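A quick sanity check of those numbers (a small sketch using the same parameters as the code below):

```python
# Sanity check of the spectrogram dimensions (sketch; same parameters as below).
L, M, R = 128, 32, 2
L_padded = L + M                 # after prefixing/suffixing M/2 samples on each side
K = (L_padded - M) // R + 1      # number of time frames
n_freq = M // 2 + 1              # one-sided frequency bins
print(n_freq, K)                 # 17 65
```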
###Code
import numpy as np
from scipy import signal
import matplotlib.pyplot as plt
from collections import defaultdict, Counter
import keras
from keras.layers import Dense, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.models import Sequential
from keras.callbacks import History
history = History()
###Output
_____no_output_____
###Markdown
1. Loading the UCI HAR dataset into a numpy ndarray
Download the dataset from https://archive.ics.uci.edu/ml/datasets/Human+Activity+Recognition+Using+Smartphones
###Code
activities_description = {
1: 'walking',
2: 'walking upstairs',
3: 'walking downstairs',
4: 'sitting',
5: 'standing',
6: 'laying'
}
def read_signals(filename):
with open(filename, 'r') as fp:
data = fp.read().splitlines()
data = map(lambda x: x.strip().split(), data)
data = [list(map(float, line)) for line in data]
return data
def read_labels(filename):
with open(filename, 'r') as fp:
activities = fp.read().splitlines()
activities = list(map(lambda x: int(x)-1, activities))
return activities
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation, :, :]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
DATA_FOLDER = '../datasets/UCI HAR Dataset/'
INPUT_FOLDER_TRAIN = DATA_FOLDER+'train/Inertial Signals/'
INPUT_FOLDER_TEST = DATA_FOLDER+'test/Inertial Signals/'
INPUT_FILES_TRAIN = ['body_acc_x_train.txt', 'body_acc_y_train.txt', 'body_acc_z_train.txt',
'body_gyro_x_train.txt', 'body_gyro_y_train.txt', 'body_gyro_z_train.txt',
'total_acc_x_train.txt', 'total_acc_y_train.txt', 'total_acc_z_train.txt']
INPUT_FILES_TEST = ['body_acc_x_test.txt', 'body_acc_y_test.txt', 'body_acc_z_test.txt',
'body_gyro_x_test.txt', 'body_gyro_y_test.txt', 'body_gyro_z_test.txt',
'total_acc_x_test.txt', 'total_acc_y_test.txt', 'total_acc_z_test.txt']
LABELFILE_TRAIN = DATA_FOLDER+'train/y_train.txt'
LABELFILE_TEST = DATA_FOLDER+'test/y_test.txt'
train_signals, test_signals = [], []
for input_file in INPUT_FILES_TRAIN:
sig = read_signals(INPUT_FOLDER_TRAIN + input_file)
train_signals.append(sig)
train_signals = np.transpose(train_signals, (1, 2, 0))
for input_file in INPUT_FILES_TEST:
sig = read_signals(INPUT_FOLDER_TEST + input_file)
test_signals.append(sig)
test_signals = np.transpose(test_signals, (1, 2, 0))
train_labels = read_labels(LABELFILE_TRAIN)
test_labels = read_labels(LABELFILE_TEST)
[no_signals_train, no_steps_train, no_components_train] = np.shape(train_signals)
[no_signals_test, no_steps_test, no_components_test] = np.shape(test_signals)
no_labels = len(np.unique(train_labels[:]))
print("The train dataset contains {} signals, each one of length {} and {} components ".format(no_signals_train, no_steps_train, no_components_train))
print("The test dataset contains {} signals, each one of length {} and {} components ".format(no_signals_test, no_steps_test, no_components_test))
print("The train dataset contains {} labels, with the following distribution:\n {}".format(np.shape(train_labels)[0], Counter(train_labels[:])))
print("The test dataset contains {} labels, with the following distribution:\n {}".format(np.shape(test_labels)[0], Counter(test_labels[:])))
uci_har_signals_train, uci_har_labels_train = randomize(train_signals, np.array(train_labels))
uci_har_signals_test, uci_har_labels_test = randomize(test_signals, np.array(test_labels))
###Output
The train dataset contains 7352 signals, each one of length 128 and 9 components
The test dataset contains 2947 signals, each one of length 128 and 9 components
The train dataset contains 7352 labels, with the following distribution:
Counter({5: 1407, 4: 1374, 3: 1286, 0: 1226, 1: 1073, 2: 986})
The test dataset contains 2947 labels, with the following distribution:
Counter({5: 537, 4: 532, 0: 496, 3: 491, 1: 471, 2: 420})
###Markdown
2. Applying an STFT to the UCI HAR signals and saving the resulting spectrograms into a numpy ndarray
###Code
def plot_spectrogram(sig, M, noverlap, windowname = 'hann'):
# get the window taps
win = signal.get_window(windowname,M,False)
# prefix/suffix
pref = sig[-int(M/2):]*win[0:int(M/2)]
suf = sig[0:int(M/2)]*win[-int(M/2):]
sig = np.concatenate((pref, sig))
sig = np.concatenate((sig,suf))
f, t, Sxx = signal.spectrogram(sig, window=win, nperseg=M, noverlap=noverlap)
return f,t,Sxx
M = 32
noverlap = M-2
train_size = np.shape(train_signals)[0]
#import pdb; pdb.set_trace()
train_data_stft = np.ndarray(shape=(train_size, 17, 65, 9*2))
for ii in range(0,train_size):
if ii % 1000 == 0:
print(ii)
for jj in range(0,9):
sig = uci_har_signals_train[ii, :, jj]
f,t,Sxx = plot_spectrogram(sig, M, noverlap, windowname='hann')
train_data_stft[ii, :, :, 2*jj] = Sxx
f,t,Sxx = plot_spectrogram(sig, M, noverlap, windowname='boxcar')
train_data_stft[ii, :, :, 2*jj+1] = Sxx
test_size = np.shape(test_signals)[0]
test_data_stft = np.ndarray(shape=(test_size, 17, 65, 9*2))
for ii in range(0,test_size):
if ii % 100 == 0:
print(ii)
for jj in range(0,9):
sig = uci_har_signals_test[ii, :, jj]
f,t,Sxx = plot_spectrogram(sig, M, noverlap, windowname='hann')
test_data_stft[ii, :, :, 2*jj] = Sxx
f,t,Sxx = plot_spectrogram(sig, M, noverlap, windowname='boxcar')
test_data_stft[ii, :, :, 2*jj+1] = Sxx
###Output
0
1000
2000
3000
4000
5000
6000
7000
0
100
200
300
400
500
600
700
800
900
1000
1100
1200
1300
1400
1500
1600
1700
1800
1900
2000
2100
2200
2300
2400
2500
2600
2700
2800
2900
###Markdown
3. Training a Convolutional Neural Network
###Code
x_train = train_data_stft
y_train = list(uci_har_labels_train[:train_size])
x_test = test_data_stft
y_test = list(uci_har_labels_test[:test_size])
img_x = 17
img_y = 65
img_z = 9*2
num_classes = 6
batch_size = 16
epochs = 100
# reshape the data into a 4D tensor - (sample_number, x_img_size, y_img_size, num_channels)
# here each spectrogram "image" has 9*2 = 18 channels (9 signal components x 2 window types), unlike greyscale images (1 channel) or RGB images (3 channels)
input_shape = (img_x, img_y, img_z)
# convert the data to the right type
#x_train = x_train.reshape(x_train.shape[0], img_x, img_y, img_z)
#x_test = x_test.reshape(x_test.shape[0], img_x, img_y, img_z)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices - this is for use in the
# categorical_crossentropy loss below
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(5, 5), strides=(1, 1), activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(Conv2D(64, (5, 5), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(1000, activation='relu'))
model.add(Dense(num_classes, activation='softmax'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer=keras.optimizers.Adam(),
metrics=['accuracy'])
model.fit(x_train, y_train, batch_size=batch_size,
epochs=epochs, verbose=1,
validation_data=(x_test, y_test),
callbacks=[history])
train_score = model.evaluate(x_train, y_train, verbose=0)
print('Train loss: {}, Train accuracy: {}'.format(train_score[0], train_score[1]))
test_score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss: {}, Test accuracy: {}'.format(test_score[0], test_score[1]))
fig, axarr = plt.subplots(figsize=(12,6), ncols=2)
axarr[0].plot(range(1, epochs+1), history.history['accuracy'], label='train score')
axarr[0].plot(range(1, epochs+1), history.history['val_accuracy'], label='test score')
axarr[0].set_xlabel('Number of Epochs', fontsize=18)
axarr[0].set_ylabel('Accuracy', fontsize=18)
axarr[0].set_ylim([0,1])
axarr[1].plot(range(1, epochs+1), history.history['accuracy'], label='train score')
axarr[1].plot(range(1, epochs+1), history.history['val_accuracy'], label='test score')
axarr[1].set_xlabel('Number of Epochs', fontsize=18)
axarr[1].set_ylabel('Accuracy', fontsize=18)
axarr[1].set_ylim([0.6,1])
plt.legend()
plt.show()
###Output
_____no_output_____ |
wordcloud-tutorial.ipynb | ###Markdown
Word Cloud in Python
Libraries
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from wordcloud import WordCloud, ImageColorGenerator
from PIL import Image
###Output
_____no_output_____
###Markdown
Data
Download the data at: https://drive.google.com/file/d/1KHxV3dpi1X2uX5SSbQwNlNIbkpLoHOZS/view?usp=sharing
###Code
df_vacs = pd.read_csv('dados_vacinas.csv', encoding = 'ISO-8859-1')
df_vacs.head()
###Output
_____no_output_____
###Markdown
Default Word Cloud
###Code
# replace the spaces in the vaccine names with an underscore,
# so that the words appear joined together
df_vacs['Imuno'] = df_vacs['Imuno'].str.replace(' ', '_',)
# create a dictionary with the data, as required by the word cloud
d = {}
for Imuno, Total in df_vacs.values:
d[Imuno] = Total
# initialize a word cloud
wordcloud = WordCloud()
# generate a word cloud from the dictionary
wordcloud.generate_from_frequencies(frequencies = d)
plt.figure(figsize = (15, 10)) # figure size
plt.imshow(wordcloud, interpolation = 'bilinear') # plot the word cloud
plt.axis('off') # remove the axes
plt.show() # show the word cloud
###Output
_____no_output_____
###Markdown
Improved Word Cloud
###Code
# open an image and convert it into a numpy array
image_mask = np.array(Image.open('seringa.png'))
# initialize a word cloud
wordcloud = WordCloud(background_color = 'white', # background color
                      width = 1000, # width
                      height = 500, # height
                      mask = image_mask, # image used as the mask
                      contour_width = 3, # contour thickness
                      contour_color = 'lightblue', # contour color
                      colormap = 'winter') # word colors
# generate a word cloud from the dictionary
wordcloud.generate_from_frequencies(frequencies = d)
plt.figure(figsize = (15, 10)) # figure size
plt.imshow(wordcloud, interpolation = 'bilinear') # plot the word cloud
plt.axis('off') # remove the axes
plt.show() # show the word cloud
###Output
_____no_output_____
###Markdown
Word Cloud from a Text
###Code
# load the text
texto = open('texto_exemplo.txt').read()
# load the Portuguese stopwords
stopwords = open('stopwords.txt').read()
# turn the stopwords into a list
lista_stopwords = stopwords.split(' \n')
# open an image and convert it into a numpy array
mask_frasco = np.array(Image.open('frasco_vacina.png'))
# get the colors from the image above
mask_cores = ImageColorGenerator(mask_frasco)
# initialize a word cloud
wordcloud = WordCloud(stopwords = lista_stopwords,
                      mask = mask_frasco, # image used as the mask
                      background_color = 'black', # background color
                      width = 1000, # width
                      height = 500, # height
                      contour_width = 2, # contour thickness
                      contour_color = 'lightblue', # contour color
                      color_func = mask_cores) # word colors
# generate a word cloud from the text
wordcloud.generate(texto)
plt.figure(figsize = (20, 15), facecolor = 'k') # figure size
plt.imshow(wordcloud, interpolation = 'bilinear') # plot the word cloud
plt.axis('off') # remove the axes
plt.show() # show the word cloud
plt.savefig('vac.jpg')
###Output
_____no_output_____ |
Course 1 - Natural Language Processing with Classification and Vector Spaces/Week 4/NLP_C1_W4_lecture_nb_02.ipynb | ###Markdown
Hash functions and multiplanes
In this lab, we are going to practice the most important concepts related to the hash functions explained in the videos. You will be using these in this week's assignment.
A key point for lookup using hash functions is the calculation of the hash key, or bucket id, that we assign to a given entry. In this notebook, we will cover:
* Basic hash tables
* Multiplanes
* Random planes
Basic Hash tables
Hash tables are data structures that allow indexing data to make lookup tasks more efficient. In this part, you will see the implementation of the simplest hash function.
###Code
import numpy as np # library for array and matrix manipulation
import pprint # utilities for console printing
from utils_nb import plot_vectors # helper function to plot vectors
import matplotlib.pyplot as plt # visualization library
pp = pprint.PrettyPrinter(indent=4) # Instantiate a pretty printer
###Output
_____no_output_____
###Markdown
In the next cell, we will define a straightforward hash function for integer numbers. The function will receive a list of integer numbers and the desired number of buckets. The function will produce a hash table stored as a dictionary, where the keys are the hash keys and the values are lists with the hashed elements of the input list. The hash function is just the remainder of the integer division between each element and the desired number of buckets.
###Code
def basic_hash_table(value_l, n_buckets):
def hash_function(value, n_buckets):
return int(value) % n_buckets
hash_table = {i:[] for i in range(n_buckets)} # Initialize all the buckets in the hash table as empty lists
for value in value_l:
hash_value = hash_function(value,n_buckets) # Get the hash key for the given value
hash_table[hash_value].append(value) # Add the element to the corresponding bucket
return hash_table
###Output
_____no_output_____
###Markdown
Now let's see the hash table function in action. The pretty print function (`pprint()`) will produce a visually appealing output.
###Code
value_l = [100, 10, 14, 17, 97] # Set of values to hash
hash_table_example = basic_hash_table(value_l, n_buckets=10)
pp.pprint(hash_table_example)
###Output
{ 0: [100, 10],
1: [],
2: [],
3: [],
4: [14],
5: [],
6: [],
7: [17, 97],
8: [],
9: []}
###Markdown
In this case, the bucket key is just the rightmost digit of each number.
Planes
Multiplanes hash functions are another type of hash function. They are based on the idea of numbering every single region that is formed by the intersection of n planes. In the following code, we show the most basic forms of the multiplanes principle. First, with a single plane:
###Code
P = np.array([[1, 1]]) # Define a single plane.
fig, ax1 = plt.subplots(figsize=(8, 8)) # Create a plot
plot_vectors([P], axes=[2, 2], ax=ax1) # Plot the plane P as a vector
# Plot random points.
for i in range(0, 10):
v1 = np.array(np.random.uniform(-2, 2, 2)) # Get a pair of random numbers between -2 and 2
side_of_plane = np.sign(np.dot(P, v1.T))
# Color the points depending on the sign of the result of np.dot(P, point.T)
if side_of_plane == 1:
ax1.plot([v1[0]], [v1[1]], 'bo') # Plot blue points
else:
ax1.plot([v1[0]], [v1[1]], 'ro') # Plot red points
plt.show()
###Output
_____no_output_____
###Markdown
The first thing to note is that the vector that defines the plane does not mark the boundary between the two sides of the plane. It marks the direction in which you find the 'positive' side of the plane. Not intuitive at all!
If we want to plot the separation plane, we need to plot a line that is perpendicular to our vector `P`. We can get such a line using a $90^\circ$ rotation matrix.
Feel free to change the direction of the plane `P`.
###Code
P = np.array([[1, 2]]) # Define a single plane. You may change the direction
# Get a new plane perpendicular to P. We use a rotation matrix
PT = np.dot([[0, 1], [-1, 0]], P.T).T
fig, ax1 = plt.subplots(figsize=(8, 8)) # Create a plot with custom size
plot_vectors([P], colors=['b'], axes=[2, 2], ax=ax1) # Plot the plane P as a vector
# Plot the plane P as a 2 vectors.
# We scale by 2 just to get the arrows outside the current box
plot_vectors([PT * 4, PT * -4], colors=['k', 'k'], axes=[4, 4], ax=ax1)
# Plot 20 random points.
for i in range(0, 20):
v1 = np.array(np.random.uniform(-4, 4, 2)) # Get a pair of random numbers between -4 and 4
side_of_plane = np.sign(np.dot(P, v1.T)) # Get the sign of the dot product with P
# Color the points depending on the sign of the result of np.dot(P, point.T)
if side_of_plane == 1:
ax1.plot([v1[0]], [v1[1]], 'bo') # Plot a blue point
else:
ax1.plot([v1[0]], [v1[1]], 'ro') # Plot a red point
plt.show()
###Output
_____no_output_____
###Markdown
Now, let us see what is inside the code that colors the points.
###Code
P = np.array([[1, 1]]) # Single plane
v1 = np.array([[1, 2]]) # Sample point 1
v2 = np.array([[-1, 1]]) # Sample point 2
v3 = np.array([[-2, -1]]) # Sample point 3
np.dot(P, v1.T)
np.dot(P, v2.T)
np.dot(P, v3.T)
###Output
_____no_output_____
###Markdown
The function below checks on which side of the plane `P` the vector `v` is located.
###Code
def side_of_plane(P, v):
dotproduct = np.dot(P, v.T) # Get the dot product P * v'
sign_of_dot_product = np.sign(dotproduct) # The sign of the elements of the dotproduct matrix
sign_of_dot_product_scalar = sign_of_dot_product.item() # The value of the first item
return sign_of_dot_product_scalar
side_of_plane(P, v1) # In which side is [1, 2]
side_of_plane(P, v2) # In which side is [-1, 1]
side_of_plane(P, v3) # In which side is [-2, -1]
###Output
_____no_output_____
###Markdown
Hash Function with multiple planes
In the following section, we are going to define a hash function with a list of three custom planes in 2D.
###Code
P1 = np.array([[1, 1]]) # First plane 2D
P2 = np.array([[-1, 1]]) # Second plane 2D
P3 = np.array([[-1, -1]]) # Third plane 2D
P_l = [P1, P2, P3] # List of arrays. It is the multi plane
# Vector to search
v = np.array([[2, 2]])
###Output
_____no_output_____
###Markdown
The next function creates a hash value based on a set of planes. The output value combines the sides on which the vector falls with respect to each plane in the collection.
We can think of this list of planes as a set of basic hash functions, each of which can produce only 1 or 0 as output.
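For example, with three planes, a vector on the positive side of planes 0 and 1 and on the negative side of plane 2 gets the hash value 1*2^0 + 1*2^1 + 0*2^2 = 3. A small sketch of that arithmetic:

```python
# Sketch: turning the per-plane signs into a single bucket id.
signs = [1, 1, -1]                     # assumed sides with respect to planes 0, 1 and 2
hash_value = sum(2**i * (1 if s >= 0 else 0) for i, s in enumerate(signs))
print(hash_value)                      # 3
```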
###Code
def hash_multi_plane(P_l, v):
hash_value = 0
for i, P in enumerate(P_l):
sign = side_of_plane(P,v)
hash_i = 1 if sign >=0 else 0
hash_value += 2**i * hash_i
return hash_value
hash_multi_plane(P_l, v) # Find the number of the plane that containes this value
###Output
_____no_output_____
###Markdown
Random Planes
In the cell below, we create a set of three random planes.
###Code
np.random.seed(0)
num_dimensions = 2 # is 300 in assignment
num_planes = 3 # is 10 in assignment
random_planes_matrix = np.random.normal(
size=(num_planes,
num_dimensions))
print(random_planes_matrix)
v = np.array([[2, 2]])
###Output
_____no_output_____
###Markdown
The next function is similar to the `side_of_plane()` function, but it evaluates more than one plane at a time. The result is an array with the side of the plane on which `v` falls, for each plane in the set `P`.
###Code
# Side of the plane function. The result is a matrix
def side_of_plane_matrix(P, v):
dotproduct = np.dot(P, v.T)
sign_of_dot_product = np.sign(dotproduct) # Get a boolean value telling if the value in the cell is positive or negative
return sign_of_dot_product
###Output
_____no_output_____
###Markdown
Get the side of the plane of the vector `[2, 2]` for the set of random planes.
###Code
sides_l = side_of_plane_matrix(
random_planes_matrix, v)
sides_l
###Output
_____no_output_____
###Markdown
Now, let us use the previous function to define our multi-plane hash function.
###Code
def hash_multi_plane_matrix(P, v, num_planes):
sides_matrix = side_of_plane_matrix(P, v) # Get the side of planes for P and v
hash_value = 0
for i in range(num_planes):
sign = sides_matrix[i].item() # Get the value inside the matrix cell
hash_i = 1 if sign >=0 else 0
hash_value += 2**i * hash_i # sum 2^i * hash_i
return hash_value
###Output
_____no_output_____
###Markdown
Print the bucket hash for the vector `v = [2, 2]`.
###Code
hash_multi_plane_matrix(random_planes_matrix, v, num_planes)
###Output
_____no_output_____
###Markdown
Note: This showed you how to make one set of random planes. You will make multiple sets of random planes in order to make the approximate nearest neighbors more accurate. Document vectors: Before we finish this lab, remember that you can represent a document as a vector by adding up the word vectors for the words inside the document. In this example, our embedding contains only three words, each represented by a 3-dimensional vector.
###Code
word_embedding = {"I": np.array([1,0,1]),
"love": np.array([-1,0,1]),
"learning": np.array([1,0,1])
}
words_in_document = ['I', 'love', 'learning', 'not_a_word']
document_embedding = np.array([0,0,0])
print('document_embedding originally:', document_embedding)
print('\n')
for word in words_in_document:
print('word:',word)
add_this = word_embedding.get(word,0)
print('add_this:',add_this)
document_embedding += add_this
print('document_embedding after adding:',document_embedding)
print()
print(document_embedding)
###Output
document_embedding originally: [0 0 0]
word: I
add_this: [1 0 1]
document_embedding after adding: [1 0 1]
word: love
add_this: [-1 0 1]
document_embedding after adding: [0 0 2]
word: learning
add_this: [1 0 1]
document_embedding after adding: [1 0 3]
word: not_a_word
add_this: 0
document_embedding after adding: [1 0 3]
[1 0 3]
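###Markdown
The note above mentions building multiple sets of random planes ("universes") to make the approximate nearest-neighbour search more accurate. The cell below is a minimal sketch of that idea, reusing `hash_multi_plane_matrix`, `num_planes` and `num_dimensions` from the cells above; the number of universes (5) is an arbitrary value chosen only for illustration, not a value from the assignment.
###Code
# Sketch: several independent sets ("universes") of random planes.
# Each universe defines its own hash function, so a query vector gets one bucket per universe.
num_universes = 5  # assumed value, for illustration only

np.random.seed(0)
planes_per_universe = [np.random.normal(size=(num_planes, num_dimensions))
                       for _ in range(num_universes)]

# Bucket the vector v once per universe using the multi-plane hash defined above.
buckets = [hash_multi_plane_matrix(planes, v, num_planes)
           for planes in planes_per_universe]
print(buckets)
###Output
_____no_output_____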
|
nbs/test.ipynb | ###Markdown
Testing Notebooks > Testing of notebooks used for documentation. You can use `nbdev_test_nbs` from [nbdev](https://nbdev.fast.ai/test.htmlnbdev_test_nbs) to test notebooks. No customization is necessary for docs sites. This is aliased as `nbdoc_test` for convenience:
###Code
!nbdoc_test --help
###Output
usage: nbdoc_test [-h] [--fname FNAME] [--flags FLAGS] [--n_workers N_WORKERS]
[--verbose VERBOSE] [--timing] [--pause PAUSE]
Test in parallel the notebooks matching `fname`, passing along `flags`
optional arguments:
-h, --help show this help message and exit
--fname FNAME A notebook name or glob to convert
--flags FLAGS Space separated list of flags
--n_workers N_WORKERS Number of workers to use
--verbose VERBOSE Print errors along the way (default: True)
--timing Timing each notebook to see the ones are slow (default:
False)
--pause PAUSE Pause time (in secs) between notebooks to avoid race
conditions (default: 0.5)
###Markdown
To use `nbdev_test_nbs`, you must also define a `settings.ini` file at the root of the repo. For documentation-based testing, we recommend setting the following variables: - recursive = True - tst_flags = notest. `tst_flags = notest` allows you to put comments like `#notest` on cells so that tests skip those specific cells. This is useful for skipping long-running tests. You can [read more about this here](https://nbdev.fast.ai/test.htmlnbdev_test_nbs). `recursive = True` sets the default behavior of `nbdev_test_nbs` to `True`, which is probably what you want for a documentation site with many folders nested arbitrarily deep that may contain notebooks. Here is this project's `settings.ini` (note that the `recursive` flag is set to `False` as this project is not a documentation site):
###Code
!cat ../settings.ini
#notest
from nbdev.test import nbdev_test_nbs
nbdev_test_nbs('test_files/example_input.ipynb', n_workers=0)
#notest
nbdev_test_nbs('test_files/', n_workers=0)
###Output
testing /Users/hamel/github/nbdoc/nbs/test_files/example_input.ipynb
testing /Users/hamel/github/nbdoc/nbs/test_files/hello_world.ipynb
testing /Users/hamel/github/nbdoc/nbs/test_files/run_flow.ipynb
testing /Users/hamel/github/nbdoc/nbs/test_files/run_flow_showstep.ipynb
testing /Users/hamel/github/nbdoc/nbs/test_files/writefile.ipynb
All tests are passing!
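###Markdown
Going back to the `settings.ini` options discussed above, the cell below is a minimal sketch that writes out a tiny example file containing just the two flags mentioned. The `[DEFAULT]` section name, the file name `settings_example.ini`, and the idea of generating the file from Python are assumptions for illustration only; a real nbdev project's `settings.ini` lives at the repo root and contains additional required keys.
###Code
# Sketch only: write a tiny example config with the two flags discussed above.
from configparser import ConfigParser

cfg = ConfigParser()
cfg["DEFAULT"] = {"recursive": "True",    # recurse into nested folders when collecting notebooks
                  "tst_flags": "notest"}  # cells marked #notest are skipped by the test runner

with open("settings_example.ini", "w") as f:  # hypothetical file name, not the project's settings.ini
    cfg.write(f)

print(open("settings_example.ini").read())
###Output
_____no_output_____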
|
ht_requests/ht_requests_example.ipynb | ###Markdown
Create authorization controller
###Code
auth_control = ht_ac.HTAuthorizationController(access_str)
###Output
_____no_output_____
###Markdown
List Projects
###Code
projects = ht_reqs.list_projects(auth_control=auth_control)
print_json(projects)
###Output
_____no_output_____
###Markdown
View User Files
###Code
contents = ht_reqs.get_item_dict_from_ht_path(auth_control=auth_control, ht_path='/')
print_json(contents)
###Output
_____no_output_____
###Markdown
Create a folder
###Code
folder_id = ht_reqs.create_folder(auth_control=auth_control,
folder_name=folder_name,
metadata=[
{
'keyName': 'Symmetry',
'value': {
'type': 'string',
'link': 62
},
'unit': '',
'annotation': ''
},
{
'keyName': 'GRID',
'value': {
'type': 'string',
'link': 'SqrGrid'
},
'unit': '',
'annotation': ''
}
])
folder_id
###Output
_____no_output_____
###Markdown
Upload a file to the folder we just created
###Code
upload_path = ht_reqs.get_ht_id_path_from_ht_path(auth_control=auth_control, ht_path=f'/{folder_name}')
file_info = ht_reqs.upload_file(
auth_control,
local_path=upload_file,
ht_id_path=upload_path,
metadata=[
{
'keyName': 'Symmetry',
'value': {
'type': 'string',
'link': 62
},
'unit': '',
'annotation': ''
},
{
'keyName': 'GRID',
'value': {
'type': 'string',
'link': 'SqrGrid'
},
'unit': '',
'annotation': ''
}
]
)
file_info
###Output
_____no_output_____
###Markdown
Check that the file upload succeeded
###Code
contents = ht_reqs.get_item_dict_from_ht_path(auth_control=auth_control, ht_path='/')
print_json(contents)
contents = ht_reqs.get_item_dict_from_ht_path(auth_control=auth_control, ht_path=f'/{folder_name}')
print_json(contents)
###Output
_____no_output_____ |
book-d2l-en/chapter_multilayer-perceptrons/numerical-stability-and-init.ipynb | ###Markdown
Numerical Stability and Initialization. So far we covered the tools needed to implement multilayer perceptrons, how to solve regression and classification problems, and how to control capacity. However, we took initialization of the parameters for granted, or rather simply assumed that they would not be particularly relevant. In the following we will look at them in more detail and discuss some useful heuristics. Secondly, we were not particularly concerned with the choice of activation. Indeed, for shallow networks this is not very relevant. For deep networks, however, design choices of nonlinearity and initialization play a crucial role in making the optimization algorithm converge relatively rapidly. Failure to be mindful of these issues can lead to either exploding or vanishing gradients. Vanishing and Exploding Gradients. Consider a deep network with $d$ layers, input $\mathbf{x}$ and output $\mathbf{o}$. Each layer satisfies:$$\mathbf{h}^{t+1} = f_t (\mathbf{h}^t) \text{ and thus } \mathbf{o} = f_d \circ \ldots \circ f_1(\mathbf{x})$$If all activations and inputs are vectors, we can write the gradient of $\mathbf{o}$ with respect to any set of parameters $\mathbf{W}_t$ associated with the function $f_t$ at layer $t$ simply as$$\partial_{\mathbf{W}_t} \mathbf{o} = \underbrace{\partial_{\mathbf{h}^{d-1}} \mathbf{h}^d}_{:= \mathbf{M}_d} \cdot \ldots \cdot \underbrace{\partial_{\mathbf{h}^{t}} \mathbf{h}^{t+1}}_{:= \mathbf{M}_t} \underbrace{\partial_{\mathbf{W}_t} \mathbf{h}^t}_{:= \mathbf{v}_t}.$$In other words, it is the product of $d-t$ matrices $\mathbf{M}_d \cdot \ldots \cdot \mathbf{M}_t$ and the gradient vector $\mathbf{v}_t$. What happens is quite similar to the situation when we experienced numerical underflow when multiplying too many probabilities. At the time we were able to mitigate the problem by switching into log-space, i.e. by shifting the problem from the mantissa to the exponent of the numerical representation. Unfortunately the problem outlined in the equation above is much more serious: initially the matrices $M_t$ may well have a wide variety of eigenvalues. They might be small, they might be large, and in particular, their product might well be *very large* or *very small*. This is not (only) a problem of numerical representation but it means that the optimization algorithm is bound to fail. It either receives gradients with excessively large or excessively small steps. In the former case, the parameters explode and in the latter case we end up with vanishing gradients and no meaningful progress. Exploding Gradients. To illustrate this a bit better, we draw 100 Gaussian random matrices and multiply them with some initial matrix. For the scaling that we picked, the matrix product explodes. If this were to happen to us with a deep network, we would have no meaningful chance of making the algorithm converge.
###Code
%matplotlib inline
import mxnet as mx
from mxnet import nd, autograd
from matplotlib import pyplot as plt
M = nd.random.normal(shape=(4,4))
print('A single matrix', M)
for i in range(100):
M = nd.dot(M, nd.random.normal(shape=(4,4)))
print('After multiplying 100 matrices', M)
###Output
A single matrix
[[ 2.2122064 0.7740038 1.0434405 1.1839255 ]
[ 1.8917114 -1.2347414 -1.771029 -0.45138445]
[ 0.57938355 -1.856082 -1.9768796 -0.20801921]
[ 0.2444218 -0.03716067 -0.48774993 -0.02261727]]
<NDArray 4x4 @cpu(0)>
After multiplying 100 matrices
[[ 3.1575275e+20 -5.0052276e+19 2.0565092e+21 -2.3741922e+20]
[-4.6332600e+20 7.3445046e+19 -3.0176513e+21 3.4838066e+20]
[-5.8487235e+20 9.2711797e+19 -3.8092853e+21 4.3977330e+20]
[-6.2947415e+19 9.9783660e+18 -4.0997977e+20 4.7331174e+19]]
<NDArray 4x4 @cpu(0)>
###Markdown
Vanishing Gradients. The converse problem of vanishing gradients is just as bad. One of the major culprits in this context is the activation function $\sigma$ that is interleaved with the linear operations in each layer. Historically, a popular activation used to be the sigmoid function $1/(1 + \exp(-x))$ that was introduced in the section discussing [Multilayer Perceptrons](../chapter_deep-learning-basics/mlp.md). Let us briefly review the function to see why picking it as a nonlinear activation function might be problematic.
###Code
x = nd.arange(-8.0, 8.0, 0.1)
x.attach_grad()
with autograd.record():
y = x.sigmoid()
y.backward()
plt.figure(figsize=(8, 4))
plt.plot(x.asnumpy(), y.asnumpy())
plt.plot(x.asnumpy(), x.grad.asnumpy())
plt.legend(['sigmoid', 'gradient'])
plt.show()
###Output
_____no_output_____ |
2020_week_10/Lagrangian_pendulum.ipynb | ###Markdown
Simple pendulum using Lagrange's equation. Defines a LagrangianPendulum class that is used to generate basic pendulum plots from solving Lagrange's equations.* Last revised 17-Mar-2019 by Dick Furnstahl ([email protected]). Euler-Lagrange equation. For a simple pendulum, the Lagrangian with generalized coordinate $\phi$ is$\begin{align} \mathcal{L} = \frac12 m L^2 \dot\phi^2 - mgL(1 - \cos\phi)\end{align}$The Euler-Lagrange equation is$\begin{align} \frac{d}{dt}\frac{\partial\mathcal{L}}{\partial \dot\phi} = \frac{\partial\mathcal L}{\partial\phi} \quad\Longrightarrow\quad m L^2 \ddot \phi = -mgL\sin\phi \ \mbox{or}\ \ddot\phi + \omega_0^2\sin\phi = 0 \;.\end{align}$ Hamilton's equations. The generalized momentum corresponding to $\phi$ is$\begin{align} \frac{\partial\mathcal{L}}{\partial \dot\phi} = m L^2 \dot\phi \equiv p_\phi \;.\end{align}$We can invert this equation to find $\dot\phi = p_\phi / m L^2$.Constructing the Hamiltonian by Legendre transformation we find $\begin{align} \mathcal{H} &= \dot\phi p_\phi - \mathcal{L} \\ &= \frac{p_\phi^2}{m L^2} - \frac12 m L^2 \dot\phi^2 + mgL(1 - \cos\phi) \\ &= \frac{p_\phi^2}{2 m L^2} + mgL(1 - \cos\phi) \;.\end{align}$Thus $\mathcal{H}$ is simply $T + V$. Hamilton's equations are$\begin{align} \dot\phi &= \frac{\partial\mathcal{H}}{\partial p_\phi} = \frac{p_\phi}{m L^2} \\ \dot p_\phi &= -\frac{\partial\mathcal{H}}{\partial \phi} = -mgL \sin\phi \;.\end{align}$
###Code
%matplotlib inline
import numpy as np
from scipy.integrate import odeint, solve_ivp
import matplotlib.pyplot as plt
# The dpi (dots-per-inch) setting will affect the resolution and how large
# the plots appear on screen and printed. So you may want/need to adjust
# the figsize when creating the figure.
plt.rcParams['figure.dpi'] = 100. # this is the default for notebook
# Change the common font size (smaller when higher dpi)
font_size = 12
plt.rcParams.update({'font.size': font_size})
###Output
_____no_output_____
###Markdown
Pendulum class and utility functions
###Code
class LagrangianPendulum():
"""
Pendulum class implements the parameters and Lagrange's equations for
a simple pendulum (no driving or damping).
Parameters
----------
L : float
length of the simple pendulum
g : float
gravitational acceleration at the earth's surface
omega_0 : float
natural frequency of the pendulum (\sqrt{g/l} where l is the
pendulum length)
mass : float
mass of pendulum
Methods
-------
dy_dt(t, y)
Returns the right side of the differential equation in vector y,
given time t and the corresponding value of y.
"""
def __init__(self, L=1., mass=1., g=1.
):
self.L = L
self.g = g
self.omega_0 = np.sqrt(g/L)
self.mass = mass
def dy_dt(self, t, y):
"""
This function returns the right-hand side of the diffeq:
[dphi/dt d^2phi/dt^2]
Parameters
----------
t : float
time
y : float
A 2-component vector with y[0] = phi(t) and y[1] = dphi/dt
Returns
-------
"""
return [y[1], -self.omega_0**2 * np.sin(y[0]) ]
def solve_ode(self, t_pts, phi_0, phi_dot_0,
abserr=1.0e-9, relerr=1.0e-9):
"""
Solve the ODE given initial conditions.
Specify smaller abserr and relerr to get more precision.
"""
y = [phi_0, phi_dot_0]
solution = solve_ivp(self.dy_dt, (t_pts[0], t_pts[-1]),
y, t_eval=t_pts,
atol=abserr, rtol=relerr)
phi, phi_dot = solution.y
return phi, phi_dot
def plot_y_vs_x(x, y, axis_labels=None, label=None, title=None,
color=None, linestyle=None, semilogy=False, loglog=False,
ax=None):
"""
Generic plotting function: return a figure axis with a plot of y vs. x,
with line color and style, title, axis labels, and line label
"""
if ax is None: # if the axis object doesn't exist, make one
ax = plt.gca()
if (semilogy):
line, = ax.semilogy(x, y, label=label,
color=color, linestyle=linestyle)
elif (loglog):
line, = ax.loglog(x, y, label=label,
color=color, linestyle=linestyle)
else:
line, = ax.plot(x, y, label=label,
color=color, linestyle=linestyle)
    if label is not None: # if a label is passed, show the legend
ax.legend()
    if title is not None: # set a title if one is passed
ax.set_title(title)
if axis_labels is not None: # set x-axis and y-axis labels if passed
ax.set_xlabel(axis_labels[0])
ax.set_ylabel(axis_labels[1])
return ax, line
def start_stop_indices(t_pts, plot_start, plot_stop):
start_index = (np.fabs(t_pts-plot_start)).argmin() # index in t_pts array
stop_index = (np.fabs(t_pts-plot_stop)).argmin() # index in t_pts array
return start_index, stop_index
###Output
_____no_output_____
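###Markdown
The class above integrates the Euler-Lagrange equation directly. As a cross-check of the Hamilton's-equations form derived in the introduction, here is a small stand-alone sketch of the equivalent first-order system in $(\phi, p_\phi)$. The function name `hamiltonian_rhs` and the use of `solve_ivp` on it are illustrative only and not part of the `LagrangianPendulum` class; the parameter values match the $L = g = m = 1$ choice used later in this notebook.
###Code
# Sketch (assumption: not part of LagrangianPendulum): Hamilton's equations for the pendulum,
#   dphi/dt   = p_phi / (m L^2)
#   dp_phi/dt = -m g L sin(phi)
def hamiltonian_rhs(t, y, m=1., L=1., g=1.):
    phi, p_phi = y
    return [p_phi / (m * L**2), -m * g * L * np.sin(phi)]

# For the same initial angle and zero initial momentum, phi(t) should agree with the
# solution produced by LagrangianPendulum.solve_ode for L = g = mass = 1.
sol = solve_ivp(hamiltonian_rhs, (0., 10.), [3.*np.pi/4., 0.],
                t_eval=np.linspace(0., 10., 201), atol=1.e-9, rtol=1.e-9)
phi_H, p_phi_H = sol.y
###Output
_____no_output_____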
###Markdown
Make simple pendulum plots
###Code
# Labels for individual plot axes
phi_vs_time_labels = (r'$t$', r'$\phi(t)$')
phi_dot_vs_time_labels = (r'$t$', r'$d\phi/dt(t)$')
state_space_labels = (r'$\phi$', r'$d\phi/dt$')
# Common plotting time (generate the full time then use slices)
t_start = 0.
t_end = 50.
delta_t = 0.001
t_pts = np.arange(t_start, t_end+delta_t, delta_t)
L = 1.
g = 1.
mass = 1.
# Instantiate a pendulum
p1 = LagrangianPendulum(L=L, g=g, mass=mass)
# both plots: same initial conditions
phi_0 = (3./4.)*np.pi
phi_dot_0 = 0.
phi, phi_dot = p1.solve_ode(t_pts, phi_0, phi_dot_0)
# start the plot!
fig = plt.figure(figsize=(15,5))
overall_title = 'Simple pendulum from Lagrangian: ' + \
rf' $\omega_0 = {p1.omega_0:.2f},$' + \
rf' $\phi_0 = {phi_0:.2f},$' + \
rf' $\dot\phi_0 = {phi_dot_0:.2f}$' + \
'\n' # \n means a new line (adds some space here)
fig.suptitle(overall_title, va='baseline')
# first plot: phi plot
ax_a = fig.add_subplot(1,3,1)
start, stop = start_stop_indices(t_pts, t_start, t_end)
plot_y_vs_x(t_pts[start : stop], phi[start : stop],
axis_labels=phi_vs_time_labels,
color='blue',
label=None,
title=r'$\phi(t)$',
ax=ax_a)
# second plot: phi_dot plot
ax_b = fig.add_subplot(1,3,2)
start, stop = start_stop_indices(t_pts, t_start, t_end)
plot_y_vs_x(t_pts[start : stop], phi_dot[start : stop],
axis_labels=phi_dot_vs_time_labels,
color='blue',
label=None,
title=r'$\dot\phi(t)$',
ax=ax_b)
# third plot: state space plot from t=30 to t=50
ax_c = fig.add_subplot(1,3,3)
start, stop = start_stop_indices(t_pts, t_start, t_end)
plot_y_vs_x(phi[start : stop], phi_dot[start : stop],
axis_labels=state_space_labels,
color='blue',
label=None,
title='State space',
ax=ax_c)
fig.tight_layout()
fig.savefig('simple_pendulum_Lagrange.png', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Now trying the power spectrum, plotting only positive frequencies and cutting off the lower peaks:
###Code
start, stop = start_stop_indices(t_pts, t_start, t_end)
signal = phi[start:stop]
power_spectrum = np.abs(np.fft.fft(signal))**2
freqs = np.fft.fftfreq(signal.size, delta_t)
idx = np.argsort(freqs)
fig_ps = plt.figure(figsize=(5,5))
ax_ps = fig_ps.add_subplot(1,1,1)
ax_ps.semilogy(freqs[idx], power_spectrum[idx], color='blue')
ax_ps.set_xlim(0, 1.)
ax_ps.set_ylim(1.e5, 1.e11)
ax_ps.set_xlabel('frequency')
ax_ps.set_title('Power Spectrum')
fig_ps.tight_layout()
###Output
_____no_output_____ |
docs/_build/doctrees/nbsphinx/OtherNotebooks/VerticesClassification.ipynb | ###Markdown
Vertex Classification. The most common type of analysis that we want to do with a colloidal ice system is to map it to a system of vertices, which we can then classify and count by vertex type. A system of vertices is a directed network. A vertex geometry is a set of points $\mathbf{v}_i \in \mathbb{V}$ which define the vertex locations in 3D space. The topology of the vertices is defined by their connectivity; two vertices, $v_i$ and $v_j$, are connected if they are joined by an edge $\mathbf{e}_{ij} = \left(v_i, v_j\right)$, and the set of all edges is $\mathbb{E}$. The vertex network is directed, which means that the edge $\mathbf{e}_{ij} \neq \mathbf{e}_{ji}$. The mapping from a colloidal ice consists of assigning an edge to each trap. An edge's direction goes from the vertex with a hole to the vertex with a particle. A vertex object 'v' should be able to guess the vertex geometry and topology that maps a colloidal ice object 'col' for simple systems by writing: v.colloids_to_vertices(col) However, some assistance with this guessing could be useful. For example: * A set of points containing the vertex positions * A topology, codified as a list of edges forming a vertex. The vertices object should also keep its topology stored internally. Obtaining the vertex classification for a new frame should be an update, not a processing from scratch. This means that the processing might not work for dynamic systems where vertices move or topologies change. That's fine; we can't even do that in simulation yet for most systems.
###Code
# This only adds the package to the path.
import os
import sys
sys.path.insert(0, '../../../')
import icenumerics as ice
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import scipy.spatial as spa
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
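###Markdown
Before calling the library, the directed-network picture described above can be illustrated with plain Python structures. The toy edge list below is made up purely for illustration and is not the icenumerics API: each edge $(v_i, v_j)$ points from the vertex at the hole end of a trap to the vertex at the particle end, and a charge-like count per vertex then follows from comparing incoming and outgoing edges.
###Code
# Hypothetical toy example of a directed vertex network (not the icenumerics API).
from collections import Counter

edges = [(0, 1), (2, 1), (1, 3)]           # assumed toy topology: e_ij = (v_i, v_j), so e_ij != e_ji
in_degree = Counter(j for _, j in edges)   # edges arriving at each vertex
out_degree = Counter(i for i, _ in edges)  # edges leaving each vertex

for v_label in range(4):
    # difference between incoming and outgoing edges at each vertex
    print(v_label, in_degree[v_label] - out_degree[v_label])
###Output
_____no_output_____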
###Markdown
We first create a colloidal ice to play around with
###Code
ureg = ice.ureg
sp = ice.spins()
sp.create_lattice("square",[10,10],lattice_constant=30*ureg.um, border="closed spin")
particle = ice.particle(radius = 5.15*ureg.um,
susceptibility = 0.0576,
diffusion = 0.125*ureg.um**2/ureg.s,
temperature = 300*ureg.K,
density = 1000*ureg.kg/ureg.m**3)
trap = ice.trap(trap_sep = 10*ureg.um,
height = 80*ureg.pN*ureg.nm,
stiffness = 6e-4*ureg.pN/ureg.nm)
col = ice.colloidal_ice(sp, particle, trap, height_spread = 0, susceptibility_spread = 0.1)
col.pad_region(30*ureg.um)
###Output
_____no_output_____
###Markdown
* Vertices should work if I calculate them here, before the colloidal_ice has a trj object. * Maybe the colloidal_ice object should keep a trj dataframe to begin with. * Also, it occurs to me that the standard trj object stored by colloidal ice shouldn't be directly read from the lammpstrj, but instead should be processed to a trj object containing directions and colloids. This might be part of creating a *results* object.
###Code
world = ice.world(
field = 20*ureg.mT,
temperature = 300*ureg.K,
dipole_cutoff = 200*ureg.um)
col.simulate(world,
name = "test",
include_timestamp = False,
targetdir = r".",
framerate = 10*ureg.Hz,
timestep = 10*ureg.ms,
run_time = 10*ureg.s,
output = ["x","y","z","mux","muy","muz"])
f, (ax2) = plt.subplots(1,1,figsize = (8,8))
col.display(ax2)
for i, trj_i in col.trj.groupby("id"):
if all(trj_i.type==1):
plt.plot(trj_i.x,trj_i.y, color = "r")
###Output
_____no_output_____
###Markdown
Create a 'vertex' object from a 'colloidal_ice' object
###Code
v = ice.vertices()
v.colloids_to_vertices(col)
f, (ax1) = plt.subplots(1,1,figsize = (8,8))
v.display(ax1)
col.display(ax1)
###Output
_____no_output_____
###Markdown
Create a vertex structure from the results of lammps. We can get the vertex structure from a trajectory. To do this, the trajectory must be in the 'ice' format obtained by 'get_ice_trj':
###Code
trj = ice.get_ice_trj(col.trj, bounds = col.bnd)
trj.head()
###Output
_____no_output_____
###Markdown
Notice that in this format, the columns contain the directions and colloid positions of the traps. The 'vertex' class can process the results of a single frame, by giving it as an input a trajectory with a single index.
###Code
v = ice.vertices()
frame = 0
v = v.trj_to_vertices(trj.loc[frame])
f, (ax1) = plt.subplots(1,1,figsize = (8,8))
v.display(ax1)
col.display(ax1)
###Output
_____no_output_____
###Markdown
Multiple frames If the 'trj' is a MultiIndex, the 'trj_to_vertices' method will iterate over all the indices which are not 'id', and calculate the vertex structure of all frames. However, it will only calculate the topology of the first frame.
###Code
%%time
v = ice.vertices()
frames = trj.index.get_level_values("frame").unique()
v.trj_to_vertices(trj.loc[frames[:]])
v.vertices.head()
f, (ax1) = plt.subplots(1,1,figsize = (8,8))
v.display(ax1)
col.display(ax1)
###Output
_____no_output_____
###Markdown
Classifying and Counting vertices. From the previous vertex object, where the trajectory of the vertices has been calculated (the time evolution of the vertices), we can then classify and count the vertices using the following procedures. First we define a function that classifies the vertices. This means taking properties that have been calculated, like charge, coordination or dipole, and assigning a "type". It's advisable to make the type an integer.
###Code
def classify_vertices(vrt):
vrt["type"] = np.NaN
vrt.loc[vrt.eval("coordination==4 & charge == -4"),"type"] = 1
vrt.loc[vrt.eval("coordination==4 & charge == -2"),"type"] = 2
vrt.loc[vrt.eval("coordination==4 & charge == 0 & (dx**2+dy**2)==0"),"type"] = 3
vrt.loc[vrt.eval("coordination==4 & charge == 0 & (dx**2+dy**2)>0"),"type"] = 4
vrt.loc[vrt.eval("coordination==4 & charge == 2"),"type"] = 5
vrt.loc[vrt.eval("coordination==4 & charge == 4"),"type"] = 6
return vrt
v.vertices = classify_vertices(v.vertices)
###Output
_____no_output_____
###Markdown
Now, we can use the built-in function `count_vertices`, which takes a column and gets the total number of vertices for each unique value of that column.
###Code
count = ice.count_vertices(v.vertices)
for t,v_t in count.groupby("type"):
plt.plot(v_t.index.get_level_values("frame"), v_t.fraction, label = t)
plt.legend()
###Output
_____no_output_____ |
linearRegression_forecast/2.3..Deploy_BYOS_Forecast.ipynb | ###Markdown
[Module 2.3] Model Deployment and Inference. This notebook performs the following tasks: - Load the test data file - Decide the model deployment mode - **For the first test, deploy locally (on the SageMaker notebook instance); once that works, deploy to the cloud.** - Local deployment - Cloud deployment - Test the inference code and walk through it - Test the user-defined code used at inference time locally - Install additional packages - Create a SageMaker model - Deploy the model to an endpoint - Run model inference - Delete the endpoint
###Code
import pandas as pd
import os
import matplotlib.pyplot as plt
import numpy as np
%store -r test_file_path
%store -r s3_model_dir
%store -r local_model_dir
###Output
_____no_output_____
###Markdown
Load the test file. After loading the data, we split the test data into test_y and test_X. (Since the data used for training was all of type float64, we cast it to float64 here as well.)
###Code
df = pd.read_csv(test_file_path, header=None)
df = df.astype('float64')
test_y = df.iloc[:,0]
test_X = df.iloc[:,1:].to_numpy()
print("test_X : ", test_X.shape)
###Output
test_X : (149, 59)
###Markdown
Deployment mode: local mode parameter. **The instance_type below controls whether we run in local mode or cloud mode. Try local mode first and, once it works, switch to cloud mode.** - In local mode, a SageMaker local session is allocated. - A Docker container is created on the SageMaker notebook instance and the deployment runs there. - In cloud mode, a SageMaker session is allocated and the deployment runs in the cloud. - An EC2 instance is launched in the cloud, and a Docker container is created inside that EC2 instance for the deployment.
###Code
instance_type = 'local'
# instance_type="ml.c4.xlarge"
###Output
_____no_output_____
###Markdown
Testing the inference code. During real development you frequently modify and test the inference code to check its logic (debugging, etc.), so here we load the already trained model and verify the logic of the inference code. For details on deploying and serving an SKLearn model, see: - [Deploy a Scikit-learn Model](https://sagemaker.readthedocs.io/en/stable/frameworks/sklearn/using_sklearn.htmldeploy-a-scikit-learn-model) Inference code walkthrough (src/inference.py). Below are the user-defined model_fn and predict_fn written in inference.py. - During the actual inference below you can check logs like the following for debugging:
```
algo-1-l1rre_1 | From user-inference file- Model loaded: 
algo-1-l1rre_1 | From user-inference file- Shape of predictions: (149,)
```
- If these functions are not defined, the default model_fn and predict_fn are called instead. - input_fn and output_fn can also be defined by the user. They are not covered in this example, but if you need custom versions later, write them and add them to inference.py.
```python
import joblib, os

def model_fn(model_dir):
    """
    Deserialized and return fitted model
    Note that this should have the same name as the serialized model in the main method
    """
    model = joblib.load(os.path.join(model_dir, "model.joblib"))
    print("From user-inference file- Model loaded: ")
    return model

def predict_fn(input_data, model):
    """
    Run prediction on the given input_data
    """
    payload = input_data
    predictions = model.predict(payload)
    print("From user-inference file- Shape of predictions: ", predictions.shape)
    return predictions
```
###Code
import src.inference
from importlib import reload
# Code to reload the user-defined module. If you modify the code while testing, reloading it like this picks up the changes.
src.inference = reload(src.inference)
from src.inference import model_fn, predict_fn
def input_fn(input_data, request_content_type='text/csv'):
"""
    Reshape the input into the form expected for inference.
"""
n_feature = input_data.shape[1]
sample = input_data.reshape(-1,n_feature)
return sample
# Convert to the inference input format.
sample = input_fn(test_X[0:10])
# Load the model
model = model_fn('model')
# Run inference
predictons = predict_fn(sample, model)
print(predictons)
import time
local_model_path = f'file://{os.getcwd()}/model/model.joblib'
endpoint_name = 'local-endpoint-scikit-learn-{}'.format(int(time.time()))
###Output
_____no_output_____
###Markdown
Reference: Docker commands. When running in local mode, Docker runs on this notebook instance. If a previous container has not shut down, open a terminal and use the commands below to stop it, then run again. - Show the currently running containers: ``` docker container ls``` - Stop all currently running containers: ```docker stop $(docker ps -a -q)``` Installing additional packages. List the packages you need in src/requirements.txt (e.g. ```pandas<2```). They will then be installed when the endpoint is deployed. If requirements.txt is placed in the same src folder, SageMaker installs the packages automatically.
```
algo-1-l1rre_1 | /miniconda3/bin/python -m pip install . -r requirements.txt
algo-1-l1rre_1 | Processing /opt/ml/code
algo-1-l1rre_1 | Requirement already satisfied: pandas<2 in /miniconda3/lib/python3.7/site-packages (from -r requirements.txt (line 1)) (1.1.3)
```
###Code
import sagemaker
from sagemaker import local
if instance_type == 'local':
sess = local.LocalSession()
    model_path = local_model_path # use the model path saved locally
else:
sess = sagemaker.Session()
    model_path = s3_model_dir # use the model path saved in S3
###Output
_____no_output_____
###Markdown
Create the SageMaker model. We create a SKLearnModel to build the SageMaker model. - model_data: specifies the path to the model artifact - framework_version: sets the version of the SKLearn model to be deployed - entry_point: src/inference.py, the user-defined inference code - sagemaker_session: determines whether local mode or cloud mode is used
###Code
from sagemaker import get_execution_role
# Get a SageMaker-compatible role used by this Notebook Instance.
role = get_execution_role()
FRAMEWORK_VERSION = "0.23-1"
from sagemaker.sklearn.model import SKLearnModel
sm_model = SKLearnModel(model_data = model_path,
role = role,
source_dir = 'src',
entry_point = 'inference.py',
framework_version = FRAMEWORK_VERSION,
py_version = 'py3',
sagemaker_session = sess
)
###Output
_____no_output_____
###Markdown
Deploy the model and run inference. We deploy based on the SageMaker model created above. After deployment, the endpoint can be accessed through the predictor. - instance_type: determines whether this runs in local mode or on a cloud instance.
###Code
%%time
predictor = sm_model.deploy(
initial_instance_count=1,
instance_type= instance_type,
endpoint_name = endpoint_name,
wait=True
)
###Output
Attaching to tmpw4r12ssk_algo-1-6o529_1
[36malgo-1-6o529_1 |[0m 2020-12-06 05:13:06,921 INFO - sagemaker-containers - No GPUs detected (normal if no gpus installed)
[36malgo-1-6o529_1 |[0m 2020-12-06 05:13:06,925 INFO - sagemaker-containers - No GPUs detected (normal if no gpus installed)
[36malgo-1-6o529_1 |[0m 2020-12-06 05:13:06,926 INFO - sagemaker-containers - nginx config:
[36malgo-1-6o529_1 |[0m worker_processes auto;
[36malgo-1-6o529_1 |[0m daemon off;
[36malgo-1-6o529_1 |[0m pid /tmp/nginx.pid;
[36malgo-1-6o529_1 |[0m error_log /dev/stderr;
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m worker_rlimit_nofile 4096;
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m events {
[36malgo-1-6o529_1 |[0m worker_connections 2048;
[36malgo-1-6o529_1 |[0m }
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m http {
[36malgo-1-6o529_1 |[0m include /etc/nginx/mime.types;
[36malgo-1-6o529_1 |[0m default_type application/octet-stream;
[36malgo-1-6o529_1 |[0m access_log /dev/stdout combined;
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m upstream gunicorn {
[36malgo-1-6o529_1 |[0m server unix:/tmp/gunicorn.sock;
[36malgo-1-6o529_1 |[0m }
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m server {
[36malgo-1-6o529_1 |[0m listen 8080 deferred;
[36malgo-1-6o529_1 |[0m client_max_body_size 0;
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m keepalive_timeout 3;
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m location ~ ^/(ping|invocations|execution-parameters) {
[36malgo-1-6o529_1 |[0m proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
[36malgo-1-6o529_1 |[0m proxy_set_header Host $http_host;
[36malgo-1-6o529_1 |[0m proxy_redirect off;
[36malgo-1-6o529_1 |[0m proxy_read_timeout 60s;
[36malgo-1-6o529_1 |[0m proxy_pass http://gunicorn;
[36malgo-1-6o529_1 |[0m }
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m location / {
[36malgo-1-6o529_1 |[0m return 404 "{}";
[36malgo-1-6o529_1 |[0m }
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m }
[36malgo-1-6o529_1 |[0m }
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m
[36malgo-1-6o529_1 |[0m 2020-12-06 05:13:08,065 INFO - sagemaker-containers - Module inference does not provide a setup.py.
[36malgo-1-6o529_1 |[0m Generating setup.py
[36malgo-1-6o529_1 |[0m 2020-12-06 05:13:08,065 INFO - sagemaker-containers - Generating setup.cfg
[36malgo-1-6o529_1 |[0m 2020-12-06 05:13:08,065 INFO - sagemaker-containers - Generating MANIFEST.in
[36malgo-1-6o529_1 |[0m 2020-12-06 05:13:08,065 INFO - sagemaker-containers - Installing module with the following command:
[36malgo-1-6o529_1 |[0m /miniconda3/bin/python -m pip install . -r requirements.txt
[36malgo-1-6o529_1 |[0m Processing /opt/ml/code
[36malgo-1-6o529_1 |[0m Requirement already satisfied: pandas<2 in /miniconda3/lib/python3.7/site-packages (from -r requirements.txt (line 1)) (1.1.3)
[36malgo-1-6o529_1 |[0m Requirement already satisfied: python-dateutil>=2.7.3 in /miniconda3/lib/python3.7/site-packages (from pandas<2->-r requirements.txt (line 1)) (2.8.1)
[36malgo-1-6o529_1 |[0m Requirement already satisfied: pytz>=2017.2 in /miniconda3/lib/python3.7/site-packages (from pandas<2->-r requirements.txt (line 1)) (2020.1)
[36malgo-1-6o529_1 |[0m Requirement already satisfied: numpy>=1.15.4 in /miniconda3/lib/python3.7/site-packages (from pandas<2->-r requirements.txt (line 1)) (1.19.2)
[36malgo-1-6o529_1 |[0m Requirement already satisfied: six>=1.5 in /miniconda3/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas<2->-r requirements.txt (line 1)) (1.15.0)
[36malgo-1-6o529_1 |[0m Building wheels for collected packages: inference
[36malgo-1-6o529_1 |[0m Building wheel for inference (setup.py) ... [?25ldone
[36malgo-1-6o529_1 |[0m [?25h Created wheel for inference: filename=inference-1.0.0-py2.py3-none-any.whl size=9582 sha256=1ae39ed8184cfca46a85eef0503558dcc63df17616b0bd90c683fc1fdbab7117
[36malgo-1-6o529_1 |[0m Stored in directory: /home/model-server/tmp/pip-ephem-wheel-cache-85h40tj1/wheels/3e/0f/51/2f1df833dd0412c1bc2f5ee56baac195b5be563353d111dca6
[36malgo-1-6o529_1 |[0m Successfully built inference
[36malgo-1-6o529_1 |[0m Installing collected packages: inference
[36malgo-1-6o529_1 |[0m Successfully installed inference-1.0.0
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [161] [INFO] Starting gunicorn 20.0.4
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [161] [INFO] Listening at: unix:/tmp/gunicorn.sock (161)
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [161] [INFO] Using worker: gevent
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [164] [INFO] Booting worker with pid: 164
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [165] [INFO] Booting worker with pid: 165
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [166] [INFO] Booting worker with pid: 166
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [293] [INFO] Booting worker with pid: 293
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [357] [INFO] Booting worker with pid: 357
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [358] [INFO] Booting worker with pid: 358
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [359] [INFO] Booting worker with pid: 359
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [486] [INFO] Booting worker with pid: 486
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [550] [INFO] Booting worker with pid: 550
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [551] [INFO] Booting worker with pid: 551
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [615] [INFO] Booting worker with pid: 615
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [743] [INFO] Booting worker with pid: 743
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [742] [INFO] Booting worker with pid: 742
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [808] [INFO] Booting worker with pid: 808
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [809] [INFO] Booting worker with pid: 809
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:09 +0000] [874] [INFO] Booting worker with pid: 874
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1064] [INFO] Booting worker with pid: 1064
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1066] [INFO] Booting worker with pid: 1066
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1220] [INFO] Booting worker with pid: 1220
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1252] [INFO] Booting worker with pid: 1252
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1260] [INFO] Booting worker with pid: 1260
[36malgo-1-6o529_1 |[0m 2020-12-06 05:13:10,336 INFO - sagemaker-containers - No GPUs detected (normal if no gpus installed)
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1327] [INFO] Booting worker with pid: 1327
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1416] [INFO] Booting worker with pid: 1416
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1583] [INFO] Booting worker with pid: 1583
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1584] [INFO] Booting worker with pid: 1584
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1585] [INFO] Booting worker with pid: 1585
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1714] [INFO] Booting worker with pid: 1714
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1780] [INFO] Booting worker with pid: 1780
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1908] [INFO] Booting worker with pid: 1908
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [1910] [INFO] Booting worker with pid: 1910
[36malgo-1-6o529_1 |[0m From user-inference file- Model loaded:
![36malgo-1-6o529_1 |[0m [2020-12-06 05:13:10 +0000] [2040] [INFO] Booting worker with pid: 2040
[36malgo-1-6o529_1 |[0m 172.18.0.1 - - [06/Dec/2020:05:13:10 +0000] "GET /ping HTTP/1.1" 200 0 "-" "-"
CPU times: user 174 ms, sys: 19.8 ms, total: 194 ms
Wall time: 7.91 s
###Markdown
Run inference
###Code
import p_utils
from importlib import reload
from p_utils import evaluate, show_chart
p_utils = reload(p_utils)
## Build the inference input data
sample = input_fn(test_X)
## Run inference
ridge_pred = predictor.predict(sample)
# Evaluate
MdAPE = evaluate(test_y, ridge_pred)
print('Ridge-onestep-ahead MdAPE = ', MdAPE)
# Show as a chart
show_chart(test_y, ridge_pred)
###Output
Ridge-onestep-ahead MdAPE = 0.017380620413302225
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:11 +0000] [2690] [INFO] Booting worker with pid: 2690
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:11 +0000] [2691] [INFO] Booting worker with pid: 2691
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:11 +0000] [2820] [INFO] Booting worker with pid: 2820
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:11 +0000] [2949] [INFO] Booting worker with pid: 2949
[36malgo-1-6o529_1 |[0m [2020-12-06 05:13:11 +0000] [2951] [INFO] Booting worker with pid: 2951
###Markdown
Endpoint Cleanup. Delete the endpoint created above.
###Code
predictor.delete_endpoint()
###Output
Gracefully stopping... (press Ctrl+C again to force)
|
Projekty/Projekt2/Grupa1/GassowskaKozminskiPrzybylek/KamienMilowy1/Project2_Milestone1.ipynb | ###Markdown
Stage I - EDA. The dataset contains 8265 words occurring in religious books, prepared as a mini-corpus of these books (these are not all the words, but ones selected by the creators). Most of the sacred texts in this dataset were collected from Project Gutenberg. In this stage we will analyse these words and the books that contain them. Loading the dataset
###Code
data = pd.read_csv('/content/drive/My Drive/AllBooks_baseline_DTM_Labelled.csv')
data.head()
data.shape # a lot of columns
data.isnull().values.any() # check whether there are missing values
###Output
_____no_output_____
###Markdown
The collected data contains far more columns than rows, and the prepared data frame has no missing values. Creating a column with the book name - for EDA only. To be able to recognise the books and analyse them in some way, we decided to create a column with the book name and to group the chapters in the data frame into whole books. This way we can look at what the whole books are about.
###Code
data['label']=data['Unnamed: 0'].str.split('_',expand=True)[0] # which book a given chapter belongs to
np.unique(data['label'])
###Output
_____no_output_____
###Markdown
We have 8 books - just as presented in the dataset description. To start, let us get to know what these books are. Name translations and short descriptions of the books: * Book of Ecclesiasticus (Mądrość Syracha) - one of the deuterocanonical books of the Old Testament (i.e. present in the Christian Bible but not in the Hebrew one). Written around 190 BC in Jerusalem. * Book of Ecclesiastes (Księga Koheleta) - found in both Bibles. The dating of the book is uncertain, although there are indications pointing to the 3rd century BC. * Book of Proverbs (Księga Przysłów) - also in both Bibles, a collective work composed of various texts by an unknown author around the 5th century BC. * Book of Wisdom (Księga Mądrości) - probably written in Alexandria (Egypt) - dated to around 50 BC. * Buddhism - of course the name of a Far Eastern religion, Buddhism, most commonly practised in the countries of the Indochinese Peninsula, Malaysia, China and Mongolia. * Tao Te Ching - also: Lao Tsu - a Chinese book most likely written in the 6th century BC by the sage Laozi. It is considered the foundational work of Taoism, one of the most popular Chinese religions. Scholars also argue that it influenced the shaping of Buddhist philosophy. * Upanishads - texts (more than 200 of them) of Hinduism belonging to the Vedic revelation, with religious and philosophical content. They are among the later texts of the Vedas (the sacred books of Hinduism), written in the 8th-3rd centuries BC. They are still used today, even in our culture, among people who practise meditation. * Yoga Sutra - the Yoga Sutras (Hinduism) - the oldest text of classical Yoga (in the sense of a system of Indian philosophy recognising the authority of the Vedas), consisting of 195 sutras - concise sentences. According to Hindu tradition, their author was Patañjāli, who lived in the 2nd century BC. Let us move on to analysing the books and the words they contain. Most frequently used words overall
###Code
words = pd.DataFrame(data.drop(['Unnamed: 0', 'label'], axis=1).sum())
words_sorted = words.sort_values(by=0, ascending=False)
words_sorted.head(10) # the 10 most frequently used words overall
###Output
_____no_output_____
###Markdown
The most frequent word is $shall$. Let us take a look at the plot of word occurrences.
###Code
plt.plot(words_sorted.head(50))
plt.xticks(rotation = 90)
plt.title('Najczęściej używane słowa we wszystkich księgach')
plt.show()
###Output
_____no_output_____
###Markdown
As we can see from the table and the plot, two or three words stand out, and then there is a clear drop. Most words do not even exceed 200 occurrences. Number of uses of a given word in each of the books. We prepared a data frame with the occurrence counts of the words for each book. It can be seen below.
###Code
grouped_data = data.groupby(['label']).sum()
grouped_data[words_sorted.index.values]
###Output
_____no_output_____
###Markdown
Checking whether there is a column with only zeros (a word that does not occur anywhere)
###Code
check = grouped_data==0
check.all().any()
###Output
_____no_output_____
###Markdown
So the creators of the data decided to use only those words that occur in at least one book. Speaking of which, let us check how many words actually occur in only one book. The most frequently used word for each book. Since we know which words are the most popular overall, let us look at how this translates to the individual books.
###Code
pd.DataFrame(grouped_data.idxmax(axis=1), columns = ['Najczęściej używany wyraz:'])
###Output
_____no_output_____
###Markdown
In four books the word $shall$ was repeated, while the top words of the Buddhist book and TaoTeChing were not among the 10 most popular words overall, so it is possible that they appear mostly there (tao in particular). Data frame with the words that occur in only one book. Now that we know the most popular words, let us look at the unique words that are concentrated in a single book.
###Code
unique_words = grouped_data==grouped_data.sum()
unique_words = grouped_data.columns[unique_words.any()]
unique_words_dataframe = pd.DataFrame()
for word in unique_words:
idx = grouped_data[grouped_data[word]!=0].index[0]
unique_words_dataframe = unique_words_dataframe.append({'word':word, 'book':idx}, ignore_index=True)
unique_words_dataframe
# out of curiosity we checked whether 'tao' occurs only in the Taoist book
unique_words_dataframe[unique_words_dataframe.word == 'tao']
###Output
_____no_output_____
###Markdown
4888 words are unique, i.e. more than 50% of the words occur in only one of the eight books - this may be useful for clustering. Let us also check how many words are characteristic of a given book - meaning more than 90% of their occurrences are contained in a single text - and how many occurred in every one of the books (note that some words may fall into both groups).
###Code
print(f'Odsetek słów, których ponad 90% wystąpień jest w dokładnie jednej księdze: {round(100*np.mean(grouped_data.max(axis = 0)/grouped_data.sum() >0.9), 2)}%')
print(f'Odsetek słów, które wystąpiły w każdej z ksiąg: {round(100*np.mean(np.mean((grouped_data>0)) == 1), 2)}%')
###Output
Odsetek słów, których ponad 90% wystąpień jest w dokładnie jednej księdze: 59.63%
Odsetek słów, które wystąpiły w każdej z ksiąg: 1.29%
###Markdown
Almost 60% of the words have 90% of their occurrences concentrated in a single book. Let us take a look at the 1.29% of words present in all the books. Words common to all the books
###Code
common_words = np.array(grouped_data.columns[np.where(np.mean((grouped_data>0)) == 1)])
common_words
common_words.size
###Output
_____no_output_____
###Markdown
We have 107 common words, which, as stated above, is 1.29% of all words. Let us look at which parts of speech appear most often - the guess would be predicates/verbs, since these are general words that should be the least tied directly to any single book. Let us find out. Note: On closer inspection you can see that several words repeat, just in a different grammatical form. In this case we apply lemmatization, since for checking parts of speech we will not lose any valuable information.
###Code
def lemmatization(data):
'''
    Takes a numpy array of words and returns a numpy array of the unique words after lemmatization
'''
lem = WordNetLemmatizer()
A = []
for i in data.tolist():
a = lem.lemmatize(i)
a = lem.lemmatize(a, 'a')
a = lem.lemmatize(a, 'v')
A.append(a)
x = np.array(A)
A = np.unique(x)
return A
A = lemmatization(common_words)
A.size
###Output
_____no_output_____
###Markdown
We have 9 fewer words. (Lemmatization is an experiment here; it is possible that some words still did not change form.)
###Code
tag_words = nltk.pos_tag(A.tolist())
tag_words
###Output
_____no_output_____
###Markdown
Let us now count which part of speech is the most numerous.
###Code
POS_list = []
for i in tag_words:
POS_list.append(i[1])
Counter(POS_list).most_common()
###Output
_____no_output_____
###Markdown
Nouns are the most numerous - as much as 37.76% - so the assumption that it would be verbs is wrong. We also have many adjectives (19) and adverbs (14). As for verbs, we have 15 of them, scattered across different tenses. This is related to the fact that 'become' was classified as an adjective, 'bring' as a verb ending in -ing, and there are also verbal participles. IMPORTANT: The lemmatization function did not work perfectly and must not be trusted 100%. [the list of tags can be found here: https://medium.com/@gianpaul.r/tokenization-and-parts-of-speech-pos-tagging-in-pythons-nltk-library-2d30f70af13b ] Unique words found in each of the studied books. We know that unique words make up more than 50%. Let us also see this in numbers. Number of unique words. Below we have the number of distinct (unique) words in each of the studied books.
###Code
data_len_unique_words = pd.DataFrame(np.sum(grouped_data>0, axis = 1), columns = ['Liczba różnych słów w badanym tekście:'])
data_len_unique_words
exceptional_words = pd.DataFrame.from_dict(Counter(unique_words_dataframe.book), orient='index').reset_index().rename(columns={'index':'book', 0:'liczba unikatowych słów'}).sort_values('book')
exceptional_words
###Output
_____no_output_____
###Markdown
In the code above we have the number of unique words (i.e. words occurring only in the given book), without their occurrence counts. We can see that YogaSutra, i.e. Hinduism, has the most such words. Total number of all words in each of the books
###Code
data_len_words = pd.DataFrame(np.sum(grouped_data, axis = 1), columns = ['Liczba słów w badanym tekście:'])
data_len_words
###Output
_____no_output_____
###Markdown
Percentage of unique words. The goal is to check what fraction of the words in a book are words that occur exclusively in it.
###Code
exceptional_sum = grouped_data[unique_words].sum(axis = 1)
A = []
for i in range(0,8):
x = exceptional_sum.values[i]
w = data_len_words.values[i][0]
A.append(f'{round(x/w*100, 2)}%')
translations = ['Madrosc Syracha', 'Ksiega Koheleta', 'Ksiega Przyslow', 'Ksiega Madrosci', 'Buddyzm', 'Tao Te Ching', 'Upaniszady', 'JogaSutry']
pd.DataFrame({'label': translations, 'Procent unikatowych słów w księdze': A})
###Output
_____no_output_____
###Markdown
Relative to its size, it is the Buddhist book that has the most occurrences of unique words. Average number of occurrences per word. We will also examine the average number of occurrences per word - a low value may indicate, for example, rich vocabulary or a broad thematic scope of the book:
###Code
def plot_bar(data1, data2, title):
plt.bar(data1, data2)
plt.xticks(rotation = 90)
plt.title(title)
plt.show()
plot_bar(translations, np.sum(grouped_data, axis = 1)/np.sum(grouped_data>0, axis = 1), 'Średnia wystąpień danego wyrazu w tekście')
###Output
_____no_output_____
###Markdown
The highest average number of word occurrences can be seen in the Wisdom of Sirach (Mądrość Syracha) and in Buddhism. The books with values below three, in turn, are the shortest of the listed books. Average word length in each book
###Code
WordsLen = []
for col in grouped_data.columns:
WordsLen.append(len(col))
A = []
j = 0
for word in grouped_data.values:
i = 0
length = 0
for x in word:
        length = length + x*WordsLen[i] # number of occurrences of the word times its length
i = i+1
A.append(length / data_len_words.values[j][0])
j = j+1
plt.bar(translations, A)
plt.xticks(rotation = 90)
plt.title('Średnia długość słowa w danym tekście')
plt.show()
###Output
_____no_output_____
###Markdown
The average word length is similar across all the books. It would probably be more interesting to look at the average sentence length. Similarities between books. Description of the methodology: For each book we can easily obtain a vector of occurrence counts of the individual words. For each pair of books we can therefore compute the Euclidean distance, taking these counts as coordinates, and compare the computed values for different pairs. Update: the method above is sensitive to the total number of words in the individual books, which should not be the case. To prevent this sensitivity, we normalise the word counts within each book. This approach makes sense when we compare books with a similar order of magnitude of word counts. For individual texts with only a few dozen words, the obtained values would be higher.
###Code
def distance_plot(labels, df, ret_max = False, vmax = None):
data = df/(df.sum(axis = 1)[:, None])
A = [[0]*len(labels) for _ in range(len(labels))]
maxdist = 0
for i in range(len(labels)):
for j in range(i+1, len(labels)):
x = data.iloc[i,:]
y = data.iloc[j,:]
dist = math.sqrt(np.sum((x-y)**2))
A[i][j] = dist
A[j][i] = dist
if dist > maxdist: maxdist = dist
if ret_max: return maxdist
sns.heatmap(A, annot = True, fmt = '.3f', vmax = vmax, xticklabels = labels, yticklabels=labels).set_title('"Ogległości" między księgami - im mniej, tym bardziej podobne')
plt.show()
distance_plot(translations, grouped_data)
###Output
_____no_output_____
###Markdown
If the methodology is sound, the most similar books turn out to be the Book of Proverbs and the Wisdom of Sirach, as well as the Book of Wisdom and the Wisdom of Sirach. The biblical books are simply similar to one another. Of course, we are only using a subset of the words occurring in the books. Lemmatization For now we will not apply it; we may come back to it during feature engineering. As a reminder, we used lemmatization on the words occurring in all the books, and we are not certain that every word was converted. Here we check by how many words the data frame shrinks after lemmatization.
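The `lemmatization` helper used in the next cell is defined earlier in the notebook; a minimal stand-in with the same intent (map each word to a base form and keep the unique results) could look like the sketch below, assuming NLTK's WordNet lemmatizer. The real helper may differ in details.

```python
import numpy as np
import nltk
nltk.download('wordnet', quiet=True)
from nltk.stem import WordNetLemmatizer

def lemmatization_sketch(words):
    """Hypothetical stand-in: return the unique base forms of `words`."""
    lemmatizer = WordNetLemmatizer()
    return np.unique([lemmatizer.lemmatize(w) for w in words])
```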
###Code
words_list = np.array(grouped_data.columns)
words_after_lem = lemmatization(words_list)
print("Liczba słów po zabiegu lematyzacji: ", len(words_after_lem.tolist()))
print("Liczba słów w niezmienionej ramce danych: ", len(grouped_data.columns))
###Output
Liczba słów po zabiegu lematyzacji: 6032
Liczba słów w niezmienionej ramce danych: 8266
###Markdown
As we can see, after reducing the words to their base forms we lost over 2,000 words from the data. For feature engineering it could be interesting to merge the columns holding different forms of the same word into one and run the clustering on that for comparison. Sentiment analysis We will check whether the books are overall more positive, negative, or perhaps neutral. A good approach would be to train a model that scores sentiment and then apply it to the data; in our case we cannot do that, so we use a second, much simpler approach - namely, we compute a sentiment score for each word, then count the occurrences of those words, and whichever class has the larger total determines the sentiment. This is a weak approach, but it is at least worth testing.
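A minimal illustration of the per-word polarity score used below (the words are arbitrary examples): TextBlob returns a polarity in $[-1, 1]$, and the code below buckets it into negative / neutral / positive.

```python
from textblob import TextBlob

for word in ['good', 'evil', 'stone']:            # arbitrary example words
    polarity = TextBlob(word).sentiment.polarity  # float in [-1, 1]
    print(word, polarity)
```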
###Code
columns_names = data.columns.tolist()
columns_names.pop(0) # drop the column holding the book names
columns_sentiment = []
for i in columns_names:
text = i
blob = TextBlob(text)
sentiment = blob.sentiment.polarity
if sentiment == 0.0:
sentiment = "neutral"
elif sentiment > 0:
sentiment = "positive"
else:
sentiment = "negative"
columns_sentiment.append(sentiment)
def sentiment(data):
books_sentiment = []
for row in data.values:
i = 0
pos, neu, neg = 0,0,0
        # count how many occurrences fall into each sentiment class
for x in row:
if columns_sentiment[i] == "positive":
pos = pos + x
elif columns_sentiment[i] == "negative":
neg = neg + x
else:
neu = neu + x
i = i + 1
        # result for this row
if pos >= neg:
if pos < neu or pos == neg:
books_sentiment.append("neutral")
else:
books_sentiment.append("positive")
else:
if neg < neu:
books_sentiment.append("neutral")
else:
books_sentiment.append("negative")
return books_sentiment
sentiment(grouped_data)
###Output
_____no_output_____
###Markdown
As it turns out, neutral words are the most numerous in every book. Of course, this is a naive method, and that has to be kept in mind when considering these results. Back to the ungrouped chapters How many chapters does each book have?
###Code
all_rows_books = data.label.to_list()
CH_sum = Counter(all_rows_books)
data_CH_sum = pd.DataFrame.from_dict(CH_sum, orient='index').reset_index()
data_CH_sum.rename(columns={'index': 'Księga', 0:'Rozdziały'}, inplace=True)
plot_bar(data_CH_sum['Księga'], data_CH_sum['Rozdziały'], 'Liczba rozdziałów w księdze')
###Output
_____no_output_____
###Markdown
YogaSutra is the second largest book and has the most chapters. The longest book, however, is BookOfEccleasiasticus, and it has "only" 50 chapters, so its chapters are probably the longest. Which books have the longest chapters?
###Code
CH_sum = pd.DataFrame(np.sum(data, axis = 1), columns = ['Liczba słów w rozdziałach ksiąg:'])
sorted_CH_sum = pd.concat([data['Unnamed: 0'], CH_sum], axis=1).sort_values(by=['Liczba słów w rozdziałach ksiąg:'], ascending=False)
sorted_CH_sum.head(10)
###Output
_____no_output_____
###Markdown
The single longest chapter belongs to the Buddhism book, but, as we assumed, it is BookOfEccleasiasticus, i.e. the Wisdom of Sirach, that has the longest chapters overall (6 of the 10 displayed). Let us now look at the shortest chapters.
###Code
sorted_CH_sum.sort_values(by=['Liczba słów w rozdziałach ksiąg:']).head(10)
###Output
_____no_output_____
###Markdown
The shortest chapters belong to the Upanishads. A curiosity: of the words selected by the creators of the data frame, chapter 14 of the Buddhism book contains none at all. Sentiment analysis for the chapters of the books
###Code
CH_data = data.drop(['Unnamed: 0', 'label'], axis = 1)
CH_sent = sentiment(CH_data)
Counter(CH_sent)
###Output
_____no_output_____ |
cal4-program-R.ipynb | ###Markdown
[for Statistics and Data Science](https://github.com/wikistat/Intro-Python) Programming in R **Summary** A brief overview of the syntax of the S language implemented in R: functions, control and iteration statements, the `apply` function. Introduction R is the *GNU* version of the S language, originally designed at *Bell Labs* by John Chambers starting in 1975, with a syntax very close to that of C. In March 2017 the [TIOBE](http://www.tiobe.com/index.php/content/paperinfo/tpci/index.html) index ranked it 14th, behind Java (1st) and Python (5th) but ahead of MATLAB (18) and SAS (21). Control structures It is important to understand that R, like Matlab, is an interpreted language and therefore slow, even very slow, when executing loops. Loops must absolutely be avoided whenever a construct involving matrix computations or the `apply`-type commands can be substituted. Conditional structures if(condition) {instructions} is the syntax that evaluates the instructions only if the condition is true. if(condition) { A } else { B } evaluates the instructions A if the condition is true and the instructions B otherwise. In the following example, the two commands are equivalent: if (x>0) y=x*log(x) else y=0 y=ifelse(x>0,x*log(x),0) Iteration structures These commands define loops to execute an instruction or a block of instructions several times. The three types of loop are: for (var in seq) { commands } while (condition) { commands } repeat { commands ; if (condition) break } In a `for` loop the number of iterations is fixed, whereas it can be infinite for `while` and `repeat` loops! The condition is evaluated before any execution in `while`, whereas `repeat` executes the commands at least once.
###Code
for (i in 1:10) print(i)
y=z=0;
for (i in 1:10) {
x=runif(1)
if (x>0.5) y=y+1
else z=z+1 }
y;z
for (i in c(2,4,5,8)) print(i)
x = rnorm(100)
y = ifelse(x>0, 1, -1) # condition
y;i=0
while (i<10){
print(i)
i=i+1}
###Output
_____no_output_____
###Markdown
**Questions** 1. What do you think of: for (i in 1:length(b)) a[i]=cos(b[i]) 2. Obtain the equivalent of y and z from the second `for` loop above, but without a loop. 3. In the sequence of commands below, first remove the `for` loop over `j`, then both loops.
###Code
M=matrix(1:20,nr=5,nc=4)
res=rep(0,5)
for (i in 1:5){
tmp=0
for (j in 1:4) {tmp = tmp + M[i,j]}
res[i]=tmp}
res
###Output
_____no_output_____
###Markdown
**Answers** 1. This loop is unnecessary. It is enough to write `a=cos(b)`. The basic element of R is the matrix, of which the vector is a special case. 2. One solution is to sum the `TRUE` elements of a logical vector
###Code
x=runif(10);y=sum(x>0.5)
z=10-y
y;z
###Output
_____no_output_____
###Markdown
3. Removing the loops - `for` loop over `j`
###Code
for (i in 1:5) res[i]=sum(M[i,])
res
###Output
_____no_output_____
###Markdown
- Both loops
###Code
res=apply(M,1,sum)
res
###Output
_____no_output_____
###Markdown
Functions Principles In R it is possible to build your own functions. It is advisable to write the function in a file `nomfonction.R`. source("nomfonction.R") loads the function into the working environment. It is also possible to define the function directly with the following syntax: nomfonction=function(arg1[=exp1],arg2[=exp2],...) { block of instructions sortie = ... return(sortie) } The braces mark the beginning and end of the function's source code, and the square brackets indicate that the default values of the arguments are optional. The object sortie contains the result(s) returned by the function; in particular, a list can be used to return several results. Examples Creating a basic function.
###Code
MaFonction=function(x){x+2}
ls()
MaFonction
MaFonction(3)
x = MaFonction(4);x
###Output
_____no_output_____
###Markdown
Handling parameters with a default value.
###Code
Fonction2=function(a,b=7){a+b}
Fonction2(2,b=3)
Fonction2(5)
###Output
_____no_output_____
###Markdown
Résultats multiples dans un objet de type liste.
###Code
Calcule=function(r){
p=2*pi*r;s=pi*r*r;
list(rayon=r,perimetre=p,
surface=s)}
resultat=Calcule(3)
resultat$ray
2*pi*resultat$r==resultat$perim
resultat$surface
###Output
_____no_output_____
###Markdown
**Questions** 1. Is a `list` object really necessary for the `Calcule()` function? 2. Write a function that computes the perimeter and area of a rectangle from the lengths l1 and l2 of its two sides. The function also returns the length and the width of the rectangle. 3. Write a function that computes the first *n* terms of the Fibonacci sequence: *u_1=0, u_2=1*, and for all *n>2, u_n=u_{n-1}+u_{n-2}*. 4. Use this function to compute the ratio of two consecutive terms. Plot this ratio as a function of the number of terms for *n=20*. What do you observe? Have you read *The Da Vinci Code*? 5. Write a function that removes the rows of a `data.frame` or matrix containing at least one missing value. **Answers** 1. Since the 3 elements to return are all numeric, a vector can suffice. 2. The `rectangle()` function (the function `rect()` already exists):
###Code
rectangle=function(l1,l2){
p=(l1+l2)*2
s=l1*l2
list(largeur=min(l1,l2),longueur=max(l1,l2),
perimetre=p,surface=s)}
rectangle(4,6)
###Output
_____no_output_____
###Markdown
3. Using the function to compute the first n terms of the Fibonacci sequence:
###Code
fibo=function(n){
res=rep(0,n);res[1]=0;res[2]=1
for (i in 3:n) res[i]=res[i-1]+res[i-2]
res}
# Compute the ratio of 2 consecutive terms
res=fibo(20)
ratio=res[2:20]/res[1:19]
plot(1:19,ratio,type="b")
###Output
_____no_output_____
###Markdown
The ratio tends to the golden ratio $\frac{1+\sqrt{5}}{2} \approx 1.618034$. 5. One way, among many others, to answer the question is to create a function `ligne.NA` that detects whether a vector contains at least one missing value; the `filtre.NA` function then removes the rows in question.
###Code
ligne.NA=function(vec){any(is.na(vec))}
filtre.NA=function(mat){
tmp = apply(mat,1,ligne.NA)
mat[!tmp,]}
# Apply it to a test matrix
matrice.test = matrix(1:40,nc=5)
matrice.test[2,5]=NA;matrice.test[4,2]=NA
matrice.test[7,1]=NA;matrice.test[7,5]=NA
matrice.test
filtre.NA(matrice.test)
###Output
_____no_output_____
###Markdown
`apply`-type commands As already explained, it is strongly recommended to avoid very time-consuming loops. The `apply` function and its variants on vectors, matrices or lists make it possible to apply the same function `FUN` to all the rows `(MARGIN=1)` or columns `(MARGIN=2)` of a matrix `MAT`: apply(MAT, MARGIN, FUN) The functions `lapply` and `sapply` evaluate the same function on all the elements of a vector or a list. lapply(X, FUN, ARG.COMMUN) applies the function `FUN` to all the elements of the vector or list X. The values of X are passed to the first argument of the function `FUN`. If the function FUN has several input parameters, they are specified in `ARG.COMMUN`. This function returns the result as a list. The function `sapply` is similar to `lapply` but the result is returned, when possible, as a vector. tapply(X,GRP,FUN,...) applies a function `FUN` to the sub-groups of a vector `X` defined by a variable `GRP` of type `factor`. Examples:
###Code
data(iris)
apply(iris[,1:4],2,sum)
lapply(iris[,1:4],sum)
sapply(iris[,1:4],sum)
tapply(iris[,1],iris[,5],sum)
###Output
_____no_output_____ |
benchmark_notebooks/budget/AL Budget 6000 CIFAR10 v2.ipynb | ###Markdown
Preface The locations requiring configuration for your experiment are commented in capital text. Setup Installations
###Code
!pip install sphinxcontrib-napoleon
!pip install sphinxcontrib-bibtex
!pip install -i https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ submodlib
!git clone https://github.com/decile-team/distil.git
!git clone https://github.com/circulosmeos/gdown.pl.git
import sys
sys.path.append("/content/distil/")
###Output
_____no_output_____
###Markdown
**Experiment-Specific Imports**
###Code
from distil.utils.models.resnet import ResNet18 # IMPORT YOUR MODEL HERE
###Output
_____no_output_____
###Markdown
Main Imports
###Code
import pandas as pd
import numpy as np
import copy
from torch.utils.data import Dataset, DataLoader, Subset, ConcatDataset
import torch.nn.functional as F
from torch import nn
from torchvision import transforms
from torchvision import datasets
from PIL import Image
import torch
import torch.optim as optim
from torch.autograd import Variable
import sys
sys.path.append('../')
import matplotlib.pyplot as plt
import time
import math
import random
import os
import pickle
from numpy.linalg import cond
from numpy.linalg import inv
from numpy.linalg import norm
from scipy import sparse as sp
from scipy.linalg import lstsq
from scipy.linalg import solve
from scipy.optimize import nnls
from distil.active_learning_strategies.badge import BADGE
from distil.active_learning_strategies.glister import GLISTER
from distil.active_learning_strategies.margin_sampling import MarginSampling
from distil.active_learning_strategies.entropy_sampling import EntropySampling
from distil.active_learning_strategies.random_sampling import RandomSampling
from distil.active_learning_strategies.gradmatch_active import GradMatchActive
from distil.active_learning_strategies.fass import FASS
from distil.active_learning_strategies.adversarial_bim import AdversarialBIM
from distil.active_learning_strategies.adversarial_deepfool import AdversarialDeepFool
from distil.active_learning_strategies.core_set import CoreSet
from distil.active_learning_strategies.least_confidence_sampling import LeastConfidenceSampling
from distil.active_learning_strategies.margin_sampling import MarginSampling
from distil.active_learning_strategies.bayesian_active_learning_disagreement_dropout import BALDDropout
from distil.utils.train_helper import data_train
from distil.utils.utils import LabeledToUnlabeledDataset
from google.colab import drive
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Checkpointing and Logs
###Code
class Checkpoint:
def __init__(self, acc_list=None, indices=None, state_dict=None, experiment_name=None, path=None):
# If a path is supplied, load a checkpoint from there.
if path is not None:
if experiment_name is not None:
self.load_checkpoint(path, experiment_name)
else:
raise ValueError("Checkpoint contains None value for experiment_name")
return
if acc_list is None:
raise ValueError("Checkpoint contains None value for acc_list")
if indices is None:
raise ValueError("Checkpoint contains None value for indices")
if state_dict is None:
raise ValueError("Checkpoint contains None value for state_dict")
if experiment_name is None:
raise ValueError("Checkpoint contains None value for experiment_name")
self.acc_list = acc_list
self.indices = indices
self.state_dict = state_dict
self.experiment_name = experiment_name
def __eq__(self, other):
# Check if the accuracy lists are equal
acc_lists_equal = self.acc_list == other.acc_list
# Check if the indices are equal
indices_equal = self.indices == other.indices
# Check if the experiment names are equal
experiment_names_equal = self.experiment_name == other.experiment_name
return acc_lists_equal and indices_equal and experiment_names_equal
def save_checkpoint(self, path):
# Get current time to use in file timestamp
timestamp = time.time_ns()
# Create the path supplied
os.makedirs(path, exist_ok=True)
# Name saved files using timestamp to add recency information
save_path = os.path.join(path, F"c{timestamp}1")
copy_save_path = os.path.join(path, F"c{timestamp}2")
# Write this checkpoint to the first save location
with open(save_path, 'wb') as save_file:
pickle.dump(self, save_file)
# Write this checkpoint to the second save location
with open(copy_save_path, 'wb') as copy_save_file:
pickle.dump(self, copy_save_file)
def load_checkpoint(self, path, experiment_name):
# Obtain a list of all files present at the path
timestamp_save_no = [f for f in os.listdir(path) if os.path.isfile(os.path.join(path, f))]
# If there are no such files, set values to None and return
if len(timestamp_save_no) == 0:
self.acc_list = None
self.indices = None
self.state_dict = None
return
# Sort the list of strings to get the most recent
timestamp_save_no.sort(reverse=True)
# Read in two files at a time, checking if they are equal to one another.
# If they are equal, then it means that the save operation finished correctly.
# If they are not, then it means that the save operation failed (could not be
# done atomically). Repeat this action until no possible pair can exist.
while len(timestamp_save_no) > 1:
# Pop a most recent checkpoint copy
first_file = timestamp_save_no.pop(0)
# Keep popping until two copies with equal timestamps are present
while True:
second_file = timestamp_save_no.pop(0)
# Timestamps match if the removal of the "1" or "2" results in equal numbers
if (second_file[:-1]) == (first_file[:-1]):
break
else:
first_file = second_file
# If there are no more checkpoints to examine, set to None and return
if len(timestamp_save_no) == 0:
self.acc_list = None
self.indices = None
self.state_dict = None
return
# Form the paths to the files
load_path = os.path.join(path, first_file)
copy_load_path = os.path.join(path, second_file)
# Load the two checkpoints
with open(load_path, 'rb') as load_file:
checkpoint = pickle.load(load_file)
with open(copy_load_path, 'rb') as copy_load_file:
checkpoint_copy = pickle.load(copy_load_file)
# Do not check this experiment if it is not the one we need to restore
if checkpoint.experiment_name != experiment_name:
continue
# Check if they are equal
if checkpoint == checkpoint_copy:
# This checkpoint will suffice. Populate this checkpoint's fields
# with the selected checkpoint's fields.
self.acc_list = checkpoint.acc_list
self.indices = checkpoint.indices
self.state_dict = checkpoint.state_dict
return
# Instantiate None values in acc_list, indices, and model
self.acc_list = None
self.indices = None
self.state_dict = None
def get_saved_values(self):
return (self.acc_list, self.indices, self.state_dict)
def delete_checkpoints(checkpoint_directory, experiment_name):
# Iteratively go through each checkpoint, deleting those whose experiment name matches.
timestamp_save_no = [f for f in os.listdir(checkpoint_directory) if os.path.isfile(os.path.join(checkpoint_directory, f))]
for file in timestamp_save_no:
delete_file = False
# Get file location
file_path = os.path.join(checkpoint_directory, file)
if not os.path.exists(file_path):
continue
# Unpickle the checkpoint and see if its experiment name matches
with open(file_path, "rb") as load_file:
checkpoint_copy = pickle.load(load_file)
if checkpoint_copy.experiment_name == experiment_name:
delete_file = True
# Delete this file only if the experiment name matched
if delete_file:
os.remove(file_path)
#Logs
def write_logs(logs, save_directory, rd):
file_path = save_directory + 'run_'+'.txt'
with open(file_path, 'a') as f:
f.write('---------------------\n')
f.write('Round '+str(rd)+'\n')
f.write('---------------------\n')
for key, val in logs.items():
if key == 'Training':
f.write(str(key)+ '\n')
for epoch in val:
f.write(str(epoch)+'\n')
else:
f.write(str(key) + ' - '+ str(val) +'\n')
###Output
_____no_output_____
###Markdown
AL Loop
###Code
def train_one(full_train_dataset, initial_train_indices, test_dataset, net, n_rounds, budget, args, nclasses, strategy, save_directory, checkpoint_directory, experiment_name):
# Split the full training dataset into an initial training dataset and an unlabeled dataset
train_dataset = Subset(full_train_dataset, initial_train_indices)
initial_unlabeled_indices = list(set(range(len(full_train_dataset))) - set(initial_train_indices))
unlabeled_dataset = Subset(full_train_dataset, initial_unlabeled_indices)
# Set up the AL strategy
if strategy == "random":
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = RandomSampling(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args)
elif strategy == "entropy":
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = EntropySampling(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args)
elif strategy == "margin":
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = MarginSampling(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args)
elif strategy == "least_confidence":
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = LeastConfidenceSampling(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args)
elif strategy == "badge":
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = BADGE(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args)
elif strategy == "coreset":
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = CoreSet(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args)
elif strategy == "fass":
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = FASS(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args)
elif strategy == "glister":
strategy_args = {'batch_size' : args['batch_size'], 'lr': args['lr'], 'device':args['device']}
strategy = GLISTER(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args, typeOf='rand', lam=0.1)
elif strategy == "adversarial_bim":
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = AdversarialBIM(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args)
elif strategy == "adversarial_deepfool":
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = AdversarialDeepFool(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args)
elif strategy == "bald":
strategy_args = {'batch_size' : args['batch_size'], 'device':args['device']}
strategy = BALDDropout(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset), net, nclasses, strategy_args)
# Define acc initially
acc = np.zeros(n_rounds+1)
initial_unlabeled_size = len(unlabeled_dataset)
initial_round = 1
# Define an index map
index_map = np.array([x for x in range(initial_unlabeled_size)])
# Attempt to load a checkpoint. If one exists, then the experiment crashed.
training_checkpoint = Checkpoint(experiment_name=experiment_name, path=checkpoint_directory)
rec_acc, rec_indices, rec_state_dict = training_checkpoint.get_saved_values()
# Check if there are values to recover
if rec_acc is not None:
# Restore the accuracy list
for i in range(len(rec_acc)):
acc[i] = rec_acc[i]
# Restore the indices list and shift those unlabeled points to the labeled set.
index_map = np.delete(index_map, rec_indices)
# Record initial size of the training dataset
        initial_seed_size = len(train_dataset)
restored_unlabeled_points = Subset(unlabeled_dataset, rec_indices)
train_dataset = ConcatDataset([train_dataset, restored_unlabeled_points])
remaining_unlabeled_indices = list(set(range(len(unlabeled_dataset))) - set(rec_indices))
unlabeled_dataset = Subset(unlabeled_dataset, remaining_unlabeled_indices)
# Restore the model
net.load_state_dict(rec_state_dict)
# Fix the initial round
initial_round = (len(train_dataset) - initial_seed_size) // budget + 1
# Ensure loaded model is moved to GPU
if torch.cuda.is_available():
net = net.cuda()
strategy.update_model(net)
strategy.update_data(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset))
dt = data_train(train_dataset, net, args)
else:
if torch.cuda.is_available():
net = net.cuda()
dt = data_train(train_dataset, net, args)
acc[0] = dt.get_acc_on_set(test_dataset)
print('Initial Testing accuracy:', round(acc[0]*100, 2), flush=True)
logs = {}
logs['Training Points'] = len(train_dataset)
logs['Test Accuracy'] = str(round(acc[0]*100, 2))
write_logs(logs, save_directory, 0)
#Updating the trained model in strategy class
strategy.update_model(net)
# Record the training transform and test transform for disabling purposes
train_transform = full_train_dataset.transform
test_transform = test_dataset.transform
##User Controlled Loop
for rd in range(initial_round, n_rounds+1):
print('-------------------------------------------------')
print('Round', rd)
print('-------------------------------------------------')
sel_time = time.time()
full_train_dataset.transform = test_transform # Disable any augmentation while selecting points
idx = strategy.select(budget)
full_train_dataset.transform = train_transform # Re-enable any augmentation done during training
sel_time = time.time() - sel_time
print("Selection Time:", sel_time)
selected_unlabeled_points = Subset(unlabeled_dataset, idx)
train_dataset = ConcatDataset([train_dataset, selected_unlabeled_points])
remaining_unlabeled_indices = list(set(range(len(unlabeled_dataset))) - set(idx))
unlabeled_dataset = Subset(unlabeled_dataset, remaining_unlabeled_indices)
# Update the index map
index_map = np.delete(index_map, idx, axis = 0)
print('Number of training points -', len(train_dataset))
# Start training
strategy.update_data(train_dataset, LabeledToUnlabeledDataset(unlabeled_dataset))
dt.update_data(train_dataset)
t1 = time.time()
clf, train_logs = dt.train(None)
t2 = time.time()
acc[rd] = dt.get_acc_on_set(test_dataset)
logs = {}
logs['Training Points'] = len(train_dataset)
logs['Test Accuracy'] = str(round(acc[rd]*100, 2))
logs['Selection Time'] = str(sel_time)
        logs['Training Time'] = str(t2 - t1)
logs['Training'] = train_logs
write_logs(logs, save_directory, rd)
strategy.update_model(clf)
print('Testing accuracy:', round(acc[rd]*100, 2), flush=True)
# Create a checkpoint
used_indices = np.array([x for x in range(initial_unlabeled_size)])
used_indices = np.delete(used_indices, index_map).tolist()
round_checkpoint = Checkpoint(acc.tolist(), used_indices, clf.state_dict(), experiment_name=experiment_name)
round_checkpoint.save_checkpoint(checkpoint_directory)
print('Training Completed')
return acc
###Output
_____no_output_____
###Markdown
CIFAR10 Parameter Definitions Parameters related to the specific experiment are placed here. You should examine each and modify them as needed.
###Code
data_set_name = "CIFAR10" # DSET NAME HERE
dataset_root_path = '../downloaded_data/'
net = ResNet18() # MODEL HERE
# MODIFY AS NECESSARY
logs_directory = '/content/gdrive/MyDrive/colab_storage/logs/'
checkpoint_directory = '/content/gdrive/MyDrive/colab_storage/check/'
model_directory = "/content/gdrive/MyDrive/colab_storage/model/"
experiment_name = "CIFAR10 BUDGET 6000"
initial_seed_size = 1000 # INIT SEED SIZE HERE
training_size_cap = 25000 # TRAIN SIZE CAP HERE
budget = 6000 # BUDGET HERE
# CHANGE ARGS AS NECESSARY
args = {'n_epoch':300, 'lr':float(0.01), 'batch_size':20, 'max_accuracy':float(0.99), 'islogs':True, 'isreset':True, 'isverbose':True, 'device':'cuda'}
# Train on approximately the full dataset given the budget contraints
n_rounds = (training_size_cap - initial_seed_size) // budget
###Output
_____no_output_____
###Markdown
Initial Loading and Training You may choose to train a new initial model or to continue to load a specific model. If this notebook is being executed in Colab, you should consider whether or not you need the gdown line.
###Code
# Mount drive containing possible saved model and define file path.
colab_model_storage_mount = "/content/gdrive"
drive.mount(colab_model_storage_mount)
# Retrieve the model from a download link and save it to the drive
os.makedirs(logs_directory, exist_ok = True)
os.makedirs(checkpoint_directory, exist_ok = True)
os.makedirs(model_directory, exist_ok = True)
model_directory = F"{model_directory}/{data_set_name}"
#!/content/gdown.pl/gdown.pl "INSERT SHARABLE LINK HERE" "INSERT DOWNLOAD LOCATION HERE (ideally, same as model_directory)" # MAY NOT NEED THIS LINE IF NOT CLONING MODEL FROM COLAB
# Load the dataset
if data_set_name == "CIFAR10":
train_transform = transforms.Compose([transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])
test_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))])
full_train_dataset = datasets.CIFAR10(dataset_root_path, download=True, train=True, transform=train_transform, target_transform=torch.tensor)
test_dataset = datasets.CIFAR10(dataset_root_path, download=True, train=False, transform=test_transform, target_transform=torch.tensor)
nclasses = 10 # NUM CLASSES HERE
elif data_set_name == "CIFAR100":
train_transform = transforms.Compose([transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.5071, 0.4867, 0.4408), (0.2675, 0.2565, 0.2761))])
test_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5071, 0.4867, 0.4408), (0.2675, 0.2565, 0.2761))])
full_train_dataset = datasets.CIFAR100(dataset_root_path, download=True, train=True, transform=train_transform, target_transform=torch.tensor)
test_dataset = datasets.CIFAR100(dataset_root_path, download=True, train=False, transform=test_transform, target_transform=torch.tensor)
nclasses = 100 # NUM CLASSES HERE
elif data_set_name == "MNIST":
image_dim=28
train_transform = transforms.Compose([transforms.RandomCrop(image_dim, padding=4), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
test_transform = transforms.Compose([transforms.Resize((image_dim, image_dim)), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
full_train_dataset = datasets.MNIST(dataset_root_path, download=True, train=True, transform=train_transform, target_transform=torch.tensor)
test_dataset = datasets.MNIST(dataset_root_path, download=True, train=False, transform=test_transform, target_transform=torch.tensor)
nclasses = 10 # NUM CLASSES HERE
elif data_set_name == "FashionMNIST":
train_transform = transforms.Compose([transforms.RandomCrop(28, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
test_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))]) # Use mean/std of MNIST
full_train_dataset = datasets.FashionMNIST(dataset_root_path, download=True, train=True, transform=train_transform, target_transform=torch.tensor)
test_dataset = datasets.FashionMNIST(dataset_root_path, download=True, train=False, transform=test_transform, target_transform=torch.tensor)
nclasses = 10 # NUM CLASSES HERE
elif data_set_name == "SVHN":
train_transform = transforms.Compose([transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
test_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))]) # ImageNet mean/std
full_train_dataset = datasets.SVHN(dataset_root_path, split='train', download=True, transform=train_transform, target_transform=torch.tensor)
test_dataset = datasets.SVHN(dataset_root_path, split='test', download=True, transform=test_transform, target_transform=torch.tensor)
nclasses = 10 # NUM CLASSES HERE
elif data_set_name == "ImageNet":
train_transform = transforms.Compose([transforms.RandomCrop(32, padding=4), transforms.RandomHorizontalFlip(), transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))])
test_transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225))]) # ImageNet mean/std
# Note: Not automatically downloaded due to size restrictions. Notebook needs to be adapted to run on local device.
full_train_dataset = datasets.ImageNet(dataset_root_path, download=False, split='train', transform=train_transform, target_transform=torch.tensor)
test_dataset = datasets.ImageNet(dataset_root_path, download=False, split='val', transform=test_transform, target_transform=torch.tensor)
nclasses = 1000 # NUM CLASSES HERE
args['nclasses'] = nclasses
dim = full_train_dataset[0][0].shape
# Seed the random number generator for reproducibility and create the initial seed set
np.random.seed(42)
initial_train_indices = np.random.choice(len(full_train_dataset), replace=False, size=initial_seed_size)
# COMMENT OUT ONE OR THE OTHER IF YOU WANT TO TRAIN A NEW INITIAL MODEL
load_model = False
#load_model = True
# Only train a new model if one does not exist.
if load_model:
net.load_state_dict(torch.load(model_directory))
initial_model = net
else:
dt = data_train(Subset(full_train_dataset, initial_train_indices), net, args)
initial_model, _ = dt.train(None)
torch.save(initial_model.state_dict(), model_directory)
print("Training for", n_rounds, "rounds with budget", budget, "on unlabeled set size", training_size_cap)
###Output
_____no_output_____
###Markdown
Random Sampling
###Code
strategy = "random"
strat_logs = logs_directory+F'{data_set_name}/{strategy}/'
os.makedirs(strat_logs, exist_ok = True)
train_one(full_train_dataset, initial_train_indices, test_dataset, copy.deepcopy(initial_model), n_rounds, budget, args, nclasses, strategy, strat_logs, checkpoint_directory, F"{experiment_name}_{strategy}")
###Output
_____no_output_____
###Markdown
Entropy
###Code
strategy = "entropy"
strat_logs = logs_directory+F'{data_set_name}/{strategy}/'
os.makedirs(strat_logs, exist_ok = True)
train_one(full_train_dataset, initial_train_indices, test_dataset, copy.deepcopy(initial_model), n_rounds, budget, args, nclasses, strategy, strat_logs, checkpoint_directory, F"{experiment_name}_{strategy}")
###Output
_____no_output_____
###Markdown
GLISTER
###Code
strategy = "glister"
strat_logs = logs_directory+F'{data_set_name}/{strategy}/'
os.makedirs(strat_logs, exist_ok = True)
train_one(full_train_dataset, initial_train_indices, test_dataset, copy.deepcopy(initial_model), n_rounds, budget, args, nclasses, strategy, strat_logs, checkpoint_directory, F"{experiment_name}_{strategy}")
###Output
_____no_output_____
###Markdown
FASS
###Code
strategy = "fass"
strat_logs = logs_directory+F'{data_set_name}/{strategy}/'
os.makedirs(strat_logs, exist_ok = True)
train_one(full_train_dataset, initial_train_indices, test_dataset, copy.deepcopy(initial_model), n_rounds, budget, args, nclasses, strategy, strat_logs, checkpoint_directory, F"{experiment_name}_{strategy}")
###Output
_____no_output_____
###Markdown
BADGE
###Code
strategy = "badge"
strat_logs = logs_directory+F'{data_set_name}/{strategy}/'
os.makedirs(strat_logs, exist_ok = True)
train_one(full_train_dataset, initial_train_indices, test_dataset, copy.deepcopy(initial_model), n_rounds, budget, args, nclasses, strategy, strat_logs, checkpoint_directory, F"{experiment_name}_{strategy}")
###Output
_____no_output_____
###Markdown
CoreSet
###Code
strategy = "coreset"
strat_logs = logs_directory+F'{data_set_name}/{strategy}/'
os.makedirs(strat_logs, exist_ok = True)
train_one(full_train_dataset, initial_train_indices, test_dataset, copy.deepcopy(initial_model), n_rounds, budget, args, nclasses, strategy, strat_logs, checkpoint_directory, F"{experiment_name}_{strategy}")
###Output
_____no_output_____
###Markdown
Least Confidence
###Code
strategy = "least_confidence"
strat_logs = logs_directory+F'{data_set_name}/{strategy}/'
os.makedirs(strat_logs, exist_ok = True)
train_one(full_train_dataset, initial_train_indices, test_dataset, copy.deepcopy(initial_model), n_rounds, budget, args, nclasses, strategy, strat_logs, checkpoint_directory, F"{experiment_name}_{strategy}")
###Output
_____no_output_____
###Markdown
Margin
###Code
strategy = "margin"
strat_logs = logs_directory+F'{data_set_name}/{strategy}/'
os.makedirs(strat_logs, exist_ok = True)
train_one(full_train_dataset, initial_train_indices, test_dataset, copy.deepcopy(initial_model), n_rounds, budget, args, nclasses, strategy, strat_logs, checkpoint_directory, F"{experiment_name}_{strategy}")
###Output
_____no_output_____ |
notebooks/harmonic_oscillator_bound_states.ipynb | ###Markdown
An Emulator Example with a Perturbed Oscillator Potential Author: Jordan Melendez I would have liked to make this much nicer and better documented, but I ran out of time. I hope someone can still find value in this example. Start by setting the parameters.
###Code
ell = 0 # Partial wave
n_max = 10 # Oscillator basis size
mass = 1 # mass
b = 1 # Oscillator parameter
# Gaussian quadrature points
r, dr = leggauss_shifted(100, 0, 10)
# Gridded points (for plotting)
r_grid = np.linspace(0, 3, 301)
###Output
_____no_output_____
###Markdown
Create all the operators and wave functions that will be needed later on
###Code
osc_wfs = np.stack([ho_radial_wf(r=r_grid, n=i+1, ell=ell, b=b) for i in range(n_max+1)], axis=-1)
# Make Gaussian perturbations to the oscillator
H1_params = [0.5, 2, 4]
H1_r_grid = np.stack([np.exp(-(r_grid/a)**2) for a in H1_params], axis=-1)
H1_r = np.stack([np.exp(-(r/a)**2) for a in H1_params], axis=-1)
# Constant term: expected shape = (N_ho_basis, N_ho_basis)
H0 = np.diag([ho_energy(n, ell, omega=1) for n in range(1, n_max+2)])
# Linear term: expected shape = (N_ho_basis, N_ho_basis, n_parameters)
H1 = np.stack([
convert_from_r_to_ho_basis(H1_r_i, n_max=n_max, ell=ell, r=r, dr=dr, b=b)
for H1_r_i in H1_r.T
], axis=-1)
R = convert_from_r_to_ho_basis(r, n_max=n_max, ell=ell, r=r, dr=dr, b=b)
def oscillator_potential(r, mass, omega):
return 0.5 * mass * (omega * r) ** 2
rng = np.random.default_rng(1)
p_train = rng.uniform(-5, 5, size=(6, H1.shape[-1]))
p_valid = rng.uniform(-5, 5, size=(50, H1.shape[-1]))
###Output
_____no_output_____
###Markdown
This is the emulator object that is doing all the work here! Check it out in the `emulate` package. Now `fit` will train the emulator. Later we will use `predict` to get the output energies either using the emulator or the exact solver.
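For readers unfamiliar with what `fit`/`predict` do internally, the sketch below illustrates the reduced-basis (eigenvector-continuation) construction that emulators of this kind are typically built on. It is an illustration under that assumption, not the actual internals of the `emulate` package, and it assumes a matrix `X` whose columns are training ground-state vectors (as `ham.X` appears to be, given the later `osc_wfs @ ham.X`).

```python
# Illustrative sketch only -- not the `emulate` package internals.
import numpy as np
from scipy.linalg import eigh

def emulate_ground_state(H0, H1, params, X):
    """Project H(params) = H0 + H1 @ params onto span(X) and solve the small
    generalized eigenvalue problem (X holds training vectors as columns)."""
    H = H0 + H1 @ params               # full Hamiltonian at this parameter point
    H_sub = X.conj().T @ H @ X         # Hamiltonian projected into the training subspace
    N_sub = X.conj().T @ X             # overlap (norm) matrix of the non-orthogonal basis
    evals, evecs = eigh(H_sub, N_sub)  # small generalized eigenproblem
    return evals[0], evecs[:, 0]       # ground-state energy and basis coefficients
```

Once the cell below has defined and fit `ham`, such a call would look like `emulate_ground_state(H0, H1, p_valid[0], ham.X)`; the small projected matrices make each evaluation far cheaper than diagonalising the full Hamiltonian.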
###Code
ham = BoundStateHamiltonian('Osc w/Gaussian Perturbation', H0=H0, H1=H1)
ham.fit(p_train)
###Output
_____no_output_____
###Markdown
Also create one that doesn't use exact training wave functions as the basis. Instead, use the lowest 6 harmonic oscillator wave functions as the basis. This is like no-core shell model (NCSM), so that is how we will label it here.
###Code
ham_ncsm = BoundStateHamiltonian('Osc w/Gaussian Perturbation', H0=H0, H1=H1)
ham_ncsm.fit(p_train)
# Fake the basis wave functions as harmonic oscillator states
# Because we are in the HO basis, these are just vectors with 1's and 0's
ham_ncsm.setup_projections(np.eye(*ham.X.shape))
E_pred_ncsm = np.stack([ham_ncsm.predict(p_i, use_emulator=True) for p_i in p_valid])
###Output
_____no_output_____
###Markdown
Get the energies and the basis wave functions used for "training" the emulator. They are in the HO basis; multiplying by the harmonic oscillator wave functions in position space transforms them to position space.
###Code
E_pred = np.stack([ham.predict(p_i, use_emulator=True) for p_i in p_valid])
E_full = np.stack([ham.predict(p_i, use_emulator=False) for p_i in p_valid])
wf_train = osc_wfs @ ham.X
###Output
_____no_output_____
###Markdown
Plot the training wave functions in position space
###Code
fig, axes = plt.subplots(len(p_train)//2, 2+(len(p_train)%2), figsize=(4, 3.5), sharey=True, sharex=True)
V0 = oscillator_potential(r_grid, mass=mass, omega=1)
for i, p_i in enumerate(p_train):
ax = axes.ravel()[i]
ax.plot(r_grid, wf_train[:, i]+ham.E_train[i], label=r'$u_0(r)$')
ax.axhline(0, 0, 1, c='lightgrey', lw=0.8, zorder=0)
V = V0 + H1_r_grid @ p_i
ax.plot(r_grid, V, c='k', lw=1, label=r"$V(\vec{a})$")
ax.text(
0.94, 0.11, fr"$\vec{{a}}_{{{i}}}$",
transform=ax.transAxes, ha='right', va='bottom',
bbox=dict(facecolor='w', boxstyle='round'),
)
axes[0, -1].legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
for ax in axes[-1]:
ax.set_xlabel(r"$r$")
ax.set_xticks([0, 1, 2, 3])
ax.set_ylim(-6, 6)
fig.suptitle("Wave Functions in Oscillator + Gaussian Perturbed Potential")
fig.savefig("wave_functions_efficient_basis.png")
from emulate.graphs import PRED_KWARGS, BASIS_KWARGS, FULL_KWARGS
###Output
_____no_output_____
###Markdown
Emulate the wave functions at unseen parameter locations. Also predict them exactly and compare
###Code
fig, ax = plt.subplots(figsize=(4, 2))
ax.plot(r_grid, wf_train[:, 0], label='Train', **BASIS_KWARGS)
ax.plot(r_grid, wf_train, **BASIS_KWARGS)
for i in range(3):
if i == 0:
label_full = 'Exact'
label_pred = 'Emulated'
else:
label_full = label_pred = None
ax.plot(r_grid, osc_wfs @ ham.exact_wave_function(p_valid[i]), label=label_full, **FULL_KWARGS)
ax.plot(r_grid, osc_wfs @ ham.emulate_wave_function(p_valid[i]), label=label_pred, **PRED_KWARGS)
ax.set_xlabel(r"$r$")
ax.set_ylabel(r"$u_0(r)$")
ax.set_title("Emulated Radial Wave Functions")
ax.legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
fig.savefig("perturbed_oscillator_efficient.png")
fig, ax = plt.subplots(figsize=(4, 2))
ax.plot(r_grid, osc_wfs[:, 0], label='Train', **BASIS_KWARGS)
ax.plot(r_grid, osc_wfs, **BASIS_KWARGS)
for i in range(3):
if i == 0:
label_full = 'Exact'
label_pred = 'Emulated'
else:
label_full = label_pred = None
ax.plot(r_grid, osc_wfs @ ham_ncsm.exact_wave_function(p_valid[i]), label=label_full, **FULL_KWARGS)
ax.plot(r_grid, osc_wfs @ ham_ncsm.emulate_wave_function(p_valid[i]), label=label_pred, **PRED_KWARGS)
ax.set_xlabel(r"$r$")
ax.set_ylabel(r"$u_0(r)$")
ax.set_title("Emulated Radial Wave Functions (NCSM)")
ax.legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
fig.savefig("perturbed_oscillator_ncsm.png")
###Output
_____no_output_____
###Markdown
How wrong were we?
###Code
fig, ax = plt.subplots(figsize=(4, 2))
for i in range(3):
# ax.plot(
# r_grid,
# osc_wfs @ (ham.exact_wave_function(p_valid[i])-ham.emulate_wave_function(p_valid[i])),
# ls='-', label='Efficient' if i == 0 else None
# )
ax.plot(
r_grid,
osc_wfs @ (ham_ncsm.exact_wave_function(p_valid[i])-ham_ncsm.emulate_wave_function(p_valid[i])),
ls='-', label='NCSM' if i == 0 else None
)
ax.legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
ax.set_xlabel(r"$r$")
ax.set_title("Wave Function Residuals")
ax.axhline(0, 0, 1, c='lightgrey', lw=0.8, zorder=0)
fig.savefig("perturbed_oscillator_ground_state_wave_function_residuals_no_ec.png")
fig, ax = plt.subplots(figsize=(4, 2))
for i in range(3):
ax.plot(
r_grid,
osc_wfs @ (ham_ncsm.exact_wave_function(p_valid[i])-ham_ncsm.emulate_wave_function(p_valid[i])),
ls='-', label='NCSM' if i == 0 else None
)
ax.plot(
r_grid,
osc_wfs @ (ham.exact_wave_function(p_valid[i])-ham.emulate_wave_function(p_valid[i])),
ls='--', label='Efficient' if i == 0 else None
)
ax.legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
ax.set_xlabel(r"$r$")
ax.set_title("Wave Function Residuals")
ax.axhline(0, 0, 1, c='lightgrey', lw=0.8, zorder=0)
fig.savefig("perturbed_oscillator_ground_state_wave_function_residuals.png")
###Output
_____no_output_____
###Markdown
The above charts are pretty convincing that the efficient emulator is better. Let's see it on a log plot too.
###Code
fig, ax = plt.subplots(figsize=(4, 2))
for i in range(3):
ax.semilogy(
r_grid,
np.abs(osc_wfs @ (ham.exact_wave_function(p_valid[i])-ham.emulate_wave_function(p_valid[i]))),
ls='-', label='Efficient' if i == 0 else None
)
ax.semilogy(
r_grid,
np.abs(osc_wfs @ (ham_ncsm.exact_wave_function(p_valid[i])-ham_ncsm.emulate_wave_function(p_valid[i]))),
ls='--', label='NCSM' if i == 0 else None
)
ax.legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
ax.set_xlabel(r"$r$")
ax.set_title("Wave Function Absolute Residuals")
###Output
_____no_output_____
###Markdown
Let's also compare to a GP trained on the energies.
###Code
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel as C
kernel = C(1) * RBF(length_scale=[1,1,1])
gp = GaussianProcessRegressor(kernel=kernel)
gp.fit(p_train, ham.E_train)
E_pred_gp, E_std_gp = gp.predict(p_valid, return_std=True)
fig, ax = plt.subplots(figsize=(4, 2))
ax.semilogy(np.arange(len(E_full)), np.abs(E_pred_gp-E_full), label='GP', ls='-.')
ax.legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
ax.set_title("Ground-State Energy Residuals")
ax.set_xlabel("Validation Index")
fig.savefig("perturbed_oscillator_ground_state_energy_residuals_gp_only.png")
fig, ax = plt.subplots(figsize=(4, 2))
ax.semilogy(np.arange(len(E_full)), np.abs(E_pred_gp-E_full), label='GP', ls='-.')
ax.semilogy((E_pred_ncsm-E_full), label='NCSM', ls='--')
ax.legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
ax.set_title("Ground-State Energy Residuals")
ax.set_xlabel("Validation Index")
fig.savefig("perturbed_oscillator_ground_state_energy_residuals_no_ec.png")
###Output
_____no_output_____
###Markdown
Uncertainty Quantification
###Code
n_valid_1d = 100
p0_valid_1d = np.linspace(-1, 1, n_valid_1d)
p_valid_1d = np.stack([p0_valid_1d, 4.5*np.ones(n_valid_1d), -3.55 * np.ones(n_valid_1d)], axis=-1)
p_valid_1d.shape
E_valid_1d_true = np.array([ham.predict(p, use_emulator=False) for p in p_valid_1d])
E_valid_1d_pred = np.array([ham.predict(p, use_emulator=True) for p in p_valid_1d])
psi_valid_1d_true = np.array([ham.exact_wave_function(p) for p in p_valid_1d])
psi_valid_1d_pred = np.array([ham.emulate_wave_function(p) for p in p_valid_1d])
abs_residual_1d = np.abs(E_valid_1d_pred - E_valid_1d_true)
psi_residual_1d = np.linalg.norm(psi_valid_1d_pred - psi_valid_1d_true, axis=-1)
stdv_psi_valid_1d = np.sqrt(np.array([ham.variance_expensive(p) for p in p_valid_1d]))
stdv_E_valid_1d = stdv_psi_valid_1d ** 2
fig, axes = plt.subplots(2, 1, figsize=(3.5, 4), sharex=True)
ax = axes[0]
ax.plot(p0_valid_1d, psi_residual_1d, label=r"$||\,|{\Delta\psi}\rangle\,||$", **FULL_KWARGS)
ax.plot(p0_valid_1d, stdv_psi_valid_1d * np.average(psi_residual_1d/stdv_psi_valid_1d), label="Error Emulator", **PRED_KWARGS)
ax.legend()
ax.set_title("Bound State Error Emulator")
ax = axes[1]
ax.plot(p0_valid_1d, abs_residual_1d, label=r"$|\Delta E|$", **FULL_KWARGS)
ax.plot(p0_valid_1d, stdv_E_valid_1d * np.average(abs_residual_1d/stdv_E_valid_1d), label="Error Emulator", **PRED_KWARGS)
ax.set_xlabel("$a_0$ Parameter Value")
ax.legend()
fig.savefig("bound_state_error_emulator.png")
###Output
_____no_output_____
###Markdown
Operators Everything above has used the Hamiltonian, since we're looking at energies and wave functions. But there is also an operator class that takes the trained `BoundStateHamiltonian` object and an operator as given. It can then predict the expectation value exactly or emulate it.
###Code
op = BoundStateOperator(name='R', ham=ham, op0=R)
op_ncsm = BoundStateOperator(name='R (NCSM)', ham=ham_ncsm, op0=R)
op
###Output
_____no_output_____
###Markdown
It has similar methods as the Hamiltonian object. `predict` returns the expectation value of the operator.
###Code
R_full = np.stack([op.predict(p_i, use_emulator=False) for p_i in p_valid])
R_pred = np.stack([op.predict(p_i, use_emulator=True) for p_i in p_valid])
R_pred_ncsm = np.stack([op_ncsm.predict(p_i, use_emulator=True) for p_i in p_valid])
###Output
_____no_output_____
###Markdown
Train a GP on the radius at the same training points as the other emulators.
###Code
kernel = C(1) * RBF(length_scale=[1,1,1])
gp_op = GaussianProcessRegressor(kernel=kernel)
gp_op.fit(p_train, np.stack([op.predict(p_i, use_emulator=False) for p_i in p_train]))
R_pred_gp, R_std_gp = gp_op.predict(p_valid, return_std=True)
###Output
_____no_output_____
###Markdown
Check it out!
###Code
fig, ax = plt.subplots(figsize=(4, 2))
ax.semilogy(np.arange(len(R_full)), np.abs(R_pred_gp-R_full), label='GP', ls='-.')
ax.legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
ax.set_title("Ground-State Radius Residuals")
ax.set_xlabel("Validation Index")
fig.savefig("perturbed_oscillator_ground_state_radius_residuals_gp_only.png")
fig, ax = plt.subplots(figsize=(4, 2))
ax.semilogy(np.arange(len(R_full)), np.abs(R_pred_gp-R_full), label='GP', ls='-.')
ax.semilogy(np.abs(R_pred_ncsm-R_full), label='NCSM', ls='--')
ax.legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
ax.set_title("Ground-State Radius Residuals")
ax.set_xlabel("Validation Index")
fig.savefig("perturbed_oscillator_ground_state_radius_residuals_no_ec.png")
fig, ax = plt.subplots(figsize=(4, 2))
ax.semilogy(np.arange(len(R_full)), np.abs(R_pred_gp-R_full), label='GP', ls='-.')
ax.semilogy(np.abs(R_pred_ncsm-R_full), label='NCSM', ls='--')
ax.semilogy(np.abs(R_pred-R_full), label='Efficient')
ax.legend(loc='upper left', bbox_to_anchor=(1.03,1), borderaxespad=0)
ax.set_title("Ground-State Radius Residuals")
ax.set_xlabel("Validation Index")
fig.savefig("perturbed_oscillator_ground_state_radius_residuals.png")
###Output
_____no_output_____ |
geoapps/inversion/airborne_electromagnetics/notebook.ipynb | ###Markdown
SimPEG EM1D Inversion This application provides an interface to the open-source [SimPEG](https://simpeg.xyz/) package for the inversion of electromagnetic (EM) data using a laterally constrained 1D approach. - The application supports several known EM systems, for both frequency and time domain. - Conductivity models (S/m) are stored along vertically draped surfaces. New user? Visit the [**Getting Started**](https://geoapps.readthedocs.io/en/latest/content/installation.html) page. [**Online Documentation**](https://geoapps.readthedocs.io/en/latest/content/applications/em1d_inversion.html) *Click on the cell below and press **Shift+Enter** to run the application*
###Code
from geoapps.inversion.airborne_electromagnetics.application import InversionApp
# Start the inversion widget
app = InversionApp()
app()
###Output
_____no_output_____
###Markdown
Plot convergence curve Display the misfit and regularization as a function of iteration by changing the path to the inversion workspace (`*.geoh5`).
###Code
from geoapps.utils.plotting import plot_convergence_curve
out = plot_convergence_curve(r"..\..\..\assets\Temp\EM1DInversion_.geoh5")
display(out)
###Output
_____no_output_____ |
Homeworks/HW3/Q3/Q3.ipynb | ###Markdown
HW3,Q3 Ghazaleh Zehtab Q3.a
###Code
import pandas as pd
import matplotlib as plt
import numpy as np
import plot_utils
import seaborn as sns
phone=pd.read_csv("smartphone.csv")
phone
import statsmodels.api as sm
tab = pd.crosstab(phone['Capacity'],phone['Company'])
tab
table = sm.stats.Table(tab)
table
###Output
_____no_output_____
###Markdown
Q3.b
###Code
tab1 = pd.crosstab([phone.Company,phone.inch],phone.Weight)
tab1
###Output
_____no_output_____
###Markdown
Q3.c
###Code
plot_utils.contingency_table(phone.Company,phone.OS)
###Output
_____no_output_____ |
S1/MAPSI/TME/TME2/enonce/.ipynb_checkpoints/TME2Durand-checkpoint.ipynb | ###Markdown
MAPSI - TME - Probability/statistics refresher I- The Galton board (mandatory) I.1- Bernoulli distribution Write a function `bernouilli: float -> int` that takes the parameter $p \in [0,1]$ as argument and randomly returns $0$ (with probability $1-p$) or $1$ (with probability $p$).
###Code
import numpy as np

def bernouilli(p):
    # Return 1 with probability p and 0 with probability 1 - p
    if np.random.random() < p:
        return 1
    else:
        return 0
###Output
_____no_output_____
###Markdown
I.2- Binomial distribution Write a function `binomiale: int, float -> int` that takes as arguments an integer $n$ and $p \in [0,1]$ and randomly returns a number drawn from the distribution ${\cal B}(n,p)$.
###Code
def binomiale(n, p):
acc=0
for i in range(n):
acc += bernouilli(p)
return acc
###Output
_____no_output_____
###Markdown
I.3- Histogram of the binomial distribution In this question we consider a Galton board of height $n$. Recall that horizontal pegs (orange) are nailed to this board as shown in the accompanying figure. Blue balls fall from the top of the board and, at each level, end up directly above one of the pegs. They then fall either to the left or to the right of the peg, until they reach the bottom of the board. The bottom consists of small boxes whose edges are represented by the grey vertical lines. Each box collects the balls that passed to the right of the orange pegs exactly the same number of times. For example, the leftmost box collects the balls that never passed to the right of a peg, the one just to its right collects the balls that passed to the right of a peg exactly once and to the left every other time, and so on. The distribution of the balls in the boxes therefore follows a binomial distribution ${\cal B}(n,0.5)$. Write a script that creates an array of $1000$ cells whose contents correspond to $1000$ draws from the binomial distribution ${\cal B}(n,0.5)$. To see the distribution of the balls in the Galton board, plot the histogram of this array. You may use the hist function of matplotlib.pyplot:
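A quick way to check the shape is to overlay the theoretical ${\cal B}(n, 0.5)$ pmf on the histogram. The sketch below is self-contained and uses numpy's built-in binomial sampler rather than the hand-written `binomiale`, purely for speed:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import binom

n, p, nb_billes = 100, 0.5, 10000           # same settings as the cell below
tab_check = np.random.binomial(n, p, size=nb_billes)
k = np.arange(n + 1)
plt.hist(tab_check, bins=np.arange(n + 2) - 0.5, density=True)  # empirical, normalised
plt.plot(k, binom.pmf(k, n, p), 'r.-')                          # theoretical pmf
plt.show()
```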
###Code
import matplotlib.pyplot as plt
nb_billes = 10000
nb_etape = 100
proba = 0.5
tab = np.array([])
for i in range(nb_billes):
tab = np.append(tab, binomiale(nb_etape, proba))
plt.hist(tab, nb_etape, width=1.1)
plt.title("Histogramme répartition des billets dans la planche de Galton")
plt.show()
###Output
_____no_output_____
###Markdown
For the number of bins, compute the number of distinct values in your array.
###Code
nb_unique = np.size(np.unique(tab))
print("Nombre de barres de l'histogramme:", nb_unique)
###Output
Nombre de barres de l'histogramme: 37
###Markdown
II- Visualising independence (mandatory) II.1- Centred normal distribution We want to visualise the density function of the normal distribution. To do so, we create a set of $k$ points $(x_i, y_i)$, for equally spaced $x_i$ ranging from $-2\sigma$ to $2\sigma$, with $y_i$ the value of the density function of the centred normal distribution of variance $\sigma^2$, in other words ${\cal N}(0,\sigma^2)$. Write a function `normale : int, float -> float np.array` which, given an odd integer parameter `k` and a real parameter `sigma`, returns the numpy array of the $k$ values $y_i$. So that the numpy array is symmetric, an exception must be raised if $k$ is even.
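Concretely, each $y_i$ is the value of the centred normal density at $x_i$:

$$y_i=\frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{x_i^{2}}{2\sigma^{2}}\right),\qquad x_i\in[-2\sigma,\,2\sigma].$$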
###Code
import random
import math
def normale(k, sigma):
if(k%2==0):
raise ValueError("le nombre k doit etre impair")
else:
res = np.zeros(shape=(k, 2))
x = np.linspace(-2*sigma, 2*sigma, k)
for i in range(k):
x_val = x[i]
y = (np.exp(-0.5*(x_val/sigma)**2))/(sigma*np.sqrt(2*np.pi))
# print("x:",x)
# print("y:",y)
res[i][0] = x_val
res[i][1] = y
return res
###Output
_____no_output_____
###Markdown
Check the validity of your function by displaying the generated points in a figure using the plot function.
###Code
nb_points = 2001
sigma = 10
points = normale(nb_points, sigma)
plt.plot(points[:,0], points[:,1])
plt.title("Loi normale entre -2 sigma et +2 sigma")
plt.show()
###Output
_____no_output_____
###Markdown
II.2- Affine probability distribution In this question we consider a generalisation of the uniform distribution: an affine distribution, i.e. the density function is a straight line, but not necessarily horizontal, as shown in the accompanying figure. Write a function `proba_affine : int, float -> float np.array` which, as in the previous question, generates a set of $k$ points $y_i, i=0,\ldots,k-1$, representing this distribution (parameterised by its slope `slope`). Here too we check that the integer $k$ is odd. If the slope equals $0$, i.e. if the distribution is uniform, each point $y_i$ should equal $\frac{1}{k}$ (so that $\sum y_i=1$). If the slope is non-zero, it suffices to choose, $\forall i=0,\ldots,k-1$, $$y_i=\frac{1}{k}+\left(i-\frac{k-1}{2}\right)\times slope$$ You can easily check that, here too, $\sum y_i=1$. So that the distribution remains non-negative (which is the least one can ask of a probability distribution), the slope must be neither too large nor too small. The code below raises an exception if the slope is too steep and indicates the maximum possible slope.
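The normalisation claimed above follows because the slope corrections are antisymmetric around the middle index:

$$\sum_{i=0}^{k-1}y_i=\sum_{i=0}^{k-1}\frac{1}{k}+slope\sum_{i=0}^{k-1}\Big(i-\frac{k-1}{2}\Big)=1+slope\times 0=1.$$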
###Code
def proba_affine(k, slope):
if k % 2 == 0:
raise ValueError("le nombre k doit etre impair")
if abs ( slope ) > 2. / ( k * k ):
raise ValueError("la pente est trop raide : pente max = " +
str ( 2. / ( k * k ) ) )
res = np.zeros(shape=(k, 2))
for i in range(k):
res[i][0] = i
res[i][1] = (1/k) + (i - ((k-1)/2)) * slope
return res
nb_points = 101
slope = 1e-4
points = proba_affine(nb_points, slope)
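# Sanity check suggested by the statement: the y_i must sum to 1
assert np.isclose(points[:, 1].sum(), 1.0)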
plt.plot(points[:,0], points[:,1])
plt.title("Courbe distribution de probabilité affine")
plt.show()
###Output
_____no_output_____
###Markdown
II.3- Joint distribution. Write a function `Pxy : float np.array , float np.array -> float np.2D-array` which, given two 1-dimensional numpy arrays of real numbers generated by the functions of the previous questions and representing two probability distributions $P(A)$ and $P(B)$, returns the joint distribution $P(A,B)$ as a 2-dimensional numpy array of real numbers, assuming that $A$ and $B$ are independent random variables. For instance, if:
###Code
PA = np.array ( [0.2, 0.7, 0.1] )
PB = np.array ( [0.4, 0.4, 0.2] )
###Output
_____no_output_____
###Markdown
then `Pxy(PA,PB)` will return the array:```np.array([[ 0.08, 0.08, 0.04], [ 0.28, 0.28, 0.14], [ 0.04, 0.04, 0.02]])```
###Code
def Pxy(x,y):
n1 = np.size(x)
n2 = np.size(y)
res = np.zeros(shape=(n1, n2))
for i in range(n1):
for j in range(n2):
res[i][j] = x[i]*y[j]
return res
print("Distribution jointe de PA PB:\n",Pxy(PA, PB))
###Output
Distribution jointe de PA PB:
[[0.08 0.08 0.04]
[0.28 0.28 0.14]
[0.04 0.04 0.02]]
###Markdown
II.4- Displaying the joint distribution. The code below displays in 3D a joint probability generated by the previous function. Run it with a joint probability obtained by combining a normal distribution and an affine distribution. If the `%matplotlib notebook` command works, you can interact with the plot. If the window content is empty, resize the window and the content should appear. Click inside the window and move the mouse while keeping the button pressed in order to rotate the surface, and look at it from different angles. Repeat the experiment with a joint probability obtained from two normal distributions. Try to understand what probabilistic independence means visually. You can also repeat the experiment with the logarithm of the joint distributions.
###Code
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
# try `%matplotlib notebook` to interact with the 3D visualisation
def dessine ( P_jointe ):
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
x = np.linspace ( -3, 3, P_jointe.shape[0] )
y = np.linspace ( -3, 3, P_jointe.shape[1] )
X, Y = np.meshgrid(x, y)
ax.plot_surface(X, Y, P_jointe, rstride=1, cstride=1 )
ax.set_xlabel('A')
ax.set_ylabel('B')
ax.set_zlabel('P(A) * P(B)')
plt.show ()
dessine(Pxy(PA,PB))
k = 101
sigma = 10
slope = 1e-5
dessine(Pxy(normale(k, sigma)[:, 1], proba_affine(k, slope)[:, 1]))
###Output
_____no_output_____
###Markdown
III- Conditional independences (mandatory). In this exercise, we consider four boolean random variables $X$, $Y$, $Z$ and $T$, together with their joint distribution $P(X,Y,Z,T)$, encoded in python as follows:
###Code
# creation de P(X,Y,Z,T)
P_XYZT = np.array([[[[ 0.0192, 0.1728],
[ 0.0384, 0.0096]],
[[ 0.0768, 0.0512],
[ 0.016 , 0.016 ]]],
[[[ 0.0144, 0.1296],
[ 0.0288, 0.0072]],
[[ 0.2016, 0.1344],
[ 0.042 , 0.042 ]]]])
###Output
_____no_output_____
###Markdown
Thus, $\forall (x,y,z,t) \in \{0,1\}^4$, `P_XYZT[x][y][z][t]` corresponds to $P(X=x,Y=y,Z=z,T=t)$ or, in short, to $P(x,y,z,t)$. III.1- Independence of X and T given (Y,Z). We want to test whether the random variables $X$ and $T$ are independent conditionally on $(Y,Z)$. We therefore have to check that, in the distribution $P$, $$P(X,T|Y,Z)=P(X|Y,Z)\cdot P(T|Y,Z)$$To do so, first compute from `P_XYZT` the array `P_YZ` representing the distribution $P(Y,Z)$. Recall that $$P(Y,Z)=\sum_{X,T} P(X,Y,Z,T)$$The array `P_YZ` therefore has two dimensions, the first corresponding to $Y$ and the second to $Z$. If you have made no mistake, you should obtain the following array: ```np.array([[ 0.336, 0.084], [ 0.464, 0.116]])```Thus $P(Y=0,Z=1)=$ `P_YZ[0][1]` $=0.084$
###Code
P_YZ = np.zeros((2,2))
for x in range(P_XYZT.shape[0]):
for y in range(P_XYZT.shape[1]):
for z in range(P_XYZT.shape[2]):
for t in range(P_XYZT.shape[3]):
P_YZ[y][z] += P_XYZT[x][y][z][t]
print(P_YZ)
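# Vectorized check (sketch): summing out X (axis 0) and T (axis 3) gives the same table
assert np.allclose(P_YZ, P_XYZT.sum(axis=(0, 3)))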
###Output
[[0.336 0.084]
[0.464 0.116]]
###Markdown
Next, compute the array `P_XTcondYZ` representing the distribution $P(X,T|Y,Z)$. This array therefore has 4 dimensions, each corresponding to one of the random variables. Moreover, the values of `P_XTcondYZ` are obtained with the conditional probability formula: $$P(X,T|Y,Z)=\frac{P(X,Y,Z,T)}{P(Y,Z)}$$
###Code
P_XTcondYZ = np.zeros((2,2,2,2))
for x in range(P_XYZT.shape[0]):
for y in range(P_XYZT.shape[1]):
for z in range(P_XYZT.shape[2]):
for t in range(P_XYZT.shape[3]):
P_XTcondYZ[x][y][z][t] += P_XYZT[x][y][z][t] / P_YZ[y][z]
print(P_XTcondYZ)
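# Broadcasting alternative (sketch): divide P(X,Y,Z,T) by P(Y,Z) aligned on the Y and Z axes
assert np.allclose(P_XTcondYZ, P_XYZT / P_YZ[np.newaxis, :, :, np.newaxis])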
###Output
[[[[0.05714286 0.51428571]
[0.45714286 0.11428571]]
[[0.16551724 0.11034483]
[0.13793103 0.13793103]]]
[[[0.04285714 0.38571429]
[0.34285714 0.08571429]]
[[0.43448276 0.28965517]
[0.36206897 0.36206897]]]]
###Markdown
From `P_XTcondYZ`, compute the 3-dimensional arrays `P_XcondYZ` and `P_TcondYZ` representing respectively the distributions $P(X|Y,Z)$ and $P(T|Y,Z)$. Recall that $$P(X|Y,Z)=\sum_T P(X,T|Y,Z)$$
###Code
P_XcondYZ = np.zeros((2,2,2))
P_TcondYZ = np.zeros((2,2,2))
for x in range(P_XYZT.shape[0]):
for y in range(P_XYZT.shape[1]):
for z in range(P_XYZT.shape[2]):
for t in range(P_XYZT.shape[3]):
P_XcondYZ[x][y][z] += P_XTcondYZ[x][y][z][t]
P_TcondYZ[y][z][t] += P_XTcondYZ[x][y][z][t]
print(P_XcondYZ)
print(P_TcondYZ)
###Output
[[[0.57142857 0.57142857]
[0.27586207 0.27586207]]
[[0.42857143 0.42857143]
[0.72413793 0.72413793]]]
[[[0.1 0.9]
[0.8 0.2]]
[[0.6 0.4]
[0.5 0.5]]]
###Markdown
Finally, test whether $X$ and $T$ are independent conditionally on $(Y,Z)$: if this is indeed the case, we must have $$P(X,T|Y,Z)=P(X|Y,Z)\times P(T|Y,Z)$$
###Code
# A = np.round(P_XcondYZ * P_TcondYZ, 5) # Erreur !
A = np.zeros((2, 2, 2, 2))
ecart = 0
epsilon = 1e-3
for x in range(P_XYZT.shape[0]):
for y in range(P_XYZT.shape[1]):
for z in range(P_XYZT.shape[2]):
for t in range(P_XYZT.shape[3]):
# A[x][y][z][t] = P_XcondYZ[x][y][z] * P_TcondYZ[y][z][t]
ecart += np.abs((P_XcondYZ[x][y][z] * P_TcondYZ[y][z][t]) - P_XTcondYZ[x][y][z][t])
if ecart < epsilon:
print("indépendant")
else:
print("pas indépendant")
# A = np.round(A, 5)
# B = np.round(P_XTcondYZ, 5)
# print(A.shape)
# print("=======")
# print(B.shape)
# print(A == B)
###Output
indépendant
###Markdown
III.2- Independence of X and (Y,Z). We now want to determine whether $X$ and $(Y,Z)$ are independent. To do so, start by computing from `P_XYZT` the array `P_XYZ` representing the distribution $P(X,Y,Z)$. Then compute from `P_XYZ` the arrays `P_X` and `P_YZ` representing respectively the distributions $P(X)$ and $P(Y,Z)$. Recall that $$P(X)=\sum_Y\sum_Z P(X,Y,Z)$$If you have made no mistake, P_X should be equal to the following array: ```np.array([ 0.4, 0.6])```
###Code
P_XYZ = np.zeros((2,2,2))
for x in range(P_XYZT.shape[0]):
for y in range(P_XYZT.shape[1]):
for z in range(P_XYZT.shape[2]):
for t in range(P_XYZT.shape[3]):
P_XYZ[x][y][z] += P_XYZT[x][y][z][t]
print(P_XYZ)
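# Vectorized check (sketch): summing out T (axis 3) gives the same table
assert np.allclose(P_XYZ, P_XYZT.sum(axis=3))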
# P_X = np.zeros((2))
# for x in range(P_XYZT.shape[0]):
# for y in range(P_XYZT.shape[1]):
# for z in range(P_XYZT.shape[2]):
# for t in range(P_XYZT.shape[3]):
# P_X[x] += P_XYZT[x][y][z][t]
# print(P_X)
P_X = np.zeros((2))
for x in range(P_XYZT.shape[0]):
for y in range(P_XYZT.shape[1]):
for z in range(P_XYZT.shape[2]):
P_X[x] += P_XYZ[x][y][z]
print(P_X)
###Output
[[[0.192 0.048]
[0.128 0.032]]
[[0.144 0.036]
[0.336 0.084]]]
[0.4 0.6]
###Markdown
Finally, if $X$ and $(Y,Z)$ are indeed independent, we must have $$P(X,Y,Z)=P(X)\times P(Y,Z)$$
###Code
A = np.zeros((2, 2, 2))
ecart = 0
epsilon = 1e-3
for x in range(P_XYZ.shape[0]):
for y in range(P_XYZ.shape[1]):
for z in range(P_XYZ.shape[2]):
A[x][y][z] = P_X[x] * P_YZ[y][z]
ecart += np.abs(P_X[x] * P_YZ[y][z] - P_XYZ[x][y][z])
if ecart < epsilon:
print("indépendant")
else:
print("pas indépendant")
# A = np.round(A, 5)
# B = np.round(P_XYZ, 5)
# print(A.shape)
# print("=======")
# print(B.shape)
# print(A == B)
###Output
pas indépendant
###Markdown
IV- Conditional independences and memory consumption (mandatory). The goal of this exercise is to exploit conditional probabilities and conditional independences in order to decompose a joint probability into a product of "small conditional probabilities". This makes it possible to store large joint probabilities on "standard" computers. Throughout the exercise, you will start from a joint probability and progressively build a program that identifies these conditional independences. To keep things simple, in the rest of this exercise we consider a set $X_0,…,X_n$ of binary random variables (they can only take 2 values: 0 and 1). Simplifying the code: using pyAgrum. Manipulating probabilities, and operations on complex probabilities, is difficult with the usual tools. The main difficulty is certainly the problem of mapping axes to random variables. `pyAgrum` provides `Potential` objects, which are multidimensional arrays whose axes are identified by variables and are therefore unambiguous. For instance, after initialising the `Potential` `pXYZT`:
###Code
import pyAgrum as gum
import pyAgrum.lib.notebook as gnb
X,Y,Z,T=[gum.LabelizedVariable(x,x,2) for x in "XYZT"]
pXYZT=gum.Potential().add(T).add(Z).add(Y).add(X)
pXYZT[:]=[[[[ 0.0192, 0.1728],
[ 0.0384, 0.0096]],
[[ 0.0768, 0.0512],
[ 0.016 , 0.016 ]]],
[[[ 0.0144, 0.1296],
[ 0.0288, 0.0072]],
[[ 0.2016, 0.1344],
[ 0.042 , 0.042 ]]]]
###Output
_____no_output_____
###Markdown
We can then use the `margSumOut` method, which removes variables by summation: `p.margSumOut(['X','Y'])` computes $\sum_{X,Y} p$. The answer to question III.1 can therefore be computed as follows:
###Code
pXT_YZ=pXYZT/pXYZT.margSumOut(['X','T'])
pX_YZ=pXT_YZ.margSumOut(['T'])
pT_YZ=pXT_YZ.margSumOut(['X'])
if pXT_YZ==pX_YZ*pT_YZ:
print("=> X et T sont indépendants conditionnellemnt à Y et Z")
else:
print("=> pas d'indépendance trouvée")
###Output
=> X et T sont indépendants conditionnellement à Y et Z
###Markdown
The answer to question III.2 can be computed as follows:
###Code
pXYZ=pXYZT.margSumOut("T")
pYZ=pXYZ.margSumOut("X")
pX=pXYZ.margSumOut(["Y","Z"])
if pXYZ==pX*pYZ:
print("=> X et YZ sont indépendants")
else:
print("=> pas d'indépendance trouvée")
gnb.sideBySide(pXYZ,pX,pYZ,pX*pYZ,
captions=['$P(X,Y,Z)$','$P(X)$','$P(Y,Z)$','$P(X)\cdot P(Y,Z)$'])
###Output
_____no_output_____
###Markdown
`asia.txt` contains the description of a joint probability over a set of $8$ binary random variables (256 parameters). The file is built from the following website: `http://www.bnlearn.com/bnrepository/`. The following code reads this file and retrieves the joint probability (as a `gum.Potential`) that it contains:
###Code
def read_file ( filename ):
"""
Renvoie les variables aléatoires et la probabilité contenues dans le
fichier dont le nom est passé en argument.
"""
Pres = gum.Potential ()
vars=[]
with open ( filename, 'r' ) as fic:
# on rajoute les variables dans le potentiel
nb_vars = int ( fic.readline () )
for i in range ( nb_vars ):
name, domsize = fic.readline ().split ()
vars.append(name)
variable = gum.LabelizedVariable(name,name,int (domsize))
Pres.add(variable)
# on rajoute les valeurs de proba dans le potentiel
cpt = []
for line in fic:
cpt.append ( float(line) )
Pres.fillWith( cpt )
return vars,Pres
vars,Pjointe=read_file('asia.txt')
# afficher Pjointe est un peu délicat (retire le commentaire de la ligne suivante)
# Pjointe
print('Les variables : '+str(vars))
# Noter qu'il existe une fonction margSumIn qui, à l'inverse de MargSumOut, élimine
# toutes les variables qui ne sont pas dans les arguments
Pjointe.margSumIn(['tuberculosis?','lung_cancer?'])
###Output
_____no_output_____
###Markdown
IV.1- Conditional independence test. Using the `margSumIn` method (see just above), write a function `conditional_indep: Potential,str,str,list[str]->bool` that returns true if the conditional independence can be read from the `Potential`. For instance, the call `conditional_indep(Pjointe,'bronchitis?', 'positive_Xray?',['tuberculosis?','lung_cancer?'])` checks whether `bronchitis?` is independent of `positive_Xray?` conditionally on `tuberculosis?` and `lung_cancer?`. In general, we check that $X$ and $Y$ are independent conditionally on $Z_1,\cdots,Z_d$ through the equality:$$P(X,Y|Z_1,\cdots,Z_d)=P(X|Z_1,\cdots,Z_d)\cdot P(Y|Z_1,\cdots,Z_d)$$These three probabilities can all be computed from the joint distribution $P(X,Y,Z_1,\cdots,Z_d)$. Remark: checking the equality `P==Q` of 2 `Potential`s can be problematic when both are the results of computations: a small discrepancy may exist. A better test is to check `(P-Q).abs().max()<epsilon` with `epsilon` small enough.
###Code
epsilon = 1e-3
# def conditional_indep(P,X,Y,Zs): # VERIFIER
# if len(Zs) != 0:
# pX_Zs = P.margSumIn(X) / P.margSumIn(Zs)
# pY_Zs = P.margSumIn(Y) / P.margSumIn(Zs)
# PXY_Zs = P.margSumIn([X, Y]) / P.margSumIn(Zs)
# A = pX_Zs * pY_Zs
# B = PXY_Zs
# print("A:",A)
# print("B:",B)
# print("A-B",A-B)
# test = (A-B).abs().max() < epsilon
# if(test):
# return "=> X et Y sont indépendants conditionnellemnt à Zs"
# else:
# return "=> X et Y ne sont pas indépendants conditionnellemnt à Zs"
# else:
# pX = P.margSumIn(X)
# pY = P.margSumIn(Y)
# PXY = P.margSumIn([X, Y])
# A = pX * pY
# B = PXY
# print("A:",A)
# print("B:",B)
# print("A-B",A-B)
# test = (A-B).abs().max() < epsilon
# if(test):
# return "=> X et Y sont indépendants conditionnellemnt à Zs"
# else:
# return "=> X et Y ne sont pas indépendants conditionnellemnt à Zs"
def conditional_indep(P,X,Y,Zs):
variables = [X, Y]
for var in Zs:
variables.append(var)
if len(Zs) != 0:
pXYZs = P.margSumIn(variables) # Besoin de cette valeur pour calculer PXY_ZS
pXY_Zs = pXYZs/pXYZs.margSumOut([X, Y]) # On enleve X Y pour avoir X et Y conditionnellement à Zs
pX_Zs = pXY_Zs.margSumOut([Y])
pY_Zs = pXY_Zs.margSumOut([X])
A = pX_Zs * pY_Zs
B = pXY_Zs
# print("A:",A)
# print("B:",B)
# print("A-B",A-B)
test = (A-B).abs().max() < epsilon
if(test):
return 1 # Independant
else:
return 0 # Pas independant
else:
pX = P.margSumIn(X)
pY = P.margSumIn(Y)
PXY = P.margSumIn([X, Y])
A = pX * pY
B = PXY
# print("A:",A)
# print("B:",B)
# print("A-B",A-B)
test = (A-B).abs().max() < epsilon
if(test):
return 1
else:
return 0
conditional_indep(Pjointe,
'bronchitis?',
'positive_Xray?',
['tuberculosis?','lung_cancer?'])
conditional_indep(Pjointe,
'bronchitis?',
'visit_to_Asia?',
[])
###Output
_____no_output_____
###Markdown
IV.2- Compact factorisation of a joint distribution. We know that if a set of random variables ${\cal S} = \{X_{i_0},\ldots,X_{i_{n-1}}\}$ can be partitioned into two subsets $\cal K$ and $\cal L$ (i.e. such that ${\cal K} \cap {\cal L} = \emptyset$ and ${\cal K} \cup {\cal L} = \{X_{i_0},\ldots,X_{i_{n-1}}\}$) such that a variable $X_{i_n}$ is independent of ${\cal L}$ conditionally on ${\cal K}$, then:$$P(X_{i_n}|X_{i_0},\ldots,X_{i_{n-1}}) = P(X_{i_n} | {\cal K},{\cal L}) = P(X_{i_n} | {\cal K})$$This is what we saw in lecture 2 (cf. the definition of conditional probabilities). This formula is interesting because it reduces the amount of memory needed to store $P(X_{i_n}|X_{i_0},\ldots,X_{i_{n-1}})$: it is indeed enough to store only $P(X_{i_n} | {\cal K})$ to keep the same information. Write a function `compact_conditional_proba: Potential,str-> Potential` which, given a joint probability $P(X_{i_0},\ldots,X_{i_n})$ and a random variable $X_{i_n}$, returns this conditional probability $P(X_{i_n} | {\cal K})$. To do so, we suggest the following iterative algorithm:```K=SFor every X in K: If X is independent of Xin conditionally on K\{X} then Remove X from Kreturn P(Xin|K)```Three small hints: 1- The previous function `conditional_indep` should be useful... 2- The list of the names of the variables of a `Potential` is available through the attribute ```P.var_names```3- To make the display easier to read, it may be wise to put the variable $X_{i_n}$ first in the list of the variables of the Potential, which can be done with the following code: ```proba = proba.putFirst(Xin)```
###Code
def compact_conditional_proba(P, X):
K = P.var_names # Ensemble des variables dans la Pjointe
K.remove(X)
    for k in list(K):  # iterate over a copy, since K may shrink during the loop
        tmp_K = K.copy()
        tmp_K.remove(k)  # condition on K \ {k}, as in the algorithm above
        if conditional_indep(P, k, X, tmp_K):  # check whether k is independent of X given K \ {k}
            K.remove(k)  # if so, k can be dropped from the conditioning set
var = []
var.append(X)
for k in K:
var.append(k)
PX_K = P.margSumIn(var) / P.margSumIn(K)
PX_K = PX_K.putFirst(X)
return PX_K
compact_conditional_proba(Pjointe,"visit_to_Asia?")
compact_conditional_proba(Pjointe,"dyspnoea?")
###Output
_____no_output_____
###Markdown
IV.3- Building a bayesian network. A bayesian network is simply the decomposition of a joint probability distribution into a product of conditional probabilities: you saw in the lectures that $P(A,B) = P(A|B)P(B)$, whatever the disjoint sets of random variables $A$ and $B$. Taking $A = X_n$ and $B = \{X_0,\ldots,X_{n-1}\}$, we therefore obtain:$$P(X_0,\ldots,X_n) = P(X_n | X_0,\ldots,X_{n-1}) P(X_0,\ldots,X_{n-1})$$We can repeat this operation on the right-hand term with $A = X_{n-1}$ and $B=\{X_0,\ldots,X_{n-2}\}$, and so on. Hence, by induction:$$P(X_0,\ldots,X_n) = P(X_0) \times \prod_{i=1}^n P(X_i | X_0,\ldots,X_{i-1} )$$If we apply the function `compact_conditional_proba` to each term $P(X_i | X_0,\ldots,X_{i-1} )$, we obtain a decomposition:$$P(X_0,\ldots,X_n) = P(X_0) \times \prod_{i=1}^n P(X_i | {\cal K_i})$$with ${\cal K_i} \subseteq \{X_0,\ldots,X_{i-1}\}$. This decomposition is said to be ''compact'' because in practice storing it requires much less memory than storing the joint distribution. This is what is called a bayesian network. Write a function `create_bayesian_network : Potential -> Potential list` which, given a joint probability, returns the list of the $P(X_i | {\cal K_i})$. To do so, just apply the following algorithm:```liste = [] P = P(X_0,...,X_n)For i from n down to 0: compute Q = compact_conditional_proba(P,X_i) display the list of the variables of Q append Q to liste remove X_i from P by marginalisationreturn liste```It is interesting here to look at the displayed variables of Q: since all the variables are binary, Q only requires (2 to the power of the number of these variables) real numbers. Thus a probability over 3 variables only requires $2^3=8$ real numbers.
###Code
def create_bayesian_network(P):
res = []
liste_names = P.var_names
for X_i in reversed(liste_names):
print(X_i)
Q = compact_conditional_proba(P, X_i)
# print(Q)
res.append(Q)
P = P.margSumOut(X_i)
return res
create_bayesian_network(Pjointe)
###Output
visit_to_Asia?
tuberculosis?
smoking?
lung_cancer?
tuberculosis_or_lung_cancer?
bronchitis?
positive_Xray?
dyspnoea?
###Markdown
IV.4- Compression gain. We want to measure the gain in memory consumption obtained with your decomposition. If `P` is a `Potential`, then `P.toarray().size` equals the size (the number of parameters) of the table `P`. Compute the number of parameters needed to store the joint probability read from the file `asia.txt`, as well as the sum of the numbers of parameters of the tables you created with your function `create_bayesian_network`.
###Code
# votre code
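# A possible solution (sketch), using P.toarray().size as suggested above:
taille_jointe = Pjointe.toarray().size
tables = create_bayesian_network(Pjointe)
taille_compacte = sum(Q.toarray().size for Q in tables)
print("Number of parameters of the joint distribution:", taille_jointe)
print("Total number of parameters of the compact conditional tables:", taille_compacte)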
###Output
_____no_output_____
###Markdown
V- Practical applications (optional). The decomposition technique you have just seen is indeed used in practice. You can see the gain that can be obtained on various probability distributions from the website: http://www.bnlearn.com/bnrepository/ Click on the name of the dataset you want to visualise and download its .bif or .dsl file. To visualise the content of the file, you will use pyAgrum. The following code will then let you visualise your dataset: the value given after "domainSize" is the size of the original joint probability (in number of parameters) and the one after "dim" is the size of the probability in compact form (the sum of the sizes of the compact conditional probabilities).
###Code
# chargement de pyAgrum
import pyAgrum as gum
import pyAgrum.lib.notebook as gnb
# chargement du fichier bif ou dsl
bn = gum.loadBN ( "asia.bif" )
# affichage de la taille des probabilités jointes compacte et non compacte
print(bn)
# affichage graphique du réseau bayésien
bn
###Output
BN{nodes: 8, arcs: 8, domainSize: 256, dim: 36}
|
Cambridge Street Safety.ipynb | ###Markdown
Load the data
###Code
# Imports used throughout this notebook (df_to_geojson is assumed to come from a local helper module)
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

file_path_historical = './data/Police_Department_Crash_Data_-_Historical.csv'
file_path_updated = './data/Police_Department_Crash_Data_-_Updated.csv'
data_historical = pd.read_csv(file_path_historical)
data_updated = pd.read_csv(file_path_updated)
###Output
_____no_output_____
###Markdown
Data wrangling
###Code
# Fix column names to match across datasets
data_historical = data_historical.rename(columns={"Day Of Week": "Day of Week", "Steet Name": "Street Name"})
# Drop rows that have no coordinates
data_historical = data_historical.dropna(subset=['Latitude', 'Longitude'])
# Drop the raw Coordinates column (Latitude/Longitude are kept)
data_historical = data_historical.drop(columns=['Coordinates'])
# Drop duplicates, need to specify columns to match against because there are slight variations for some reason
data_historical = data_historical.drop_duplicates(subset=['Date Time', 'Day of Week', 'Object 1', 'Object 2'])
# Drop rows without location
data_updated = data_updated.dropna(subset=['Location'])
# Drop rows without coordinates (these just use city center)
data_updated = data_updated.drop(data_updated[data_updated['Location'].apply(lambda x: len(x.split('\n')) != 3)].index)
# Create Coordinate column
data_updated['Coordinates'] = data_updated['Location'].apply(lambda x: x.split('\n')[2])
# Create Latitude and Longitude columns
data_updated['Latitude'] = data_updated['Coordinates'].apply(lambda x: float(x.split(',')[0].replace('(', '')))
data_updated['Longitude'] = data_updated['Coordinates'].apply(lambda x: float(x.split(',')[1].replace(')', '')))
# Drop the no longer needed Coordinates column
data_updated = data_updated.drop(columns=['Coordinates'])
# Drop duplicates, need to specify columns to match against because there are slight variations for some reason
data_updated = data_updated.drop_duplicates(subset=['Date Time', 'Day of Week', 'Object 1', 'Object 2'])
# Combine datasets
data = pd.concat([data_historical, data_updated], ignore_index=True)
# Filter for only the interesting and filled in data
data = data[['Date Time', 'Day of Week', 'Object 1', 'Object 2', 'Street Number', 'Street Name', 'Cross Street', 'Location', 'Latitude', 'Longitude', 'May Involve Pedestrian', 'May involve cyclist']]
# Remove duplicates
# Caused by the two datasets overlapped reporting and lousy data
# Need to specify columns to match against because there are slight variations between the two datasets
# Keep the last rather than first since the updated dataset is more detailed, but worse coordinates
# data = data.drop_duplicates(subset=['Date Time', 'Day of Week', 'Object 1', 'Object 2'], keep='last')
data = data.groupby('Date Time').agg({
'Day of Week': 'last',
'Object 1': 'last',
'Object 2': 'last',
'Street Number': 'last',
'Street Name': 'last',
'Cross Street': 'last',
'Location': 'first',
'Latitude': 'first',
'Longitude': 'first',
'May Involve Pedestrian': 'last',
'May involve cyclist': 'last'
}).reset_index()
# Convert data types for easier analysis
data['Date Time'] = data['Date Time'].apply(lambda x: pd.to_datetime(x))
# Create new columns for analysis
data['Coordinates'] = data.apply(lambda x: str(x["Latitude"]) + ',' + str(x["Longitude"]), axis=1)
data['Hour of Day'] = data['Date Time'].apply(lambda x: x.hour)
data['Year'] = data['Date Time'].apply(lambda x: x.year)
data['Month of Year'] = data['Date Time'].apply(lambda x: x.month)
data['Objects Involved'] = data.apply(lambda x: str(x["Object 1"]) + '-' + str(x["Object 2"]), axis=1)
data['Bicycle Involved'] = data.apply(lambda x: (x['Object 1'] == 'Bicycle') | (x['Object 2'] == 'Bicycle') | (x['May involve cyclist'] == True), axis=1)
data['Pedestrian Involved'] = data.apply(lambda x: (x['Object 1'] == 'Pedestrian') | (x['Object 2'] == 'Pedestrian') | (x['May Involve Pedestrian'] == True), axis=1)
data['No Bike or Pedestrian Involved'] = data.apply(lambda x: (x['Bicycle Involved'] == False) and (x['Pedestrian Involved'] == False), axis=1)
data['Bicycle Accident'] = data.apply(lambda x: x["Bicycle Involved"] and not x["Pedestrian Involved"], axis=1)
data['Pedestrian Accident'] = data.apply(lambda x: x["Pedestrian Involved"], axis=1)
data['Date'] = data['Date Time'].apply(lambda x: pd.to_datetime(x.date()))
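# Note (sketch): these row-wise lambdas could also be written with pandas' vectorized
# datetime accessors, e.g. data['Hour of Day'] = data['Date Time'].dt.hour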
###Output
_____no_output_____
###Markdown
Analysis Interesting questions* Where did the accidents take place?* Who were they between? Bicycles? Pedestrians?* What time of day/day of week?* Did they increase/decrease over time? Per location?* Look at variables that could have made a difference (junction type, surface condition, street vs intersection, weather condition) Where do accidents take place?
###Code
location_groups = data.groupby(['Latitude', 'Longitude'])
locations_df = location_groups.size().to_frame(name='# of accidents').reset_index()
locations_df.sort_values(by=['# of accidents'], ascending=False).head()
###Output
_____no_output_____
###Markdown
Who are they between?
###Code
object_groups = data.groupby(['Objects Involved'])
objects_df = object_groups.size().to_frame(name='# of accidents').reset_index()
objects_df.sort_values(by=['# of accidents'], ascending=False).head(15)
###Output
_____no_output_____
###Markdown
Bicycles
###Code
bicycle_data = data[(data['Bicycle Involved'] == True)]
bicycle_groups = bicycle_data.groupby(['Objects Involved'])
bicycles_df = bicycle_groups.size().to_frame(name='# of accidents').reset_index()
bicycles_df.sort_values(by=['# of accidents'], ascending=False).head(15)
###Output
_____no_output_____
###Markdown
Pedestrians
###Code
pedestrian_data = data[(data['Pedestrian Involved'] == True)]
pedestrian_groups = pedestrian_data.groupby(['Objects Involved'])
pedestrians_df = pedestrian_groups.size().to_frame(name='# of accidents').reset_index()
pedestrians_df.sort_values(by=['# of accidents'], ascending=False).head(15)
###Output
_____no_output_____
###Markdown
When do accidents take place? Day of Week
###Code
day_of_week_groups = data.groupby(['Day of Week'])
day_of_week_df = day_of_week_groups.size().to_frame(name='# of accidents').reset_index()
day_of_week_df.to_csv('output/day-of-week-all.csv')
day_of_week_df.sort_values(by=['# of accidents'], ascending=False).head(7)
###Output
_____no_output_____
###Markdown
Bicycles
###Code
day_of_week_groups_bicycle = bicycle_data.groupby(['Day of Week'])
day_of_week_bicycle_df = day_of_week_groups_bicycle.size().to_frame(name='# of accidents').reset_index()
day_of_week_bicycle_df.to_csv('output/day-of-week-bicycle.csv')
day_of_week_bicycle_df.sort_values(by=['# of accidents'], ascending=False).head(7)
###Output
_____no_output_____
###Markdown
Pedestrians
###Code
day_of_week_groups_pedestrian = pedestrian_data.groupby(['Day of Week'])
day_of_week_pedestrian_df = day_of_week_groups_pedestrian.size().to_frame(name='# of accidents').reset_index()
day_of_week_pedestrian_df.to_csv('output/day-of-week-pedestrian.csv')
day_of_week_pedestrian_df.sort_values(by=['# of accidents'], ascending=False).head(7)
###Output
_____no_output_____
###Markdown
Hour of Day
###Code
time_ranges = pd.cut(data['Hour of Day'], [0, 4, 9, 13, 16, 20, 23], labels=['12am-5am', '5am-10am', '10am-1pm', '1pm-4pm', '4pm-8pm', '8pm-11:59pm'], include_lowest=True)  # include_lowest so hour 0 falls in the first bin
data['Time Range'] = time_ranges
time_range_groups = data.groupby(['Time Range'])
time_range_df = time_range_groups.size().to_frame(name="# of accidents").reset_index()
time_range_df.to_csv('output/time-ranges-all.csv')
###Output
_____no_output_____
###Markdown
Bicycles
###Code
bicycle_time_ranges = pd.cut(bicycle_data['Hour of Day'], [0, 4, 9, 13, 16, 20, 23], labels=['12am-5am', '5am-10am', '10am-1pm', '1pm-4pm', '4pm-8pm', '8pm-11:59pm'], include_lowest=True)  # include_lowest so hour 0 falls in the first bin
bicycle_data['Time Range'] = bicycle_time_ranges
bicycle_time_range_groups = bicycle_data.groupby(['Time Range'])
bicycle_time_range_df = bicycle_time_range_groups.size().to_frame(name="# of accidents").reset_index()
bicycle_time_range_df.to_csv('output/time-ranges-bicycles.csv')
###Output
_____no_output_____
###Markdown
Pedestrians
###Code
pedestrian_time_ranges = pd.cut(pedestrian_data['Hour of Day'], [0, 4, 9, 13, 16, 20, 23], labels=['12am-5am', '5am-10am', '10am-1pm', '1pm-4pm', '4pm-8pm', '8pm-11:59pm'], include_lowest=True)  # include_lowest so hour 0 falls in the first bin
pedestrian_data['Time Range'] = pedestrian_time_ranges
pedestrian_time_range_groups = pedestrian_data.groupby(['Time Range'])
pedestrian_time_range_df = pedestrian_time_range_groups.size().to_frame(name="# of accidents").reset_index()
pedestrian_time_range_df.to_csv('output/time-ranges-pedestrians.csv')
###Output
_____no_output_____
###Markdown
How are they doing over time?
###Code
# Annual Datasets
data_2010 = data[(data['Year'] == 2010)]
data_2011 = data[(data['Year'] == 2011)]
data_2012 = data[(data['Year'] == 2012)]
data_2013 = data[(data['Year'] == 2013)]
data_2014 = data[(data['Year'] == 2014)]
data_2015 = data[(data['Year'] == 2015)]
data_2016 = data[(data['Year'] == 2016)]
data_2017 = data[(data['Year'] == 2017)]
accidents_by_day = data.groupby('Date').size()
accidents_by_day_df = accidents_by_day.to_frame(name='# of accidents').reset_index()
accidents_by_month = accidents_by_day.resample('M').sum()
accidents_by_year = accidents_by_day.resample('Y').sum()
accidents_by_day.sort_values(ascending=False).head()
accidents_by_day.plot()
accidents_by_month_df = accidents_by_month.to_frame(name='# of accidents').reset_index()
accidents_by_month_df.sort_values(['# of accidents'], ascending=False).head(10)
accidents_by_month_df.to_csv('output/accidents-by-month-all.csv')
accidents_by_month_of_year = data.groupby('Month of Year').size()
accidents_by_month_of_year_df = accidents_by_month_of_year.to_frame(name='# of accidents').reset_index()
accidents_by_month_of_year.sort_values(ascending=False).head(12)
accidents_by_month.plot()
accidents_by_year.sort_values(ascending=False).head(10)
accidents_by_year_groups = data.groupby('Year').size()
accidents_by_year_df = accidents_by_year_groups.to_frame(name='# of accidents').reset_index()
accidents_by_year_df.sort_values(['# of accidents'], ascending=False).head(10)
accident_trend_plot_all = sns.regplot(accidents_by_year_df['Year'],accidents_by_year_df['# of accidents'])
accident_trend_plot_all_figure = accident_trend_plot_all.get_figure()
plt.savefig('output/accident_trend_plot_all.png')
accidents_by_year.plot()
###Output
_____no_output_____
###Markdown
Bicycle Accidents over time
###Code
bicycle_accidents_by_day = bicycle_data.groupby('Date').size()
bicycle_accidents_by_month = bicycle_accidents_by_day.resample('M').sum()
bicycle_accidents_by_year = bicycle_accidents_by_day.resample('Y').sum()
bicycle_accidents_by_day.plot()
bicycle_accidents_by_month.plot()
bicycle_accidents_by_year.plot()
bicycle_accidents_by_year_groups = bicycle_data.groupby('Year').size()
bicycle_accidents_by_year_df = bicycle_accidents_by_year_groups.to_frame(name='# of accidents').reset_index()
bicycle_accident_trend_plot_all = sns.regplot(bicycle_accidents_by_year_df['Year'],bicycle_accidents_by_year_df['# of accidents'])
bicycle_accident_trend_plot_all_figure = bicycle_accident_trend_plot_all.get_figure()
plt.savefig('output/accident_trend_plot_bicycle.png')
###Output
_____no_output_____
###Markdown
Pedestrian Accidents over time
###Code
pedestrian_accidents_by_day = pedestrian_data.groupby('Date').size()
pedestrian_accidents_by_month = pedestrian_accidents_by_day.resample('M').sum()
pedestrian_accidents_by_year = pedestrian_accidents_by_day.resample('Y').sum()
pedestrian_accidents_by_day.plot()
pedestrian_accidents_by_month.plot()
pedestrian_accidents_by_year.plot()
pedestrian_accidents_by_year_groups = pedestrian_data.groupby('Year').size()
pedestrian_accidents_by_year_df = pedestrian_accidents_by_year_groups.to_frame(name='# of accidents').reset_index()
pedestrian_accident_trend_plot_all = sns.regplot(pedestrian_accidents_by_year_df['Year'],pedestrian_accidents_by_year_df['# of accidents'])
pedestrian_accident_trend_plot_all_figure = pedestrian_accident_trend_plot_all.get_figure()
plt.savefig('output/accident_trend_plot_pedestrian.png')
###Output
_____no_output_____
###Markdown
Combined
###Code
combined_accidents_by_day = data.groupby(['Date', 'Bicycle Accident', 'Pedestrian Accident', 'No Bike or Pedestrian Involved']).size()
combined_accidents_by_day_df = combined_accidents_by_day.to_frame(name='# of accidents').reset_index()
combined_accidents_by_day_df
def define_type(x):
if x['Bicycle Accident']:
return 'Bicycle'
elif x['Pedestrian Accident']:
return 'Pedestrian'
elif x['No Bike or Pedestrian Involved']:
return 'Other'
data['Accident Type'] = data.apply(lambda x: define_type(x), axis=1)
combined_accidents_by_year_groups = data.groupby(['Year', 'Accident Type']).size()
combined_accidents_by_year_df = combined_accidents_by_year_groups.to_frame(name='# of accidents').reset_index()
combined_accidents_by_year_groups.unstack(level=-1).to_csv('output/combined-accidents-by-year.csv')
###Output
_____no_output_____
###Markdown
Map Data
###Code
# all accidents
df_to_geojson(data, filename='output/all_accidents.geojson',
properties=['Object 1', 'Object 2', 'Day of Week', 'Year', 'Bicycle Involved', 'Pedestrian Involved', 'No Bike or Pedestrian Involved'],
lat='Latitude', lon='Longitude', precision=7)
###Output
_____no_output_____ |
src/jupyter/verinet_nn_to_nnet.ipynb | ###Markdown
Convert Cifar10 to nnet
###Code
model = Cifar10()
model.load("./data/models_torch/cifar10_state_dict.pth")
nnet = NNET()
img_size = 32*32
input_mean = np.zeros(3*32*32)
input_mean[:32*32] = 0.4914
input_mean[32*32:2*32*32] = 0.4822
input_mean[2*32*32:] = 0.4465
input_range = np.zeros(3*32*32)
input_range[:32*32] = 0.2023
input_range[32*32:2*32*32] = 0.1994
input_range[2*32*32:] = 0.2010
nnet.init_nnet_from_verinet_nn(model=model, input_shape=np.array((3, 32, 32)), min_values=0.0,
max_values=1.0, input_mean=input_mean, input_range=input_range)
nnet.write_nnet_to_file("./data/models_nnet/cifar10_conv.nnet")
###Output
_____no_output_____
###Markdown
Convert eran to nnetConverts the mnist Sigmoid/Tanh networks from eran to nnet format
###Code
import torch
from torch import nn  # torch/nn are used below; VeriNetNN and NNET are assumed to come from the VeriNet project's own modules

model_name = "ffnnTANH__PGDK_w_0.1_6_500"
model_path = f"/home/patrick/Desktop/VeriNet/eran-benchmark/eran/data/{model_name}.pyt"
act_func = nn.Tanh
layers = [
nn.Sequential(nn.Linear(784, 500), act_func()),
nn.Sequential(nn.Linear(500, 500), act_func()),
nn.Sequential(nn.Linear(500, 500), act_func()),
nn.Sequential(nn.Linear(500, 500), act_func()),
nn.Sequential(nn.Linear(500, 500), act_func()),
nn.Sequential(nn.Linear(500, 500), act_func()),
nn.Sequential(nn.Linear(500, 10)),
]
with open(model_path, "r") as file:
# skip header
file.readline()
file.readline()
for j in range(len(layers)):
layer = list(layers[j].children())[0]
in_size = layer.in_features
out_size = layer.out_features
weights = torch.Tensor([float(w) for w in file.readline().replace('[', '').replace(']', '').split(", ")])
layer.weight.data = weights.reshape(out_size, in_size)
bias = torch.Tensor([float(w) for w in file.readline().replace('[', '').replace(']', '').split(", ")])
layer.bias.data = bias
file.readline()
model = VeriNetNN(layers)
nnet = NNET()
nnet.init_nnet_from_verinet_nn(model=model, input_shape=np.array((784)), min_values=0.0,
max_values=1.0, input_mean=0.1307, input_range=0.3081)
nnet.write_nnet_to_file(f"./data/models_nnet/{model_name}.nnet")
###Output
_____no_output_____ |
03_sec-dsrg/03_sec-adp-func.ipynb | ###Markdown
03_sec-adp-func
###Code
import os
import time
import pickle
import cv2
import numpy.matlib
import tensorflow as tf
import skimage.color as imgco
import skimage.io as imgio
import multiprocessing
import pandas as pd
import traceback
from utilities import *
from lib.crf import crf_inference
from DSRG import DSRG
from SEC import SEC
import argparse
from model import Model
MODEL_WSSS_ROOT = '../database/models_wsss'
method = 'SEC'
dataset = 'ADP-func'
phase = 'predict'
seed_type = 'VGG16'
if dataset in ['ADP-morph', 'ADP-func']:
setname = 'segtest'
sess_id = dataset + '_' + setname + '_' + seed_type
else:
sess_id = dataset + '_' + seed_type
h, w = (321, 321)
seed_size = 41
batch_size = 16
should_saveimg = False
verbose = True
parser = argparse.ArgumentParser()
parser.add_argument('-m', '--method', help='The WSSS method to be used (either SEC or DSRG)', type=str)
parser.add_argument('-d', '--dataset', help='The dataset to run on (either ADP-morph, ADP-func, VOC2012, '
'DeepGlobe_train75, or DeepGlobe_train37.5)', type=str)
parser.add_argument('-n', '--setname', help='The name of the segmentation validation set in the ADP dataset, if '
'applicable (either tuning or segtest)', type=str)
parser.add_argument('-s', '--seed', help='The type of classification network to use for seeding (either VGG16, X1.7 for '
'ADP-morph or ADP-func, or M7 for all other datasets)', type=str)
parser.add_argument('-b', '--batchsize', help='The batch size', default=16, type=int)
parser.add_argument('-i', '--saveimg', help='Toggle whether to save output segmentation as images', action='store_true')
parser.add_argument('-v', '--verbose', help='Toggle verbosity of debug messages', action='store_true')
args = parser.parse_args(['--method', method, '--dataset', dataset,
'--seed', seed_type, '--setname', setname, '-v'])
mdl = Model(args)
mdl.load()
###Output
len:{'train': 14134, 'segtest': 50}
###Markdown
Build model
###Code
mdl.sess = tf.Session()
data_x = {}
data_label = {}
id_of_image = {}
iterator = {}
for val_category in mdl.run_categories[1:]:
data_x[val_category], data_label[val_category], id_of_image[val_category], \
iterator[val_category] = mdl.next_batch(category=val_category, max_epochs=1)
first_cat = mdl.run_categories[1]
mdl.model.build(net_input=data_x[first_cat], net_label=data_label[first_cat], net_id=id_of_image[first_cat],
phase=first_cat)
mdl.sess.run(tf.global_variables_initializer())
mdl.sess.run(tf.local_variables_initializer())
for val_category in mdl.run_categories[1:]:
mdl.sess.run(iterator[val_category].initializer)
###Output
WARNING:tensorflow:From C:\Users\chanlynd\Anaconda3\lib\site-packages\tensorflow\python\ops\control_flow_ops.py:423: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From C:\Users\chanlynd\Documents\Grad Research\wsss-analysis\03_sec-dsrg\model.py:308: calling squeeze (from tensorflow.python.ops.array_ops) with squeeze_dims is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:From C:\Users\chanlynd\Documents\Grad Research\wsss-analysis\03_sec-dsrg\SEC.py:283: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use
tf.py_function, which takes a python function which manipulates tf eager
tensors instead of numpy arrays. It's easy to convert a tf eager tensor to
an ndarray (just call tensor.numpy()) but having access to eager tensors
means `tf.py_function`s can use accelerators such as GPUs as well as
being differentiable using a gradient tape.
###Markdown
Load model from file
###Code
# Resume training from latest checkpoint if it exists
saver = tf.train.Saver(max_to_keep=1, var_list=mdl.model.trainable_list)
latest_ckpt = mdl.get_latest_checkpoint()
if latest_ckpt is not None:
if verbose:
print('Loading model from previous checkpoint %s' % latest_ckpt)
mdl.restore_from_model(saver, latest_ckpt)
###Output
Loading model from previous checkpoint ../database/models_wsss\SEC\ADP-func_VGG16\final-0
WARNING:tensorflow:From C:\Users\chanlynd\Anaconda3\lib\site-packages\tensorflow\python\training\saver.py:1266: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from ../database/models_wsss\SEC\ADP-func_VGG16\final-0
###Markdown
Predict segmentation on a single batch
###Code
val_category = mdl.run_categories[1]
layer = mdl.model.net['rescale_output']
input = mdl.model.net['input']
dropout = mdl.model.net['drop_prob']
img,id_,gt_ = mdl.sess.run([data_x[val_category], id_of_image[val_category], data_label[val_category],])
# Generate predicted segmentation in current batch
output_scale = mdl.sess.run(layer,feed_dict={input:img, dropout:0.0})
img_ids = list(id_)
gt_ = gt_[:, :, :, :3]
j = 0
# Read original image
img_curr = cv2.cvtColor(cv2.imread(os.path.join(mdl.dataset_dir, 'JPEGImages',
id_[j].decode('utf-8') + '.jpg')), cv2.COLOR_RGB2BGR)
# Read GT segmentation
if dataset == 'VOC2012' or 'DeepGlobe' in dataset:
gt_curr = cv2.cvtColor(cv2.imread(os.path.join(mdl.dataset_dir, 'SegmentationClassAug',
id_[j].decode('utf-8') + '.png')), cv2.COLOR_RGB2BGR)
elif dataset == 'ADP-morph':
gt_curr = cv2.cvtColor(cv2.imread(os.path.join(mdl.dataset_dir, 'SegmentationClassAug', 'ADP-morph',
id_[j].decode('utf-8') + '.png')), cv2.COLOR_RGB2BGR)
elif dataset == 'ADP-func':
gt_curr = cv2.cvtColor(cv2.imread(os.path.join(mdl.dataset_dir, 'SegmentationClassAug', 'ADP-func',
id_[j].decode('utf-8') + '.png')), cv2.COLOR_RGB2BGR)
# Read predicted segmentation
if 'DeepGlobe' not in dataset:
pred_curr = cv2.resize(output_scale[j], (gt_curr.shape[1], gt_curr.shape[0]))
img_curr = cv2.resize(img_curr, (gt_curr.shape[1], gt_curr.shape[0]))
# Apply dCRF
pred_curr = crf_inference(img_curr, mdl.model.crf_config_test, mdl.num_classes, pred_curr,
use_log=True)
else:
# Apply dCRF
pred_curr = crf_inference(np.uint8(img[j]), mdl.model.crf_config_test, mdl.num_classes,
output_scale[j], use_log=True)
pred_curr = cv2.resize(pred_curr, (gt_curr.shape[1], gt_curr.shape[0]))
plt.figure()
plt.subplot(121)
plt.imshow(img_curr.astype('uint8'))
plt.title('Original image')
plt.subplot(122)
plt.imshow(gt_curr.astype('uint8'))
plt.title('Functional\n ground truth')
Y_raw = np.zeros((gt_curr.shape[0], gt_curr.shape[1], 3))
P_raw = cv2.resize(output_scale[j], (gt_curr.shape[1], gt_curr.shape[0]))
for k, gt_colour in enumerate(mdl.label2rgb_colors):
pred_mask = np.argmax(P_raw, axis=-1) == k
Y_raw += np.expand_dims(pred_mask, axis=2) * np.expand_dims(np.expand_dims(gt_colour, axis=0), axis=0)
Y = np.zeros((gt_curr.shape[0], gt_curr.shape[1], 3))
P = cv2.resize(pred_curr, (gt_curr.shape[1], gt_curr.shape[0]))
for k, gt_colour in enumerate(mdl.label2rgb_colors):
pred_mask = np.argmax(P, axis=-1) == k
Y += np.expand_dims(pred_mask, axis=2) * np.expand_dims(np.expand_dims(gt_colour, axis=0), axis=0)
plt.figure()
plt.subplot(121)
plt.imshow(Y_raw.astype('uint8'))
plt.title('Raw Prediction')
plt.subplot(122)
plt.imshow(Y.astype('uint8'))
plt.title('Post-CRF Prediction')
###Output
_____no_output_____ |
data_scientist_nanodegree/projects/p2_image_classifier/Image Classifier Project.ipynb | ###Markdown
Developing an AI applicationGoing forward, AI algorithms will be incorporated into more and more everyday applications. For example, you might want to include an image classifier in a smart phone app. To do this, you'd use a deep learning model trained on hundreds of thousands of images as part of the overall application architecture. A large part of software development in the future will be using these types of models as common parts of applications. In this project, you'll train an image classifier to recognize different species of flowers. You can imagine using something like this in a phone app that tells you the name of the flower your camera is looking at. In practice you'd train this classifier, then export it for use in your application. We'll be using [this dataset](http://www.robots.ox.ac.uk/~vgg/data/flowers/102/index.html) of 102 flower categories, you can see a few examples below. The project is broken down into multiple steps:* Load and preprocess the image dataset* Train the image classifier on your dataset* Use the trained classifier to predict image contentWe'll lead you through each part which you'll implement in Python.When you've completed this project, you'll have an application that can be trained on any set of labeled images. Here your network will be learning about flowers and end up as a command line application. But, what you do with your new skills depends on your imagination and effort in building a dataset. For example, imagine an app where you take a picture of a car, it tells you what the make and model is, then looks up information about it. Go build your own dataset and make something new.First up is importing the packages you'll need. It's good practice to keep all the imports at the beginning of your code. As you work through this notebook and find you need to import a package, make sure to add the import up here.
###Code
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import helper
import time
import os
import json
import matplotlib.pyplot as plt
import torch
import numpy as np
import torch.nn.functional as F
from PIL import Image
from collections import OrderedDict
from torch import nn
from torch import optim
from torch.autograd import Variable
from torchvision import datasets, transforms, models
np.set_printoptions(suppress=True)
###Output
_____no_output_____
###Markdown
Load the dataHere you'll use `torchvision` to load the data ([documentation](http://pytorch.org/docs/0.3.0/torchvision/index.html)). The data should be included alongside this notebook, otherwise you can [download it here](https://s3.amazonaws.com/content.udacity-data.com/nd089/flower_data.tar.gz). The dataset is split into three parts, training, validation, and testing. For the training, you'll want to apply transformations such as random scaling, cropping, and flipping. This will help the network generalize leading to better performance. You'll also need to make sure the input data is resized to 224x224 pixels as required by the pre-trained networks.The validation and testing sets are used to measure the model's performance on data it hasn't seen yet. For this you don't want any scaling or rotation transformations, but you'll need to resize then crop the images to the appropriate size.The pre-trained networks you'll use were trained on the ImageNet dataset where each color channel was normalized separately. For all three sets you'll need to normalize the means and standard deviations of the images to what the network expects. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`, calculated from the ImageNet images. These values will shift each color channel to be centered at 0 and range from -1 to 1.
###Code
data_dir = 'flowers'
train_dir = os.path.join(data_dir, 'train')
valid_dir = os.path.join(data_dir, 'valid')
test_dir = os.path.join(data_dir, 'test')
# TODO: Define your transforms for the training, validation, and testing sets
train_transforms = transforms.Compose([
# resize 224x224
transforms.Resize(224),
transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
validation_transforms = transforms.Compose([
transforms.Resize(224),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
test_transforms = transforms.Compose([
transforms.Resize(224),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])
])
# TODO: Load the datasets with ImageFolder
train_data = datasets.ImageFolder(train_dir, transform=train_transforms)
validation_data = datasets.ImageFolder(valid_dir, transform=validation_transforms)
test_data = datasets.ImageFolder(test_dir, transform=test_transforms)
# TODO: Using the image datasets and the transforms, define the dataloaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)
validation_loader = torch.utils.data.DataLoader(validation_data, batch_size=32)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=32)
###Output
_____no_output_____
###Markdown
Test dataloaders
###Code
images, labels = next(iter(train_loader))
helper.imshow(images[0], normalize=True)
images, labels = next(iter(validation_loader))
helper.imshow(images[0], normalize=True)
images, labels = next(iter(test_loader))
helper.imshow(images[0], normalize=True)
###Output
_____no_output_____
###Markdown
Label mappingYou'll also need to load in a mapping from category label to category name. You can find this in the file `cat_to_name.json`. It's a JSON object which you can read in with the [`json` module](https://docs.python.org/2/library/json.html). This will give you a dictionary mapping the integer encoded categories to the actual names of the flowers.
###Code
with open('cat_to_name.json', 'r') as f:
cat_to_name = json.load(f)
cat_to_name['1']
len(cat_to_name)
###Output
_____no_output_____
###Markdown
Building and training the classifierNow that the data is ready, it's time to build and train the classifier. As usual, you should use one of the pretrained models from `torchvision.models` to get the image features. Build and train a new feed-forward classifier using those features.We're going to leave this part up to you. If you want to talk through it with someone, chat with your fellow students! You can also ask questions on the forums or join the instructors in office hours.Refer to [the rubric](https://review.udacity.com/!/rubrics/1663/view) for guidance on successfully completing this section. Things you'll need to do:* Load a [pre-trained network](http://pytorch.org/docs/master/torchvision/models.html) (If you need a starting point, the VGG networks work great and are straightforward to use)* Define a new, untrained feed-forward network as a classifier, using ReLU activations and dropout* Train the classifier layers using backpropagation using the pre-trained network to get the features* Track the loss and accuracy on the validation set to determine the best hyperparametersWe've left a cell open for you below, but use as many as you need. Our advice is to break the problem up into smaller parts you can run separately. Check that each part is doing what you expect, then move on to the next. You'll likely find that as you work through each part, you'll need to go back and modify your previous code. This is totally normal!When training make sure you're updating only the weights of the feed-forward network. You should be able to get the validation accuracy above 70% if you build everything right. Make sure to try different hyperparameters (learning rate, units in the classifier, epochs, etc) to find the best model. Save those hyperparameters to use as default values in the next part of the project.
###Code
# TODO: Build and train your network
pretrained_model = models.vgg16(pretrained=True)
pretrained_model
def create_classifier(layer_sizes):
layers = OrderedDict()
for index, value in enumerate(layer_sizes):
layer_name = 'fc' + str(index + 1)
#print((layer_name, value))
if index == len(layer_sizes) - 1: # if last index add softmax
layers.update({'output': nn.LogSoftmax(dim=1)})
else:
# get next layer size; next item might be list
current_size = value[0] if isinstance(value, list) else value
next_value = layer_sizes[index + 1]
next_size = layer_sizes[index + 1][0] if isinstance(next_value, list) else layer_sizes[index + 1]
layers.update({layer_name: nn.Linear(current_size, next_size)})
if index < len(layer_sizes) - 2: # if second to last index, don't add relu
layers.update({'relu' + str(index + 1): nn.ReLU()})
if isinstance(value, list): # add dropout
layers.update({'dropout' + str(index + 1): nn.Dropout(p=value[1])})
return nn.Sequential(layers)
def create_model(pretrained_model, layer_sizes):
# Freeze parameters so we don't backprop through them
for param in pretrained_model.parameters():
param.requires_grad = False
classifier = create_classifier(layer_sizes)
pretrained_model.classifier = classifier
return pretrained_model
# hyper parameters
learning_rate = 0.001
epochs = 10
# if inner list, then 1st item is size and second is dropout
layer_sizes = [25088, [12544, 0.5], 6272, len(cat_to_name)]
optimizer_algorithm = optim.Adam
model = create_model(pretrained_model, layer_sizes)
model.classifier
criterion = nn.NLLLoss()
optimizer = optimizer_algorithm(model.classifier.parameters(), lr=learning_rate)
# Implement a function for the validation pass
def validation(model, loader, criterion):
test_loss = 0
accuracy = 0
for inputs, labels in loader:
inputs, labels = inputs.to(device), labels.to(device)
output = model.forward(inputs)
test_loss += criterion(output, labels).item()
ps = torch.exp(output)
equality = (labels.data == ps.max(dim=1)[1])
accuracy += equality.type(torch.FloatTensor).mean()
return test_loss, accuracy
def do_deep_learning(model, trainloader, epochs, print_every, criterion, optimizer, device):
steps = 0
running_loss = 0
print('Device: `{}`'.format(device))
model.to(device)
for e in range(epochs):
# https://classroom.udacity.com/nanodegrees/nd025/parts/55eca560-1498-4446-8ab5-65c52c660d0d/modules/627e46ca-85de-4830-b2d6-5471241d5526/lessons/e1eeafe1-2ba0-4f3d-97a0-82cbd844fdfc/concepts/43cb782f-2d8c-432e-94ef-cb8068f26042
# PyTorch allows you to set a model in "training" or "evaluation" modes with model.train() and model.eval(), respectively. In training mode, dropout is turned on, while in evaluation mode, dropout is turned off.
model.train()
for ii, (inputs, labels) in enumerate(trainloader):
start = time.time()
steps += 1
inputs, labels = inputs.to(device), labels.to(device)
optimizer.zero_grad()
output = model.forward(inputs)
loss = criterion(output, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
# Make sure network is in eval mode for inference
model.eval()
# Turn off gradients for validation, saves memory and computations
with torch.no_grad():
validation_loss, accuracy = validation(model, validation_loader, criterion)
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(running_loss/print_every),
"Validation Loss: {:.3f}.. ".format(validation_loss/len(validation_loader)),
"Validation Accuracy: {:.3f}".format(accuracy/len(validation_loader)),
"\n\tTime per batch: {0} seconds".format(round(time.time() - start))
)
running_loss = 0
# Make sure training is back on
model.train()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device
do_deep_learning(model, train_loader, epochs, 40, criterion, optimizer, device)
###Output
Device: `cuda`
Epoch: 1/10.. Training Loss: 4.918.. Validation Loss: 3.507.. Validation Accuracy: 0.242
Time per batch: 19 seconds
Epoch: 1/10.. Training Loss: 3.130.. Validation Loss: 2.650.. Validation Accuracy: 0.366
Time per batch: 19 seconds
Epoch: 1/10.. Training Loss: 2.559.. Validation Loss: 1.880.. Validation Accuracy: 0.508
Time per batch: 19 seconds
Epoch: 1/10.. Training Loss: 2.274.. Validation Loss: 1.773.. Validation Accuracy: 0.547
Time per batch: 20 seconds
Epoch: 1/10.. Training Loss: 2.187.. Validation Loss: 1.691.. Validation Accuracy: 0.568
Time per batch: 19 seconds
Epoch: 2/10.. Training Loss: 1.915.. Validation Loss: 1.605.. Validation Accuracy: 0.581
Time per batch: 19 seconds
Epoch: 2/10.. Training Loss: 1.877.. Validation Loss: 1.396.. Validation Accuracy: 0.643
Time per batch: 19 seconds
Epoch: 2/10.. Training Loss: 1.863.. Validation Loss: 1.227.. Validation Accuracy: 0.678
Time per batch: 19 seconds
Epoch: 2/10.. Training Loss: 1.816.. Validation Loss: 1.423.. Validation Accuracy: 0.636
Time per batch: 19 seconds
Epoch: 2/10.. Training Loss: 1.928.. Validation Loss: 1.319.. Validation Accuracy: 0.646
Time per batch: 19 seconds
Epoch: 3/10.. Training Loss: 1.595.. Validation Loss: 1.232.. Validation Accuracy: 0.688
Time per batch: 19 seconds
Epoch: 3/10.. Training Loss: 1.612.. Validation Loss: 1.260.. Validation Accuracy: 0.691
Time per batch: 19 seconds
Epoch: 3/10.. Training Loss: 1.597.. Validation Loss: 1.576.. Validation Accuracy: 0.666
Time per batch: 19 seconds
Epoch: 3/10.. Training Loss: 1.731.. Validation Loss: 1.235.. Validation Accuracy: 0.715
Time per batch: 19 seconds
Epoch: 3/10.. Training Loss: 1.596.. Validation Loss: 1.164.. Validation Accuracy: 0.711
Time per batch: 19 seconds
Epoch: 4/10.. Training Loss: 1.491.. Validation Loss: 1.214.. Validation Accuracy: 0.719
Time per batch: 19 seconds
Epoch: 4/10.. Training Loss: 1.569.. Validation Loss: 1.159.. Validation Accuracy: 0.741
Time per batch: 19 seconds
Epoch: 4/10.. Training Loss: 1.468.. Validation Loss: 1.452.. Validation Accuracy: 0.678
Time per batch: 20 seconds
Epoch: 4/10.. Training Loss: 1.489.. Validation Loss: 1.028.. Validation Accuracy: 0.754
Time per batch: 19 seconds
Epoch: 4/10.. Training Loss: 1.623.. Validation Loss: 1.135.. Validation Accuracy: 0.717
Time per batch: 19 seconds
Epoch: 5/10.. Training Loss: 1.326.. Validation Loss: 1.111.. Validation Accuracy: 0.760
Time per batch: 19 seconds
Epoch: 5/10.. Training Loss: 1.421.. Validation Loss: 1.155.. Validation Accuracy: 0.737
Time per batch: 19 seconds
Epoch: 5/10.. Training Loss: 1.327.. Validation Loss: 1.088.. Validation Accuracy: 0.751
Time per batch: 19 seconds
Epoch: 5/10.. Training Loss: 1.409.. Validation Loss: 1.026.. Validation Accuracy: 0.761
Time per batch: 19 seconds
Epoch: 5/10.. Training Loss: 1.408.. Validation Loss: 1.117.. Validation Accuracy: 0.733
Time per batch: 19 seconds
Epoch: 6/10.. Training Loss: 1.371.. Validation Loss: 1.105.. Validation Accuracy: 0.742
Time per batch: 19 seconds
Epoch: 6/10.. Training Loss: 1.401.. Validation Loss: 0.980.. Validation Accuracy: 0.769
Time per batch: 19 seconds
Epoch: 6/10.. Training Loss: 1.264.. Validation Loss: 1.227.. Validation Accuracy: 0.744
Time per batch: 20 seconds
Epoch: 6/10.. Training Loss: 1.401.. Validation Loss: 1.019.. Validation Accuracy: 0.771
Time per batch: 19 seconds
Epoch: 6/10.. Training Loss: 1.365.. Validation Loss: 0.981.. Validation Accuracy: 0.766
Time per batch: 19 seconds
Epoch: 7/10.. Training Loss: 1.344.. Validation Loss: 1.095.. Validation Accuracy: 0.758
Time per batch: 19 seconds
Epoch: 7/10.. Training Loss: 1.223.. Validation Loss: 0.964.. Validation Accuracy: 0.781
Time per batch: 19 seconds
Epoch: 7/10.. Training Loss: 1.249.. Validation Loss: 1.027.. Validation Accuracy: 0.786
Time per batch: 19 seconds
Epoch: 7/10.. Training Loss: 1.345.. Validation Loss: 0.965.. Validation Accuracy: 0.778
Time per batch: 19 seconds
Epoch: 7/10.. Training Loss: 1.412.. Validation Loss: 0.985.. Validation Accuracy: 0.780
Time per batch: 19 seconds
Epoch: 8/10.. Training Loss: 1.266.. Validation Loss: 0.960.. Validation Accuracy: 0.782
Time per batch: 20 seconds
Epoch: 8/10.. Training Loss: 1.100.. Validation Loss: 1.090.. Validation Accuracy: 0.765
Time per batch: 19 seconds
Epoch: 8/10.. Training Loss: 1.216.. Validation Loss: 0.937.. Validation Accuracy: 0.788
Time per batch: 19 seconds
Epoch: 8/10.. Training Loss: 1.303.. Validation Loss: 1.077.. Validation Accuracy: 0.768
Time per batch: 20 seconds
Epoch: 8/10.. Training Loss: 1.403.. Validation Loss: 1.288.. Validation Accuracy: 0.737
Time per batch: 19 seconds
Epoch: 8/10.. Training Loss: 1.223.. Validation Loss: 1.009.. Validation Accuracy: 0.786
Time per batch: 19 seconds
Epoch: 9/10.. Training Loss: 1.130.. Validation Loss: 0.958.. Validation Accuracy: 0.778
Time per batch: 19 seconds
Epoch: 9/10.. Training Loss: 1.291.. Validation Loss: 0.856.. Validation Accuracy: 0.798
Time per batch: 19 seconds
Epoch: 9/10.. Training Loss: 1.220.. Validation Loss: 0.948.. Validation Accuracy: 0.807
Time per batch: 19 seconds
Epoch: 9/10.. Training Loss: 1.108.. Validation Loss: 0.920.. Validation Accuracy: 0.798
Time per batch: 19 seconds
Epoch: 9/10.. Training Loss: 1.260.. Validation Loss: 1.084.. Validation Accuracy: 0.768
Time per batch: 19 seconds
Epoch: 10/10.. Training Loss: 1.098.. Validation Loss: 1.059.. Validation Accuracy: 0.804
Time per batch: 19 seconds
Epoch: 10/10.. Training Loss: 1.288.. Validation Loss: 1.081.. Validation Accuracy: 0.774
Time per batch: 19 seconds
Epoch: 10/10.. Training Loss: 1.249.. Validation Loss: 1.008.. Validation Accuracy: 0.794
Time per batch: 19 seconds
Epoch: 10/10.. Training Loss: 1.106.. Validation Loss: 0.967.. Validation Accuracy: 0.791
Time per batch: 19 seconds
Epoch: 10/10.. Training Loss: 1.275.. Validation Loss: 0.864.. Validation Accuracy: 0.802
Time per batch: 19 seconds
###Markdown
Testing your networkIt's good practice to test your trained network on test data, images the network has never seen either in training or validation. This will give you a good estimate for the model's performance on completely new images. Run the test images through the network and measure the accuracy, the same way you did validation. You should be able to reach around 70% accuracy on the test set if the model has been trained well.
###Code
# TODO: Do validation on the test set
def check_accuracy_on_test(model, loader, device):
model.to(device)
correct = 0
total = 0
with torch.no_grad():
for data in loader:
inputs, labels = data
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy: %d %%' % (100 * correct / total))
check_accuracy_on_test(model, test_loader, device)
###Output
Accuracy: 78 %
###Markdown
Save the checkpointNow that your network is trained, save the model so you can load it later for making predictions. You probably want to save other things such as the mapping of classes to indices which you get from one of the image datasets: `image_datasets['train'].class_to_idx`. You can attach this to the model as an attribute which makes inference easier later on.```model.class_to_idx = image_datasets['train'].class_to_idx```Remember that you'll want to completely rebuild the model later so you can use it for inference. Make sure to include any information you need in the checkpoint. If you want to load the model and keep training, you'll want to save the number of epochs as well as the optimizer state, `optimizer.state_dict`. You'll likely want to use this trained model in the next part of the project, so best to save it now.
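As a hedged sketch of a richer checkpoint that would also support resuming training (the extra keys, the `'vgg16'` string and the `'full_checkpoint.pth'` filename are illustrative choices; `model`, `optimizer`, `epochs`, `layer_sizes` and `train_data` refer to the objects defined earlier in this notebook):
```python
# Hedged sketch of a fuller checkpoint (illustrative keys and filename):
full_checkpoint = {
    'arch': 'vgg16',                                # which pretrained backbone was used
    'layer_sizes': layer_sizes,                     # classifier architecture
    'state_dict': model.state_dict(),               # learned weights
    'class_to_idx': train_data.class_to_idx,        # class-to-index mapping
    'epochs': epochs,                               # how many epochs were run
    'optimizer_state_dict': optimizer.state_dict(), # optimizer state, for resuming training
}
torch.save(full_checkpoint, 'full_checkpoint.pth')
```
The cell below saves a slimmer checkpoint that is sufficient for inference only.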
###Code
# TODO: Save the checkpoint
checkpoint = {
'layer_sizes': layer_sizes,
'state_dict': model.state_dict(),
'class_to_idx': train_data.class_to_idx,
}
torch.save(checkpoint, 'checkpoint.pth')
###Output
_____no_output_____
###Markdown
Loading the checkpointAt this point it's good to write a function that can load a checkpoint and rebuild the model. That way you can come back to this project and keep working on it without having to retrain the network.
###Code
# TODO: Write a function that loads a checkpoint and rebuilds the model
def load_checkpoint(filepath, pretrained_model):
checkpoint = torch.load(filepath)
model = create_model(models.vgg16(pretrained=True), checkpoint['layer_sizes'])
model.load_state_dict(checkpoint['state_dict'])
model.class_to_idx = checkpoint['class_to_idx']
return model
loaded_model = load_checkpoint(filepath='checkpoint.pth',
pretrained_model=models.vgg16(pretrained=True))
###Output
_____no_output_____
###Markdown
Check Accuracy On Loaded Dataset
###Code
check_accuracy_on_test(loaded_model, test_loader, device)
###Output
Accuracy: 78 %
###Markdown
Inference for classificationNow you'll write a function to use a trained network for inference. That is, you'll pass an image into the network and predict the class of the flower in the image. Write a function called `predict` that takes an image and a model, then returns the top $K$ most likely classes along with the probabilities. It should look like ```pythonprobs, classes = predict(image_path, model)print(probs)print(classes)> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]> ['70', '3', '45', '62', '55']```First you'll need to handle processing the input image such that it can be used in your network. Image PreprocessingYou'll want to use `PIL` to load the image ([documentation](https://pillow.readthedocs.io/en/latest/reference/Image.html)). It's best to write a function that preprocesses the image so it can be used as input for the model. This function should process the images in the same manner used for training. First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the [`thumbnail`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.htmlPIL.Image.Image.thumbnail) or [`resize`](http://pillow.readthedocs.io/en/3.1.x/reference/Image.htmlPIL.Image.Image.thumbnail) methods. Then you'll need to crop out the center 224x224 portion of the image.Color channels of images are typically encoded as integers 0-255, but the model expected floats 0-1. You'll need to convert the values. It's easiest with a Numpy array, which you can get from a PIL image like so `np_image = np.array(pil_image)`.As before, the network expects the images to be normalized in a specific way. For the means, it's `[0.485, 0.456, 0.406]` and for the standard deviations `[0.229, 0.224, 0.225]`. You'll want to subtract the means from each color channel, then divide by the standard deviation. And finally, PyTorch expects the color channel to be the first dimension but it's the third dimension in the PIL image and Numpy array. You can reorder dimensions using [`ndarray.transpose`](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ndarray.transpose.html). The color channel needs to be first and retain the order of the other two dimensions.
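As an aside, much of this preprocessing can also be expressed with `torchvision.transforms`; the sketch below is an alternative pipeline assuming the same ImageNet statistics, not a replacement for the `process_image` function implemented by hand in the next cell.
```python
from PIL import Image
from torchvision import transforms

# Sketch: shortest side to 256, center-crop 224x224, scale to [0, 1],
# then normalize with the same means/stds used during training.
inference_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Illustrative usage (path taken from a later cell); yields a (3, 224, 224) tensor:
# tensor_image = inference_transform(Image.open('./flowers/test/1/image_06764.jpg'))
```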
###Code
def process_image(image_path):
'''Scales, crops, and normalizes a PIL image for a PyTorch model,
returns an Numpy array
'''
img = Image.open(image_path)
width, height = img.size
aspect_ratio = width / height
short_side = 256
# First, resize the images where the shortest side is 256 pixels, keeping the aspect ratio. This can be done with the thumbnail or resize methods.
if width > height:
# width, height
# if width is greater than height then shortest side is height; change height to 256, adjust width
# width should be 256 (i.e. height) multiplied by the same aspect ratio
img.thumbnail((short_side * aspect_ratio, short_side)) # width > height
else:
        img.thumbnail((short_side, short_side / aspect_ratio)) # width <= height: make the width (shortest side) 256 and scale the height accordingly
# Then you'll need to crop out the center 224x224 portion of the image.
width, height = img.size
new_width = 224
new_height = new_width
left_margin = (img.width - new_width) / 2
bottom_margin = (img.height - new_height) / 2
right_margin = left_margin + new_width
top_margin = bottom_margin + new_height
img = img.crop((left_margin, bottom_margin, right_margin, top_margin))
# the network expects the images to be normalized in a specific way. For the means, it's [0.485, 0.456, 0.406] and for the standard deviations [0.229, 0.224, 0.225]. You'll want to subtract the means from each color channel, then divide by the standard deviation.
img = np.array(img) / 255
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
img = (img - mean)/std
# Move color channels to first dimension as expected by PyTorch
img = img.transpose((2, 0, 1))
return img
###Output
_____no_output_____
###Markdown
To check your work, the function below converts a PyTorch tensor and displays it in the notebook. If your `process_image` function works, running the output through this function should return the original image (except for the cropped out portions).
###Code
def imshow(image, ax=None, title=None):
"""Imshow for Tensor."""
if ax is None:
fig, ax = plt.subplots()
# PyTorch tensors assume the color channel is the first dimension
    # but matplotlib assumes it is the third dimension
image = image.transpose((1, 2, 0))
# Undo preprocessing
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = std * image + mean
# Image needs to be clipped between 0 and 1 or it looks like noise when displayed
image = np.clip(image, 0, 1)
ax.imshow(image)
return ax
image_path = './flowers/test/1/image_06764.jpg'
image = process_image(image_path)
imshow(image)
###Output
_____no_output_____
###Markdown
Class PredictionOnce you can get images in the correct format, it's time to write a function for making predictions with your model. A common practice is to predict the top 5 or so (usually called top-$K$) most probable classes. You'll want to calculate the class probabilities then find the $K$ largest values.To get the top $K$ largest values in a tensor use [`x.topk(k)`](http://pytorch.org/docs/master/torch.htmltorch.topk). This method returns both the highest `k` probabilities and the indices of those probabilities corresponding to the classes. You need to convert from these indices to the actual class labels using `class_to_idx` which hopefully you added to the model or from an `ImageFolder` you used to load the data ([see here](Save-the-checkpoint)). Make sure to invert the dictionary so you get a mapping from index to class as well.Again, this method should take a path to an image and a model checkpoint, then return the probabilities and classes.```pythonprobs, classes = predict(image_path, model)print(probs)print(classes)> [ 0.01558163 0.01541934 0.01452626 0.01443549 0.01407339]> ['70', '3', '45', '62', '55']```
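A minimal sketch of that index-to-class inversion (assuming `class_to_idx` was attached to the model when the checkpoint was loaded, as done above):
```python
# Hedged sketch: invert class_to_idx so that top-k indices map back to class labels.
idx_to_class = {idx: cls for cls, idx in loaded_model.class_to_idx.items()}

# Given `top_classes` as returned by `topk` in the cell below, the labels would be:
# top_labels = [idx_to_class[i] for i in top_classes.numpy()[0]]
```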
###Code
def predict(image_path, model, topk=5):
''' Predict the class (or classes) of an image using a trained deep learning model.
'''
model.eval()
image = process_image(image_path)
image= transforms.ToTensor()(image)
image = image.view(1, 3, 224, 224)
#image.to(device)
#model.to(device)
model.to(device)
with torch.no_grad():
output = model.forward(image.type(torch.FloatTensor).cuda())
probabilities = torch.exp(output).cpu() # used LogSoftmax so convert back
top_probs, top_classes = probabilities.topk(topk)
return top_probs.numpy()[0], top_classes.numpy()[0]
probs, classes = predict(image_path, loaded_model)
print(probs)
print(classes)
###Output
[81 76 80 77 15]
###Markdown
Sanity CheckingNow that you can use a trained model for predictions, check to make sure it makes sense. Even if the testing accuracy is high, it's always good to check that there aren't obvious bugs. Use `matplotlib` to plot the probabilities for the top 5 classes as a bar graph, along with the input image. It should look like this:You can convert from the class integer encoding to actual flower names with the `cat_to_name.json` file (should have been loaded earlier in the notebook). To show a PyTorch tensor as an image, use the `imshow` function defined above.
###Code
# TODO: Display an image along with the top 5 classes
imshow(image)
classes
class_names = [cat_to_name[str(cls)] for cls in classes]
class_names
probs
plt.barh(class_names, probs) #align='center', alpha=0.5)
###Output
_____no_output_____ |
docs/T189677_Graph_embeddings_using_SDNE.ipynb | ###Markdown
Graph embeddings using SDNE
###Code
!pip install git+https://github.com/palash1992/GEM.git
!pip install -U Ipython
%tensorflow_version 1.x
from gem.embedding.sdne import SDNE
from matplotlib import pyplot as plt
from IPython.display import Code
import networkx as nx
import inspect
Code(inspect.getsource(SDNE), language='python')
graph = nx.karate_club_graph()
m1 = SDNE(d=2, beta=5, alpha=1e-5, nu1=1e-6, nu2=1e-6, K=3,n_units=[50, 15,], rho=0.3, n_iter=50,
xeta=0.01,n_batch=100,
modelfile=['enc_model.json', 'dec_model.json'],
weightfile=['enc_weights.hdf5', 'dec_weights.hdf5'])
m1.learn_embedding(graph)
x, y = list(zip(*m1.get_embedding()))
plt.plot(x, y, 'o',linewidth=None)
###Output
_____no_output_____ |
Basic Optimisation.ipynb | ###Markdown
Basic Optimisation using PythonJessica LeungIn this notebook, we are going to demonstrate how to solve an optimization problem in Python with different tools available. We will be using the following packages:- `scipy.optimize` (comes with Anaconda),- `Gurobi` (To install, go to installation guide)- `cvxpy` (To install, type `conda install cvxpy` in your terminal/anaconda prompt, or go to https://www.cvxpy.org/install/index.html)All three packages are very powerful tools with very comprehensive documentation. Depending on the problem that you want to solve, you may find one package more user-friendly than the other.- `Scipy.optimize` documentation: https://docs.scipy.org/doc/scipy/reference/optimize.html- `Gurobi` documentation: https://www.gurobi.com/documentation/- `cvxpy` documentation: https://www.cvxpy.org/tutorial/index.html Using `Scipy.optimize`Let's start by finding the minimum of the scalar function $$f(x) = -0.3e^{-(x-0.6)^2}$$Here we provide an initial guess of $1.0$.
###Code
import numpy as np
from scipy.optimize import minimize
def f(x):
return -0.3*np.exp(-(x-0.6)**2)
result = minimize(fun = f, x0 = [1.0])
print (result.x)
###Output
[0.59999517]
###Markdown
The syntax `.x` here tells the package that we would like to know the current value of our variable. (No matter what our variable is, the syntax remains `.x`.)We can plot the function and our minimum point to verify the result.
###Code
import matplotlib.pyplot as plt
x = np.linspace(-3,3,200)
plt.plot(x,f(x))
plt.plot(0.59999517, f(0.59999517), 'ro')
plt.show()
###Output
_____no_output_____
###Markdown
`scipy.optimize` can also handle multivariate functions. Depending on the function that you are trying to optimise (convex or not, smooth or not, linear or not), there is a wide variety of methods to choose from in the `scipy.optimize` package.
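For instance, a specific algorithm can be requested through the `method` argument. A small illustration, reusing the scalar function `f` defined above (the choice of Nelder–Mead here is arbitrary):
```python
# Illustration: explicitly choosing a derivative-free method for the scalar problem above.
result_nm = minimize(fun=f, x0=[1.0], method='Nelder-Mead')
print(result_nm.x)  # should again land near x = 0.6
```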
###Code
from scipy.optimize import minimize
def g(y):
return np.sqrt((2*y[0] - 5)**2 + (6*y[1] - 3)**2)
result = minimize(fun = g, x0 = [0,0])
print (result.x)
x, y = np.mgrid[-2.03:4.2:.04, -1.6:3.2:.04]
x = x.T
y = y.T
plt.figure(1, figsize=(5, 5))
contours = plt.contour(np.sqrt((2*x - 5)**2 + (6*y - 3)**2),
extent=[-2.03, 4.2, -1.6, 3.2],
cmap=plt.cm.gnuplot)
plt.plot(2.49999999,0.49999999, 'rx')
###Output
_____no_output_____
###Markdown
Using `cvxpy`CVXPY is a Python-embedded modeling language for convex optimization problems. It automatically transforms the problem into standard form, calls a solver, and unpacks the results.To install, type `conda install cvxpy` to the terminal/anaconda prompt.Let's start with a simple LP problem. Consider the following problem:General Auto manufactures luxury cars and trucks. The company believes its target customers are high-income men and women. To reach this group, General Auto has embarked on an ambitious TV advertising campaign and will purchase 1-minute commercial spots on two types of programs: comedy shows and football games.- Each comedy commercial is seen by 7 million high income women and 2 million high-income men and costs \$50,000.- Each football game is seen by 2 million high-income women and 12 million high-income men and costs \$100,000.- General Auto would like for commercials to be seen by at least 28 million high-income women and 24 million high-income men.Use LP to determine how General Auto can meet its advertising requirements at minimum cost.General Auto must decide how many comedy and football ads should be purchased, so the decision variables are- $x_{comedy}$ - number of 1-minute comedy ads purchased- $x_{football}$ - number of 1-minute football ads purchasedThen the problem can be formulated as:$$\text{minimize} \quad 50 x_{comedy} + 100 x_{football} $$$$ \text{subject to} \quad 7 x_{comedy} + 2 x_{football} \geq 28 $$$$\quad 2 x_{comedy} + 12 x_{football} \geq 24 $$$$\quad x_{comedy}, x_{football}\geq 0$$
###Code
import cvxpy as cp
# Create two scalar optimization variables.
x1 = cp.Variable()
x2 = cp.Variable()
# Create two constraints.
constraints = [7*x1 + 2*x2 >= 28,
2*x1 + 12*x2 >= 24,
x1 >= 0,
x2 >= 0]
# Form objective.
obj = cp.Minimize(50*x1 + 100*x2)
# Form and solve problem.
prob = cp.Problem(obj, constraints)
prob.solve() # Returns the optimal value.
print("status:", prob.status)
print("optimal value", prob.value)
print("optimal var", x1.value, x2.value)
###Output
status: optimal
optimal value 320.0
optimal var 3.6000000000000005 1.4
###Markdown
A Simple Machine Learning Example using CVXPY Logistic Regression with $\ell_1$ regularisationIn this example, we train a logistic regression classifier with $\ell_1$ regularisation. Given data $x_i \in \mathbf{R}^p$ and $y_i \in \{0,1\}, i = 1,...,n$, our goal is to learn a linear classifier $\hat{y} = I[x \beta > 0]$.We can model this relationship as follows:$$ \log \dfrac{Pr(Y = 1| X = x)}{Pr(Y = 0| X = x)} = x \beta$$Therefore, we fit $\beta$ by maximising the log-likelihood of the data minus a regularisation term $\lambda \|\beta\|_1$ where $\lambda > 0$:$$ \ell(\beta) = \sum_{i = 1}^n y_i (x\beta)_i - \log(1 + \exp((x\beta)_i)) - \lambda \|\beta\|_1$$This objective function is concave in $\beta$, so maximising it is a convex optimisation problem. For simplicity, we take $\lambda = 0.1$ in this example. In practice, you can perform cross-validation to find the best value of $\lambda$. In the following code we generate data with $p=50$ features by randomly choosing $x_i$ and supplying a sparse $\beta_{true} \in \mathbf{R}^p$.
###Code
np.random.seed(0)
p = 50
n = 50
def sigmoid(z):
return 1/(1 + np.exp(-z))
beta_true = np.array([1, 0.5, -0.5] + [0]*(p - 3))
X = (np.random.random((n, p)) - 0.5)*10
Y = np.round(sigmoid(X @ beta_true + np.random.randn(n)*0.5))
beta = cp.Variable(n)
lambd = 0.1
log_likelihood = cp.sum(
cp.multiply(Y, X @ beta) - cp.logistic(X @ beta))
problem = cp.Problem(cp.Maximize(log_likelihood/p - lambd * cp.norm(beta, 1)))
problem.solve()
print("status:", problem.status)
print("optimal value", problem.value)
print("optimal var", beta.value)
plt.plot(beta_true, label=r"True $\beta$")
plt.plot(beta.value, label=r"Reconstructed $\beta$")
plt.xlabel(r"$i$", fontsize=16)
plt.ylabel(r"$\beta_i$", fontsize=16)
plt.legend(loc="upper right")
###Output
status: optimal
optimal value -0.3489231061667745
optimal var [ 7.72881910e-01 4.52165463e-01 -2.94472768e-01 -3.45121087e-12
8.10959783e-12 2.25227569e-12 3.98691707e-12 1.43401127e-02
5.08455937e-11 -1.36329821e-10 4.22511774e-12 8.90807297e-12
-2.16837167e-11 7.82449968e-11 -3.54859367e-12 1.28195462e-11
-4.10075409e-12 1.66532091e-02 2.04175804e-12 -6.29757368e-14
-1.01985866e-11 -3.36732664e-11 -3.40010453e-12 -4.02429238e-02
6.13643400e-11 -2.22784055e-11 -1.46346483e-12 -5.65635676e-13
-1.92671561e-11 1.23094858e-11 -2.46574874e-12 4.16244416e-11
1.53792215e-11 -8.54203591e-12 8.05739746e-02 1.28275506e-11
-2.96233234e-12 6.99195110e-12 2.55893246e-02 -3.44169794e-12
-9.07045573e-12 5.27104626e-11 -1.19968367e-11 1.36487864e-09
2.09062772e-12 7.19106408e-12 -7.01972501e-12 -9.08733160e-02
-1.96099709e-11 8.47393249e-13]
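As mentioned above, $\lambda$ would normally be tuned rather than fixed; a common CVXPY pattern is to declare it as a `cp.Parameter` and re-solve the same problem for several values. The sketch below reuses `X`, `Y` and `p` from the cell above, and the grid of values is an arbitrary illustration (in practice you would compare held-out performance, not just the objective value).
```python
# Sketch: re-solve the same problem for several regularisation weights via a Parameter.
beta_cv = cp.Variable(p)
lambd_param = cp.Parameter(nonneg=True)
log_lik_cv = cp.sum(cp.multiply(Y, X @ beta_cv) - cp.logistic(X @ beta_cv))
problem_cv = cp.Problem(cp.Maximize(log_lik_cv/p - lambd_param * cp.norm(beta_cv, 1)))

for lam in [0.01, 0.1, 1.0]:   # illustrative grid
    lambd_param.value = lam
    problem_cv.solve()
    print(lam, problem_cv.value)
```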
###Markdown
More powerful solver Using `Gurobi`Gurobi is a commerical solver that is capable of handling complex mathematical problems, including linear programming (LP), mixed-integer linear programming (MILP), quadratic programming (QP), mixed-integer quadratic programming (MIQP), Quadratically-constrained programming (QCP) and mixed-integer quadratically constrainted programming (MIQCP).Free academic license available on https://www.gurobi.com/. Please refer to the installation guie for more details on installation and the license.Let's solve the same problem (General Auto) with `gurobi` instead. Define model and input dataFirst, we should import the package `gurobipy`. Naming the module by abbreviation `grb` allows easy reference to the module and avoid confusion with other packages. You can also name it with any other abbreviations. Then we define a model `m` and name it `General Auto`.
###Code
import gurobipy as grb
# Define a model
m = grb.Model('General Auto')
# Input data
Comedy = {'women':7,'men':2}
Football = {'women':2,'men':12}
MinView = {'women':28,'men':24}
Cost = {'Comedy':50, 'Football':100}
###Output
_____no_output_____
###Markdown
Define variablesCreate an empty dictionary x and define the elements in it. Use the function 'addVar' to add variables to the model 'm'. As input arguments of the function, you should include the variable type 'vtype=' and the name of the variable 'name='.By default, the decision variables are non-negative. Since we are defining variables for the Gurobi solver, 'grb.GRB.' must be referenced.Let $x_{comedy}$ and $x_{football}$ be continuous variables which correspond to the number of one-minute comedy and football ads purchased respectively.You can define the objective function by including the coefficients while defining the decision variables. Python will sum the product of the decision variables and the coefficients as the objective function.
###Code
#Define the decision variables
x={}
for k,v in Cost.items():
x[k] = m.addVar(vtype = grb.GRB.CONTINUOUS, obj = v, name= 'x_{}'.format(k))
###Output
_____no_output_____
###Markdown
Define model senseDefine the optimisation sense using the attribute 'modelSense' on model 'm'. The sense of the model represents whether to maximise or to minimise the objective function. Since we are using Gurobi as the solver, 'grb.GRB.' must be referenced.We are trying to minimise the cost, thus we should minimise the objective function.
###Code
#Define model sense
m.modelSense = grb.GRB.MINIMIZE
###Output
_____no_output_____
###Markdown
**Alternatively**, the objective function can be set manually using the function 'setObjective'. Remember to include the sense of the objective (maximise or minimise) as an argument of the function. Once the function 'setObjective' is used, the coefficients that has been defined using 'addVar' will be ignored.
###Code
m.setObjective(50*x['Comedy']+100*x['Football'], grb.GRB.MINIMIZE)
###Output
_____no_output_____
###Markdown
Add constraintsUse the function 'addConstr' to add constraints to the model 'm'. The constraints can be typed explicitly with the decision variables defined earlier. Name the constraints for easy reference and to avoid confusion.
###Code
# constraints
for k,v in MinView.items():
m.addConstr(x['Comedy']*Comedy[k]+x['Football']*Football[k]>= v, name='MinView_{}'.format(k))
###Output
_____no_output_____
###Markdown
Solve the mathematical problemSolve the problem by optimising the model `m`
###Code
# Solve
m.optimize()
###Output
Optimize a model with 2 rows, 2 columns and 4 nonzeros
Coefficient statistics:
Matrix range [2e+00, 1e+01]
Objective range [5e+01, 1e+02]
Bounds range [0e+00, 0e+00]
RHS range [2e+01, 3e+01]
Presolve time: 0.01s
Presolved: 2 rows, 2 columns, 4 nonzeros
Iteration Objective Primal Inf. Dual Inf. Time
0 0.0000000e+00 6.500000e+00 0.000000e+00 0s
2 3.2000000e+02 0.000000e+00 0.000000e+00 0s
Solved in 2 iterations and 0.02 seconds
Optimal objective 3.200000000e+02
###Markdown
Display the solutionHere we use the syntax `.x` to obtain the current value of our variables and `.objVal` to obtain the optimal objective value of the model.
###Code
print ('----------------------------------')
for i in Cost:
    print ('{:.2f} ads on {} is purchased.'.format(x[i].x,i))
print ('----------------------------------')
print ('Total cost: ${:.2f}'.format(m.objVal))
###Output
----------------------------------
3.60 ads on Comedy is purchased.
1.40 ads on Football is purchased.
----------------------------------
Total cost: $320.00
###Markdown
Conclusion and interpretation: Total cost is \$320 with 1.4 1-min football ads and 3.6 1-min comedy ads purchased respectively. Hard Code in GurobiAlternatively, you can also hard code your mathematical problem in Gurobi. It may be simpler and faster for some problems but not necessarily scalable. Again, it all depends on your problem/application.
###Code
d = grb.Model('General Auto2')
x1 = d.addVar(vtype = grb.GRB.CONTINUOUS, obj = 50)
x2 = d.addVar(vtype = grb.GRB.CONTINUOUS, obj = 100)
d.modelSense = grb.GRB.MINIMIZE
d.addConstr(7*x1 + 2*x2 >= 28)
d.addConstr(2*x1 + 12*x2 >= 24)
d.optimize()
print ('----------------------------------')
print ('{:.2f} ads on Comedy is purchased.'.format(x1.x))
print ('{:.2f} ads on Football is purchased.'.format(x2.x))
print ('----------------------------------')
print ('Total cost: ${:.2f}'.format(d.objVal))
###Output
Optimize a model with 2 rows, 2 columns and 4 nonzeros
Coefficient statistics:
Matrix range [2e+00, 1e+01]
Objective range [5e+01, 1e+02]
Bounds range [0e+00, 0e+00]
RHS range [2e+01, 3e+01]
Presolve time: 0.02s
Presolved: 2 rows, 2 columns, 4 nonzeros
Iteration Objective Primal Inf. Dual Inf. Time
0 0.0000000e+00 6.500000e+00 0.000000e+00 0s
2 3.2000000e+02 0.000000e+00 0.000000e+00 0s
Solved in 2 iterations and 0.03 seconds
Optimal objective 3.200000000e+02
----------------------------------
3.60 ads on Comedy is purchased.
1.40 ads on Football is purchased.
----------------------------------
Total cost: $320.00
|
keras/mlp/binary_addition_mlp.ipynb | ###Markdown
###Code
# keras and tf
import tensorflow as tf
import keras
# models
from keras.models import Sequential
# layers
from keras.layers import Input, add, Conv2D, Flatten, Dense, Dropout, Activation, MaxPooling2D, LSTM
# optimizers
from keras.optimizers import SGD
# numpy
import numpy as np
def pad_and_seperate_binaries(binaries, pad_number = 8):
for i in range(len(binaries)):
binaries[i] = binaries[i].replace("0b","")
for i in range(len(binaries)):
if(len(binaries[i]) < pad_number):
binaries[i] = "0"*(pad_number-len(binaries[i])) + binaries[i]
for i in range(len(binaries)):
binaries[i] = list(map(int,list(binaries[i])))
return binaries
binaries = []
sums = []
for b in range(256):
binaries.append(bin(b))
for i in binaries:
for j in binaries:
sums.append(bin(int(i, 2) + int(j, 2)))
binaries = pad_and_seperate_binaries(binaries)
sums = pad_and_seperate_binaries(sums, pad_number=9)
x = []
for i in binaries:
for j in binaries:
x.append(i+j)
x = np.array(x)
y = np.array(sums)
model = Sequential()
model.add(Dense(64, activation='relu', input_dim=16))
model.add(Dense(64, activation='relu'))
model.add(Dense(32, activation='relu'))
model.add(Dense(16, activation='relu'))
model.add(Dense(9, activation='softmax'))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(loss = "mse", optimizer = opt, metrics=['accuracy'])
model.fit(x,y,epochs=50)
###Output
_____no_output_____ |
Computer Vision/Image_segmentation.ipynb | ###Markdown
Image segmentation using K-means Clustering This notebook demonstrates the process of applying segmentation to an image using K-means clustering. Image segmentationImage segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as image objects).The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. K-means ClusteringK-means clustering is a method that clusters data points or vectors around the nearest mean point (here K is the number of means, or clusters).This results in a partitioning of the data space into Voronoi cells. Algorithm in pseudocode (a minimal NumPy sketch of this loop is given below; the notebook itself uses OpenCV's built-in `cv2.kmeans`):- Initialize k means with random values- For a given number of iterations: - Iterate through items: - Find the mean closest to the item - Assign item to mean - Update mean
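A minimal sketch of that loop, with illustrative random 2-D data:
```python
import numpy as np

def kmeans_numpy(points, k, n_iters=10, seed=0):
    """Minimal sketch of the pseudocode above (illustrative, not optimized)."""
    rng = np.random.default_rng(seed)
    # Initialize k means by picking k distinct random points
    means = points[rng.choice(len(points), size=k, replace=False)].astype(float)
    for _ in range(n_iters):
        # Find the mean closest to each item and assign the item to it
        distances = np.linalg.norm(points[:, None, :] - means[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Update each mean as the average of its assigned items
        for j in range(k):
            if np.any(labels == j):
                means[j] = points[labels == j].mean(axis=0)
    return means, labels

# Illustrative usage on random 2-D points
pts = np.random.rand(100, 2)
centers, assignments = kmeans_numpy(pts, k=2)
```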
###Code
# Loading required libraries
import numpy as np
import matplotlib.pyplot as plt
import cv2
###Output
_____no_output_____
###Markdown
cv2.imreadUse the function cv2.imread() to read an image. argument 1 : The image should be in the working directory or a full path to the image should be given.argument 2 : A flag which specifies the way the image should be read. - cv2.IMREAD_COLOR : Loads a color image. Any transparency of the image will be neglected. It is the default flag. - cv2.IMREAD_GRAYSCALE : Loads the image in grayscale mode - cv2.IMREAD_UNCHANGED : Loads the image as such, including the alpha channel Note: Instead of these three flags, you can simply pass the integers 1, 0 or -1 respectively.
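For example (reusing the image path from the next cell; illustrative only):
```python
# Illustration only: the same file read in grayscale mode (flag or its integer shorthand).
gray_img = cv2.imread('images/lamborghini.jpg', cv2.IMREAD_GRAYSCALE)
# equivalent: cv2.imread('images/lamborghini.jpg', 0)
```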
###Code
image = cv2.imread('images/lamborghini.jpg')
###Output
_____no_output_____
###Markdown
cv2.imshowUse this function to display an image in a window. The window automatically fits to the image size.argument 1 : window name which is a string.argument 2 : image
###Code
cv2.imshow('image',image)
#argument is the time in milliseconds if 0 it waits indefinitely for a key stroke
cv2.waitKey(0)
#cv2.destroyAllWindows() # It distroy all window
cv2.destroyWindow('image') # Distroying the initiated window
###Output
_____no_output_____
###Markdown
cv2.cvtColorIt is used to convert an image from one color space to another. There are more than 150 color-space conversion methods available in OpenCV Parameters:- src : It is the image whose color space is to be changed.- code : It is the color space conversion code.- dst : It is the output image of the same size and depth as src image. It is an optional parameter.- dstCn : It is the number of channels in the destination image. If the parameter is 0 then the number of the channels is derived automatically from src and code. It is an optional parameter. Return Value:- It returns an image
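A small illustration, using the BGR image loaded above (the grayscale conversion is not used in the rest of this notebook):
```python
# Illustration only: convert the BGR image loaded above to single-channel grayscale.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
print(gray.shape)  # (height, width), one channel
```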
###Code
# Change color to RGB (from BGR)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image) # https://matplotlib.org/3.3.2/api/_as_gen/matplotlib.pyplot.imshow.html
# Reshaping the image into a 2D array of pixels and 3 color values (RGB)
pixel_vals = image.reshape((-1,3)) # numpy reshape operation -1 unspecified
# Convert to float type only for supporting cv2.kmean
pixel_vals = np.float32(pixel_vals)
###Output
_____no_output_____
###Markdown
cv2.kmeans Parameters:- samples : It should be of np.float32 data type, and each feature should be put in a single column.- nclusters(K) : Number of clusters required at end- criteria : It is the iteration termination criteria. When this criteria is satisfied, algorithm iteration stops. Actually, it should be a tuple of 3 parameters. They are `( type, max_iter, epsilon )`: - type of termination criteria. It has 3 flags as below: - cv.TERM_CRITERIA_EPS - stop the algorithm iteration if specified accuracy, epsilon, is reached. - cv.TERM_CRITERIA_MAX_ITER - stop the algorithm after the specified number of iterations, max_iter. - cv.TERM_CRITERIA_EPS + cv.TERM_CRITERIA_MAX_ITER - stop the iteration when any of the above condition is met. - max_iter - An integer specifying maximum number of iterations.epsilon - Required accuracy- attempts : Flag to specify the number of times the algorithm is executed using different initial labellings. The algorithm returns the labels that yield the best compactness. This compactness is returned as output.- flags : This flag is used to specify how initial centers are taken. Normally two flags are used for this : cv.KMEANS_PP_CENTERS and cv.KMEANS_RANDOM_CENTERS. Return Value:- compactness : It is the sum of squared distance from each point to their corresponding centers.- labels : This is the label array (same as 'code' in previous article) where each element marked '0', '1'.....- centers : This is array of centers of clusters.
###Code
#criteria
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 100, 0.85)
# Choosing number of cluster
k = 5
retval, labels, centers = cv2.kmeans(pixel_vals, k, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)
# convert data into 8-bit values
centers = np.uint8(centers)
segmented_data = centers[labels.flatten()] # Mapping labels to center points( RGB Value)
# reshape data into the original image dimensions
segmented_image = segmented_data.reshape((image.shape))
plt.imshow(segmented_image)
###Output
_____no_output_____ |
End-to-end Stegnography-Working-RandomSamplinginBatch-2.ipynb | ###Markdown
Paper Implementation END-TO-END TRAINED CNN ENCODER-DECODER NETWORKS FOR IMAGE STEGANOGRAPHY - Atique et al. Tensorflow 2.0 Notebook Author: Saad Zia
###Code
from IPython import display
import numpy as np
import tensorflow as tf
import pickle
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
%load_ext autoreload
%autoreload 2
import tensorflow as tf
# For process to not allocate entire GPU memory
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
tf.config.experimental.set_memory_growth(physical_devices[0], True)
###Output
_____no_output_____
###Markdown
Setting up Data Pipeline
###Code
(x, y), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
x = x.astype(np.float32)
x_test = x_test.astype(np.float32)
###Output
_____no_output_____
###Markdown
Setting up tf.keras Model
###Code
from tensorflow.keras import Model
from tensorflow.keras.layers import Input
tf.keras.backend.set_floatx('float32')
tf.keras.backend.floatx()
from model_skip_branched.encoder import EncoderNetwork
from model_skip_branched.decoder import DecoderNetwork
carrier_image_shape = (32, 32, 3)
payload_image_shape = (32, 32, 3)
encoder_network = EncoderNetwork(carrier_shape=carrier_image_shape,
payload_shape=payload_image_shape)
decoder_network = DecoderNetwork(target_image_shape=payload_image_shape)
input_carrier = Input(shape=carrier_image_shape, name='input_carrier')
input_payload = Input(shape=payload_image_shape, name='input_payload')
encoded_output = encoder_network.get_network(input_carrier, input_payload)
decoded_output, decoded_host_output = decoder_network.get_network(encoded_output)
steganography_model = Model(inputs=[input_carrier, input_payload],
outputs=[encoded_output, decoded_output, decoded_host_output])
from tensorflow.keras.utils import plot_model
plot_model(steganography_model, show_shapes=True)
steganography_model.summary()
# Defining Loss Function
@tf.function
def branched_loss_function(payload, host, encoder_output, decoder_output,
decoder_host_output, alpha=1., beta=2., gamma=1.):
loss = tf.math.reduce_mean(
beta * tf.math.squared_difference(payload, decoder_output) +
alpha * tf.math.squared_difference(host, encoder_output) +
gamma * tf.math.squared_difference(host, decoder_host_output))
return loss
@tf.function
def loss_function(payload, host, encoder_output, decoder_output):
loss = tf.math.reduce_mean(
tf.math.squared_difference(payload, decoder_output) +
tf.math.squared_difference(host, encoder_output))
return loss
optimizer = tf.keras.optimizers.Adam(0.0001)
test_loss = tf.keras.metrics.Mean(name='test_loss')
train_loss = tf.keras.metrics.Mean(name='train_loss')
@tf.function
def train_step(payload, host):
with tf.GradientTape() as tape:
encoded_host, decoded_payload, decoded_host = steganography_model([host, payload])
loss = branched_loss_function(payload, host, encoded_host, decoded_payload, decoded_host)
train_loss(loss)
gradients = tape.gradient(loss, steganography_model.trainable_variables)
optimizer.apply_gradients(
zip(gradients, steganography_model.trainable_variables))
train_host_psnr = tf.reduce_mean(tf.image.psnr(host, encoded_host, 1))
train_payload_psnr = tf.reduce_mean(
tf.image.psnr(payload, decoded_payload, 1))
train_host_ssim = tf.reduce_mean(tf.image.ssim(host, encoded_host, 1))
train_payload_ssim = tf.reduce_mean(
tf.image.ssim(payload, decoded_payload, 1))
return train_host_psnr, train_payload_psnr, train_host_ssim, train_payload_ssim
@tf.function
def test_step(payload, host):
encoded_host, decoded_payload, decoded_host = steganography_model([host, payload])
t_loss = branched_loss_function(payload, host, encoded_host, decoded_payload, decoded_host)
test_loss(t_loss)
test_host_psnr = tf.reduce_mean(tf.image.psnr(host, encoded_host, 1))
test_payload_psnr = tf.reduce_mean(
tf.image.psnr(payload, decoded_payload, 1))
test_host_ssim = tf.reduce_mean(tf.image.ssim(host, encoded_host, 1))
test_payload_ssim = tf.reduce_mean(
tf.image.ssim(payload, decoded_payload, 1))
return test_host_psnr, test_payload_psnr, test_host_ssim, test_payload_ssim
import time
EPOCHS = 500
SUMMARY_DIR = './summary'
for epoch in range(EPOCHS):
start = time.time()
# for when payload is rgb
train_size = x.shape[0]
test_size = x_test.shape[0]
payload_train_idx = np.arange(train_size)
np.random.shuffle(payload_train_idx)
payload_train = x[payload_train_idx]
if payload_image_shape[-1] == 1:
# for when payload is grayscale
payload_train = np.expand_dims(np.mean(payload_train, axis=-1),
axis=-1)
host_train_idx = np.arange(train_size)
np.random.shuffle(host_train_idx)
host_train = x[host_train_idx]
payload_test_idx = np.arange(test_size)
np.random.shuffle(payload_test_idx)
payload_test = x_test[payload_test_idx]
if payload_image_shape[-1] == 1:
# for when payload is grayscale
payload_test = np.expand_dims(np.mean(payload_test, axis=-1), axis=-1)
host_test_idx = np.arange(test_size)
np.random.shuffle(host_test_idx)
host_test = x_test[host_test_idx]
# Normalization function
def deprecated_normalize(payload, host):
payload = tf.image.per_image_standardization(payload)
host = tf.image.per_image_standardization(host)
return payload, host
def normalize(payload, host):
payload = tf.divide(
tf.math.subtract(payload, tf.reduce_min(payload)),
tf.math.subtract(tf.reduce_max(payload), tf.reduce_min(payload)))
host = tf.divide(
tf.math.subtract(host, tf.reduce_min(host)),
tf.math.subtract(tf.reduce_max(host), tf.reduce_min(host)))
return payload, host
# Instantiate the Dataset class
train_dataset = tf.data.Dataset.from_tensor_slices(
(payload_train, host_train))
# Adding shuffle, normalization and batching operations to the dataset object
train_dataset = train_dataset.map(normalize).shuffle(5000).batch(
2048, drop_remainder=True)
# Instantiate the test Dataset class
test_dataset = tf.data.Dataset.from_tensor_slices(
(payload_test, host_test))
test_dataset = (test_dataset.map(normalize).batch(
1024, drop_remainder=True)).shuffle(500)
for payload, host in train_dataset:
train_host_psnr, train_payload_psnr, train_host_ssim, train_payload_ssim = train_step(
payload, host)
for payload, host in test_dataset:
test_host_psnr, test_payload_psnr, test_host_ssim, test_payload_ssim = test_step(
payload, host)
elapsed = time.time() - start
print('elapsed: %f' % elapsed)
template = 'Epoch {}, Train Loss: {}, Test Loss: {}, TrainH PSNR: {}, TrainP PSNR: {}, TestH PSNR: {}, TestP PSNR: {}, TrainH SSIM: {}, TrainP SSIM: {}, TestH SSIM: {}, TestP SSIM: {}'
print(
template.format(epoch + 1, train_loss.result(), test_loss.result(),
train_host_psnr, train_payload_psnr, test_host_psnr,
test_payload_psnr, train_host_ssim, train_payload_ssim,
test_host_ssim, test_payload_ssim))
# Reset the metrics for the next epoch
test_loss.reset_states()
print('Training Finished.')
payload_test = x_test[np.random.choice(np.arange(x_test.shape[0]),
size=x_test.shape[0])]
if payload_image_shape[-1] == 1:
# for when payload is grayscale
payload_test = np.expand_dims(np.mean(payload_test, axis=-1), axis=-1)
host_test = x_test[np.random.choice(np.arange(x_test.shape[0]),
size=x_test.shape[0])]
# Instantiate the test Dataset class
test_dataset = tf.data.Dataset.from_tensor_slices((payload_test, host_test))
test_dataset = (test_dataset.map(normalize).batch(
256, drop_remainder=True)).shuffle(500)
for payload, host in test_dataset:
test_host_psnr, test_payload_psnr, test_host_ssim, test_payload_ssim = test_step(
payload, host)
print("Test Loss: ", test_loss.result(), "PSNR-H: ", test_host_psnr,
"PSNR-P: ", test_payload_psnr)
def test_normalize(payload, host):
normalized_payload = tf.divide(
tf.math.subtract(payload, tf.reduce_min(payload)),
tf.math.subtract(tf.reduce_max(payload), tf.reduce_min(payload)))
normalized_host = tf.divide(
tf.math.subtract(host, tf.reduce_min(host)),
tf.math.subtract(tf.reduce_max(host), tf.reduce_min(host)))
return normalized_payload, normalized_host, payload, host
###Output
_____no_output_____
###Markdown
Inference
###Code
def normed_to_image(normed_image, min_, max_):
return (normed_image * (max_ - min_)) + min_
example_ids = np.arange(len(host_test))[:100]
example_id = np.random.choice(example_ids)
# showing host
fig, axs = plt.subplots(ncols=2)
host_example = host_test.astype(int)[example_id]
payload_example = payload_test.astype(int)[example_id]
# payload_example = np.concatenate(
# (payload_example, payload_example, payload_example), axis=-1)
axs[0].imshow(host_example)
axs[1].imshow(payload_example, cmap='gray')
# showing host
fig, axs = plt.subplots(ncols=3)
inference_dataset = tf.data.Dataset.from_tensor_slices(
(payload_test[example_ids],
host_test[example_ids])).map(test_normalize).batch(len(example_ids))
for norm_payload, norm_host, payload, host in inference_dataset:
encoded_host, decoded_payload, decoded_host = steganography_model(
[norm_host, norm_payload])
host_output = encoded_host.numpy()
payload_output = decoded_payload.numpy()
decoded_host_output = decoded_host.numpy()
host_output = normed_to_image(host_output, np.min(host), np.max(host))
payload_output = normed_to_image(payload_output, np.min(payload),
np.max(payload))
decoded_host_output = normed_to_image(decoded_host_output, np.min(host),
np.max(host))
host_output = host_output.astype(int)[example_id]
payload_output = payload_output.astype(int)[example_id]
decoded_host_output = decoded_host_output.astype(int)[example_id]
# payload_output = np.concatenate(
# (payload_output, payload_output, payload_output), axis=-1)
axs[0].imshow(host_output)
axs[1].imshow(payload_output)
axs[2].imshow(decoded_host_output)
###Output
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
Clipping input data to the valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers).
|
experiments/20210518_matching_grid.ipynb | ###Markdown
Matching Grid ExperimentHere the grids used for sampling test points in the human experiment were exactly replicated for the machine. The number of sampled points for both human and machine is expected to be identical after the matching procedure.
###Code
# changing cwd
%cd ..
###Output
c:\Users\jongm\Desktop\temp_workspace\JOVO\inductive-bias
###Markdown
Load packages
###Code
from src.inductive_bias import IB
ib = IB() #instantiate inductive bias package
ib.load_sampledData()
###Output
[ c:\Users\jongm\Desktop\temp_workspace\JOVO\inductive-bias\clf/SampledData.pickle ] loaded
###Markdown
Time and Date of the experiment
###Code
print(ib.date)
###Output
2021-11-02 12:31:08.000026
###Markdown
Load dependencies
###Code
import seaborn as sns
import pandas as pd
import numpy as np
import pickle
from tqdm.notebook import tqdm
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from matplotlib.patches import Circle
cmap = 'PRGn'
import warnings
warnings.filterwarnings('ignore')
#image output
import os
import imageio
###Output
_____no_output_____
###Markdown
Extract human coordinates
###Code
ib.extract_human_coord()
ib.humanLoc[0][0].head()
###Output
_____no_output_____
###Markdown
Retraining ML models and predicting exactly the same xy coordinates from human experiment
###Code
ib.mask = ib.generate_mask(h=0.1)
ib.mask.shape
#generic global
uX = ib.mask
uX0, uX1 = uX[:,0], uX[:,1]
label = ib.mtype[:3] + ['Human'] #excluding QDA
dtype = ib.dtype[2:5:2]
fsize = 25
#retrain global
reps = 126
saved_clf = ib.clf #using already optimized hyper-parameter from previous models
N_sample = 100 #same number of samples that of human
h = 0.1
rng = 3
#train
TRAIN_NOW = False
#figure
SAVEFIG = True
# run only for the first time
if TRAIN_NOW:
ib.get_sampledData(saved_clf=saved_clf, reps=reps, N_sample=N_sample, h=0.1, rng=3)
###Output
_____no_output_____
###Markdown
The grid matching yields equal number of points between human and machine
###Code
# ML grid vs human grid
ib.estpst_sample[0][0].shape, ib.human[0].shape, ib.estpst_sample[1][0].shape, ib.human[1].shape
###Output
_____no_output_____
###Markdown
The grid size matches after point-wise averaging
###Code
ib.pointwise_gridAverage(ib.estpst_sample[0][0]).shape, ib.pointwise_gridAverage(np.column_stack([ib.human[0][:,3], ib.human[0][:,5], ib.human[0][:,0]])).shape
ib.pointwise_gridAverage(ib.estpst_sample[1][0]).shape, ib.pointwise_gridAverage(np.column_stack([ib.human[1][:,3], ib.human[1][:,5], ib.human[1][:,0]])).shape
###Output
_____no_output_____
###Markdown
Point-wise averaging and gaussian smoothing estimated posterior
###Code
mtype = []
ib.mask = ib.generate_mask(h=0.1)
for ii in range(2): #S-XOR and spiral
mtype.append([])
for jj in range(4):
if jj == 3:
mtype_i = np.column_stack([ib.human[ii][:,3], ib.human[ii][:,5], ib.human[ii][:,0]]) # human estimates
else:
mtype_i = ib.estpst_sample[ii][jj] # ML estimates
mtype_i = ib.pointwise_gridAverage(mtype_i).to_numpy()
xy, original, down, alls = ib.smooth_gaussian_distance(mtype_i, step=0.01, method=None, sigma=1, k=10)
mtype[ii].append(alls)
fig, axs = plt.subplots(1,4,figsize = (5*4,5))
for i in range(4):
mlp = axs[i].scatter(ib.mask[:,0], ib.mask[:,1], c=mtype[0][i], cmap=cmap, vmin=0, vmax=1)
axs[i].set_title(label[i], fontsize=18)
axs[i].set_xlim([-3,3])
axs[i].set_ylim([-3,3])
if SAVEFIG:
plt.savefig(f'figs/[20210518_matching_grid]_model_display_spiral_{str(ib.date.date())}.jpg', bbox_inches='tight')
fig, axs = plt.subplots(1,4,figsize = (5*4,5))
for i in range(4):
axs[i].scatter(ib.mask[:,0], ib.mask[:,1], c=mtype[1][i], cmap=cmap, vmin=0, vmax=1)
axs[i].set_title(label[i], fontsize=18)
axs[i].set_xlim([-3,3])
axs[i].set_ylim([-3,3])
if SAVEFIG:
plt.savefig(f'figs/[20210518_matching_grid]_model_display_sxor_{str(ib.date.date())}.jpg', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Generate compiled plot and output images
###Code
angle_step = 12
new_dtype = ['Original'] + dtype
fdtype = ['spiral', 'sxor'] #dtype defined for filename
palette = sns.color_palette('bright', len(ib.mtype))
col = 6
row = 2
step = 0.2
r = 4
x_range = np.arange(-r,r,step)
for sw, lb in enumerate(fdtype):
for deg in np.arange(0, 180+angle_step, angle_step):
line_plot = []
line_post = []
fig, axs = plt.subplots(row,col,figsize = (5*(col+2),5*row))
# camera = Camera(fig)
gs = axs[0, 4].get_gridspec()
# remove the underlying axes
for ax in axs[:, -1]:
ax.remove()
for ax in axs[:, -2]:
ax.remove()
for i in range(2):
line_plot.append([])
line_post.append([])
for j in range(4):
line_plot[i].append([])
line_post[i].append(ib.select_linear_region(mtype[i][j], degree=deg, step=0.001))
lp, li = line_post[i][j] #line posterior, line index
x = ib.mask[li][:,0]
y = ib.mask[li][:,1]
dist = np.sqrt(x**2 + y**2)
dist[y < 0] *= -1 # negative radial distance wrt y-coordinate
for rad in x_range:
line_plot[i][j].append(np.array(lp[(dist >= rad) * (dist < rad+step)]).mean())
for m in range(2):
for i in range(4):
if m == 0:
axs[m,i].scatter(ib.mask[:,0], ib.mask[:,1], c=mtype[sw][i], cmap=cmap, vmin=0, vmax=1) #switch here
else:
ibXY = ib.mask[line_post[m-1][i][1]]
axs[m,i].scatter(ibXY[:,0], ibXY[:,1], c=line_post[sw][i][0], cmap=cmap, vmin=0, vmax=1)
# figure styling
if m == 0:
axs[m,i].set_title(label[i], fontsize=fsize)
if i == 0:
axs[m,i].set_ylabel(new_dtype[m], fontsize=fsize)
axs[m,i].set_xlim(-3,3)
axs[m,i].set_ylim(-3,3)
# axs[m,i].set_xticks([])
# axs[m,i].set_yticks([])
axbig = fig.add_subplot(gs[:, -2:])
tempdf = pd.DataFrame(np.array(line_plot[sw]).T, columns=label) #switch here
tempdf.index = x_range
for i, lab in enumerate(label):
sns.lineplot(data=tempdf[[lab]], ax=axbig, palette=[palette[i]])
axbig.legend(loc=1, fontsize='xx-large')
axbig.set_title(['Spiral', 'S-XOR'][sw], fontsize=fsize)
axbig.set_ylabel('mean posterior', fontsize=fsize)
axbig.set_xlabel('distance from origin', fontsize=fsize)
axbig.set_ylim([-0.1,1.1])
axbig.set_xlim(-3,3)
# camera.snap()
# plt.pause(0.5)
if SAVEFIG:
plt.savefig(f'figs/[20210518_matching_grid]_fullplot_{lb}_{deg}deg_{str(ib.date.date())}.jpg', bbox_inches='tight')
else:
plt.show()
plt.clf();
# animation = camera.animate()
# animation.save(f'figs/[20210512_line_analysis]_posterior_animation_{str(ib.date.date())}.gif', bitrate=10000 )#writer ='imagemagick', dpi=600) fps=10
###Output
_____no_output_____
###Markdown
Construct gif using image files
###Code
imgpath = 'figs'
imglist = os.listdir(imgpath)
filtered = [i for i in imglist if '[20210518_matching_grid]' in i]
filtered = [i for i in filtered if 'jpg' in i]
filtered_spiral = [i for i in filtered if 'spiral' in i]
filtered_sxor = [i for i in filtered if 'sxor' in i]
temp = [[],[]]
for i, lists in enumerate([filtered_spiral, filtered_sxor]):
for filename in lists:
if i == 0:
s = filename.rfind('iral_')
else:
s = filename.rfind('sxor_')
e = filename.rfind('deg_')
temp[i].append(filename[s+5:e])
idx = np.argsort(np.array(temp[0]).astype(int))
filtered_spiral = np.array(filtered_spiral)[idx].tolist()
idx = np.argsort(np.array(temp[1]).astype(int))
filtered_sxor = np.array(filtered_sxor)[idx].tolist()
filtered_spiral
filtered_sxor
if SAVEFIG:
for fidx, filenames in enumerate([filtered_spiral, filtered_sxor]):
images = []
for filename in filenames:
images.append(imageio.imread(imgpath + '/' + filename))
imgname = f'figs/[20210518_matching_grid]_fullplot_animated_{fdtype[fidx]}_{str(ib.date.date())}.gif'
imageio.mimsave(imgname, images, fps=2)
###Output
_____no_output_____ |
community/awards/teach_me_quantum_2018/qml_mooc/06_Adiabatic Quantum Computing.ipynb | ###Markdown
When we talk about quantum computing, we actually talk about several different paradigms. The most common one is gate-model quantum computing, in the vein we discussed in the previous notebook. In this case, gates are applied on qubit registers to perform arbitrary transformations of quantum states made up of qubits.The second most common paradigm is quantum annealing. This paradigm is often also referred to as adiabatic quantum computing, although there are subtle differences. Quantum annealing solves a more specific problem -- universality is not a requirement -- which makes it an easier, albeit still difficult engineering challenge to scale it up. The technology is up to 2000 superconducting qubits in 2018, compared to the less than 100 qubits on gate-model quantum computers. D-Wave Systems has been building superconducting quantum annealers for over a decade and this company holds the record for the number of qubits -- 2048. More recently, an IARPA project was launched to build novel superconducting quantum annealers. A quantum optics implementation was also made available by QNNcloud that implements a coherent Ising model. Its restrictions are different from superconducting architectures.Gate-model quantum computing is conceptually easier to understand: it is the generalization of digital computing. Instead of deterministic logical operations of bit strings, we have deterministic transformations of (quantum) probability distributions over bit strings. Quantum annealing requires some understanding of physics, which is why we introduced classical and quantum many-body physics in a previous notebook. Over the last few years, quantum annealing inspired gate-model algorithms that work on current and near-term quantum computers (see the notebook on variational circuits). So in this sense, it is worth developing an understanding of the underlying physics model and how quantum annealing works, even if you are only interested in gate-model quantum computing.While there is a plethora of quantum computing languages, frameworks, and libraries for the gate-model, quantum annealing is less well-established. D-Wave Systems offers an open source suite called Ocean. A vendor-independent solution is XACC, an extensible compilation framework for hybrid quantum-classical computing architectures, but the only quantum annealer it maps to is that of D-Wave Systems. Since XACC is a much larger initiative that extends beyond annealing, we choose a few much simpler packages from Ocean to illustrate the core concepts of this paradigm. However, before diving into the details of quantum annealing, it is worth taking a slight detour to connect the unitary evolution we discussed in a closed system and in the gate-model paradigm and the Hamiltonian describing a quantum many-body system. We also briefly discuss the adiabatic theorem, which provides the foundation why quantum annealing would work at all. Unitary evolution and the HamiltonianWe introduced the Hamiltonian as an object describing the energy of a classical or quantum system. Something more is true: it gives a description of a system evolving with time. This formalism is expressed by the Schrödinger equation:$$\imath\hbar {\frac {d}{dt}}|\psi(t)\rangle = H|\psi(t)\rangle,$$where $\hbar$ is the reduced Planck constant. Previously we said that it is a unitary operator that evolves state. That is exactly what we get if we solve the Schrödinger equation for some time $t$: $U = \exp(-\imath Ht/\hbar)$. 
Note that we used that the Hamiltonian does not depend on time. In other words, every unitary we talked about so far has some underlying Hamiltonian.

The Schrödinger equation in the above form is the time-dependent variant: the state depends on time. The time-independent Schrödinger equation reflects what we said about the Hamiltonian describing the energy of the system:

$$ H|\psi \rangle =E|\psi \rangle,$$

where $E$ is the total energy of the system.

The adiabatic theorem and adiabatic quantum computing

An adiabatic process is one in which conditions change slowly enough for the system to adapt to the new configuration. For instance, in a quantum mechanical system, we can start from some Hamiltonian $H_0$ and slowly change it to some other Hamiltonian $H_1$. The simplest change could be a linear schedule:

$$H(t) = (1-t) H_0 + t H_1,$$

for $t\in[0,1]$ on some time scale. This Hamiltonian depends on time, so solving the Schrödinger equation is considerably more complicated. The adiabatic theorem says that if the change in the time-dependent Hamiltonian occurs slowly enough, the resulting dynamics remain simple: starting close to an eigenstate, the system remains close to an eigenstate. In particular, if the system starts in the ground state and certain conditions are met, it stays in the ground state. We call the energy difference between the ground state and the first excited state the gap. If $H(t)$ has a nonzero gap for each $t$ during the transition and the change happens slowly enough, then the system stays in the ground state. If we denote the time-dependent gap by $\Delta(t)$, a coarse approximation of the required evolution time (the "speed limit") scales as $1/\min(\Delta(t))^2$.

This theorem allows something highly unusual. We can prepare the ground state of an easy-to-solve quantum many-body system and slowly change the Hamiltonian into one whose ground state we are interested in. For instance, we could start with the Hamiltonian $-\sum_i \sigma^X_i$ -- its ground state is just the equal superposition. Let's see this on two sites:
###Code
import numpy as np
np.set_printoptions(precision=3, suppress=True)
X = np.array([[0, 1], [1, 0]])   # Pauli-X matrix
IX = np.kron(np.eye(2), X)       # X acting on the second site
XI = np.kron(X, np.eye(2))       # X acting on the first site
H_0 = - (IX + XI)                # H_0 = -(sigma^X_1 + sigma^X_2)
λ, v = np.linalg.eigh(H_0)
print("Eigenvalues:", λ)
print("Eigenstate for lowest eigenvalue", v[:, 0])
###Output
Eigenvalues: [-2. -0. 0. 2.]
Eigenstate for lowest eigenvalue [-0.5 -0.5 -0.5 -0.5]
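###Markdown
As a small illustrative addition (the target Hamiltonian below is an arbitrary choice, not part of the original material), we can track the gap $\Delta$ along a linear schedule from $H_0$ above to a two-site classical Ising Hamiltonian; a small longitudinal field keeps the final ground state nondegenerate:
###Code
# H_0 is the Hamiltonian constructed in the previous cell
Z = np.diag([1, -1])
H_1 = -np.kron(Z, Z) - 0.5 * np.kron(Z, np.eye(2))    # classical Ising Hamiltonian on two sites
for s in np.linspace(0, 1, 11):
    λ = np.linalg.eigvalsh((1 - s) * H_0 + s * H_1)   # eigenvalues in ascending order
    print("s = %.1f, gap = %.3f" % (s, λ[1] - λ[0]))
###Output
_____no_output_____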
###Markdown
Then we could turn this Hamiltonian slowly into a classical Ising model and read out the global solution.

Adiabatic quantum computation exploits this phenomenon and it is able to perform universal calculations with the final Hamiltonian being $H=-\sum_{\langle i,j\rangle} J_{ij} \sigma^Z_i \sigma^Z_{j} - \sum_i h_i \sigma^Z_i - \sum_{\langle i,j\rangle} g_{ij} \sigma^X_i\sigma^X_j$. Note that this is not the transverse-field Ising model: the last term is an X-X interaction. If a quantum computer respects the speed limit, maintains a finite gap, and implements this Hamiltonian, then it is equivalent to the gate model with some overhead.

The quadratic scaling in the inverse gap does not appear too bad. So can we solve NP-hard problems faster with this paradigm? It is unlikely. The gap is highly problem dependent, and difficult problems tend to have an exponentially small gap, so the speed limit -- quadratic in the inverse of an exponentially small quantity -- makes the overall time required exponentially large.

Quantum annealing

A theoretical obstacle to adiabatic quantum computing is that calculating the speed limit is clearly not trivial; in fact, it is harder than solving the original problem of finding the ground state of some Hamiltonian of interest. Engineering constraints also apply: the qubits decohere, the environment has finite temperature, and so on. *Quantum annealing* drops the strict requirements: instead of respecting the speed limit, it repeats the transition (the annealing) over and over again. Having collected a number of samples, we pick the spin configuration with the lowest energy as our solution. There is no guarantee that this is the ground state.

Quantum annealing has a slightly different software stack than gate-model quantum computers. Instead of a quantum circuit, the level of abstraction is the classical Ising model -- the problem we are interested in solving must be cast in this form. Then, just like superconducting gate-model quantum computers, superconducting quantum annealers also suffer from limited connectivity. In this case, it means that if our problem's connectivity does not match that of the hardware, we have to find a graph minor embedding. This will combine several physical qubits into a logical qubit. The workflow is summarized in the following diagram [[1](1)]:

A possible classical solver for the Ising model is the simulated annealer that we have seen before:
###Code
import dimod
J = {(0, 1): 1.0, (1, 2): -1.0}   # couplings of a three-site Ising chain
h = {0: 0, 1: 0, 2: 0}            # no external field
model = dimod.BinaryQuadraticModel(h, J, 0.0, dimod.SPIN)
sampler = dimod.SimulatedAnnealingSampler()
response = sampler.sample(model, num_reads=10)
print("Energy of samples:")
print([solution.energy for solution in response.data()])
###Output
Energy of samples:
[-2.0, -2.0, -2.0, -2.0, -2.0, -2.0, -2.0, -2.0, -2.0, -2.0]
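###Markdown
To see which spin configuration attains this energy, we can pick out the best sample by hand -- a small addition that relies only on the `data()` iterator already used above:
###Code
# Pick the lowest-energy sample and inspect its spin configuration
best = min(response.data(), key=lambda s: s.energy)
print(best.sample, best.energy)
###Output
_____no_output_____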
###Markdown
Let's take a look at the minor embedding problem. This part is NP-hard in itself, so we normally use probabilistic heuristics to find an embedding. For instance, many generations of the quantum annealer that D-Wave Systems produces have unit cells containing a $K_{4,4}$ fully connected bipartite graph, with two remote connections from each qubit going to qubits in neighbouring unit cells. A unit cell with its local and remote connections indicated is depicted in the following figure:

This connectivity pattern is called the Chimera graph. The current largest hardware has 2048 qubits, consisting of $16\times 16$ unit cells of 8 qubits each. The Chimera graph is available as a `networkx` graph in the package `dwave_networkx`. We draw a smaller version, consisting of $2\times 2$ unit cells.
###Code
import matplotlib.pyplot as plt
import dwave_networkx as dnx
%matplotlib inline
connectivity_structure = dnx.chimera_graph(2, 2)
dnx.draw_chimera(connectivity_structure)
plt.show()
###Output
_____no_output_____
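###Markdown
As a quick sanity check (an addition), the fragment drawn above indeed contains $4\times 8 = 32$ physical qubits:
###Code
# The 2x2 fragment should contain 4 unit cells of 8 qubits each
print(connectivity_structure.number_of_nodes(), "qubits,",
      connectivity_structure.number_of_edges(), "couplers")
###Output
_____no_output_____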
###Markdown
Let's create a graph that certainly does not fit this connectivity structure. For instance, the complete graph $K_9$ on nine nodes:
###Code
import networkx as nx
G = nx.complete_graph(9)
plt.axis('off')
nx.draw_networkx(G, with_labels=False)
import minorminer
# Probabilistic heuristic search for a minor embedding of K_9 into the Chimera fragment
embedded_graph = minorminer.find_embedding(G.edges(), connectivity_structure.edges())
###Output
_____no_output_____
###Markdown
Let's plot this embedding:
###Code
dnx.draw_chimera_embedding(connectivity_structure, embedded_graph)
plt.show()
###Output
_____no_output_____
###Markdown
Qubits that have the same colour correspond to a logical node in the original problem defined by the $K_9$ graph. Qubits combined in such a way form a chain. Even though our problem only has 9 logical variables (nodes), we used almost all of the 32 physical qubits available on this toy Chimera graph. Let's find the maximum chain length:
###Code
max_chain_length = 0
# each value in the embedding dict is the chain of physical qubits for one logical node
for _, chain in embedded_graph.items():
    if len(chain) > max_chain_length:
        max_chain_length = len(chain)
print(max_chain_length)
###Output
4
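###Markdown
We can also count how many physical qubits the embedding consumes in total -- a small addition that backs up the claim above about using almost all of the 32 available qubits:
###Code
# Total number of physical qubits consumed by the embedding
print(sum(len(chain) for chain in embedded_graph.values()), "physical qubits used")
###Output
_____no_output_____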