path | concatenated_notebook |
---|---|
.ipynb_checkpoints/facial-recognition-checkpoint.ipynb | ###Markdown
Import Dependencies
###Code
# Import opencv
import cv2
###Output
_____no_output_____
###Markdown
Read images from directory and run real-time detection on the webcam
###Code
def facial_recognition():
    # Face recognition: project the detected face onto the PCA space and classify it.
    # Relies on `pca` (fitted PCA), `clf` (trained classifier), `names`, `face`,
    # `img`, `x`, `y`, and `font` being defined elsewhere in the notebook.
    faceTestPCA = pca.transform(face.reshape(1, -1))
    pred = clf.predict(faceTestPCA)
    print(names[pred[0]])
    cv2.putText(img, str(names[pred[0]]), (x+5, y-5), font, 1, (255, 255, 255), 2)

def face_detect(img, face_cascade):
    # Convert into grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Documentation: https://realpython.com/face-recognition-with-python/
    # https://www.bogotobogo.com/python/OpenCV_Python/python_opencv3_Image_Object_Detection_Face_Detection_Haar_Cascade_Classifiers.php
    # Pre-trained weights: https://github.com/opencv/opencv/tree/master/data/haarcascades
    # Detect faces: the detection algorithm uses a moving window to detect objects
    faces = face_cascade.detectMultiScale(gray,
        scaleFactor=1.1,  # Since some faces may be closer to the camera, they would appear bigger than the faces in the back. The scale factor compensates for this.
        minNeighbors=4    # minNeighbors defines how many objects are detected near the current one before it declares the face found
        #minSize=(30, 30),  # minSize, meanwhile, gives the size of each window.
    )
    # Draw a rectangle around each detected face
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
        # Extract the face from the grayscale image, resize and flatten it
        face = gray[y:y + h, x:x + w]
        face = cv2.resize(face, (200, 200))
        face = face.ravel()
    return img, faces
import cv2
import numpy as np
from PIL import Image
import os
# Load the cascade
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
# Open the webcam (device index 0)
video = cv2.VideoCapture(0)
width = video.get(cv2.CAP_PROP_FRAME_WIDTH) # float
height = video.get(cv2.CAP_PROP_FRAME_HEIGHT)
fps = video.get(cv2.CAP_PROP_FPS)
#fourcc = cv2.VideoWriter_fourcc('M','J','P','G')
#fourcc = cv2.VideoWriter_fourcc(*'H264')
#fourcc = 0x31637661
#fourcc = cv2.VideoWriter_fourcc(*'X264')
#fourcc = cv2.VideoWriter_fourcc(*'avc1')
#fourcc = 0x31637661
#videoWriter = cv2.VideoWriter(f'computer_vision/cv-images/{group_id}_video_temp.mp4', fourcc, fps, (int(width), int(height)))
#videoWriter = cv2.VideoWriter(f'computer_vision/cv-images/{group_id}_video_temp.mp4', fourcc, fps, (int(width), int(height)))
prediction_count = 0
while(True):
    # Capture frame-by-frame
    ret, frame = video.read()
    if not ret:
        print("failed to grab frame")
        break
    # Display the resulting frame
    #cv2.imshow('frame',frame)
    # run face detection
    processed_img, predictions = face_detect(frame, face_cascade)
    #videoWriter.write(processed_img)
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(processed_img, f'Number of Faces Detected: {len(predictions)}', (100,100), font, 1, (255,255,255), 2)
    #cv2.putText(processed_img, f'Number of Faces Detected: {len(predictions)}', (x+5,y-5), font, 1, (255,255,255), 2)
    # Display the output in a window
    cv2.imshow('face detection', processed_img)
    if cv2.waitKey(10) & 0xFF == ord('q'):
        break
# When everything is done, release the capture
video.release()
cv2.destroyAllWindows()
#videoWriter.release()
###Output
_____no_output_____ |
kaggle_notebook/exo_2_select_from.ipynb | ###Markdown
**[SQL Micro-Course Home Page](https://www.kaggle.com/learn/intro-to-sql)** --- Introduction. Try writing some **SELECT** statements of your own to explore a large dataset of air pollution measurements. Run the cell below to set up the feedback system.
###Code
# Set up feedback system
from learntools.core import binder
binder.bind(globals())
from learntools.sql.ex2 import *
print("Setup Complete")
###Output
Using Kaggle's public dataset BigQuery integration.
Setup Complete
###Markdown
The code cell below fetches the `global_air_quality` table from the `openaq` dataset. We also preview the first five rows of the table.
###Code
from google.cloud import bigquery
# Create a "Client" object
client = bigquery.Client()
# Construct a reference to the "openaq" dataset
dataset_ref = client.dataset("openaq", project="bigquery-public-data")
# API request - fetch the dataset
dataset = client.get_dataset(dataset_ref)
# Construct a reference to the "global_air_quality" table
table_ref = dataset_ref.table("global_air_quality")
# API request - fetch the table
table = client.get_table(table_ref)
# Preview the first five lines of the "global_air_quality" table
client.list_rows(table, max_results=5).to_dataframe()
###Output
Using Kaggle's public dataset BigQuery integration.
###Markdown
Exercises 1) Units of measurement. Which countries have reported pollution levels in units of "ppm"? In the code cell below, set `first_query` to an SQL query that pulls the appropriate entries from the `country` column. In case it's useful to see an example query, here's some code from the tutorial: ```query = """ SELECT city FROM `bigquery-public-data.openaq.global_air_quality` WHERE country = 'US' """```
###Code
# Query to select countries with units of "ppm"
first_query = """
SELECT DISTINCT country
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE unit = 'ppm'
"""
# Set up the query (cancel the query if it would use too much of
# your quota, with the limit set to 1 GB)
ONE_GB = 1000*1000*1000
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=ONE_GB)
first_query_job = client.query(first_query, job_config=safe_config)
# API request - run the query, and return a pandas DataFrame
first_results = first_query_job.to_dataframe()
# View top few rows of results
print(first_results.head())
# Check your answer
q_1.check()
###Output
country
0 US
1 AU
2 CL
3 MX
4 BA
###Markdown
For the solution, uncomment the line below.
###Code
#q_1.solution()
###Output
_____no_output_____
###Markdown
2) High air quality. Which pollution levels were reported to be exactly 0? - Set `zero_pollution_query` to select **all columns** of the rows where the `value` column is 0. - Set `zero_pollution_results` to a pandas DataFrame containing the query results.
###Code
# Query to select all columns where pollution levels are exactly 0
zero_pollution_query = """
SELECT *
FROM `bigquery-public-data.openaq.global_air_quality`
WHERE value = 0
"""
# Set up the query
ONE_GB = 1000*1000*1000
safe_config = bigquery.QueryJobConfig(maximum_bytes_billed=ONE_GB)
query_job = client.query(zero_pollution_query, job_config=safe_config)
# API request - run the query and return a pandas DataFrame
zero_pollution_results = query_job.to_dataframe()
print(zero_pollution_results.head())
# Check your answer
q_2.check()
###Output
location ... averaged_over_in_hours
0 Victoria Memorial - WBSPCB ... 0.25
1 Rabindra Bharati University, Kolkata - WBSPCB ... 0.25
2 Końskie, MOBILNA ... NaN
3 Końskie, MOBILNA ... NaN
4 Płock-Gimnazjum ... NaN
[5 rows x 11 columns]
###Markdown
For the solution, uncomment the line below.
###Code
q_2.solution()
###Output
_____no_output_____ |
dip.ipynb | ###Markdown
AVERAGE FILTER
###Code
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('afridi.jpg')
blur = cv2.blur(img,(5,5))
plt.subplot(121),plt.imshow(img),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(blur),plt.title('Average')
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
BILATERAL FILTER
###Code
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('afridi.jpg')
blur = cv2.bilateralFilter(img,9,75,75)
plt.subplot(121),plt.imshow(img),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(blur),plt.title('Bilateral')
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
FOURIER TRANSFORMATION
###Code
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('afridi.jpg',0)
f = np.fft.fft2(img)
fshift = np.fft.fftshift(f)
magnitude_spectrum = 20*np.log(np.abs(fshift))
plt.subplot(121),plt.imshow(img, cmap = 'gray')
plt.title('Input Image'), plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(magnitude_spectrum, cmap = 'gray')
plt.title('Fourier Transformation'), plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
GAUSSIAN FILTER
###Code
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('afridi.jpg')
kernel = np.ones((5,5),np.float32)/25
dst = cv2.filter2D(img,-1,kernel)
plt.subplot(121),plt.imshow(img),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(dst),plt.title('Gaussian')
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
LAPLACIAN FILTER
###Code
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('afridi.jpg',0)
laplacian = cv2.Laplacian(img,cv2.CV_64F)
sobelx = cv2.Sobel(img,cv2.CV_64F,1,0,ksize=5)
sobely = cv2.Sobel(img,cv2.CV_64F,0,1,ksize=5)
plt.subplot(2,2,1),plt.imshow(img,cmap = 'gray')
plt.title('Original'), plt.xticks([]), plt.yticks([])
plt.subplot(2,2,2),plt.imshow(laplacian,cmap = 'gray')
plt.title('Laplacian'), plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
MEDIAN FILTER
###Code
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('afridi.jpg')
median = cv2.medianBlur(img,5)
plt.subplot(121),plt.imshow(img),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(median),plt.title('Median Blurred')
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
2D FILTER
###Code
import cv2
import numpy as np
from matplotlib import pyplot as plt
img = cv2.imread('afridi.jpg')
kernel = np.ones((3,3),np.float32) * (-1)
kernel[1,1] = 8
print(kernel)
dst = cv2.filter2D(img,-1,kernel)
plt.subplot(121),plt.imshow(img),plt.title('Original')
plt.xticks([]), plt.yticks([])
plt.subplot(122),plt.imshow(dst),plt.title('Filters')
plt.xticks([]), plt.yticks([])
plt.show()
###Output
_____no_output_____
###Markdown
LOW AND HIGH PASS FILTER
###Code
import matplotlib.pyplot as plt
import numpy as np
from scipy import ndimage
from PIL import Image
def plot(data, title):
    plot.i += 1
    plt.subplot(2,2,plot.i)
    plt.imshow(data)
    plt.gray()
    plt.title(title)
plot.i = 0
im = Image.open('afridi.jpg')
data = np.array(im, dtype=float)
plot(data, 'Original')
kernel = np.array([[-1, -1, -1],
[-1, 8, -1],
[-1, -1, -1]])
highpass_3x3 = ndimage.convolve(data, kernel)
plot(highpass_3x3, '3x3 Highpass')
kernel = np.array([[-1, -1, -1, -1, -1],
[-1, 1, 2, 1, -1],
[-1, 2, 4, 2, -1],
[-1, 1, 2, 1, -1],
[-1, -1, -1, -1, -1]])
highpass_5x5 = ndimage.convolve(data, kernel)
plot(highpass_5x5, '5x5 Highpass')
lowpass = ndimage.gaussian_filter(data, 3)
gauss_highpass = data - lowpass
plot(gauss_highpass, r'Highpass, $\sigma = 3 pixels$')
plt.show()
###Output
_____no_output_____ |
04_mini-projects/02_e-vehicle-powertrain-model.ipynb | ###Markdown
[Table of contents](../toc.ipynb) A machine learning model of an electric vehicle power train. The steps herein are similar to a recent paper of mine [[Rhode2020]](../references.bib), where online machine learning was used to create a power prediction model for an electric vehicle. Given map data and a planned velocity profile, this prediction model can be used to estimate electric power and energy. Vehicle black box model. First, we start with some fundamentals in vehicle science. The instantaneous tractive force reads $F_{t} = m_V \dot{v_{t}} + f_r m_V g \cos \theta_{t} + m_V g \sin \theta_{t} + \frac{\rho}{2} c_w A_V v_t^2,$ and the instantaneous power $p_t = F_t v_t.$ The summands in the first equation are also known as acceleration force, rolling resistance, climbing force, and aerodynamic resistance, respectively. Note that the velocity and acceleration play an important role in most terms. With this in mind, we can come up with a black box model. The right hand side figure illustrates the adopted non-linear black-box vehicle model, $p_t \approx f(v_t, {a_x}_t),$ which includes two measured inputs, the velocity ($v_t$) and longitudinal acceleration ($a_x$), and the measured output power ($p$), being the product of the measured electric current and voltage. Our objective is to approximate the unknown non-linear function $f(\cdot)$ of the model equation given the measurements $v_t, {a_x}_t$, and $p_t$. Note that the body-fixed acceleration sensor considers the road angle influence, ${a_x}_t = \dot{v}_t + g \sin \theta_t$. Data record. The electric vehicle was propelled by two electric engines, and their current and voltage were recorded at 100 Hz. Additionally, longitudinal acceleration and velocity from CAN bus signals as well as brake pressure from the disc brakes were stored. All raw signals were smoothed with a Savitzky-Golay filter (window size 50) and down-sampled to 2 Hz. Additionally, driving states with brake pressure > 0 were removed from the data because the black box model does not consider mechanical braking. First, let us examine the data. There are three `.mat` files called `dat1.mat`, `dat2.mat`, and `dat3.mat`. After a quick fix for a missing path error in Travis - not needed for running Jupyter locally - we will import the data with `scipy.io.loadmat`.
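To make the force and power equations above concrete, here is a small numerical sketch; the vehicle parameters (mass, rolling resistance coefficient, drag coefficient, frontal area, air density) are illustrative placeholders, not the values of the test vehicle.
###Code
# Illustrative evaluation of the tractive-force and power equations above.
# All vehicle parameters are assumed placeholder values, not measured ones.
import numpy as np

def tractive_power(v, v_dot, theta, m_V=1500.0, f_r=0.015, c_w=0.3, A_V=2.2, rho=1.2, g=9.81):
    """Instantaneous tractive force F_t and power p_t from velocity, acceleration, and road angle."""
    F_t = (m_V * v_dot                          # acceleration force
           + f_r * m_V * g * np.cos(theta)      # rolling resistance
           + m_V * g * np.sin(theta)            # climbing force
           + 0.5 * rho * c_w * A_V * v**2)      # aerodynamic resistance
    return F_t, F_t * v

F, p = tractive_power(v=20.0, v_dot=0.5, theta=0.02)
print(f"F_t = {F:.1f} N, p_t = {p:.1f} W")
###Output
_____no_output_____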
###Code
# This if else is a fix to make the file available for Jupyter and Travis CI
import os
def find_mat(filename):
    if os.path.isfile(filename):
        file = filename
    else:
        file = '04_mini-projects/' + filename
    return file
import scipy.io
dat1 = scipy.io.loadmat(find_mat('dat1.mat'))
###Output
_____no_output_____
###Markdown
Let us take a look at the keys of the data and the dimensions.
###Code
dat1.keys()
print(dat1['A'].shape)
print(dat1['B'].shape)
print(dat1['Aval'].shape)
print(dat1['Bval'].shape)
print(dat1['k'])
print(dat1['m'])
###Output
(2204, 2)
(2204, 1)
(200, 2)
(200, 1)
[[200]]
[[2204]]
###Markdown
Now, you need some background information about the data. The field names `A` and `Aval` mean input data and validation input data. The shape of `A` is 2204 by 2, which means 2204 measurements of two inputs. Actually, `Aval` consists of the last 200 samples of `A`, so be careful to train any model only with `A` up to the last 200 samples. `B` and `Bval` are the respective outputs. You might ask why the data is not nicely and clearly defined like in many sklearn tutorials with `X_train`, `X_test`, ... Because in practice it is very common to spend much time cleansing messy data. Exercise: Data wrangling. Therefore, the first task for you is to: * Write a function which loads one of the `.mat` files, * and returns X_train, X_test, y_train, y_test given an argument of test size in samples. We will just split the data, no shuffling here. Solution: One possible function is defined in [solution_load_measurement](solution_load_measurement.py).
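In case the solution file is not at hand, a minimal sketch of such a loader could look like the code below; it only assumes the `A`/`B` keys inspected above, and the actual `solution_load_measurement.py` may differ in detail.
###Code
# Hypothetical sketch of the loader; the provided solution_load_measurement.py may differ.
import scipy.io

def load_measurement(filename, test_size=200):
    dat = scipy.io.loadmat(find_mat(filename))
    X, y = dat['A'], dat['B']
    # No shuffling: the last `test_size` samples form the test set.
    X_train, y_train = X[:-test_size], y[:-test_size]
    X_test, y_test = X[-test_size:], y[-test_size:]
    return X_train, y_train, X_test, y_test
###Output
_____no_output_____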
###Code
import sys
sys.path.append("04_mini-projects")
from solution_load_measurement import *
X_train, y_train, X_test, y_test = load_measurement('dat1.mat', test_size=400)
###Output
_____no_output_____
###Markdown
And to check if the split worked, we will compare `X_test` with the original `dat1['A']` and `y_test` with original `dat1['B']`.
###Code
print(X_test[0:5])
print(dat1['A'][-400:-395])
print(y_test[0:5])
print(dat1['B'][-400:-395])
###Output
[[ -420.61029092]
[ -342.24263429]
[-2112.49021421]
[-4496.25616576]
[-4598.45211968]]
###Markdown
And now let us plot the data. Exercise: Plot the data. The first column of `X_train` and `X_test` is the velocity in meters per second and the second column is the acceleration. `y_train` and `y_test` contain the electric power. Now, please plot the data in three panels where the training and test data are shown together (over samples) with different markers or colors.
###Code
#own solution
from matplotlib import pyplot as plt
import numpy as np
def make_plots(X_train, X_test, y_train, y_test):
    plt.figure(figsize=(15, 10))
    samples_train = X_train.shape[0]
    samples_test = X_test.shape[0]
    # velocity
    ax1 = plt.subplot(311)
    plt.plot(np.arange(0, samples_train), X_train[:, 0])
    plt.plot(np.arange(samples_train, samples_train + samples_test), X_test[:, 0])
    plt.ylabel("Velocity $[m/s]$")
    # acceleration
    ax2 = plt.subplot(312)
    plt.plot(np.arange(0, samples_train), X_train[:, 1])
    plt.plot(np.arange(samples_train, samples_train + samples_test), X_test[:, 1])
    plt.ylabel("Acceleration $[m/s^2]$")
    # power
    ax3 = plt.subplot(313)
    plt.plot(np.arange(0, samples_train), y_train[:, 0])
    plt.plot(np.arange(samples_train, samples_train + samples_test), y_test[:, 0])
    plt.ylabel("Power")

make_plots(X_train, X_test, y_train, y_test)
###Output
_____no_output_____
###Markdown
Solution: I prefer to use a function because we have three data sets; my plot is defined in [solution_plot_data](solution_plot_data.py).
###Code
from solution_plot_data import *
plot_data(X_train, y_train, X_test, y_test)
###Output
_____no_output_____
###Markdown
Modeling with different regression methods. In the original paper, Recursive Least Squares (an adaptive-filter form of linear regression), kernel adaptive filters (a special kind of non-linear adaptive filter), and neural networks were compared. We cannot go into the details of adaptive filtering herein, but very briefly, these methods solve a regression problem in an iterative way: at every time step a regression result is returned. This makes adaptive filters ideal for tracking time-variant systems and for data that arrives as a stream, see the figure on the right and the sketch below. Next, we will train a **Linear Model** and a **Gaussian Process** with sklearn. Linear Model. The fit of the linear model is done with a few lines of code.
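As a brief aside before the sklearn models, here is a minimal Recursive Least Squares sketch for this two-input problem; it is illustrative only and not the implementation used in the paper.
###Code
# Minimal RLS sketch (illustrative only): update the parameter estimate sample by sample.
import numpy as np

def rls_update(theta, P, x_t, y_t, lam=0.99):
    """One RLS step: update parameters theta and covariance P with the sample (x_t, y_t)."""
    x_t = x_t.reshape(-1, 1)                   # column vector
    k = P @ x_t / (lam + x_t.T @ P @ x_t)      # gain vector
    e = y_t - float(x_t.T @ theta)             # a-priori prediction error
    theta = theta + k * e                      # parameter update
    P = (P - k @ x_t.T @ P) / lam              # covariance update
    return theta, P

theta = np.zeros((2, 1))                       # two inputs: velocity and acceleration
P = 1e6 * np.eye(2)                            # large initial covariance
for x_t, y_t in zip(X_train, y_train):         # stream the training samples one by one
    theta, P = rls_update(theta, P, x_t, float(y_t))
print(theta)
###Output
_____no_output_____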
###Code
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X=X_train, y=y_train)
###Output
_____no_output_____
###Markdown
Gaussian Process. The same holds for the Gaussian Process.
###Code
from sklearn.gaussian_process import GaussianProcessRegressor
gp = GaussianProcessRegressor(n_restarts_optimizer=20)
gp.fit(X=X_train, y=y_train)
###Output
_____no_output_____
###Markdown
Compare both models with mean squared error
###Code
from sklearn.metrics import mean_squared_error
print("mse Linear Model \t", mean_squared_error(y_test, lin_reg.predict(X_test)))
print("mse GP \t\t\t", mean_squared_error(y_test, gp.predict(X_test)))
###Output
mse Linear Model 85914421.79698046
mse GP 49813117.43331423
###Markdown
The mean squared error of the GP is way smaller than the error of the linear model. Note that the linear model tries to model the problem with just two parameters, because `X_train` herein has two columns (or two sensor measurements). Let us plot the predictions and compare them with the test data.
###Code
plt.figure(figsize=(16, 6))
plt.plot(y_test, 'k')
plt.plot(lin_reg.predict(X_test), 'r')
plt.plot(gp.predict(X_test), 'g')
plt.legend(['y_test', 'Linear Model', 'GP'])
###Output
_____no_output_____ |
Lab-2/My Solutions/Part2_Debiasing.ipynb | ###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 2: Debiasing Facial Detection SystemsIn the second portion of the lab, we'll explore two prominent aspects of applied deep learning: facial detection and algorithmic bias. Deploying fair, unbiased AI systems is critical to their long-term acceptance. Consider the task of facial detection: given an image, is it an image of a face? This seemingly simple, but extremely important, task is subject to significant amounts of algorithmic bias among select demographics. In this lab, we'll investigate [one recently published approach](http://introtodeeplearning.com/AAAI_MitigatingAlgorithmicBias.pdf) to addressing algorithmic bias. We'll build a facial detection model that learns the *latent variables* underlying face image datasets and uses this to adaptively re-sample the training data, thus mitigating any biases that may be present in order to train a *debiased* model.Run the next code block for a short video from Google that explores how and why it's important to consider bias when thinking about machine learning:
###Code
import IPython
IPython.display.YouTubeVideo('59bMh59JQDo')
###Output
_____no_output_____
###Markdown
Let's get started by installing the relevant dependencies:
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
import IPython
import functools
import matplotlib.pyplot as plt
import numpy as np
from tqdm import tqdm
# Download and import the MIT 6.S191 package
!pip install mitdeeplearning
import mitdeeplearning as mdl
###Output
TensorFlow 2.x selected.
Requirement already satisfied: mitdeeplearning in /usr/local/lib/python3.6/dist-packages (0.1.2)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.1)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.28.1)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.1)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.12.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
###Markdown
2.1 Datasets. We'll be using three datasets in this lab. In order to train our facial detection models, we'll need a dataset of positive examples (i.e., of faces) and a dataset of negative examples (i.e., of things that are not faces). We'll use these data to train our models to classify images as either faces or not faces. Finally, we'll need a test dataset of face images. Since we're concerned about the potential *bias* of our learned models against certain demographics, it's important that the test dataset we use has equal representation across the demographics or features of interest. In this lab, we'll consider skin tone and gender. 1. **Positive training data**: [CelebA Dataset](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html). A large-scale dataset (over 200K images) of celebrity faces. 2. **Negative training data**: [ImageNet](http://www.image-net.org/). Many images across many different categories. We'll take negative examples from a variety of non-human categories. 3. **Test data**: A test dataset of face images, annotated based on the [Fitzpatrick Scale](https://en.wikipedia.org/wiki/Fitzpatrick_scale) skin type classification system, with each image labeled as "Lighter'' or "Darker''. Let's begin by importing these datasets. We've written a class that does a bit of data pre-processing to import the training data in a usable format.
###Code
# Get the training data: both images from CelebA and ImageNet
path_to_training_data = tf.keras.utils.get_file('train_face.h5', 'https://www.dropbox.com/s/l5iqduhe0gwxumq/train_face.h5?dl=1')
# Instantiate a TrainingDatasetLoader using the downloaded dataset
loader = mdl.lab2.TrainingDatasetLoader(path_to_training_data)
###Output
Opening /root/.keras/datasets/train_face.h5
Loading data into memory...
###Markdown
We can look at the size of the training dataset and grab a batch of size 100:
###Code
number_of_training_examples = loader.get_train_size()
(images, labels) = loader.get_batch(100)
###Output
_____no_output_____
###Markdown
Play around with displaying images to get a sense of what the training data actually looks like!
###Code
### Examining the CelebA training dataset ###
#@title Change the sliders to look at positive and negative training examples! { run: "auto" }
face_images = images[np.where(labels==1)[0]]
not_face_images = images[np.where(labels==0)[0]]
idx_face = 26 #@param {type:"slider", min:0, max:50, step:1}
idx_not_face = 35 #@param {type:"slider", min:0, max:50, step:1}
plt.figure(figsize=(5,5))
plt.subplot(1, 2, 1)
plt.imshow(face_images[idx_face])
plt.title("Face"); plt.grid(False)
plt.subplot(1, 2, 2)
plt.imshow(not_face_images[idx_not_face])
plt.title("Not Face"); plt.grid(False)
###Output
_____no_output_____
###Markdown
Thinking about bias. Remember we'll be training our facial detection classifiers on the large, well-curated CelebA dataset (and ImageNet), and then evaluating their accuracy by testing them on an independent test dataset. Our goal is to build a model that trains on CelebA *and* achieves high classification accuracy on the test dataset across all demographics, and to thus show that this model does not suffer from any hidden bias. What exactly do we mean when we say a classifier is biased? In order to formalize this, we'll need to think about [*latent variables*](https://en.wikipedia.org/wiki/Latent_variable), variables that define a dataset but are not strictly observed. As defined in the generative modeling lecture, we'll use the term *latent space* to refer to the probability distributions of the aforementioned latent variables. Putting these ideas together, we consider a classifier *biased* if its classification decision changes after it sees some additional latent features. This notion of bias may be helpful to keep in mind throughout the rest of the lab. 2.2 CNN for facial detection First, we'll define and train a CNN on the facial classification task, and evaluate its accuracy. Later, we'll evaluate the performance of our debiased models against this baseline CNN. The CNN model has a relatively standard architecture consisting of a series of convolutional layers with batch normalization followed by two fully connected layers to flatten the convolution output and generate a class prediction. Define and train the CNN model. Like we did in the first part of the lab, we'll define our CNN model, and then train on the CelebA and ImageNet datasets using the `tf.GradientTape` class and the `tf.GradientTape.gradient` method.
###Code
### Define the CNN model ###
n_filters = 12 # base number of convolutional filters
'''Function to define a standard CNN model'''
def make_standard_classifier(n_outputs=1):
Conv2D = functools.partial(tf.keras.layers.Conv2D, padding='same', activation='relu')
BatchNormalization = tf.keras.layers.BatchNormalization
Flatten = tf.keras.layers.Flatten
Dense = functools.partial(tf.keras.layers.Dense, activation='relu')
model = tf.keras.Sequential([
Conv2D(filters=1*n_filters, kernel_size=5, strides=2),
BatchNormalization(),
Conv2D(filters=2*n_filters, kernel_size=5, strides=2),
BatchNormalization(),
Conv2D(filters=4*n_filters, kernel_size=3, strides=2),
BatchNormalization(),
Conv2D(filters=6*n_filters, kernel_size=3, strides=2),
BatchNormalization(),
Flatten(),
Dense(512),
Dense(n_outputs, activation=None),
])
return model
standard_classifier = make_standard_classifier()
###Output
_____no_output_____
###Markdown
Now let's train the standard CNN!
###Code
### Train the standard CNN ###
# Training hyperparameters
batch_size = 32
num_epochs = 2 # keep small to run faster
learning_rate = 5e-4
optimizer = tf.keras.optimizers.Adam(learning_rate) # define our optimizer
loss_history = mdl.util.LossHistory(smoothing_factor=0.99) # to record loss evolution
plotter = mdl.util.PeriodicPlotter(sec=2, scale='semilogy')
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
@tf.function
def standard_train_step(x, y):
with tf.GradientTape() as tape:
# feed the images into the model
logits = standard_classifier(x)
# Compute the loss
loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
# Backpropagation
grads = tape.gradient(loss, standard_classifier.trainable_variables)
optimizer.apply_gradients(zip(grads, standard_classifier.trainable_variables))
return loss
# The training loop!
for epoch in range(num_epochs):
for idx in tqdm(range(loader.get_train_size()//batch_size)):
# Grab a batch of training data and propagate through the network
x, y = loader.get_batch(batch_size)
loss = standard_train_step(x, y)
# Record the loss and plot the evolution of the loss as a function of training
loss_history.append(loss.numpy().mean())
plotter.plot(loss_history.get())
###Output
_____no_output_____
###Markdown
Evaluate performance of the standard CNN. Next, let's evaluate the classification performance of our CelebA-trained standard CNN on the training dataset.
###Code
### Evaluation of standard CNN ###
# TRAINING DATA
# Evaluate on a subset of CelebA+Imagenet
(batch_x, batch_y) = loader.get_batch(5000)
y_pred_standard = tf.round(tf.nn.sigmoid(standard_classifier.predict(batch_x)))
acc_standard = tf.reduce_mean(tf.cast(tf.equal(batch_y, y_pred_standard), tf.float32))
print("Standard CNN accuracy on (potentially biased) training set: {:.4f}".format(acc_standard.numpy()))
###Output
Standard CNN accuracy on (potentially biased) training set: 0.9976
###Markdown
We will also evaluate our networks on an independent test dataset containing faces that were not seen during training. For the test data, we'll look at the classification accuracy across four different demographics, based on the Fitzpatrick skin scale and sex-based labels: dark-skinned male, dark-skinned female, light-skinned male, and light-skinned female. Let's take a look at some sample faces in the test set.
###Code
### Load test dataset and plot examples ###
test_faces = mdl.lab2.get_test_faces()
keys = ["Light Female", "Light Male", "Dark Female", "Dark Male"]
for group, key in zip(test_faces,keys):
plt.figure(figsize=(5,5))
plt.imshow(np.hstack(group))
plt.title(key, fontsize=15)
# Images in Testset
np.array(test_faces).shape
###Output
_____no_output_____
###Markdown
Now, let's evaluate the probability of each of these face demographics being classified as a face using the standard CNN classifier we've just trained.
###Code
### Evaluate the standard CNN on the test data ###
standard_classifier_logits = [standard_classifier(np.array(x, dtype=np.float32)) for x in test_faces]
standard_classifier_probs = tf.squeeze(tf.sigmoid(standard_classifier_logits))
# Plot the prediction accuracies per demographic
xx = range(len(keys))
yy = standard_classifier_probs.numpy().mean(1)
plt.bar(xx, yy)
plt.xticks(xx, keys)
plt.ylim(max(0,yy.min()-yy.ptp()/2.), yy.max()+yy.ptp()/2.)
plt.title("Standard classifier predictions");
###Output
_____no_output_____
###Markdown
Take a look at the accuracies for this first model across these four groups. What do you observe? Would you consider this model biased or unbiased? What are some reasons why a trained model may have biased accuracies? 2.3 Mitigating algorithmic bias. Imbalances in the training data can result in unwanted algorithmic bias. For example, the majority of faces in CelebA (our training set) are those of light-skinned females. As a result, a classifier trained on CelebA will be better suited at recognizing and classifying faces with features similar to these, and will thus be biased. How could we overcome this? A naive solution -- and one that is being adopted by many companies and organizations -- would be to annotate different subclasses (i.e., light-skinned females, males with hats, etc.) within the training data, and then manually even out the data with respect to these groups. But this approach has two major disadvantages. First, it requires annotating massive amounts of data, which is not scalable. Second, it requires that we know what potential biases (e.g., race, gender, pose, occlusion, hats, glasses, etc.) to look for in the data. As a result, manual annotation may not capture all the different features that are imbalanced within the training data. Instead, let's actually **learn** these features in an unbiased, unsupervised manner, without the need for any annotation, and then train a classifier fairly with respect to these features. In the rest of this lab, we'll do exactly that. 2.4 Variational autoencoder (VAE) for learning latent structure. As you saw, the accuracy of the CNN varies across the four demographics we looked at. To think about why this may be, consider the dataset the model was trained on, CelebA. If certain features, such as dark skin or hats, are *rare* in CelebA, the model may end up biased against these as a result of training with a biased dataset. That is to say, its classification accuracy will be worse on faces that have under-represented features, such as dark-skinned faces or faces with hats, relative to faces with features well-represented in the training data! This is a problem. Our goal is to train a *debiased* version of this classifier -- one that accounts for potential disparities in feature representation within the training data. Specifically, to build a debiased facial classifier, we'll train a model that **learns a representation of the underlying latent space** of the face training data. The model then uses this information to mitigate unwanted biases by sampling faces with rare features, like dark skin or hats, *more frequently* during training. The key design requirement for our model is that it can learn an *encoding* of the latent features in the face data in an entirely *unsupervised* way. To achieve this, we'll turn to variational autoencoders (VAEs). As shown in the schematic above and in Lecture 4, VAEs rely on an encoder-decoder structure to learn a latent representation of the input data. In the context of computer vision, the encoder network takes in input images, encodes them into a series of variables defined by a mean and standard deviation, and then draws from the distributions defined by these parameters to generate a set of sampled latent variables. The decoder network then "decodes" these variables to generate a reconstruction of the original image, which is used during training to help the model identify which latent variables are important to learn.
Let's formalize two key aspects of the VAE model and define relevant functions for each. Understanding VAEs: loss functionIn practice, how can we train a VAE? In learning the latent space, we constrain the means and standard deviations to approximately follow a unit Gaussian. Recall that these are learned parameters, and therefore must factor into the loss computation, and that the decoder portion of the VAE is using these parameters to output a reconstruction that should closely match the input image, which also must factor into the loss. What this means is that we'll have two terms in our VAE loss function:1. **Latent loss ($L_{KL}$)**: measures how closely the learned latent variables match a unit Gaussian and is defined by the Kullback-Leibler (KL) divergence.2. **Reconstruction loss ($L_{x}{(x,\hat{x})}$)**: measures how accurately the reconstructed outputs match the input and is given by the $L^1$ norm of the input image and its reconstructed output. The equations for both of these losses are provided below:$$ L_{KL}(\mu, \sigma) = \frac{1}{2}\sum\limits_{j=0}^{k-1}\small{(\sigma_j + \mu_j^2 - 1 - \log{\sigma_j})} $$$$ L_{x}{(x,\hat{x})} = ||x-\hat{x}||_1 $$ Thus for the VAE loss we have: $$ L_{VAE} = c\cdot L_{KL} + L_{x}{(x,\hat{x})} $$where $c$ is a weighting coefficient used for regularization. Now we're ready to define our VAE loss function:
###Code
### Defining the VAE loss function ###
''' Function to calculate VAE loss given:
an input x,
reconstructed output x_recon,
encoded means mu,
encoded log of standard deviation logsigma,
weight parameter for the latent loss kl_weight
'''
def vae_loss_function(x, x_recon, mu, logsigma, kl_weight=0.0005):
# TODO: Define the latent loss. Note this is given in the equation for L_{KL}
# in the text block directly above
latent_loss = 0.5 * tf.reduce_sum(tf.exp(logsigma) + tf.square(mu) - 1 - logsigma)
# TODO: Define the reconstruction loss as the mean absolute pixel-wise
# difference between the input and reconstruction. Hint: you'll need to
# use tf.reduce_mean, and supply an axis argument which specifies which
# dimensions to reduce over. For example, reconstruction loss needs to average
# over the height, width, and channel image dimensions.
# https://www.tensorflow.org/api_docs/python/tf/math/reduce_mean
reconstruction_loss = tf.norm(x-x_recon, ord=1)
# TODO: Define the VAE loss. Note this is given in the equation for L_{VAE}
# in the text block directly above
vae_loss = kl_weight*latent_loss + reconstruction_loss
return vae_loss
###Output
_____no_output_____
###Markdown
Great! Now that we have a more concrete sense of how VAEs work, let's explore how we can leverage this network structure to train a *debiased* facial classifier. Understanding VAEs: reparameterization As you may recall from lecture, VAEs use a "reparameterization trick" for sampling learned latent variables. Instead of the VAE encoder generating a single vector of real numbers for each latent variable, it generates a vector of means and a vector of standard deviations that are constrained to roughly follow Gaussian distributions. We then sample from the standard deviations and add back the mean to output this as our sampled latent vector. Formalizing this for a latent variable $z$ where we sample $\epsilon \sim \mathcal{N}(0,(I))$ we have: $$ z = \mathbb{\mu} + e^{\left(\frac{1}{2} \cdot \log{\Sigma}\right)}\circ \epsilon $$where $\mu$ is the mean and $\Sigma$ is the covariance matrix. This is useful because it will let us neatly define the loss function for the VAE, generate randomly sampled latent variables, achieve improved network generalization, **and** make our complete VAE network differentiable so that it can be trained via backpropagation. Quite powerful!Let's define a function to implement the VAE sampling operation:
###Code
### VAE Reparameterization ###
"""Reparameterization trick by sampling from an isotropic unit Gaussian.
# Arguments
z_mean, z_logsigma (tensor): mean and log of standard deviation of latent distribution (Q(z|X))
# Returns
z (tensor): sampled latent vector
"""
def sampling(z_mean, z_logsigma):
# By default, random.normal is "standard" (ie. mean=0 and std=1.0)
batch, latent_dim = z_mean.shape
epsilon = tf.random.normal(shape=(batch, latent_dim))
# TODO: Define the reparameterization computation!
# Note the equation is given in the text block immediately above.
z = z_mean + tf.exp(0.5*z_logsigma) * epsilon
return z
###Output
_____no_output_____
###Markdown
2.5 Debiasing variational autoencoder (DB-VAE). Now, we'll use the general idea behind the VAE architecture to build a model, termed a [*debiasing variational autoencoder*](https://lmrt.mit.edu/sites/default/files/AIES-19_paper_220.pdf) or DB-VAE, to mitigate (potentially) unknown biases present within the training data. We'll train our DB-VAE model on the facial detection task, run the debiasing operation during training, evaluate on the PPB dataset, and compare its accuracy to our original, biased CNN model. The DB-VAE model. The key idea behind this debiasing approach is to use the latent variables learned via a VAE to adaptively re-sample the CelebA data during training. Specifically, we will alter the probability that a given image is used during training based on how often its latent features appear in the dataset. So, faces with rarer features (like dark skin, sunglasses, or hats) should become more likely to be sampled during training, while the sampling probability for faces with features that are over-represented in the training dataset should decrease (relative to uniform random sampling across the training data). A general schematic of the DB-VAE approach is shown here: Recall that we want to apply our DB-VAE to a *supervised classification* problem -- the facial detection task. Importantly, note how the encoder portion in the DB-VAE architecture also outputs a single supervised variable, $z_o$, corresponding to the class prediction -- face or not face. Usually, VAEs are not trained to output any supervised variables (such as a class prediction)! This is another key distinction between the DB-VAE and a traditional VAE. Keep in mind that we only want to learn the latent representation of *faces*, as that's what we're ultimately debiasing against, even though we are training a model on a binary classification problem. We'll need to ensure that, **for faces**, our DB-VAE model both learns a representation of the unsupervised latent variables, captured by the distribution $q_\phi(z|x)$, **and** outputs a supervised class prediction $z_o$, but that, **for negative examples**, it only outputs a class prediction $z_o$. Defining the DB-VAE loss function. This means we'll need to be a bit clever about the loss function for the DB-VAE. The form of the loss will depend on whether it's a face image or a non-face image that's being considered. For **face images**, our loss function will have two components: 1. **VAE loss ($L_{VAE}$)**: consists of the latent loss and the reconstruction loss. 2. **Classification loss ($L_y(y,\hat{y})$)**: standard cross-entropy loss for a binary classification problem. In contrast, for images of **non-faces**, our loss function is solely the classification loss. We can write a single expression for the loss by defining an indicator variable $\mathcal{I}_f$ which reflects which training data are images of faces ($\mathcal{I}_f(y) = 1$) and which are images of non-faces ($\mathcal{I}_f(y) = 0$). Using this, we obtain: $$L_{total} = L_y(y,\hat{y}) + \mathcal{I}_f(y)\Big[L_{VAE}\Big]$$ Let's write a function to define the DB-VAE loss function:
###Code
### Loss function for DB-VAE ###
"""Loss function for DB-VAE.
# Arguments
x: true input x
x_pred: reconstructed x
y: true label (face or not face)
y_logit: predicted labels
mu: mean of latent distribution (Q(z|X))
logsigma: log of standard deviation of latent distribution (Q(z|X))
# Returns
total_loss: DB-VAE total loss
classification_loss = DB-VAE classification loss
"""
def debiasing_loss_function(x, x_pred, y, y_logit, mu, logsigma):
# TODO: call the relevant function to obtain VAE loss
vae_loss = vae_loss_function(x,x_pred,mu,logsigma) # TODO
# TODO: define the classification loss using sigmoid_cross_entropy
# https://www.tensorflow.org/api_docs/python/tf/nn/sigmoid_cross_entropy_with_logits
classification_loss = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=y_logit)
# Use the training data labels to create variable face_indicator:
# indicator that reflects which training data are images of faces
face_indicator = tf.cast(tf.equal(y, 1), tf.float32)
# TODO: define the DB-VAE total loss! Use tf.reduce_mean to average over all
# samples
total_loss = tf.reduce_mean(classification_loss + tf.multiply(vae_loss,face_indicator))
return total_loss, classification_loss
###Output
_____no_output_____
###Markdown
DB-VAE architectureNow we're ready to define the DB-VAE architecture. To build the DB-VAE, we will use the standard CNN classifier from above as our encoder, and then define a decoder network. We will create and initialize the two models, and then construct the end-to-end VAE. We will use a latent space with 100 latent variables.The decoder network will take as input the sampled latent variables, run them through a series of deconvolutional layers, and output a reconstruction of the original input image.
###Code
### Define the decoder portion of the DB-VAE ###
n_filters = 12 # base number of convolutional filters, same as standard CNN
latent_dim = 150 # number of latent variables
def make_face_decoder_network():
# Functionally define the different layer types we will use
Conv2DTranspose = functools.partial(tf.keras.layers.Conv2DTranspose, padding='same', activation='relu')
BatchNormalization = tf.keras.layers.BatchNormalization
Flatten = tf.keras.layers.Flatten
Dense = functools.partial(tf.keras.layers.Dense, activation='relu')
Reshape = tf.keras.layers.Reshape
# Build the decoder network using the Sequential API
decoder = tf.keras.Sequential([
# Transform to pre-convolutional generation
Dense(units=4*4*6*n_filters), # 4x4 feature maps (with 6N occurrences)
Reshape(target_shape=(4, 4, 6*n_filters)),
# Upscaling convolutions (inverse of encoder)
Conv2DTranspose(filters=4*n_filters, kernel_size=3, strides=2),
Conv2DTranspose(filters=2*n_filters, kernel_size=3, strides=2),
Conv2DTranspose(filters=1*n_filters, kernel_size=5, strides=2),
Conv2DTranspose(filters=3, kernel_size=5, strides=2),
])
return decoder
###Output
_____no_output_____
###Markdown
Now, we will put this decoder together with the standard CNN classifier as our encoder to define the DB-VAE. Note that at this point, there is nothing special about how we put the model together that makes it a "debiasing" model -- that will come when we define the training operation. Here, we will define the core VAE architecture by sublassing the `Model` class; defining encoding, reparameterization, and decoding operations; and calling the network end-to-end.
###Code
### Defining and creating the DB-VAE ###
class DB_VAE(tf.keras.Model):
def __init__(self, latent_dim):
super(DB_VAE, self).__init__()
self.latent_dim = latent_dim
# Define the number of outputs for the encoder. Recall that we have
# `latent_dim` latent variables, as well as a supervised output for the
# classification.
num_encoder_dims = 2*self.latent_dim + 1 # latent_dim --> mean and variance and one classification output
self.encoder = make_standard_classifier(num_encoder_dims) # Encoder --> CNN
self.decoder = make_face_decoder_network() # Deocder
# function to feed images into encoder, encode the latent space, and output
# classification probability
def encode(self, x):
# encoder output
encoder_output = self.encoder(x)
# classification prediction
y_logit = tf.expand_dims(encoder_output[:, 0], -1)
# latent variable distribution parameters
z_mean = encoder_output[:, 1:self.latent_dim+1]
z_logsigma = encoder_output[:, self.latent_dim+1:]
return y_logit, z_mean, z_logsigma
# VAE reparameterization: given a mean and logsigma, sample latent variables
def reparameterize(self, z_mean, z_logsigma):
# TODO: call the sampling function defined above
z = sampling(z_mean, z_logsigma)
return z
# Decode the latent space and output reconstruction
def decode(self, z):
# TODO: use the decoder to output the reconstruction
reconstruction = self.decoder(z)
return reconstruction
# The call function will be used to pass inputs x through the core VAE
def call(self, x):
# Encode input to a prediction and latent space
y_logit, z_mean, z_logsigma = self.encode(x)
# TODO: reparameterization
z = self.reparameterize(z_mean,z_logsigma)
# TODO: reconstruction
recon = self.decode(z)
return y_logit, z_mean, z_logsigma, recon
# Predict face or not face logit for given input x
def predict(self, x):
y_logit, z_mean, z_logsigma = self.encode(x)
return y_logit
dbvae = DB_VAE(latent_dim)
###Output
_____no_output_____
###Markdown
As stated, the encoder architecture is identical to the CNN from earlier in this lab. Note the outputs of our constructed DB_VAE model in the `call` function: `y_logit, z_mean, z_logsigma, z`. Think carefully about why each of these are outputted and their significance to the problem at hand. Adaptive resampling for automated debiasing with DB-VAESo, how can we actually use DB-VAE to train a debiased facial detection classifier?Recall the DB-VAE architecture: as input images are fed through the network, the encoder learns an estimate $\mathcal{Q}(z|X)$ of the latent space. We want to increase the relative frequency of rare data by increased sampling of under-represented regions of the latent space. We can approximate $\mathcal{Q}(z|X)$ using the frequency distributions of each of the learned latent variables, and then define the probability distribution of selecting a given datapoint $x$ based on this approximation. These probability distributions will be used during training to re-sample the data.You'll write a function to execute this update of the sampling probabilities, and then call this function within the DB-VAE training loop to actually debias the model. First, we've defined a short helper function `get_latent_mu` that returns the latent variable means returned by the encoder after a batch of images is inputted to the network:
###Code
# Function to return the means for an input image batch
def get_latent_mu(images, dbvae, batch_size=1024):
N = images.shape[0]
mu = np.zeros((N, latent_dim))
for start_ind in range(0, N, batch_size):
end_ind = min(start_ind+batch_size, N+1)
batch = (images[start_ind:end_ind]).astype(np.float32)/255.
_, batch_mu, _ = dbvae.encode(batch)
mu[start_ind:end_ind] = batch_mu
return mu
###Output
_____no_output_____
###Markdown
Now, let's define the actual resampling algorithm `get_training_sample_probabilities`. Importantly note the argument `smoothing_fac`. This parameter tunes the degree of debiasing: for `smoothing_fac=0`, the re-sampled training set will tend towards falling uniformly over the latent space, i.e., the most extreme debiasing.
###Code
### Resampling algorithm for DB-VAE ###
'''Function that recomputes the sampling probabilities for images within a batch
based on how they distribute across the training data'''
def get_training_sample_probabilities(images, dbvae, bins=10, smoothing_fac=0.001):
print("Recomputing the sampling probabilities")
# TODO: run the input batch and get the latent variable means
mu = get_latent_mu(images,dbvae)
# sampling probabilities for the images
training_sample_p = np.zeros(mu.shape[0])
# consider the distribution for each latent variable
for i in range(latent_dim):
latent_distribution = mu[:,i]
# generate a histogram of the latent distribution
hist_density, bin_edges = np.histogram(latent_distribution, density=True, bins=bins)
# find which latent bin every data sample falls in
bin_edges[0] = -float('inf')
bin_edges[-1] = float('inf')
# TODO: call the digitize function to find which bins in the latent distribution
# every data sample falls in to
# https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.digitize.html
bin_idx = np.digitize(latent_distribution, bin_edges)
# smooth the density function
hist_smoothed_density = hist_density + smoothing_fac
hist_smoothed_density = hist_smoothed_density / np.sum(hist_smoothed_density)
# invert the density function
p = 1.0/(hist_smoothed_density[bin_idx-1])
# TODO: normalize all probabilities
p = p/np.sum(p)
# TODO: update sampling probabilities by considering whether the newly
# computed p is greater than the existing sampling probabilities.
        training_sample_p = np.maximum(p, training_sample_p)  # keep, per sample, the largest weight across latent variables (weights behave like 1/(Q' + smoothing_fac))
# final normalization
training_sample_p /= np.sum(training_sample_p)
return training_sample_p
###Output
_____no_output_____
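###Markdown
To build intuition for `smoothing_fac`, here is a small illustrative sketch (an addition, not part of the original lab): it applies the same inverse-density weighting to a toy 1-D latent sample in which one region is rare. A small smoothing value strongly up-weights the rare region, while a large value pushes the sampling probabilities back towards uniform.
###Code
# Illustrative sketch only: inverse-density weights on a toy 1-D "latent" sample
toy_latent = np.concatenate([np.random.normal(0, 1, 950), np.random.normal(4, 0.2, 50)])
for fac in [0.001, 10.0]:
    hist_density, bin_edges = np.histogram(toy_latent, density=True, bins=10)
    bin_idx = np.digitize(toy_latent, bin_edges[1:-1])   # histogram bin index of every sample
    smoothed = (hist_density + fac) / np.sum(hist_density + fac)
    p = 1.0 / smoothed[bin_idx]
    p = p / np.sum(p)
    print("smoothing_fac =", fac, "-> max/min sampling probability ratio:", round(p.max() / p.min(), 2))
###Output
_____no_output_____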
###Markdown
Now that we've defined the resampling update, we can train our DB-VAE model on the CelebA/ImageNet training data, and run the above operation to re-weight the importance of particular data points as we train the model. Remember again that we only want to debias for features relevant to *faces*, not the set of negative examples. Complete the code block below to execute the training loop!
###Code
### Training the DB-VAE ###
# Hyperparameters
batch_size = 32
learning_rate = 5e-4
latent_dim = 100
# DB-VAE needs slightly more epochs to train since it's more complex than
# the standard classifier, so we use 6 instead of 2
num_epochs = 6
# instantiate a new DB-VAE model and optimizer
dbvae = DB_VAE(100)
optimizer = tf.keras.optimizers.Adam(learning_rate)
# To define the training operation, we will use tf.function which is a powerful tool
# that lets us turn a Python function into a TensorFlow computation graph.
@tf.function
def debiasing_train_step(x, y):
with tf.GradientTape() as tape:
# Feed input x into dbvae. Note that this is using the DB_VAE call function!
y_logit, z_mean, z_logsigma, x_recon = dbvae.call(x)
'''TODO: call the DB_VAE loss function to compute the loss'''
loss, class_loss = debiasing_loss_function(x, x_recon, y, y_logit, z_mean, z_logsigma)
'''TODO: use the GradientTape.gradient method to compute the gradients.
Hint: this is with respect to the trainable_variables of the dbvae.'''
grads = tape.gradient(loss, dbvae.trainable_variables)
# apply gradients to variables
optimizer.apply_gradients(zip(grads, dbvae.trainable_variables))
return loss
# get training faces from data loader
all_faces = loader.get_all_train_faces()
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
# The training loop -- outer loop iterates over the number of epochs
for i in range(num_epochs):
IPython.display.clear_output(wait=True)
print("Starting epoch {}/{}".format(i+1, num_epochs))
    # Recompute the data sampling probabilities
'''TODO: recompute the sampling probabilities for debiasing'''
p_faces = get_training_sample_probabilities(all_faces, dbvae)
# get a batch of training data and compute the training step
for j in tqdm(range(loader.get_train_size() // batch_size)):
# load a batch of data
(x, y) = loader.get_batch(batch_size, p_pos=p_faces)
# loss optimization
loss = debiasing_train_step(x, y)
        # plot the progress every 500 steps
if j % 500 == 0:
mdl.util.plot_sample(x, y, dbvae)
###Output
Starting epoch 6/6
Recomputing the sampling probabilities
###Markdown
Wonderful! Now we should have a trained and (hopefully!) debiased facial classification model, ready for evaluation! 2.6 Evaluation of DB-VAE on Test Dataset Finally, let's test our DB-VAE model on the test dataset, looking specifically at its accuracy on each of the "Dark Male", "Dark Female", "Light Male", and "Light Female" demographics. We will compare the performance of this debiased model against the (potentially biased) standard CNN from earlier in the lab.
###Code
dbvae_logits = [dbvae.predict(np.array(x, dtype=np.float32)) for x in test_faces]
dbvae_probs = tf.squeeze(tf.sigmoid(dbvae_logits))
xx = np.arange(len(keys))
plt.bar(xx, standard_classifier_probs.numpy().mean(1), width=0.2, label="Standard CNN")
plt.bar(xx+0.2, dbvae_probs.numpy().mean(1), width=0.2, label="DB-VAE")
plt.xticks(xx, keys);
plt.title("Network predictions on test dataset")
plt.ylabel("Probability"); plt.legend(bbox_to_anchor=(1.04,1), loc="upper left");
###Output
_____no_output_____ |
method_2/main.ipynb | ###Markdown
Proposed System II Initialisation* **Step 1:** Importing Libraries* **Step 2:** Declaring Constants* **Step 3:** Accepting Input------ **Step 1:** Importing Libraries
###Code
import re
import os
import math
import spacy
import subprocess
import matplotlib.pyplot as plt
from spacy.lang.hi import STOP_WORDS as STOP_WORDS_HI
from wordcloud import WordCloud
###Output
_____no_output_____
###Markdown
**Step 2:** Declaring Constants
###Code
# global variables
tf = {}
nlp = spacy.load('hin-dep-parser-treebank')
lendoc = 0
###Output
_____no_output_____
###Markdown
**Step 3:** Accepting Input
###Code
article = input("Enter the article name")
###Output
Enter the article name4854.txt
###Markdown
Data Preparation* **Step 1:** Extracting Sentences* **Step 2:** Cleaning Sentences------ **Step 1:** Extracting Sentences given a file name
###Code
def getSent(articleName, directory):
articleName = directory + '/' + articleName
f = open(articleName).read()
sentences = f.split('।')
return sentences
###Output
_____no_output_____
###Markdown
**Step 2:** Cleaning sentences using basic regex
###Code
def cleanSent(unclean):
clean = []
for sent in unclean:
sent = re.sub('\\n', '', sent)
        sent = re.sub('[a-zA-Z]', '', sent)
sent = sent.strip()
if len(sent) != 0:
clean.append(sent)
return clean
###Output
_____no_output_____
###Markdown
Calculating TF* **Step 1:** Getting list of all articles* **Step 2:** Calculating TF------ **Step 1**: Getting list of all documents, cleaning each one, and then sending it to calculate TF
###Code
def eachArticle(directory):
global lendoc
documents = os.listdir(directory)
    documents.sort()
x = 0
for doc in documents:
unclean = getSent(doc, directory)
clean = cleanSent(unclean)
if len(clean) > 100:
lendoc += 1
prepTF(clean, doc)
print(lendoc)
return
###Output
_____no_output_____
###Markdown
**Step 2:** Calculating TF of each document using a Lemmatizer
###Code
def prepTF(clean, docs):
global tf
for sent in clean:
doc = nlp(sent)
for w in doc:
if (w.lemma_, docs) in tf:
tf[(w.lemma_, docs)] += 1
else:
tf[(w.lemma_, docs)] = 1
return
###Output
_____no_output_____
###Markdown
**Important: Run this cell only once at the beginning**
###Code
eachArticle('valid')
###Output
1067
###Markdown
Generating Summary* **Step 1:** Calculating TF-IDF given an article* **Step 2:** Generating the summary based on TF-IDF------ **Step 1:** Generating TF-IDF by generating the IDF and using the global TF variable
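For reference, the weight of a lemma $t$ in the selected article is computed here as $$\mathrm{tfidf}(t) = \mathrm{tf}(t)\cdot\log\frac{N}{\mathrm{df}(t)+1},$$ where $\mathrm{tf}(t)$ is the count of $t$ in the article, $\mathrm{df}(t)$ is the frequency of $t$ accumulated over the corpus via the global `tf` table, and $N$ is the number of (long) documents counted in `lendoc`.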
###Code
def oneArticle(articleName, directory):
global tf
global lendoc
df = {}
unclean = getSent(articleName, directory)
clean = cleanSent(unclean)
documents = os.listdir(directory)
if len(clean) <= 100:
prepTF(clean, articleName)
for sent in clean:
doc = nlp(sent)
for w in doc:
if w.lemma_ in df:
continue
for docs in documents:
if (w.lemma_, docs) in tf:
if w.lemma_ in df:
df[w.lemma_] += tf[(w.lemma_, docs)]
else:
df[w.lemma_] = tf[(w.lemma_, docs)]
idf = {}
for word in df:
if word not in idf:
idf[word] = math.log(lendoc/(df[word] + 1))
tfidf = {}
for word in idf:
if word not in tfidf:
tfidf[word] = tf[(word, articleName)] * idf[word]
return tfidf
tfidf = oneArticle(article, 'valid')
###Output
_____no_output_____
###Markdown
**Step 2:** Generating the final summary by sorting sentences according to their TF-IDF scores, with additional weights that encode simple linguistic heuristics (overlap with the heading, proper nouns, and other nouns), as formalized below
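Concretely, each sentence $s$ is scored as $$\mathrm{score}(s) = \frac{1}{|s|}\sum_{t \in s,\; t \notin \text{stopwords}} \mathrm{tfidf}(t) \;+\; 15\, n_{\text{heading}} \;+\; 7\, n_{\text{NNP}} \;+\; 5\, n_{\text{noun}},$$ where $n_{\text{heading}}$ counts words of $s$ that also appear in the article heading, $n_{\text{NNP}}$ counts proper nouns, and $n_{\text{noun}}$ counts the remaining nouns; the top 30% of sentences by this score are kept in their original order.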
###Code
def getSummary(articleName, directory, tfidf):
unclean = getSent(articleName, directory)
clean = cleanSent(unclean)
heading = clean[0].split(' ')
slicelen = slice(1, len(clean))
text = clean[slicelen]
size = round(0.3 * len(text))
sent_tfidf = {}
for sentI in range(0, len(text)):
w1 = 0
w2 = 0
w3 = 0
sent = text[sentI]
doc = nlp(sent)
stfidf = 0
for w in doc:
if w.lemma_ not in STOP_WORDS_HI:
stfidf += tfidf[w.lemma_]
if w.text in heading:
w1 += 15
if w.tag_ == 'NNP':
w2 += 7
elif str(w.tag_)[0] == 'N':
w3 += 5
sent_tfidf[sentI] = stfidf/len(doc) + w1 + w2 + w3
sent_tfidf = sorted(sent_tfidf.items(), key = lambda kv:(kv[1], kv[0]), reverse=True)
sent_limit = []
for i in range(0, size):
sent_limit.append(sent_tfidf[i])
sent_limit = sorted(sent_limit)
summary = ""
actual = ""
for i in sent_limit:
summary += text[i[0]] + " । "
for i in range(0, len(clean)):
actual += clean[i] + " । "
summary = clean[0] + " । " + summary
return actual, summary
actual, summary = getSummary(article, 'valid', tfidf)
###Output
_____no_output_____
###Markdown
Presenting Output* **Step 1:** Representing the summary as wordcloud* **Step 2:** Storing everything externally as a file------ **Step 1:** Generating the wordcloud
###Code
font= "Poppins-Light.ttf"
summary = re.sub("।", '', summary)
wordcloud = WordCloud(
width=400,
height=300,
max_font_size=50,
max_words=1000,
background_color="white",
stopwords=STOP_WORDS_HI,
regexp=r"[\u0900-\u097F]+",
font_path=font
).generate(summary)
plt.figure()
plt.imshow(wordcloud, interpolation="bilinear")
plt.axis("off")
plt.show()
name = "summary.png"
###Output
_____no_output_____
###Markdown
**Step 2:** Exporting the summary and wordcloud as external files
###Code
wordcloud.to_file(name)
f = open('summary.txt', 'w+')
f.write(summary)
f.close()
subprocess.call(["mv", 'summary.png', "output/"])
subprocess.call(["mv", 'summary.txt', "output/"])
###Output
_____no_output_____ |
Install the HCA CLI tools/Install.ipynb | ###Markdown
Install the HCA CLI ❎ Installation just works
###Code
%%bash
pip3 install hca
import hca; dir(hca)
%%bash
hca --help
###Output
usage: hca [-h] [--version] [--log-level {ERROR,DEBUG,CRITICAL,INFO,WARNING}]
{help,upload,dss,query} ...
Human Cell Atlas Command Line Interface
For general help, run ``hca help``.
For help with individual commands, run ``hca <command> --help``.
Positional Arguments:
{help,upload,dss,query}
upload Upload data to DCP
dss Interact with the HCA Data Storage System
query Interact with the HCA DCP Query Service
Optional Arguments:
-h, --help show this help message and exit
--version show program's version number and exit
--log-level {ERROR,DEBUG,CRITICAL,INFO,WARNING}
['DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL']
|
20210115/.ipynb_checkpoints/onePeriodPolicyGradient-checkpoint.ipynb | ###Markdown
The model (one step reward)$u(c) = log(c)$ utility function $y = 1$ Deterministic income $p(r = 0.02) = 0.5$ $p(r = -0.01) = 0.5$ Explicit form of policy is linear:$$ c(w) = \frac{y+w}{2 \beta +1} = 0.3448275862068966 + 0.3448275862068966 w$$
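As a quick sanity check of that closed form (derivation added for clarity): with $u(c)=\log c$ and bequest utility $u_B(b)=2\log b$, the objective is $\log c + \beta\,\mathbb{E}\left[2\log\big((y+w-c)(1+r)\big)\right] = \log c + 2\beta\log(y+w-c) + \text{const}$, since the random return only enters through an additive constant. The first-order condition $\frac{1}{c} = \frac{2\beta}{y+w-c}$ gives $c(w) = \frac{y+w}{2\beta+1}$, which for $\beta=0.95$ and $y=1$ equals $\frac{1+w}{2.9} \approx 0.3448\,(1+w)$.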
###Code
# infinite horizon MDP problem
%pylab inline
import numpy as np
from scipy.optimize import minimize
import warnings
warnings.filterwarnings("ignore")
# discounting factor
beta = 0.95
# wealth level
eps = 0.001
w_low = eps
w_high = 10
# interest rate
r_up = 0.02
r_down = 0.01
# deterministic income
y = 1
# good state and bad state economy with equal probability 0.5
# with good investment return 0.02 or bad investment return -0.01
ws = np.linspace(w_low, w_high**(0.5),100)**2
Vs = np.zeros(100)
Cs = np.zeros(100)
def u(c):
return np.log(c)
def uB(b):
B = 2
return B*u(b)
for i in range(len(ws)):
w = ws[i]
def obj(c):
        return -(u(c) + beta*(uB((y+w-c)*(1+r_up)) + uB((y+w-c)*(1-r_down)))/2)
bounds = [(eps, y+w-eps)]
res = minimize(obj, eps, method='SLSQP', bounds=bounds)
Cs[i] = res.x[0]
Vs[i] = -res.fun
plt.plot(ws, Cs)
plt.plot(ws,(1+ws)/(2*beta+1))
###Output
_____no_output_____
###Markdown
policy gradient. Assume a Gaussian policy with parameters $\theta = (a, b, \sigma = 0.1)$, i.e. $\pi_\theta(c \mid w) \sim N(aw+b, \sigma)$, with $\sigma$ held fixed. The parameters are initialized at $a = 0$, $b = 0$, $\sigma = 0.1$ (matching the code below) and updated by gradient ascent:$$\theta_{k+1} = \theta_{k} + \alpha \nabla_\theta V(\pi_\theta)\big|_{\theta_k}$$
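The update uses the REINFORCE (score-function) estimator $\nabla_\theta V(\pi_\theta) = \mathbb{E}\left[R\,\nabla_\theta \log \pi_\theta(c\mid w)\right]$, and for the Gaussian policy $\pi_\theta(c\mid w) = N(aw+b, \sigma^2)$ the score is $\nabla_a \log \pi_\theta = \frac{(c-\mu)\,w}{\sigma^2}$ and $\nabla_b \log \pi_\theta = \frac{c-\mu}{\sigma^2}$. In the code below the constant $1/\sigma^2$ factor is dropped; this only rescales the gradient and is absorbed into the step size $\alpha$.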
###Code
T = 1
# number of simulated decision steps per path (one-period problem, so T = 1)
def poly(theta, w):
return theta[0] * w + theta[1]
def mu(theta, w):
return poly(theta, w)
def simSinglePath(theta):
wPath = np.zeros(T)
aPath = np.zeros(T)
rPath = np.zeros(T)
w = np.random.uniform(w_low, w_high)
for t in range(T):
c = np.random.normal(mu(theta, w), theta[-1])
c = max(min(c, w+y-eps), eps)
wPath[t] = w
aPath[t] = c
rPath[t] = u(c)
if np.random.uniform(0,1) > 0.5:
w = (w+y-c) * (1+r_up)
rPath[t] += beta*uB(w)
else:
w = (w+y-c) * (1-r_down)
rPath[t] += beta*uB(w)
return wPath, aPath, rPath
def gradientV(theta, D = 1000):
'''
D is the sample size
'''
notValid = True
while notValid:
grad = np.zeros(len(theta))
newGrad = np.zeros(len(theta))
for d in range(D):
wp, ap, rp = simSinglePath(theta)
newGrad[0] = np.sum((ap - mu(theta, wp))*(wp))
newGrad[1] = np.sum((ap - mu(theta, wp))*(1))
grad += newGrad * np.sum(rp)
grad /= D
if numpy.isnan(grad).any() == False:
notValid = False
return grad
def updateTheta(theta):
theta = theta + alpha * gradientV(theta)
return theta
def plot3(theta):
plt.plot(ws, Cs, 'b')
plt.plot(ws, mu(theta, ws), 'r')
# initial theta
N = 10000
theta = [0,0,0.1]
# gradient ascend step size
alpha = 0.01
# store theta
THETA3 = np.zeros((len(theta)-1,N))
for i in range(N):
if i%1000 ==0:
print(i)
print(theta)
theta = updateTheta(theta)
THETA3[:,i] = theta[:len(theta)-1]
plot3(theta)
THETA3[:,-1]
# First set up the figure, the axis, and the plot element we want to animate
from IPython.display import HTML
from matplotlib import animation
fig = plt.figure()
ax = plt.axes(xlim=(0, 10), ylim=(0, 10))
line, = ax.plot([], [], lw=2)
# initialization function: plot the background of each frame
def init():
plt.plot(ws, Cs, 'b')
line.set_data([], [])
return line,
def animate(i):
x = ws
y = mu(THETA3[:,i], ws)
line.set_data(x, y)
return line,
# call the animator. blit=True means only re-draw the parts that have changed.
anim = animation.FuncAnimation(fig, animate, init_func=init,
frames=1000, interval=10, blit=True)
HTML(anim.to_html5_video())
###Output
_____no_output_____ |
object_detection_tutorial_Webcam.ipynb | ###Markdown
Imports
###Code
import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image
if tf.__version__ < '1.4.0':
raise ImportError('Please upgrade your tensorflow installation to v1.4.* or later!')
###Output
C:\ProgramData\Anaconda3\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
from ._conv import register_converters as _register_converters
###Markdown
Env setup
###Code
# This is needed to display the images.
%matplotlib inline
# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
###Output
_____no_output_____
###Markdown
Object detection importsHere are the imports from the object detection module.
###Code
from utils import label_map_util
from utils import visualization_utils as vis_util
###Output
D:\My_Work\OpenCV_libs\models\research\object_detection\utils\visualization_utils.py:25: UserWarning:
This call to matplotlib.use() has no effect because the backend has already
been chosen; matplotlib.use() must be called *before* pylab, matplotlib.pyplot,
or matplotlib.backends is imported for the first time.
The backend was *originally* set to 'module://ipykernel.pylab.backend_inline' by the following code:
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\ProgramData\Anaconda3\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel_launcher.py", line 16, in <module>
app.launch_new_instance()
File "C:\ProgramData\Anaconda3\lib\site-packages\traitlets\config\application.py", line 658, in launch_instance
app.start()
File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 486, in start
self.io_loop.start()
File "C:\ProgramData\Anaconda3\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start
super(ZMQIOLoop, self).start()
File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\ioloop.py", line 888, in start
handler_func(fd_obj, events)
File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 440, in _handle_events
self._handle_recv()
File "C:\ProgramData\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv
self._run_callback(callback, msg)
File "C:\ProgramData\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback
callback(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
return fn(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher
return self.dispatch_shell(stream, msg)
File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 233, in dispatch_shell
handler(stream, idents, msg)
File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request
user_expressions, allow_stdin)
File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 208, in do_execute
res = shell.run_cell(code, store_history=store_history, silent=silent)
File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 537, in run_cell
return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2728, in run_cell
interactivity=interactivity, compiler=compiler, result=result)
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2850, in run_ast_nodes
if self.run_code(code, result):
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2910, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-2-d1be25ef39a9>", line 2, in <module>
get_ipython().run_line_magic('matplotlib', 'inline')
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2095, in run_line_magic
result = fn(*args,**kwargs)
File "<decorator-gen-108>", line 2, in matplotlib
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\magic.py", line 187, in <lambda>
call = lambda f, *a, **k: f(*a, **k)
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\magics\pylab.py", line 99, in matplotlib
gui, backend = self.shell.enable_matplotlib(args.gui)
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2978, in enable_matplotlib
pt.activate_matplotlib(backend)
File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\pylabtools.py", line 308, in activate_matplotlib
matplotlib.pyplot.switch_backend(backend)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\pyplot.py", line 232, in switch_backend
matplotlib.use(newbackend, warn=False, force=True)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\__init__.py", line 1305, in use
reload(sys.modules['matplotlib.backends'])
File "C:\ProgramData\Anaconda3\lib\importlib\__init__.py", line 166, in reload
_bootstrap._exec(spec, module)
File "C:\ProgramData\Anaconda3\lib\site-packages\matplotlib\backends\__init__.py", line 14, in <module>
line for line in traceback.format_stack()
import matplotlib; matplotlib.use('Agg') # pylint: disable=multiple-statements
###Markdown
Model preparation VariablesAny model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_CKPT` to point to a new .pb file. By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.
###Code
# What model to download.
MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_FILE = MODEL_NAME + '.tar.gz'
DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'
# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_CKPT = MODEL_NAME + '/frozen_inference_graph.pb'
# List of the strings that is used to add correct label for each box.
PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
NUM_CLASSES = 90
###Output
_____no_output_____
###Markdown
Download Model
###Code
opener = urllib.request.URLopener()
opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
tar_file = tarfile.open(MODEL_FILE)
for file in tar_file.getmembers():
file_name = os.path.basename(file.name)
if 'frozen_inference_graph.pb' in file_name:
tar_file.extract(file, os.getcwd())
###Output
_____no_output_____
###Markdown
Load a (frozen) Tensorflow model into memory.
###Code
detection_graph = tf.Graph()
with detection_graph.as_default():
od_graph_def = tf.GraphDef()
with tf.gfile.GFile(PATH_TO_CKPT, 'rb') as fid:
serialized_graph = fid.read()
od_graph_def.ParseFromString(serialized_graph)
tf.import_graph_def(od_graph_def, name='')
###Output
_____no_output_____
###Markdown
Loading label mapLabel maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`. Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine
###Code
label_map = label_map_util.load_labelmap(PATH_TO_LABELS)
categories = label_map_util.convert_label_map_to_categories(label_map, max_num_classes=NUM_CLASSES, use_display_name=True)
category_index = label_map_util.create_category_index(categories)
import cv2
cap=cv2.VideoCapture(0) # 0 selects the first attached webcam
filename="D:\\My_Work\\OpenCV_libs\\output_video.avi" # path where the output video is stored
codec=cv2.VideoWriter_fourcc('m','p','4','v')#fourcc stands for four character code
framerate=30
resolution=(640,480)
VideoFileOutput=cv2.VideoWriter(filename,codec,framerate, resolution)
with detection_graph.as_default():
with tf.Session(graph=detection_graph) as sess:
ret=True
while (ret):
ret, image_np=cap.read()
# Definite input and output Tensors for detection_graph
image_tensor = detection_graph.get_tensor_by_name('image_tensor:0')
# Each box represents a part of the image where a particular object was detected.
detection_boxes = detection_graph.get_tensor_by_name('detection_boxes:0')
# Each score represent how level of confidence for each of the objects.
# Score is shown on the result image, together with the class label.
detection_scores = detection_graph.get_tensor_by_name('detection_scores:0')
detection_classes = detection_graph.get_tensor_by_name('detection_classes:0')
num_detections = detection_graph.get_tensor_by_name('num_detections:0')
# Expand dimensions since the model expects images to have shape: [1, None, None, 3]
image_np_expanded = np.expand_dims(image_np, axis=0)
# Actual detection.
(boxes, scores, classes, num) = sess.run(
[detection_boxes, detection_scores, detection_classes, num_detections],
feed_dict={image_tensor: image_np_expanded})
# Visualization of the results of a detection.
vis_util.visualize_boxes_and_labels_on_image_array(
image_np,
np.squeeze(boxes),
np.squeeze(classes).astype(np.int32),
np.squeeze(scores),
category_index,
use_normalized_coordinates=True,
line_thickness=8)
VideoFileOutput.write(image_np)
cv2.imshow('live_detection',image_np)
if cv2.waitKey(25) & 0xFF==ord('q'):
break
cv2.destroyAllWindows()
cap.release()
###Output
_____no_output_____ |
04_stats_for_data_analysis/02_w1q_quiz_questions.ipynb | ###Markdown
Quiz. Confidence intervals for estimating the mean
###Code
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import scipy.stats as sts
import seaborn as sns
from contextlib import contextmanager
sns.set()
sns.set_style("whitegrid")
color_palette = sns.color_palette('deep') + sns.color_palette('husl', 6) + sns.color_palette('bright') + sns.color_palette('pastel')
%matplotlib inline
sns.palplot(color_palette)
def ndprint(a, precision=3):
with np.printoptions(precision=precision, suppress=True):
print(a)
###Output
_____no_output_____
###Markdown
02. Mortality. For 61 large towns in England and Wales, the average annual mortality per 100,000 population (based on 1958–1964 data) and the calcium concentration in the drinking water (in parts per million) are known. The higher the calcium concentration, the harder the water. The towns are additionally split into northern and southern ones. Build a 95% confidence interval for the average annual mortality in large towns. What is its lower bound? Round the answer to 4 decimal places. We want an unbiased estimate of the standard deviation. To avoid worrying each time about whether std() is computed correctly in your case, you can always use std(ddof=1) (ddof stands for delta degrees of freedom), so the normalization is always by n-1.
###Code
data = pd.read_csv('data/02_water.txt', sep='\t')
data.sample(5)
sample = data['mortality']
n = len(sample)
df = n - 1
sample_mean = sample.mean()
sample_var = sample.std(ddof=1)
print sample_mean, sample_var
from scipy.stats import t
t_int = np.array(t.interval(0.95, df))
t_int
t_int * (sample_var / np.sqrt(n))
conf_int = sample_mean + t_int * (sample_var / np.sqrt(n))
conf_int
round(conf_int[0], 4)
###Output
_____no_output_____
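###Markdown
Equivalently (a small illustrative addition), the same interval can be obtained in a single call by passing the location and scale directly to `t.interval`:
###Code
# same 95% interval, using the loc/scale arguments of scipy.stats.t.interval
t.interval(0.95, df, loc=sample_mean, scale=sample_var / np.sqrt(n))
###Output
_____no_output_____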
###Markdown
Same as above, just wrapped in a single function. 03. South mortality. Using the data from the previous question, build a 95% confidence interval for the average annual mortality across all southern towns. What is its upper bound? Round the answer to 4 decimal places.
###Code
def get_conf_interval(sample, conf_alpha=0.95):
n = len(sample)
t_int = np.array(t.interval(conf_alpha, n - 1))
return np.mean(sample) + t_int * (np.std(sample, ddof=1) / np.sqrt(n))
get_conf_interval(sample) == conf_int
south_sample = data['mortality'][data['location'] == 'South']
north_sample = data['mortality'][data['location'] == 'North']
south_conf_int = get_conf_interval(south_sample)
north_conf_int = get_conf_interval(north_sample)
round(south_conf_int[1], 4)
###Output
_____no_output_____
###Markdown
04. North mortality. On the same data, build a 95% confidence interval for the average annual mortality across all northern towns. Does this interval overlap with the previous one? What conclusion do you think can be drawn from this?
###Code
print 'south: ', south_conf_int
print 'north: ', north_conf_int
###Output
south: [1320.15174629 1433.46363832]
north: [1586.5605252 1680.6394748]
###Markdown
05. Water hardness. Do the 95% confidence intervals for the mean water hardness in the northern and southern towns overlap?
###Code
south_hardness_sample = data['hardness'][data['location'] == 'South']
north_hardness_sample = data['hardness'][data['location'] == 'North']
south_hardness_conf_int = get_conf_interval(south_hardness_sample)
north_hardness_conf_int = get_conf_interval(north_hardness_sample)
print 'south: ', south_hardness_conf_int
print 'north: ', north_hardness_conf_int
###Output
south: [53.46719869 86.07126285]
north: [21.42248729 39.37751271]
###Markdown
06. Sample size. Recall the confidence interval formula for the mean of a normally distributed random variable with variance $\sigma^2$:$$\bar{X}_n\pm z_{1-\frac{\alpha}{2}} \frac{\sigma}{\sqrt{n}}$$With $\sigma=1$, what sample size is needed to estimate the mean with precision $\pm0.1$ at the 95% confidence level? From $0.1 = z_{0.975} \cdot \frac{1}{\sqrt{n}}$ we get $n = \lceil (z_{0.975}/0.1)^2 \rceil = \lceil (1.96/0.1)^2 \rceil = 385$ (roughly $20^2 = 400$ with the cruder approximation $z_{0.975} \approx 2$).
###Code
from scipy.stats import norm
z_95 = norm.ppf(0.975)
np.ceil((z_95 / 0.1)**2)
###Output
_____no_output_____ |
lecture2/Python-Programming-master/Module-1:Basic of python/lec:1.1/Variable.ipynb | ###Markdown
Module-1---Basics of Python ProgrammingCourse-1 Week-1 Day-3---Instructor : Muhammad Iqran (Software Engineer) Lecture:1.3Variables, Numbers Datatype And Operators Learning Agenda of this notebook Python is dynamically typedIntellisense / Code Completion --> Variables and variable naming conventions Assigning values to multiple variables in single line Checking type of a variable using Built-in type() function Checking ID of a variable using Built-in id() function Do we actually store data inside variables? Deleting a variable from Kernel memory 1. You don't have to specify a data type in Python, since it is a dynamically typed language
###Code
name_of_instructor = "Muhammad Iqran"
name_of_instructor
# type(name_of_instructor)
###Output
_____no_output_____
###Markdown
>**Variables**: While working with a programming language such as Python, information is stored in *variables*. You can think of variables as containers for storing data. The data stored within a variable is called its *value*. There are three properties associated with every variable: its type, its value, and its identity. A statement in Python is something that results in an action or the execution of a command, whereas an expression always results in a value. A variable in Python is a reference rather than a container. Python is dynamically typed (it carries out type-checking at runtime), which means you don't have to associate a type with a variable name, so we don't have to explicitly declare a variable; rather, the variable is created in the same statement where we assign an object to it. 2. Variable name conventions- In programming languages, **identifiers** are names used to identify a variable, function, or other entities in a program. Variable names can be short (`a`, `x`, `y`, etc.) or descriptive (`my_favorite_color`, etc.).- An identifier or variable's name must start with a letter or the underscore character `_`. It cannot begin with a number. - A variable name can only contain lowercase (small) or uppercase (capital) letters, digits, or underscores (`a`-`z`, `A`-`Z`, `0`-`9`, and `_`). - Spaces are not allowed. Instead, we must use snake_case to make variable names readable. - Variable names are case-sensitive, i.e., `a_variable`, `A_Variable`, and `A_VARIABLE` are all different variables.- Keywords are reserved words. Each keyword has a specific meaning to the Python interpreter. A reserved keyword may not be used as an identifier. Here is a list of the Python keywords.```False, None, True, and, as, assert, async, await, break, class, continue, def, del, elif, else, except, finally, for, from, global, if, import, in, is, lambda, nonlocal, not, or, pass, raise, return, try, while, with, yield```- To get help about these keywords: Type `help('keyword')` in the cell below
###Code
# A variable name cannot start with a special character or digit
var1 = 25
# 1var = 530
#@i = 980
# if = 40
# help("if")
###Output
_____no_output_____
###Markdown
3. Assigning values to multiple variables in single line
###Code
#Assigning multiple values to multiple variables
a, b, c = 5, 3.2, "Hello"
print ('a = ',a,' b = ',b,' c = ',c)
###Output
a = 5 b = 3.2 c = Hello
###Markdown
4. Checking type of a variable using Built-in type() function
###Code
# to check the type of variable
name = "Iqran Khan"
print("name is of ", type(name))
x = 234
print("x is of ", type(x))
y = 5.321
print("y is of ", type(y))
###Output
name is of <class 'str'>
x is of <class 'int'>
y is of <class 'float'>
###Markdown
5. To Check the ID of a Variable- Every Python object has an associated ID (memory address). The Python built-in `id()` function returns the identity of an object.
###Code
x = 234
y = 5.321
id(x), id(y)
###Output
_____no_output_____
###Markdown
6. Do we actually store data inside variables
###Code
a = 10
b = 10
id(a), id(b)
###Output
_____no_output_____
###Markdown
>- Both the variables `a` and `b` have the same ID, i.e., both `a` and `b` point to the same memory location.>- Variables in Python are not actual objects; rather, they are references to objects that are present in memory.>- So both variables refer to the same object 10 in memory and thus have the same ID.
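A small illustrative check (an addition to the original lecture) of this reference behaviour, using the `is` operator, which compares object identities rather than values:
###Code
a = 10
b = 10
print(a is b)              # True: both names refer to the same int object (CPython caches small integers)
s1 = "deep learning"
s2 = "".join(["deep", " ", "learning"])
print(s1 == s2, s1 is s2)  # equal values, but (typically) two distinct string objects
###Output
_____no_output_____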
###Code
var1="deep learning"
id(var1)
var1 = "Muhammad Iqran"
id(var1)
###Output
_____no_output_____
###Markdown
>Note that the string object "deep learning" has become an orphaned object, as no variable is referring to it now. This is because the reference `var1` is now pointing to the new object "Muhammad Iqran". All orphaned objects are reaped by the Python garbage collector. 8. Use of the `dir()` function and the `del` Keyword- The built-in `dir()` function, when called without an argument, returns the names in the current scope.- If passed a - module name: then returns the module's attributes - class name: then returns its attributes and recursively the attributes of its base classes - object name: then returns its attributes, its class's attributes, and recursively the attributes of its class's base classes.
###Code
print(dir())
newvar = 10
print("newvar=", newvar)
print(dir())
del var1
print(dir())
var1
# This is a Built-in math library
import math
print(dir(math))  # this line shows what is inside this library
help("math.isqrt")
###Output
Help on built-in function isqrt in math:
math.isqrt = isqrt(n, /)
Return the integer part of the square root of the input.
|
notebooks/01_object-recognition_inceptV2resnet.ipynb | ###Markdown
POC using object recognition in Encoder-Part (CNN)Approach:* Transfer captions to entities* Check quality of recognition* Use net as Decoder-Part Imports
###Code
import pandas as pd
import numpy as np
import os
import math
import seaborn as sns
import urllib.request
from urllib.parse import urlparse
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import tensorflow_addons as tfa
#https://machinelearningmastery.com/how-to-use-transfer-learning-when-developing-convolutional-neural-network-models/
from keras.applications.inception_resnet_v2 import InceptionResNetV2
#from keras.applications.xception import Xception
from keras.models import Model
from keras import metrics
from keras.callbacks import ModelCheckpoint, TensorBoard
from numba import cuda
import sklearn.model_selection as skms
from sklearn.utils import class_weight
import sklearn.feature_extraction.text as skfet
import spacy
import nltk
import nltk.stem as nstem
import sklego.meta as sklmet
#from wcs.google import google_drive_share
import urllib.request
from urllib.parse import urlparse
#from google.colab import drive
import datetime as dt
import time
import warnings
warnings.simplefilter(action='ignore')
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
###Output
_____no_output_____
###Markdown
Configuration
###Code
# Runtime config
RUN_ON_KAGGLE = False
# Model
MODEL_NAME = "InceptionV3_customized"
DO_TRAIN_VALID_SPLIT = False
BATCH_SIZE = 64*4
EPOCHS = 10
AUTOTUNE = tf.data.experimental.AUTOTUNE # Adapt preprocessing and prefetching dynamically to reduce GPU and CPU idle time
DO_SHUFFLE = False
SHUFFLE_BUFFER_SIZE = 1024 # Shuffle the training data by a chunck of 1024 observations
IMG_DIMS = [299, 299]
IMG_CHANNELS = 3 # Keep RGB color channels to match the input format of the model
# Data pipeline
DP_TFDATA = "Data pipeline using tf.data"
DP_IMGGEN = "Data pipeline using tf.keras.ImageGenerator"
DP = DP_TFDATA
# Directories and filenames
if RUN_ON_KAGGLE:
FP_CAPTIONS = '../input/flickr8k/captions.txt'
DIR_IMAGES = '../input/flickr8k/Images/'
DIR_IMAGE_FEATURES = '../input/aida-image-captioning/Images/'
DIR_MODEL_STORE = './models/'
DIR_MODEL_LOG = './models/'
DIR_RESULT_STORE = './results/'
DIR_TENSORBOARD_LOG = './tensorboard/'
DIR_INTERIM = "./models/data/interim/"
DIR_RAW = "./models/data/raw/"
else:
FP_CAPTIONS = '../data/raw/flickr8k/captions.txt'
DIR_IMAGES = '../data/raw/flickr8k/Images/'
DIR_IMAGE_FEATURES = '../data/interim/aida-image-captioning/Images/'
DIR_MODEL_STORE = f'../models/{MODEL_NAME}/'
DIR_MODEL_LOG = f'../models/logs/{MODEL_NAME}/'
DIR_RESULT_STORE = f'../data/results/{MODEL_NAME}/'
DIR_TENSORBOARD_LOG = './tensorboard_logs/scalars/'
DIR_INTERIM = '../data/interim/'
DIR_RAW = "../data/raw/"
SEED = 42
# Set the max column width to see the complete caption
pd.set_option('display.max_colwidth',-1)
###Output
_____no_output_____
###Markdown
Helper functions
###Code
def get_lemma_text(text: str) -> str:
"""Get roots of words"""
# Split text to list
l_text = text.lower().split()
# Lemmatize
lemma = nstem.WordNetLemmatizer()
for i, t in enumerate(l_text):
tl = lemma.lemmatize(t)
if tl != t:
l_text[i] = tl
return " ".join(l_text)
# To get access to a GPU instance you can use the `change runtime type` and set the option to `GPU` from the `Runtime` tab in the notebook
# Checking the GPU availability for the notebook
#tf.test.gpu_device_name()
# NOTE: assumed value; adjust to the memory (in MB) of your GPU, it is used below to size the virtual devices
MEMORY_OF_GPU = 10 * 1024
gpus = tf.config.list_physical_devices('GPU')
if gpus:
# Create virtual GPUs
try:
tf.config.experimental.set_virtual_device_configuration(
#OK, but solwer:
#gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024)],
#OK
gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=MEMORY_OF_GPU//2),
tf.config.experimental.VirtualDeviceConfiguration(memory_limit=MEMORY_OF_GPU//2)],
#Error using NCCL automatically on mirrored strategy: gpus[0], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=10*1024)],
)
tf.config.experimental.set_virtual_device_configuration(
#OK, but solwer:
#gpus[1], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024),
# tf.config.experimental.VirtualDeviceConfiguration(memory_limit=2.5*1024)],
#OK
gpus[1], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=MEMORY_OF_GPU//2),
tf.config.experimental.VirtualDeviceConfiguration(memory_limit=MEMORY_OF_GPU//2)],
#Error using NCCL automatically on mirrored strategy: gpus[1], [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=10*1024)],
)
except:
# Virtual devices must be set before GPUs have been initialized
print("Warning: During GPU handling.")
pass
finally:
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPU,", len(logical_gpus), "Logical GPUs\n")
# Set runtime context and batch size
l_rtc_names = [
"multi-GPU_MirroredStrategy",
"multi-GPU_CentralStorageStrategy",
"1-GPU",
"CPUs",
"multi-GPU_MirroredStrategy_NCCL-All-Reduced",
]
l_rtc = [
tf.distribute.MirroredStrategy().scope(),
tf.distribute.experimental.CentralStorageStrategy().scope(),
tf.device("/GPU:0"),
tf.device("/CPU:0"),
tf.distribute.MirroredStrategy(cross_device_ops=tf.distribute.NcclAllReduce()).scope(),
]
if len(gpus) == 0:
rtc_idx = 3
batch_size = 64
elif len(gpus) == 1:
rtc_idx = 2
batch_size = 4*256
elif len(gpus) > 1:
rtc_idx = 0
batch_size = 8*256
runtime_context = l_rtc[rtc_idx]
print(f"\nRuntime Context: {l_rtc_names[rtc_idx]}")
print(f"Recommended Batch Size: {batch_size} datasets")
###Output
Warning: During GPU handling.
1 Physical GPU, 2 Logical GPUs
WARNING:tensorflow:NCCL is not supported when using virtual GPUs, fallingback to reduction to one device
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1')
INFO:tensorflow:ParameterServerStrategy (CentralStorageStrategy if you are using a single machine) with compute_devices = ['/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1'], variable_device = '/device:CPU:0'
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1')
Runtime Context: 1-GPU
Recommended Batch Size: 1024 datasets
###Markdown
Datapipeline based on tf.data
###Code
def parse_function(filename, label):
"""Function that returns a tuple of normalized image array and labels array.
Args:
filename: string representing path to image
label: 0/1 one-dimensional array of size N_LABELS
"""
# Read an image from a file
image_string = tf.io.read_file(DIR_IMAGES + filename)
# Decode it into a dense vector
image_decoded = tf.image.decode_jpeg(image_string, channels=IMG_CHANNELS)
# Resize it to fixed shape
image_resized = tf.image.resize(image_decoded, [IMG_DIMS[0], IMG_DIMS[1]])
# Normalize it from [0, 255] to [0.0, 1.0]
image_normalized = image_resized / 255.0
return image_normalized, label
def create_dataset(filenames, labels, cache=True):
"""Load and parse dataset.
Args:
filenames: list of image paths
labels: numpy array of shape (BATCH_SIZE, N_LABELS)
is_training: boolean to indicate training mode
"""
# Create a first dataset of file paths and labels
dataset = tf.data.Dataset.from_tensor_slices((filenames, labels))
# Parse and preprocess observations in parallel
dataset = dataset.map(parse_function, num_parallel_calls=AUTOTUNE)
if cache == True:
# This is a small dataset, only load it once, and keep it in memory.
dataset = dataset.cache()
# Shuffle the data each buffer size
if DO_SHUFFLE:
dataset = dataset.shuffle(buffer_size=SHUFFLE_BUFFER_SIZE)
# Batch the data for multiple steps
dataset = dataset.batch(BATCH_SIZE)
# Fetch batches in the background while the model is training.
dataset = dataset.prefetch(buffer_size=AUTOTUNE)
return dataset
###Output
_____no_output_____
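###Markdown
As a quick smoke test of the pipeline (an illustrative sketch; it assumes the `X_train`/`y_train` arrays built in the "Create ImageGenerators" section below), a single batch can be pulled to verify image and label shapes:
###Code
# Illustrative sketch: pull one batch and check shapes (run after X_train / y_train exist)
sample_ds = create_dataset(X_train[:BATCH_SIZE], y_train[:BATCH_SIZE], cache=False)
for images, labels in sample_ds.take(1):
    print(images.shape)   # (batch, 299, 299, 3), pixel values scaled to [0, 1]
    print(labels.shape)   # (batch, number of label columns)
###Output
_____no_output_____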
###Markdown
Preproc
###Code
# Create a dataframe which summarizes the image, path & captions as a dataframe
# Each image id has 5 captions associated with it therefore the total dataset should have 40455 samples.
df = pd.read_csv(FP_CAPTIONS)
df.head()
# Aggregateion by filename
df_agg = df.groupby("image").first().reset_index()
df_agg.head()
# Get caption labels
sp = spacy.load('en_core_web_sm') # english language model
sp_cap = sp(df_agg.caption[1])
print(sp_cap)
for idx, token in enumerate(sp_cap):
if idx == 0:
print(f"{'Word pos.':^10} {'Text':<15} {'POS-Tag':^10} {'POS-Tag expl.'}")
print(f"{idx+1:^10} {token.text:<15} {token.tag_:^10} {token.pos_:<5} - {spacy.explain(token.tag_)}")
cap_cleaned = " ".join(set([token.text for token in sp_cap if token.tag_ in ["ADJ", "AUX", "JJ", "NN", "NNS", "VB", "VBG", "VBN", "VBZ"]]))
cap_cleaned
df["caption_short"] = df.caption.map(lambda x: " ".join(set([token.text for token in sp(x) if token.tag_ in ["ADJ", "AUX", "JJ", "NN", "NNS", "VB", "VBG", "VBN", "VBZ"]])))
df.head()
df["caption_lemm"] = df.apply(lambda r: get_lemma_text(r["caption_short"]), axis=1)
df.head()
# Train / test split
df_train, df_test = skms.train_test_split(df, test_size=.2, random_state=42)
print(df_train.shape, df_test.shape)
# Def feature and target column
feat_col = "caption_lemm"
# Build up feature and target structure
X_train = df_train[feat_col]
X_test = df_test[feat_col]
# Vectorize captions
vectorizer = skfet.TfidfVectorizer()
# Train it
X_train_CountVec = vectorizer.fit_transform(X_train)
print(f"{'Train feature matrix shape:':<30} {X_train_CountVec.shape}")
# Transform test features to tokens
X_test_CountVec = vectorizer.transform(X_test)
print(f"{'Test feature matrix shape:':<30} {X_test_CountVec.shape}")
###Output
Train feature matrix shape: (32364, 5925)
Test feature matrix shape: (8091, 5925)
###Markdown
Create ImageGenerators
###Code
print(DP)
if DO_TRAIN_VALID_SPLIT:
df_train, df_valid = skms.train_test_split(df, test_size=0.2, random_state=SEED)
else:
df_train = df
df_valid = df_test
df_train.shape, df_valid.shape
#tf.autograph.set_verbosity(3, True)
if DP == DP_IMGGEN:
datagen = ImageDataGenerator(rescale=1 / 255.)#, validation_split=0.1)
train_generator = datagen.flow_from_dataframe(
dataframe=df_train,
        directory=DIR_IMAGES,
x_col="filename",
y_col="genre_ids2_list",
batch_size=BATCH_SIZE,
seed=SEED,
shuffle=True,
class_mode="categorical",
target_size=(299, 299),
subset='training',
validate_filenames=True
)
valid_generator = datagen.flow_from_dataframe(
dataframe=df_valid,
        directory=DIR_IMAGES,
x_col="filename",
y_col="genre_ids2_list",
batch_size=BATCH_SIZE,
seed=SEED,
shuffle=False,
class_mode="categorical",
target_size=(299, 299),
subset='training',
validate_filenames=True
)
test_generator = datagen.flow_from_dataframe(
dataframe=df_test,
        directory=DIR_IMAGES,
x_col="filename",
y_col="genre_ids2_list",
batch_size=BATCH_SIZE,
seed=SEED,
shuffle=False,
class_mode="categorical",
target_size=(299, 299),
subset='training',
validate_filenames=True
)
else:
X_train = df_train.filename.to_numpy()
y_train = df_train[LABEL_COLS].to_numpy()
X_valid = df_valid.filename.to_numpy()
y_valid = df_valid[LABEL_COLS].to_numpy()
X_test = df_test.filename.to_numpy()
y_test = df_test[LABEL_COLS].to_numpy()
train_generator = create_dataset(X_train, y_train, cache=True)
valid_generator = create_dataset(X_valid, y_valid, cache=True)
test_generator = create_dataset(X_test, y_test, cache=True)
print(f"{len(X_train)} training datasets, using {y_train.shape[1]} classes")
print(f"{len(X_valid)} validation datasets, unsing {y_valid.shape[1]} classes")
print(f"{len(X_test)} training datasets, using {y_test.shape[1]} classes")
# Show label distribution
df_tmp = pd.DataFrame(
{
'train': df_train[LABEL_COLS].sum()/len(df_train),
'valid': df_valid[LABEL_COLS].sum()/len(df_valid),
'test': df_test[LABEL_COLS].sum()/len(df_test)
},
index=LABEL_COLS
)
df_tmp.sort_values('train', ascending=False).plot.bar(figsize=(14,6), title='Label distributions')
df_train.info()
from sklearn.utils import class_weight
#In order to calculate the class weight do the following
class_weights = class_weight.compute_class_weight('balanced',
LABEL_COLS, # np.array(list(train_generator.class_indices.keys()),dtype="int"),
np.array(df_train.genre_names.explode()))
class_weights = dict(zip(list(range(len(class_weights))), class_weights))
number_of_classes = len(LABEL_COLS)
pd.DataFrame({'weight': [i[1] for i in class_weights.items()]}, index=[LABEL_COLS[i[0]] for i in class_weights.items()])
###Output
_____no_output_____
###Markdown
Create Model
###Code
def model_create(model_name: str):
"""Create the customized InceptionV3 model"""
base_inc_res = tf.keras.applications.InceptionV3(
include_top=False,
weights='imagenet',
input_shape=(299,299,3)
)
base_inc_res.trainable = False
inputs = keras.Input(shape=(299,299,3))
x = base_inc_res(inputs)
x = layers.BatchNormalization()(x)
x = layers.Dense(1024, activation='relu')(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(512, activation='relu')(x)
x = layers.Dropout(0.25)(x)
x = layers.Dense(19, activation='sigmoid')(x)
    return keras.Model(inputs=inputs, outputs=x, name=model_name)
model = model_create(MODEL_NAME)
model.summary()
###Output
Downloading data from https://storage.googleapis.com/tensorflow/keras-applications/inception_v3/inception_v3_weights_tf_dim_ordering_tf_kernels_notop.h5
87916544/87910968 [==============================] - 6s 0us/step
Model: "InceptionV3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 299, 299, 3)] 0
_________________________________________________________________
inception_v3 (Functional) (None, 8, 8, 2048) 21802784
_________________________________________________________________
batch_normalization_94 (Batc (None, 8, 8, 2048) 8192
_________________________________________________________________
dense (Dense) (None, 8, 8, 1024) 2098176
_________________________________________________________________
dropout (Dropout) (None, 8, 8, 1024) 0
_________________________________________________________________
dense_1 (Dense) (None, 8, 8, 512) 524800
_________________________________________________________________
dropout_1 (Dropout) (None, 8, 8, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 8, 8, 19) 9747
=================================================================
Total params: 24,443,699
Trainable params: 2,636,819
Non-trainable params: 21,806,880
_________________________________________________________________
###Markdown
Run model
###Code
#tf.debugging.set_log_device_placement(True)
l_rtc_names = [
"2-GPU_MirroredStrategy",
"2-GPU_CentralStorageStrategy",
"1-GPU",
"56_CPU",
"2-GPU_MirroredStrategy_NCCL-All-Reduced",
]
l_rtc = [
tf.distribute.MirroredStrategy().scope(),
tf.distribute.experimental.CentralStorageStrategy().scope(),
tf.device("/GPU:0"),
tf.device("/CPU:0"),
tf.distribute.MirroredStrategy(cross_device_ops=tf.distribute.NcclAllReduce()).scope(),
]
# Load Model
i = 0
runtime_context = l_rtc[i]
######for i, runtime_context in enumerate(l_rtc):
print(f"Runtime Context: {l_rtc_names[i]}")
# Create and train model
with runtime_context:
model_name = MODEL_NAME
    model = model_create(model_name)
# Start time measurement
tic = time.perf_counter()
# Define Tensorflow callback log-entry
model_name_full = f"{model.name}_{l_rtc_names[i]}_{dt.datetime.now().strftime('%Y%m%d-%H%M%S')}"
tb_logdir = f"{TENSORBOARD_LOGDIR}{model_name_full}"
    # mark loaded layers as not trainable,
    # except the last few layers (the loop below stops 5 layers from the end)
leng = len(model.layers)
print(leng)
for i,layer in enumerate(model.layers):
if leng-i == 5:
print("stopping at",i)
break
layer.trainable = False
# Def metrics
threshold = 0.5
    f1_micro = tfa.metrics.F1Score(num_classes=19, average='micro', name='f1_micro', threshold=threshold)
f1_macro = tfa.metrics.F1Score(num_classes=19, average='macro', name='f1_macro',threshold=threshold)
f1_weighted = tfa.metrics.F1Score(num_classes=19, average='weighted', name='f1_score_weighted',threshold=threshold)
# Compile model
model.compile(
optimizer="adam",
loss="binary_crossentropy",
metrics=[
"accuracy",
"categorical_accuracy",
tf.keras.metrics.AUC(multi_label = True),#,label_weights=class_weights),
f1_micro,
f1_macro,
f1_weighted]
)
print("create callbacks")
#filepath = "model_checkpoints/{model_name}_saved-model-{epoch:02d}-{val_f1_score_weighted:.2f}.hdf5"
#cb_checkpoint = ModelCheckpoint(filepath, monitor='val_f1_score_weighted', verbose=1, save_best_only=True, mode='max')
cb_tensorboard = TensorBoard(
log_dir = tb_logdir,
histogram_freq=0,
update_freq='epoch',
write_graph=True,
write_images=False)
#callbacks_list = [cb_checkpoint, cb_tensorboard]
#callbacks_list = [cb_checkpoint]
callbacks_list = [cb_tensorboard]
# Model summary
print(model.summary())
# Train model
print("model fit")
history = model.fit(
train_generator,
validation_data=valid_generator,
batch_size=BATCH_SIZE,
epochs=EPOCHS,
# reduce steps per epochs for faster epochs
#steps_per_epoch = math.ceil(266957 / BATCH_SIZE /8),
class_weight = class_weights,
callbacks=callbacks_list,
use_multiprocessing=False
)
# Measure time of loop
toc = time.perf_counter()
secs_all = toc - tic
mins = int(secs_all / 60)
secs = int((secs_all - mins*60))
print(f"Time spend for current run: {secs_all:0.4f} seconds => {mins}m {secs}s")
# Predict testset
y_pred_test = model.predict(test_generator)
# Store resulting model
try:
        fpath = DIR_MODEL_STORE + model_name_full
print(f"Saving final model to file {fpath}")
model.save(fpath)
except Exception as e:
print("-------------------------------------------")
print(f"Error during saving of final model\n{e}")
print("-------------------------------------------\n")
try:
fpath = MODEL_DIR + model_name_full + ".ckpt"
print(f"Saving final model weights to file {fpath}]")
model.save_weights(fpath)
except Exception as e:
print("-------------------------------------------")
print(f"Error during saving of final model weights\n{e}")
print("-------------------------------------------\n")
###Output
WARNING:tensorflow:NCCL is not supported when using virtual GPUs, fallingback to reduction to one device
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
INFO:tensorflow:ParameterServerStrategy (CentralStorageStrategy if you are using a single machine) with compute_devices = ['/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3'], variable_device = '/device:CPU:0'
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3')
Runtime Context: 2-GPU_MirroredStrategy
Loading model from file InceptionResNetV2_corrected_20210323T223046Z.hd5...
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:CPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:CPU:0',).
WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually.
model InceptionResNetV2_Customized loaded in 3m 24s!
15
stopping at 10
create callbacks
Model: "InceptionResNetV2_Customized"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_12 (InputLayer) [(None, 299, 299, 3) 0
__________________________________________________________________________________________________
tf.math.truediv_6 (TFOpLambda) (None, 299, 299, 3) 0 input_12[0][0]
__________________________________________________________________________________________________
tf.math.truediv_7 (TFOpLambda) (None, 299, 299, 3) 0 input_12[0][0]
__________________________________________________________________________________________________
tf.math.subtract_6 (TFOpLambda) (None, 299, 299, 3) 0 tf.math.truediv_6[0][0]
__________________________________________________________________________________________________
tf.math.subtract_7 (TFOpLambda) (None, 299, 299, 3) 0 tf.math.truediv_7[0][0]
__________________________________________________________________________________________________
inception_resnet_v2 (Functional (None, 1536) 54336736 tf.math.subtract_6[0][0]
__________________________________________________________________________________________________
xception (Functional) (None, 2048) 20861480 tf.math.subtract_7[0][0]
__________________________________________________________________________________________________
batch_normalization_834 (BatchN (None, 1536) 6144 inception_resnet_v2[0][0]
__________________________________________________________________________________________________
batch_normalization_835 (BatchN (None, 2048) 8192 xception[0][0]
__________________________________________________________________________________________________
tf.concat_3 (TFOpLambda) (None, 3584) 0 batch_normalization_834[0][0]
batch_normalization_835[0][0]
__________________________________________________________________________________________________
dense_9 (Dense) (None, 1024) 3671040 tf.concat_3[0][0]
__________________________________________________________________________________________________
dropout_6 (Dropout) (None, 1024) 0 dense_9[0][0]
__________________________________________________________________________________________________
dense_10 (Dense) (None, 512) 524800 dropout_6[0][0]
__________________________________________________________________________________________________
dropout_7 (Dropout) (None, 512) 0 dense_10[0][0]
__________________________________________________________________________________________________
dense_11 (Dense) (None, 19) 9747 dropout_7[0][0]
==================================================================================================
Total params: 79,418,139
Trainable params: 4,205,587
Non-trainable params: 75,212,552
__________________________________________________________________________________________________
None
model fit
Epoch 1/10
WARNING:tensorflow:From C:\Users\A291127E01\.conda\envs\aida\lib\site-packages\tensorflow\python\data\ops\multi_device_iterator_ops.py:601: get_next_as_optional (from tensorflow.python.data.ops.iterator_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use `tf.data.Iterator.get_next_as_optional()` instead.
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
INFO:tensorflow:Reduce to /job:localhost/replica:0/task:0/device:GPU:0 then broadcast to ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1', '/job:localhost/replica:0/task:0/device:GPU:2', '/job:localhost/replica:0/task:0/device:GPU:3').
1/1070 [..............................] - ETA: 0s - loss: 0.4261 - accuracy: 0.0664 - categorical_accuracy: 0.0664 - auc: 0.4568 - f1_micro: 0.1387 - f1_macro: 0.0913 - f1_score_weighted: 0.2409WARNING:tensorflow:From C:\Users\A291127E01\.conda\envs\aida\lib\site-packages\tensorflow\python\ops\summary_ops_v2.py:1277: stop (from tensorflow.python.eager.profiler) is deprecated and will be removed after 2020-07-01.
Instructions for updating:
use `tf.profiler.experimental.stop` instead.
61/1070 [>.............................] - ETA: 1:28:53 - loss: 0.2441 - accuracy: 0.1795 - categorical_accuracy: 0.1795 - auc: 0.5134 - f1_micro: 0.0228 - f1_macro: 0.0146 - f1_score_weighted: 0.0228
###Markdown
Threshold optimization
###Code
from keras import metrics
threshold = 0.35
f1_micro = tfa.metrics.F1Score(num_classes=19, average='micro', name='f1_micro', threshold=threshold)  # trailing comma removed so this is a metric, not a tuple
f1_macro = tfa.metrics.F1Score(num_classes=19, average='macro', name='f1_macro',threshold=threshold)
f1_weighted = tfa.metrics.F1Score(num_classes=19, average='weighted', name='f1_score_weighted',threshold=threshold)
y_true_test = [ [1 if i in e else 0 for i in range(19)] for e in test_generator.labels]
y_true_test = np.array(y_true_test)
from sklearn.metrics import f1_score
ths = np.linspace(0.1, 0.5, 10)
pd.DataFrame({
'threshold': ths,
'f1-micro': [f1_score(y_true_test, (y_pred_test > th)*1., average="micro") for th in ths],
'f1-weighted': [f1_score(y_true_test, (y_pred_test > th)*1., average="weighted") for th in ths],
'class' : "all"
}
)
from sklearn.metrics import f1_score
ths = np.linspace(0.1, 0.5, 9)
df_ths = pd.DataFrame({'threshold' : ths}
)
for cl in range(19):
col = pd.DataFrame({f'f1-class_{cl}': [f1_score(y_true_test[:,cl], (y_pred_test[:,cl] > th)*1.) for th in ths]
})
df_ths=pd.concat([df_ths,col],axis="columns")
df_ths.style.highlight_max(color = 'lightgreen', axis = 0)
df_ths
argmax_index=df_ths.iloc[:,1:].idxmax(axis=0)
class_thresholds = df_ths.threshold[argmax_index].values
class_thresholds
f1_micro_opt_th = f1_score(y_true_test, (y_pred_test > class_thresholds)*1., average="micro")
f1_weighted_opt_th = f1_score(y_true_test, (y_pred_test > class_thresholds)*1., average="weighted")
print("Class thresholds optimized on test set:",
f"f1_micro_opt_th: {f1_micro_opt_th:.3f}, f1_weighted_opt_th: {f1_weighted_opt_th:.3f}",
sep="\n")
#datagen = ImageDataGenerator(rescale=1 / 255.)#, validation_split=0.1)
BATCH_SIZE = 64
train2_generator = datagen.flow_from_dataframe(
dataframe=df.loc[~df.is_holdout].sample(20000),
directory=IMAGES_DIR,
x_col="filename",
y_col="genre_id",
batch_size=BATCH_SIZE,
seed=42,
shuffle=False,
class_mode="categorical",
target_size=(299, 299),
subset='training',
validate_filenames=False
)
y_pred_train = model.predict(train2_generator)
y_true_train = [ [1 if i in e else 0 for i in range(19)] for e in train2_generator.labels]
y_true_train = np.array(y_true_train)
from sklearn.metrics import f1_score
ths = np.linspace(0.1, 0.5, 9)
df_ths = pd.DataFrame({'threshold' : ths}
)
for cl in range(19):
col = pd.DataFrame({f'f1-class_{cl}': [f1_score(y_true_train[:,cl], (y_pred_train[:,cl] > th)*1.) for th in ths]
})
df_ths=pd.concat([df_ths,col],axis="columns")
df_ths.style.highlight_max(color = 'lightgreen', axis = 0)
df_ths
argmax_index=df_ths.iloc[:,1:].idxmax(axis=0)
class_thresholds = df_ths.threshold[argmax_index].values
class_thresholds
f1_micro_opt_th = f1_score(y_true, (y_pred > class_thresholds)*1., average="micro")
f1_weighted_opt_th = f1_score(y_true, (y_pred > class_thresholds)*1., average="weighted")
print("Class thresholds optimized on training set:",
f"f1_micro_opt_th: {f1_micro_opt_th:.3f}, f1_weighted_opt_th: {f1_weighted_opt_th:.3f}",
sep="\n")
df_train
df[df.original_title.str.contains("brian")==True]
###Output
_____no_output_____ |
7zip-in-Google-Colab.ipynb | ###Markdown
__Mount Google Drive__
###Code
#@markdown <br><center><img src='https://upload.wikimedia.org/wikipedia/commons/thumb/d/da/Google_Drive_logo.png/600px-Google_Drive_logo.png' height="50" alt="Gdrive-logo"/></center>
#@markdown <center><h3><b>Mount Google Drive</b></h3></center><br>
MODE = "MOUNT" #@param ["MOUNT", "UNMOUNT"]
#Mount your Gdrive!
from google.colab import drive
drive.mount._DEBUG = False
if MODE == "MOUNT":
drive.mount('/content/drive', force_remount=True)
elif MODE == "UNMOUNT":
try:
drive.flush_and_unmount()
except ValueError:
pass
get_ipython().system_raw("rm -rf /root/.config/Google/DriveFS")
###Output
_____no_output_____
###Markdown
__7-Zip__ * Run below cell to **Install 7-Zip** to the runtime
###Code
#@markdown <br><center><img src='https://raw.githubusercontent.com/dropcreations/7zip-in-Google-Colab/main/7zip-Logo.png' height="50" alt="7zip-logo"/></center>
#@markdown <center><h3><b>Install 7-Zip</b></h3></center><br>
from IPython.display import clear_output
!sudo apt update
!sudo apt install p7zip-full p7zip-rar unrar rar
clear_output()
print("Successfully Installed.")
###Output
_____no_output_____
###Markdown
__Compress Files and Folders__ * Create **zip, tar, 7z, gz, bz2, xz, wim** files.* If you want you can add a **password** or **split** the archive.* If you want to save the archive in **another** location, **uncheck**: "Save_to_the_source_location".
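For illustration only (the archive path, source path, and password below are made-up examples, not values used by this notebook), the cell below effectively builds and runs a 7-Zip command along these lines:

```
!7z a -tzip -mx=9 -p"secret" -v100m "/content/drive/MyDrive/backup" "/content/drive/MyDrive/my_folder"
```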
###Code
import os
Source = "" #@param {type:"string"}
File_format = "zip" #@param ["zip", "7z", "tar", "gzip", "bzip2", "xz", "wim"]
Password = "" #@param {type:"string"}
Split = "no" #@param ["no", "10m", "100m", "500m", "1g", "2g"] {allow-input: true}
Compress_level = 9 #@param {type:"slider", min:0, max:9, step:1}
Save_to_the_source_location = True #@param {type:"boolean"}
command_line = "-t" + File_format + " -mx=" + str(Compress_level)
if os.path.isfile(Source) is True:
folder_name = os.path.dirname(os.path.abspath(Source))
folder_content, file_content = os.path.split(Source)
ext_list = file_content.split('.')
file_ext = ext_list[-1]
file_ext_w_dot = int(len(file_ext) + 1)
file_name = file_content[0:len(file_content) - file_ext_w_dot]
else:
folder_content, file_name = os.path.split(Source)
if Password == "":
command_line = command_line
else:
command_line = command_line + " -p" + '"' + Password + '"'
if Split == "no":
command_line = command_line
else:
command_line = command_line + " -v" + '"' + Split + '"'
if Save_to_the_source_location == True:
if os.path.isfile(Source) is True:
command_line = command_line + " " + '"' + folder_name + "/" + file_name + '"'
else:
command_line = command_line + " " + '"' + Source + '"'
else:
output_path = input("Enter output path: ")
if output_path.endswith('zip') or output_path.endswith('7z') or output_path.endswith('tar') or output_path.endswith('gz') or output_path.endswith('bz2') or output_path.endswith('xz') or output_path.endswith('wim'):
folder_out, file_out = os.path.split(output_path)
ext_list = file_out.split('.')
file_ext = ext_list[-1]
file_ext_w_dot = int(len(file_ext) + 1)
file_name_out = file_out[0:len(file_out) - file_ext_w_dot]
command_line = command_line + " " + '"' + folder_out + "/" + file_name_out + '"'
else:
command_line = command_line + " " + '"' + output_path + "/" + file_name + '"'
!7z a {command_line} "{Source}"
###Output
_____no_output_____
###Markdown
__Uncompress Files__ * To **list the contents** of the file, use `list_files`. ***Uncheck this after viewing the contents***.* Can also extract **split** archives.* If **Extract_folder is blank**, the archive will extract to the **archive's location**.* If you want to extract files from the archive **without using directory names**, use `extract_without_directory_names`.> NOTE : **Don't use** `extract_without_directory_names` in normal use.
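As a made-up example (the paths are placeholders), a split archive can be listed or extracted by pointing the cell at its first volume; the commands it runs look roughly like:

```
!7z l "/content/drive/MyDrive/backup.zip.001"
!7z x "/content/drive/MyDrive/backup.zip.001" -o"/content/extracted"
```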
###Code
compressed_file = "" #@param {type:"string"}
extract_folder = "" #@param {type:"string"}
list_files = False #@param {type:"boolean"}
extract_without_directory_names = False #@param {type:"boolean"}
import os
if extract_folder == "":
extract_folder = os.path.dirname(os.path.abspath(compressed_file))
if list_files == True:
!7z l "{compressed_file}"
else:
if extract_without_directory_names == True:
!7z e "{compressed_file}" -o"{extract_folder}"
else:
!7z x "{compressed_file}" -o"{extract_folder}"
###Output
_____no_output_____ |
discretization/.ipynb_checkpoints/Discretization_Solution-checkpoint.ipynb | ###Markdown
Discretization---In this notebook, you will deal with continuous state and action spaces by discretizing them. This will enable you to apply reinforcement learning algorithms that are only designed to work with discrete spaces. 1. Import the Necessary Packages
###Code
import sys
import gym
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Set plotting options
%matplotlib inline
plt.style.use('ggplot')
np.set_printoptions(precision=3, linewidth=120)
###Output
_____no_output_____
###Markdown
2. Specify the Environment, and Explore the State and Action SpacesWe'll use [OpenAI Gym](https://gym.openai.com/) environments to test and develop our algorithms. These simulate a variety of classic as well as contemporary reinforcement learning tasks. Let's use an environment that has a continuous state space, but a discrete action space.
###Code
# Create an environment and set random seed
env = gym.make('MountainCar-v0')
env.seed(505);
###Output
[33mWARN: gym.spaces.Box autodetected dtype as <class 'numpy.float32'>. Please provide explicit dtype.[0m
###Markdown
Run the next code cell to watch a random agent.
###Code
state = env.reset()
score = 0
for t in range(200):
action = env.action_space.sample()
env.render()
state, reward, done, _ = env.step(action)
score += reward
if done:
break
print('Final score:', score)
env.close()
###Output
Final score: -200.0
###Markdown
In this notebook, you will train an agent to perform much better! For now, we can explore the state and action spaces, as well as sample them.
###Code
# Explore state (observation) space
print("State space:", env.observation_space)
print("- low:", env.observation_space.low)
print("- high:", env.observation_space.high)
# Generate some samples from the state space
print("State space samples:")
print(np.array([env.observation_space.sample() for i in range(10)]))
# Explore the action space
print("Action space:", env.action_space)
# Generate some samples from the action space
print("Action space samples:")
print(np.array([env.action_space.sample() for i in range(10)]))
###Output
Action space: Discrete(3)
Action space samples:
[1 1 1 2 2 2 0 1 2 1]
###Markdown
3. Discretize the State Space with a Uniform GridWe will discretize the space using a uniformly-spaced grid. Implement the following function to create such a grid, given the lower bounds (`low`), upper bounds (`high`), and number of desired `bins` along each dimension. It should return the split points for each dimension, which will be 1 less than the number of bins.For instance, if `low = [-1.0, -5.0]`, `high = [1.0, 5.0]`, and `bins = (10, 10)`, then your function should return the following list of 2 NumPy arrays:```[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]), array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]```Note that the ends of `low` and `high` are **not** included in these split points. It is assumed that any value below the lowest split point maps to index `0` and any value above the highest split point maps to index `n-1`, where `n` is the number of bins along that dimension.
###Code
def create_uniform_grid(low, high, bins=(10, 10)):
"""Define a uniformly-spaced grid that can be used to discretize a space.
Parameters
----------
low : array_like
Lower bounds for each dimension of the continuous space.
high : array_like
Upper bounds for each dimension of the continuous space.
bins : tuple
Number of bins along each corresponding dimension.
Returns
-------
grid : list of array_like
A list of arrays containing split points for each dimension.
"""
# TODO: Implement this
grid = [np.linspace(low[dim], high[dim], bins[dim] + 1)[1:-1] for dim in range(len(bins))]
print("Uniform grid: [<low>, <high>] / <bins> => <splits>")
for l, h, b, splits in zip(low, high, bins, grid):
print(" [{}, {}] / {} => {}".format(l, h, b, splits))
return grid
low = [-1.0, -5.0]
high = [1.0, 5.0]
create_uniform_grid(low, high) # [test]
###Output
Uniform grid: [<low>, <high>] / <bins> => <splits>
[-1.0, 1.0] / 10 => [-0.8 -0.6 -0.4 -0.2 0. 0.2 0.4 0.6 0.8]
[-5.0, 5.0] / 10 => [-4. -3. -2. -1. 0. 1. 2. 3. 4.]
###Markdown
Now write a function that can convert samples from a continuous space into its equivalent discretized representation, given a grid like the one you created above. You can use the [`numpy.digitize()`](https://docs.scipy.org/doc/numpy-1.9.3/reference/generated/numpy.digitize.html) function for this purpose.Assume the grid is a list of NumPy arrays containing the following split points:```[array([-0.8, -0.6, -0.4, -0.2, 0.0, 0.2, 0.4, 0.6, 0.8]), array([-4.0, -3.0, -2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])]```Here are some potential samples and their corresponding discretized representations:```[-1.0 , -5.0] => [0, 0][-0.81, -4.1] => [0, 0][-0.8 , -4.0] => [1, 1][-0.5 , 0.0] => [2, 5][ 0.2 , -1.9] => [6, 3][ 0.8 , 4.0] => [9, 9][ 0.81, 4.1] => [9, 9][ 1.0 , 5.0] => [9, 9]```**Note**: There may be one-off differences in binning due to floating-point inaccuracies when samples are close to grid boundaries, but that is alright.
###Code
def discretize(sample, grid):
"""Discretize a sample as per given grid.
Parameters
----------
sample : array_like
A single sample from the (original) continuous space.
grid : list of array_like
A list of arrays containing split points for each dimension.
Returns
-------
discretized_sample : array_like
A sequence of integers with the same number of dimensions as sample.
"""
# TODO: Implement this
return list(int(np.digitize(s, g)) for s, g in zip(sample, grid)) # apply along each dimension
# Test with a simple grid and some samples
grid = create_uniform_grid([-1.0, -5.0], [1.0, 5.0])
samples = np.array(
[[-1.0 , -5.0],
[-0.81, -4.1],
[-0.8 , -4.0],
[-0.5 , 0.0],
[ 0.2 , -1.9],
[ 0.8 , 4.0],
[ 0.81, 4.1],
[ 1.0 , 5.0]])
discretized_samples = np.array([discretize(sample, grid) for sample in samples])
print("\nSamples:", repr(samples), sep="\n")
print("\nDiscretized samples:", repr(discretized_samples), sep="\n")
###Output
Uniform grid: [<low>, <high>] / <bins> => <splits>
[-1.0, 1.0] / 10 => [-0.8 -0.6 -0.4 -0.2 0. 0.2 0.4 0.6 0.8]
[-5.0, 5.0] / 10 => [-4. -3. -2. -1. 0. 1. 2. 3. 4.]
Samples:
array([[-1. , -5. ],
[-0.81, -4.1 ],
[-0.8 , -4. ],
[-0.5 , 0. ],
[ 0.2 , -1.9 ],
[ 0.8 , 4. ],
[ 0.81, 4.1 ],
[ 1. , 5. ]])
Discretized samples:
array([[0, 0],
[0, 0],
[1, 1],
[2, 5],
[5, 3],
[9, 9],
[9, 9],
[9, 9]])
###Markdown
4. VisualizationIt might be helpful to visualize the original and discretized samples to get a sense of how much error you are introducing.
###Code
import matplotlib.collections as mc
def visualize_samples(samples, discretized_samples, grid, low=None, high=None):
"""Visualize original and discretized samples on a given 2-dimensional grid."""
fig, ax = plt.subplots(figsize=(10, 10))
# Show grid
ax.xaxis.set_major_locator(plt.FixedLocator(grid[0]))
ax.yaxis.set_major_locator(plt.FixedLocator(grid[1]))
ax.grid(True)
# If bounds (low, high) are specified, use them to set axis limits
if low is not None and high is not None:
ax.set_xlim(low[0], high[0])
ax.set_ylim(low[1], high[1])
else:
# Otherwise use first, last grid locations as low, high (for further mapping discretized samples)
low = [splits[0] for splits in grid]
high = [splits[-1] for splits in grid]
# Map each discretized sample (which is really an index) to the center of corresponding grid cell
grid_extended = np.hstack((np.array([low]).T, grid, np.array([high]).T)) # add low and high ends
grid_centers = (grid_extended[:, 1:] + grid_extended[:, :-1]) / 2 # compute center of each grid cell
    locs = np.stack([grid_centers[i, discretized_samples[:, i]] for i in range(len(grid))]).T  # map discretized samples (list comprehension avoids passing a generator to np.stack)
ax.plot(samples[:, 0], samples[:, 1], 'o') # plot original samples
ax.plot(locs[:, 0], locs[:, 1], 's') # plot discretized samples in mapped locations
ax.add_collection(mc.LineCollection(list(zip(samples, locs)), colors='orange')) # add a line connecting each original-discretized sample
ax.legend(['original', 'discretized'])
visualize_samples(samples, discretized_samples, grid, low, high)
###Output
_____no_output_____
###Markdown
Now that we have a way to discretize a state space, let's apply it to our reinforcement learning environment.
###Code
# Create a grid to discretize the state space
state_grid = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(10, 10))
state_grid
# Obtain some samples from the space, discretize them, and then visualize them
state_samples = np.array([env.observation_space.sample() for i in range(10)])
discretized_state_samples = np.array([discretize(sample, state_grid) for sample in state_samples])
visualize_samples(state_samples, discretized_state_samples, state_grid,
env.observation_space.low, env.observation_space.high)
plt.xlabel('position'); plt.ylabel('velocity'); # axis labels for MountainCar-v0 state space
###Output
_____no_output_____
###Markdown
You might notice that if you have enough bins, the discretization doesn't introduce too much error into your representation. So we may be able to now apply a reinforcement learning algorithm (like Q-Learning) that operates on discrete spaces. Give it a shot to see how well it works! 5. Q-LearningProvided below is a simple Q-Learning agent. Implement the `preprocess_state()` method to convert each continuous state sample to its corresponding discretized representation.
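For reference, the update that `act()` applies below to the previous (state, action) pair, after discretizing the state, is the standard tabular Q-Learning rule:

$$Q(s, a) \leftarrow Q(s, a) + \alpha \left( r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right)$$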
###Code
class QLearningAgent:
"""Q-Learning agent that can act on a continuous state space by discretizing it."""
def __init__(self, env, state_grid, alpha=0.02, gamma=0.99,
epsilon=1.0, epsilon_decay_rate=0.9995, min_epsilon=.01, seed=505):
"""Initialize variables, create grid for discretization."""
# Environment info
self.env = env
self.state_grid = state_grid
self.state_size = tuple(len(splits) + 1 for splits in self.state_grid) # n-dimensional state space
self.action_size = self.env.action_space.n # 1-dimensional discrete action space
self.seed = np.random.seed(seed)
print("Environment:", self.env)
print("State space size:", self.state_size)
print("Action space size:", self.action_size)
# Learning parameters
self.alpha = alpha # learning rate
self.gamma = gamma # discount factor
self.epsilon = self.initial_epsilon = epsilon # initial exploration rate
self.epsilon_decay_rate = epsilon_decay_rate # how quickly should we decrease epsilon
self.min_epsilon = min_epsilon
# Create Q-table
self.q_table = np.zeros(shape=(self.state_size + (self.action_size,)))
print("Q table size:", self.q_table.shape)
def preprocess_state(self, state):
"""Map a continuous state to its discretized representation."""
# TODO: Implement this
return tuple(discretize(state, self.state_grid))
def reset_episode(self, state):
"""Reset variables for a new episode."""
# Gradually decrease exploration rate
self.epsilon *= self.epsilon_decay_rate
self.epsilon = max(self.epsilon, self.min_epsilon)
# Decide initial action
self.last_state = self.preprocess_state(state)
self.last_action = np.argmax(self.q_table[self.last_state])
return self.last_action
def reset_exploration(self, epsilon=None):
"""Reset exploration rate used when training."""
self.epsilon = epsilon if epsilon is not None else self.initial_epsilon
def act(self, state, reward=None, done=None, mode='train'):
"""Pick next action and update internal Q table (when mode != 'test')."""
state = self.preprocess_state(state)
if mode == 'test':
# Test mode: Simply produce an action
action = np.argmax(self.q_table[state])
else:
# Train mode (default): Update Q table, pick next action
# Note: We update the Q table entry for the *last* (state, action) pair with current state, reward
self.q_table[self.last_state + (self.last_action,)] += self.alpha * \
(reward + self.gamma * max(self.q_table[state]) - self.q_table[self.last_state + (self.last_action,)])
# Exploration vs. exploitation
do_exploration = np.random.uniform(0, 1) < self.epsilon
if do_exploration:
# Pick a random action
action = np.random.randint(0, self.action_size)
else:
# Pick the best action from Q table
action = np.argmax(self.q_table[state])
# Roll over current state, action for next step
self.last_state = state
self.last_action = action
return action
q_agent = QLearningAgent(env, state_grid)
###Output
Environment: <TimeLimit<MountainCarEnv<MountainCar-v0>>>
State space size: (10, 10)
Action space size: 3
Q table size: (10, 10, 3)
###Markdown
Let's also define a convenience function to run an agent on a given environment. When calling this function, you can pass in `mode='test'` to tell the agent not to learn.
###Code
def run(agent, env, num_episodes=20000, mode='train'):
"""Run agent in given reinforcement learning environment and return scores."""
scores = []
max_avg_score = -np.inf
for i_episode in range(1, num_episodes+1):
# Initialize episode
state = env.reset()
action = agent.reset_episode(state)
total_reward = 0
done = False
# Roll out steps until done
while not done:
state, reward, done, info = env.step(action)
total_reward += reward
action = agent.act(state, reward, done, mode)
# Save final score
scores.append(total_reward)
# Print episode stats
if mode == 'train':
if len(scores) > 100:
avg_score = np.mean(scores[-100:])
if avg_score > max_avg_score:
max_avg_score = avg_score
if i_episode % 100 == 0:
print("\rEpisode {}/{} | Max Average Score: {}".format(i_episode, num_episodes, max_avg_score), end="")
sys.stdout.flush()
return scores
scores = run(q_agent, env)
###Output
Episode 20000/20000 | Max Average Score: -137.36
###Markdown
The best way to analyze if your agent was learning the task is to plot the scores. It should generally increase as the agent goes through more episodes.
###Code
# Plot scores obtained per episode
plt.plot(scores); plt.title("Scores");
###Output
_____no_output_____
###Markdown
If the scores are noisy, it might be difficult to tell whether your agent is actually learning. To find the underlying trend, you may want to plot a rolling mean of the scores. Let's write a convenience function to plot both raw scores as well as a rolling mean.
###Code
def plot_scores(scores, rolling_window=100):
"""Plot scores and optional rolling mean using specified window."""
plt.plot(scores); plt.title("Scores");
rolling_mean = pd.Series(scores).rolling(rolling_window).mean()
plt.plot(rolling_mean);
return rolling_mean
rolling_mean = plot_scores(scores)
###Output
_____no_output_____
###Markdown
You should observe the mean episode scores go up over time. Next, you can freeze learning and run the agent in test mode to see how well it performs.
###Code
# Run in test mode and analyze scores obtained
test_scores = run(q_agent, env, num_episodes=100, mode='test')
print("[TEST] Completed {} episodes with avg. score = {}".format(len(test_scores), np.mean(test_scores)))
_ = plot_scores(test_scores)
###Output
[TEST] Completed 100 episodes with avg. score = -176.1
###Markdown
It's also interesting to look at the final Q-table that is learned by the agent. Note that the Q-table is of size MxNxA, where (M, N) is the size of the state space, and A is the size of the action space. We are interested in the maximum Q-value for each state, and the corresponding (best) action associated with that value.
###Code
def plot_q_table(q_table):
"""Visualize max Q-value for each state and corresponding action."""
q_image = np.max(q_table, axis=2) # max Q-value for each state
q_actions = np.argmax(q_table, axis=2) # best action for each state
fig, ax = plt.subplots(figsize=(10, 10))
cax = ax.imshow(q_image, cmap='jet');
cbar = fig.colorbar(cax)
for x in range(q_image.shape[0]):
for y in range(q_image.shape[1]):
ax.text(x, y, q_actions[x, y], color='white',
horizontalalignment='center', verticalalignment='center')
ax.grid(False)
ax.set_title("Q-table, size: {}".format(q_table.shape))
ax.set_xlabel('position')
ax.set_ylabel('velocity')
plot_q_table(q_agent.q_table)
###Output
_____no_output_____
###Markdown
6. Modify the GridNow it's your turn to play with the grid definition and see what gives you optimal results. Your agent's final performance is likely to get better if you use a finer grid, with more bins per dimension, at the cost of higher model complexity (more parameters to learn).
###Code
# TODO: Create a new agent with a different state space grid
state_grid_new = create_uniform_grid(env.observation_space.low, env.observation_space.high, bins=(20, 20))
q_agent_new = QLearningAgent(env, state_grid_new)
q_agent_new.scores = [] # initialize a list to store scores for this agent
# Train it over a desired number of episodes and analyze scores
# Note: This cell can be run multiple times, and scores will get accumulated
q_agent_new.scores += run(q_agent_new, env, num_episodes=50000) # accumulate scores
rolling_mean_new = plot_scores(q_agent_new.scores)
# Run in test mode and analyze scores obtained
test_scores = run(q_agent_new, env, num_episodes=100, mode='test')
print("[TEST] Completed {} episodes with avg. score = {}".format(len(test_scores), np.mean(test_scores)))
_ = plot_scores(test_scores)
# Visualize the learned Q-table
plot_q_table(q_agent_new.q_table)
###Output
_____no_output_____
###Markdown
7. Watch a Smart Agent
###Code
state = env.reset()
score = 0
for t in range(200):
action = q_agent_new.act(state, mode='test')
env.render()
state, reward, done, _ = env.step(action)
score += reward
if done:
break
print('Final score:', score)
env.close()
###Output
Final score: -110.0
|
miscellaneous/NFL_helmet_notes.ipynb | ###Markdown
Notes

1. Using optimization methods to match helmet boxes to NGS data based on relative distances.
2. Tracking of helmets throughout the duration of a play and assigning labels to these tracks.
3. Imputing boxes for partially occluded helmets based on the surrounding frames.
4. Using computer vision techniques to identify player jersey numbers and pair with helmets.

* Goal: A perfect submission would correctly identify the helmet box for every helmet in every frame of video and assign that helmet the correct player label.

Requirement:
1. Submission boxes must have at least a 0.35 Intersection over Union (IoU) with the ground truth helmet box.
2. Each ground truth helmet box will only be paired with one helmet box per frame in the submitted solution. For each ground truth box, the submitted box with the highest IoU will be considered for scoring.
3. No more than 22 helmet predictions per video frame (the maximum number of players participating on field at a time). In some situations, sideline players can be seen in the video footage. Sideline players' helmets are not scored in the grading algorithm and can be ignored. Sideline players will have the helmet labels "H00" and "V00". Sideline players should not be included in the submission to avoid exceeding the 22-box and unique label constraints.
4. A player's helmet label must only be predicted once per video frame, i.e. no duplicated labels per frame.
5. All submitted helmet boxes must be unique per video frame, i.e. no identical (left, right, height and width) boxes per frame.

* Conclusions from the dataset
1. Test videos are a subset of train videos.
2. Total 120 videos, 60 for each view: a Sideline and an Endzone video pair for every play!
3. 25 plays out of 60 don't match, and the difference is mostly 1 frame, but there are 7-frame differences also.
4. 57584_000336_Sideline_0 has frame 0.
5. 50 games, 60 plays, 52142 frames (frames means images); 41 games have only 1 play, 8 games have 2 plays, 1 game has 3 plays [train_labels.csv].
6. 196 players exist; not sure they are all unique. Home has 98 players, and jersey numbers 1 and 45 only exist at home; visitor has 98 players, and jersey numbers 9 and 43 only exist at visitor.
7. 25 out of 60 plays include sideline players (need to exclude); 23 players are mostly shown. 25 Sideline videos and 5 Endzone videos show sideline players. It's easy to see sideline players when the camera is at the side of the sideline.
8. 5928 is the biggest helmet size shown in the video and occurs when the camera is zooming; 9 is the smallest helmet size. Mostly the helmet sizes are around 150.
9. Definitive impact is a subset of Impacts, and all types of impact could be a definitive impact. Definitive impact moments are 500 times fewer than normal impacts.
10. 50 games, 60 plays, 15180 frames (frames means images), min 113 max 456 (TIME) [player tracking.csv].
11. Ball snap is the starting point of the game; all plays are recorded before the ball snap!
12. Mostly 22 players are tracked, but in some frames fewer than 22 are tracked. There are 6 plays that are not consistent; the IDs are 109, 336, 350, 1242, 2546, 4152.
13. All the track time is longer than the train videos, mostly 3 times longer! Some tracking information is approximately 6 times longer!
14. It becomes more consistent when considering only the time period of the train videos, but 3 tracking records are still not consistent. We need to think about how to impute this missing data. Considering the whole tracking: 109, 336, 350, 1242, 2546, 4152; only for the video time period: 109, 336, 4152.
15. [image_labels.csv] can be used to train the bbox detector.
16. 82 bbox was predicted in max for 1 frame, 2 bbox was predicted in max for 1 frame.

* So what do we need to do?
1. Train a helmet bbox object detector with train_labels.csv (change categories to helmet) + image_labels.csv, which can be used to update the baseline_helmets predictions [we can try YOLOv4, v5, X, R, PP-YOLO, etc. in this part].
2. Map helmet data to tracking data with player numbers [we can try DeepSORT or metric learning, etc.].
3. Test on test-set frames to generate submission.csv [a definitive impact gets more weight].

Setup
###Code
import os
import pandas as pd
import numpy as np
import cv2
from PIL import Image, ImageDraw
import matplotlib.pyplot as plt
import seaborn as sns
import plotly
import warnings
warnings.filterwarnings("ignore")
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
# Read in data files
BASE_DIR = './train_local'
# Labels and sample submission
labels = pd.read_csv(f'{BASE_DIR}/train_labels.csv')
ss = pd.read_csv(f'{BASE_DIR}/sample_submission.csv')
# Player tracking data
tr_tracking = pd.read_csv(f'{BASE_DIR}/train_player_tracking.csv')
te_tracking = pd.read_csv(f'{BASE_DIR}/test_player_tracking.csv')
# Baseline helmet detection labels
tr_helmets = pd.read_csv(f'{BASE_DIR}/train_baseline_helmets.csv')
te_helmets = pd.read_csv(f'{BASE_DIR}/test_baseline_helmets.csv')
# Extra image labels
# Trained images using images_labels.csv and predict the train, test
# The prediction result is [train/test]_baseline_helmets.csv
# No player information is included
img_labels = pd.read_csv(f'{BASE_DIR}/image_labels.csv')
###Output
_____no_output_____
###Markdown
check data
###Code
#This file is only available for the training dataset and provides the ground truth for the 120 training videos.
labels[0:5]
ss[0:5]
#contain the tracking data for all players on the field during the play.
tr_tracking[0:5]
#imperfect helmet location
tr_helmets[0:5]
te_helmets[0:5]
# contains helmet boxes for random frames in videos.
img_labels[0:5]
###Output
_____no_output_____
###Markdown
Evaluate Function
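The scorer below pairs submitted boxes with ground truth boxes by IoU. As a minimal illustration of the 0.35 IoU requirement from the notes above (a sketch, not part of the competition code; boxes are assumed to be given as `left, top, width, height` as in the label files):

```python
def box_iou(box_a, box_b):
    """IoU of two boxes given as (left, top, width, height)."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah
    bx2, by2 = bx1 + bw, by1 + bh
    # Intersection rectangle
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

# A predicted box only counts toward the score if box_iou(pred, gt) >= 0.35.
```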
###Code
from sklearn.metrics import accuracy_score
import pandas as pd
import numpy as np
class NFLAssignmentScorer:
def __init__(
self,
labels_df: pd.DataFrame = None,
labels_csv="train_labels.csv",
check_constraints=True,
weight_col="isDefinitiveImpact",
impact_weight=1000,
iou_threshold=0.35,
remove_sideline=True,
):
"""
Helper class for grading submissions in the
2021 Kaggle Competition for helmet assignment.
Version 1.0
https://www.kaggle.com/robikscube/nfl-helmet-assignment-getting-started-guide
Use:
```
scorer = NFLAssignmentScorer(labels)
scorer.score(submission_df)
or
scorer = NFLAssignmentScorer(labels_csv='labels.csv')
scorer.score(submission_df)
```
Args:
labels_df (pd.DataFrame, optional):
Dataframe containing theground truth label boxes.
labels_csv (str, optional): CSV of the ground truth label.
check_constraints (bool, optional): Tell the scorer if it
should check the submission file to meet the competition
constraints. Defaults to True.
weight_col (str, optional):
Column in the labels DataFrame used to applying the scoring
weight.
impact_weight (int, optional):
The weight applied to impacts in the scoring metrics.
Defaults to 1000.
iou_threshold (float, optional):
The minimum IoU allowed to correctly pair a ground truth box
with a label. Defaults to 0.35.
remove_sideline (bool, optional):
Remove slideline players from the labels DataFrame
before scoring.
"""
if labels_df is None:
# Read label from CSV
if labels_csv is None:
raise Exception("labels_df or labels_csv must be provided")
else:
self.labels_df = pd.read_csv(labels_csv)
else:
self.labels_df = labels_df.copy()
if remove_sideline:
self.labels_df = (
self.labels_df.query("isSidelinePlayer == False")
.reset_index(drop=True)
.copy()
)
self.impact_weight = impact_weight
self.check_constraints = check_constraints
self.weight_col = weight_col
self.iou_threshold = iou_threshold
def check_submission(self, sub):
"""
Checks that the submission meets all the requirements.
1. No more than 22 Boxes per frame.
2. Only one label prediction per video/frame
3. No duplicate boxes per frame.
Args:
sub : submission dataframe.
Returns:
True -> Passed the tests
False -> Failed the test
"""
# Maximum of 22 boxes per frame.
max_box_per_frame = sub.groupby(["video_frame"])["label"].count().max()
if max_box_per_frame > 22:
print("Has more than 22 boxes in a single frame")
return False
# Only one label allowed per frame.
has_duplicate_labels = sub[["video_frame", "label"]].duplicated().any()
if has_duplicate_labels:
print("Has duplicate labels")
return False
# Check for unique boxes
has_duplicate_boxes = (
sub[["video_frame", "left", "width", "top", "height"]].duplicated().any()
)
if has_duplicate_boxes:
print("Has duplicate boxes")
return False
return True
def add_xy(self, df):
"""
Adds `x1`, `x2`, `y1`, and `y2` columns necessary for computing IoU.
Note - for pixel math, 0,0 is the top-left corner so box orientation
defined as right and down (height)
"""
df["x1"] = df["left"]
df["x2"] = df["left"] + df["width"]
df["y1"] = df["top"]
df["y2"] = df["top"] + df["height"]
return df
def merge_sub_labels(self, sub, labels, weight_col="isDefinitiveImpact"):
"""
Perform an outer join between submission and label.
Creates a `sub_label` dataframe which stores the matched label for each submission box.
Ground truth values are given the `_gt` suffix, submission values are given `_sub` suffix.
"""
sub = sub.copy()
labels = labels.copy()
sub = self.add_xy(sub)
labels = self.add_xy(labels)
base_columns = [
"label",
"video_frame",
"x1",
"x2",
"y1",
"y2",
"left",
"width",
"top",
"height",
]
sub_labels = sub[base_columns].merge(
labels[base_columns + [weight_col]],
on=["video_frame"],
how="right",
suffixes=("_sub", "_gt"),
)
return sub_labels
def get_iou_df(self, df):
"""
This function computes the IOU of submission (sub)
bounding boxes against the ground truth boxes (gt).
"""
df = df.copy()
# 1. get the coordinate of inters
df["ixmin"] = df[["x1_sub", "x1_gt"]].max(axis=1)
df["ixmax"] = df[["x2_sub", "x2_gt"]].min(axis=1)
df["iymin"] = df[["y1_sub", "y1_gt"]].max(axis=1)
df["iymax"] = df[["y2_sub", "y2_gt"]].min(axis=1)
df["iw"] = np.maximum(df["ixmax"] - df["ixmin"] + 1, 0.0)
df["ih"] = np.maximum(df["iymax"] - df["iymin"] + 1, 0.0)
# 2. calculate the area of inters
df["inters"] = df["iw"] * df["ih"]
# 3. calculate the area of union
df["uni"] = (
(df["x2_sub"] - df["x1_sub"] + 1) * (df["y2_sub"] - df["y1_sub"] + 1)
+ (df["x2_gt"] - df["x1_gt"] + 1) * (df["y2_gt"] - df["y1_gt"] + 1)
- df["inters"]
)
# print(uni)
# 4. calculate the overlaps between pred_box and gt_box
df["iou"] = df["inters"] / df["uni"]
return df.drop(
["ixmin", "ixmax", "iymin", "iymax", "iw", "ih", "inters", "uni"], axis=1
)
def filter_to_top_label_match(self, sub_labels):
"""
Ensures ground truth boxes are only linked to the box
in the submission file with the highest IoU.
"""
return (
sub_labels.sort_values("iou", ascending=False)
.groupby(["video_frame", "label_gt"])
.first()
.reset_index()
)
def add_isCorrect_col(self, sub_labels):
"""
Adds True/False column if the ground truth label
and submission label are identical
"""
sub_labels["isCorrect"] = (
sub_labels["label_gt"] == sub_labels["label_sub"]
) & (sub_labels["iou"] >= self.iou_threshold)
return sub_labels
def calculate_metric_weighted(
self, sub_labels, weight_col="isDefinitiveImpact", weight=1000
):
"""
Calculates weighted accuracy score metric.
"""
sub_labels["weight"] = sub_labels.apply(
lambda x: weight if x[weight_col] else 1, axis=1
)
y_pred = sub_labels["isCorrect"].values
y_true = np.ones_like(y_pred)
weight = sub_labels["weight"]
return accuracy_score(y_true, y_pred, sample_weight=weight)
def score(self, sub, labels_df=None, drop_extra_cols=True):
"""
Scores the submission file against the labels.
Returns the evaluation metric score for the helmet
assignment kaggle competition.
If `check_constraints` is set to True, will return -999 if the
submission fails one of the submission constraints.
"""
if labels_df is None:
labels_df = self.labels_df.copy()
if self.check_constraints:
if not self.check_submission(sub):
return -999
sub_labels = self.merge_sub_labels(sub, labels_df, self.weight_col)
sub_labels = self.get_iou_df(sub_labels).copy()
sub_labels = self.filter_to_top_label_match(sub_labels).copy()
sub_labels = self.add_isCorrect_col(sub_labels)
score = self.calculate_metric_weighted(
sub_labels, self.weight_col, self.impact_weight
)
# Keep `sub_labels for review`
if drop_extra_cols:
drop_cols = [
"x1_sub",
"x2_sub",
"y1_sub",
"y2_sub",
"x1_gt",
"x2_gt",
"y1_gt",
"y2_gt",
]
sub_labels = sub_labels.drop(drop_cols, axis=1)
self.sub_labels = sub_labels
return score
SUB_COLUMNS = ss.columns # Expected submission columns
scorer = NFLAssignmentScorer(labels)
# Score the sample submission
ss_score = scorer.score(ss)
print(f'Sample submission scores: {ss_score:0.4f}')
# Score a submission with only impacts
perfect_impacts = labels.query('isDefinitiveImpact == True and isSidelinePlayer == False')
imp_score = scorer.score(perfect_impacts)
print(f'A submission with perfect predictions only for impacts scores: {imp_score:0.4f}')
# Score a submission with only non-impacts
perfect_nonimpacts = labels.query('isDefinitiveImpact == False and isSidelinePlayer == False')
nonimp_score = scorer.score(perfect_nonimpacts)
print(f'A submission with perfect predictions only for non-impacts scores: {nonimp_score:0.4f}')
# Score a perfect submission
perfect_train = labels.query('isSidelinePlayer == False')[SUB_COLUMNS].copy()
perfect_score = scorer.score(perfect_train)
print(f'A perfrect training submission scores: {perfect_score:0.4f}')
scorer.sub_labels.head()
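# Note: the calls below (and the final submission check later in this notebook)
# use a standalone `check_submission` helper that is not defined in this excerpt.
# A minimal stand-in -- an assumption, simply reusing the constraint checks already
# implemented on the scorer above -- could look like this:
def check_submission(sub):
    """Return True if `sub` meets the 22-box, unique-label and unique-box constraints."""
    return scorer.check_submission(sub)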
# The sample submission meets these requirements.
check_submission(ss)
###Output
_____no_output_____
###Markdown
baseline
###Code
import os
import cv2
import subprocess
from IPython.core.display import Video, display
import pandas as pd
def video_with_baseline_boxes(
video_path: str, baseline_boxes: pd.DataFrame, gt_labels: pd.DataFrame, verbose=True
) -> str:
"""
Annotates a video with both the baseline model boxes and ground truth boxes.
Baseline model prediction confidence is also displayed.
"""
VIDEO_CODEC = "MP4V"
HELMET_COLOR = (0, 0, 0) # Black
BASELINE_COLOR = (255, 255, 255) # White
IMPACT_COLOR = (0, 0, 255) # Red
video_name = os.path.basename(video_path).replace(".mp4", "")
if verbose:
print(f"Running for {video_name}")
baseline_boxes = baseline_boxes.copy()
gt_labels = gt_labels.copy()
baseline_boxes["video"] = (
baseline_boxes["video_frame"].str.split("_").str[:3].str.join("_")
)
gt_labels["video"] = gt_labels["video_frame"].str.split("_").str[:3].str.join("_")
baseline_boxes["frame"] = (
baseline_boxes["video_frame"].str.split("_").str[-1].astype("int")
)
gt_labels["frame"] = gt_labels["video_frame"].str.split("_").str[-1].astype("int")
vidcap = cv2.VideoCapture(video_path)
fps = vidcap.get(cv2.CAP_PROP_FPS)
width = int(vidcap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(vidcap.get(cv2.CAP_PROP_FRAME_HEIGHT))
output_path = "labeled_" + video_name + ".mp4"
tmp_output_path = "tmp_" + output_path
output_video = cv2.VideoWriter(
tmp_output_path, cv2.VideoWriter_fourcc(*VIDEO_CODEC), fps, (width, height)
)
frame = 0
while True:
it_worked, img = vidcap.read()
if not it_worked:
break
# We need to add 1 to the frame count to match the label frame index
# that starts at 1
frame += 1
# Let's add a frame index to the video so we can track where we are
img_name = f"{video_name}_frame{frame}"
cv2.putText(
img,
img_name,
(0, 50),
cv2.FONT_HERSHEY_SIMPLEX,
1.0,
HELMET_COLOR,
thickness=2,
)
# Now, add the boxes
boxes = baseline_boxes.query("video == @video_name and frame == @frame")
if len(boxes) == 0:
print("Boxes incorrect")
return
for box in boxes.itertuples(index=False):
cv2.rectangle(
img,
(box.left, box.top),
(box.left + box.width, box.top + box.height),
BASELINE_COLOR,
thickness=1,
)
cv2.putText(
img,
f"{box.conf:0.2}",
(box.left, max(0, box.top - 5)),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
BASELINE_COLOR,
thickness=1,
)
boxes = gt_labels.query("video == @video_name and frame == @frame")
if len(boxes) == 0:
print("Boxes incorrect")
return
for box in boxes.itertuples(index=False):
# Filter for definitive head impacts and turn labels red
if box.isDefinitiveImpact == True:
color, thickness = IMPACT_COLOR, 3
else:
color, thickness = HELMET_COLOR, 1
cv2.rectangle(
img,
(box.left, box.top),
(box.left + box.width, box.top + box.height),
color,
thickness=thickness,
)
cv2.putText(
img,
box.label,
(box.left + 1, max(0, box.top - 20)),
cv2.FONT_HERSHEY_SIMPLEX,
0.5,
color,
thickness=1,
)
output_video.write(img)
output_video.release()
# Not all browsers support the codec, we will re-load the file at tmp_output_path
# and convert to a codec that is more broadly readable using ffmpeg
if os.path.exists(output_path):
os.remove(output_path)
subprocess.run(
[
"ffmpeg",
"-i",
tmp_output_path,
"-crf",
"18",
"-preset",
"veryfast",
"-vcodec",
"libx264",
output_path,
]
)
os.remove(tmp_output_path)
return output_path
# !pip install -U kora
from kora.drive import upload_public
url = upload_public('train_local/train/57584_000336_Sideline.mp4')
# then display it
from IPython.display import HTML
HTML(f"""<video src={url} width=500 controls/>""")
example_video = './train_local/train/57584_000336_Sideline.mp4'
output_video = video_with_baseline_boxes(example_video,
tr_helmets, labels)
frac = 0.65 # scaling factor for display
display(Video(data=output_video,
embed=True,)
)
###Output
_____no_output_____
###Markdown
NGS tracking1. NGS data is sampled at a rate of 10Hz, while videos are sampled at roughly 59.94Hz.2. NGS data and videos can be approximately synced by linking the NGS data where event == "ball_snap" to the 10th frame of the video (approximately syncronized to the ball snap in the video).3. The NGS data and the orientation of the video cameras are not consistent. Your solution must account for matching the orientation of the video angle relative to the NGS data.
###Code
def add_track_features(tracks, fps=59.94, snap_frame=10):
"""
Add column features helpful for syncing with video data.
"""
tracks = tracks.copy()
tracks["game_play"] = (
tracks["gameKey"].astype("str")
+ "_"
+ tracks["playID"].astype("str").str.zfill(6)
)
tracks["time"] = pd.to_datetime(tracks["time"])
snap_dict = (
tracks.query('event == "ball_snap"')
.groupby("game_play")["time"]
.first()
.to_dict()
)
tracks["snap"] = tracks["game_play"].map(snap_dict)
tracks["isSnap"] = tracks["snap"] == tracks["time"]
tracks["team"] = tracks["player"].str[0].replace("H", "Home").replace("V", "Away")
tracks["snap_offset"] = (tracks["time"] - tracks["snap"]).astype(
"timedelta64[ms]"
) / 1_000
# Estimated video frame
tracks["est_frame"] = (
((tracks["snap_offset"] * fps) + snap_frame).round().astype("int")
)
return tracks
tr_tracking = add_track_features(tr_tracking)
te_tracking = add_track_features(te_tracking)
import matplotlib.patches as patches
import matplotlib.pylab as plt
def create_football_field(
linenumbers=True,
endzones=True,
highlight_line=False,
highlight_line_number=50,
highlighted_name="Line of Scrimmage",
fifty_is_los=False,
figsize=(12, 6.33),
field_color="lightgreen",
ez_color='forestgreen',
ax=None,
):
"""
Function that plots the football field for viewing plays.
Allows for showing or hiding endzones.
"""
rect = patches.Rectangle(
(0, 0),
120,
53.3,
linewidth=0.1,
edgecolor="r",
facecolor=field_color,
zorder=0,
)
if ax is None:
fig, ax = plt.subplots(1, figsize=figsize)
ax.add_patch(rect)
plt.plot([10, 10, 10, 20, 20, 30, 30, 40, 40, 50, 50, 60, 60, 70, 70, 80,
80, 90, 90, 100, 100, 110, 110, 120, 0, 0, 120, 120],
[0, 0, 53.3, 53.3, 0, 0, 53.3, 53.3, 0, 0, 53.3, 53.3, 0, 0, 53.3,
53.3, 0, 0, 53.3, 53.3, 0, 0, 53.3, 53.3, 53.3, 0, 0, 53.3],
color='black')
if fifty_is_los:
ax.plot([60, 60], [0, 53.3], color="gold")
ax.text(62, 50, "<- Player Yardline at Snap", color="gold")
# Endzones
if endzones:
ez1 = patches.Rectangle(
(0, 0),
10,
53.3,
linewidth=0.1,
edgecolor="black",
facecolor=ez_color,
alpha=0.6,
zorder=0,
)
ez2 = patches.Rectangle(
(110, 0),
120,
53.3,
linewidth=0.1,
edgecolor="black",
facecolor=ez_color,
alpha=0.6,
zorder=0,
)
ax.add_patch(ez1)
ax.add_patch(ez2)
ax.axis("off")
if linenumbers:
for x in range(20, 110, 10):
numb = x
if x > 50:
numb = 120 - x
ax.text(
x,
5,
str(numb - 10),
horizontalalignment="center",
fontsize=20, # fontname='Arial',
color="black",
)
ax.text(
x - 0.95,
53.3 - 5,
str(numb - 10),
horizontalalignment="center",
fontsize=20, # fontname='Arial',
color="black",
rotation=180,
)
if endzones:
hash_range = range(11, 110)
else:
hash_range = range(1, 120)
for x in hash_range:
ax.plot([x, x], [0.4, 0.7], color="black")
ax.plot([x, x], [53.0, 52.5], color="black")
ax.plot([x, x], [22.91, 23.57], color="black")
ax.plot([x, x], [29.73, 30.39], color="black")
if highlight_line:
hl = highlight_line_number + 10
ax.plot([hl, hl], [0, 53.3], color="yellow")
ax.text(hl + 2, 50, "<- {}".format(highlighted_name), color="yellow")
border = patches.Rectangle(
(-5, -5),
120 + 10,
53.3 + 10,
linewidth=0.1,
edgecolor="orange",
facecolor="white",
alpha=0,
zorder=0,
)
ax.add_patch(border)
ax.set_xlim((0, 120))
ax.set_ylim((0, 53.3))
return ax
game_play = "57584_000336"
example_tracks = tr_tracking.query("game_play == @game_play and isSnap == True")
ax = create_football_field()
for team, d in example_tracks.groupby("team"):
ax.scatter(d["x"], d["y"], label=team, s=65, lw=1, edgecolors="black", zorder=5)
ax.legend().remove()
ax.set_title(f"Tracking data for {game_play}: at snap", fontsize=15)
plt.show()
import plotly.express as px
import plotly.graph_objects as go
import plotly
def add_plotly_field(fig):
# Reference https://www.kaggle.com/ammarnassanalhajali/nfl-big-data-bowl-2021-animating-players
fig.update_traces(marker_size=20)
fig.update_layout(paper_bgcolor='#29a500', plot_bgcolor='#29a500', font_color='white',
width = 800,
height = 600,
title = "",
xaxis = dict(
nticks = 10,
title = "",
visible=False
),
yaxis = dict(
scaleanchor = "x",
title = "Temp",
visible=False
),
showlegend= True,
annotations=[
dict(
x=-5,
y=26.65,
xref="x",
yref="y",
text="ENDZONE",
font=dict(size=16,color="#e9ece7"),
align='center',
showarrow=False,
yanchor='middle',
textangle=-90
),
dict(
x=105,
y=26.65,
xref="x",
yref="y",
text="ENDZONE",
font=dict(size=16,color="#e9ece7"),
align='center',
showarrow=False,
yanchor='middle',
textangle=90
)]
,
legend=dict(
traceorder="normal",
font=dict(family="sans-serif",size=12),
orientation="h",
yanchor="bottom",
y=1.00,
xanchor="center",
x=0.5
),
)
####################################################
fig.add_shape(type="rect", x0=-10, x1=0, y0=0, y1=53.3,line=dict(color="#c8ddc0",width=3),fillcolor="#217b00" ,layer="below")
fig.add_shape(type="rect", x0=100, x1=110, y0=0, y1=53.3,line=dict(color="#c8ddc0",width=3),fillcolor="#217b00" ,layer="below")
for x in range(0, 100, 10):
fig.add_shape(type="rect", x0=x, x1=x+10, y0=0, y1=53.3,line=dict(color="#c8ddc0",width=3),fillcolor="#29a500" ,layer="below")
for x in range(0, 100, 1):
fig.add_shape(type="line",x0=x, y0=1, x1=x, y1=2,line=dict(color="#c8ddc0",width=2),layer="below")
for x in range(0, 100, 1):
fig.add_shape(type="line",x0=x, y0=51.3, x1=x, y1=52.3,line=dict(color="#c8ddc0",width=2),layer="below")
for x in range(0, 100, 1):
fig.add_shape(type="line",x0=x, y0=20.0, x1=x, y1=21,line=dict(color="#c8ddc0",width=2),layer="below")
for x in range(0, 100, 1):
fig.add_shape(type="line",x0=x, y0=32.3, x1=x, y1=33.3,line=dict(color="#c8ddc0",width=2),layer="below")
fig.add_trace(go.Scatter(
x=[2,10,20,30,40,50,60,70,80,90,98], y=[5,5,5,5,5,5,5,5,5,5,5],
text=["G","1 0","2 0","3 0","4 0","5 0","4 0","3 0","2 0","1 0","G"],
mode="text",
textfont=dict(size=20,family="Arail"),
showlegend=False,
))
fig.add_trace(go.Scatter(
x=[2,10,20,30,40,50,60,70,80,90,98], y=[48.3,48.3,48.3,48.3,48.3,48.3,48.3,48.3,48.3,48.3,48.3],
text=["G","1 0","2 0","3 0","4 0","5 0","4 0","3 0","2 0","1 0","G"],
mode="text",
textfont=dict(size=20,family="Arail"),
showlegend=False,
))
return fig
tr_tracking["track_time_count"] = (
tr_tracking.sort_values("time")
.groupby("game_play")["time"]
.rank(method="dense")
.astype("int")
)
fig = px.scatter(
tr_tracking.query("game_play == @game_play"),
x="x",
y="y",
range_x=[-10, 110],
range_y=[-10, 53.3],
hover_data=["player", "s", "a", "dir"],
color="team",
animation_frame="track_time_count",
text="player",
)
fig.update_traces(textfont_size=10)
fig = add_plotly_field(fig)
fig.show(renderer="colab")
###Output
_____no_output_____
###Markdown
submission
###Code
def random_label_submission(helmets, tracks):
"""
Creates a baseline submission with randomly assigned helmets
based on the top 22 most confident baseline helmet boxes for
a frame.
"""
# Take up to 22 helmets per frame based on confidence:
helm_22 = (
helmets.sort_values("conf", ascending=False)
.groupby("video_frame")
.head(22)
.sort_values("video_frame")
.reset_index(drop=True)
.copy()
)
# Identify player label choices for each game_play
game_play_choices = tracks.groupby(["game_play"])["player"].unique().to_dict()
# Loop through frames and randomly assign boxes
ds = []
helm_22["label"] = np.nan
for video_frame, data in helm_22.groupby("video_frame"):
game_play = video_frame[:12]
choices = game_play_choices[game_play]
np.random.shuffle(choices)
data["label"] = choices[: len(data)]
ds.append(data)
submission = pd.concat(ds)
return submission
train_submission = random_label_submission(tr_helmets, tr_tracking)
scorer = NFLAssignmentScorer(labels)
baseline_score = scorer.score(train_submission)
print(f"The score of random labels on the training set is {baseline_score:0.4f}")
te_tracking = add_track_features(te_tracking)
random_submission = random_label_submission(te_helmets, te_tracking)
# Check to make sure it meets the submission requirements.
assert check_submission(random_submission)
random_submission[ss.columns].to_csv("submission.csv", index=False)
###Output
_____no_output_____ |
examples/examples-cpu/nyc-taxi/data-aggregation.ipynb | ###Markdown
Data Aggregation for DashboardThis notebook contains the data aggregation code to prepare data files for the dashboard. You can run this notebook to see how Dask is used with a Saturn cluster for data processing, but the files generated here will not be used by any of the examples. The dashboard uses pre-aggregated files from Saturn's public S3 bucket.
###Code
import os
DATA_PATH = 'data'
if not os.path.exists(DATA_PATH):
os.makedirs(DATA_PATH)
import s3fs
import dask.dataframe as dd
import numpy as np
import hvplot.dask, hvplot.pandas
fs = s3fs.S3FileSystem(anon=True)
###Output
_____no_output_____
###Markdown
Launch Dask clusterWe will need to do some data processing that exceeds the capacity of our JupyterLab client. To monitor the status of your cluster, visit the "Logs" page.
###Code
from dask.distributed import Client, wait
from dask_saturn import SaturnCluster
n_workers = 3
cluster = SaturnCluster(n_workers=n_workers, scheduler_size='medium', worker_size='large', nthreads=2)
client = Client(cluster)
cluster
###Output
_____no_output_____
###Markdown
If you initialized your cluster here in this notebook, it might take a few minutes for all your nodes to become available. You can run the chunk below to block until all nodes are ready.>**Pro tip**: Create and/or start your cluster from the "Dask" page in Saturn if you want to get a head start!
###Code
client.wait_for_workers(n_workers=n_workers)
###Output
_____no_output_____
###Markdown
Load dataSetup a function to load files with Dask. Cleanup some column names and parse data types correctly.
###Code
usecols = ['VendorID', 'tpep_pickup_datetime', 'tpep_dropoff_datetime', 'passenger_count', 'trip_distance',
'RatecodeID', 'store_and_fwd_flag', 'PULocationID', 'DOLocationID', 'payment_type', 'fare_amount',
'extra', 'mta_tax', 'tip_amount', 'tolls_amount', 'improvement_surcharge', 'total_amount']
def read_taxi_csv(files):
ddf = dd.read_csv(files,
assume_missing=True,
parse_dates=[1, 2],
usecols=usecols,
storage_options={'anon': True})
# grab the columns we need and rename
ddf = ddf[['tpep_pickup_datetime', 'tpep_dropoff_datetime', 'PULocationID', 'DOLocationID',
'passenger_count', 'trip_distance', 'payment_type', 'tip_amount', 'fare_amount']]
ddf.columns = ['pickup_datetime', 'dropoff_datetime', 'pickup_taxizone_id', 'dropoff_taxizone_id',
'passenger_count', 'trip_distance', 'payment_type', 'tip_amount', 'fare_amount']
return ddf
###Output
_____no_output_____
###Markdown
Get a listing of files from the public S3 bucket
###Code
files = [f's3://{x}' for x in fs.glob('s3://nyc-tlc/trip data/yellow_tripdata_201*.csv')
if '2017' in x or '2018' in x or '2019' in x]
len(files), files[:2]
ddf = read_taxi_csv(files[:5]) # only load first 5 months of data
###Output
_____no_output_____
###Markdown
We are loading a small sample for this exercise, but if you want to use the full data and replicate the aggregated data hosted on Saturn's bucket, you will need to use a larger cluster. Here is a sample cluster configuration you can use, but you can play around with sizes and see how performance changes!```pythoncluster = SaturnCluster( n_workers=10, scheduler_size='xlarge', worker_size='8xlarge', nthreads=32,)```You will have to run `cluster.reset(...)` if the cluster has already been configured. Run the following to see what sizes are available:```pythonfrom dask_saturn.core import describe_sizesdescribe_sizes()```
###Code
# load all 3 years of data
# ddf = read_taxi_csv(files)
###Output
_____no_output_____
###Markdown
Aggregated files for DashboardCreate several CSV file to use for visualization in the dashboard. Note that each of these perform some Dask dataframe operations, then call `compute()` to pull down a pandas dataframe, and then write that dataframe ot a CSV file. Augment dataWe'll distill some features out of the datetime component of the data. This is similar to the feature engineering that is done in other places in this demo, but we'll only create the features that'll be most useful in the visuals.
###Code
ddf["pickup_hour"] = ddf.pickup_datetime.dt.hour
ddf["dropoff_hour"] = ddf.dropoff_datetime.dt.hour
ddf["pickup_weekday"] = ddf.pickup_datetime.dt.weekday
ddf["dropoff_weekday"] = ddf.dropoff_datetime.dt.weekday
ddf["percent_tip"] = (ddf["tip_amount"] / ddf["fare_amount"]).replace([np.inf, -np.inf], np.nan) * 100
###Output
_____no_output_____
###Markdown
We'll take out the extreme high values since they disrupt the mean
###Code
ddf["percent_tip"] = ddf["percent_tip"].apply(lambda x: np.nan if x > 1000 else x)
###Output
_____no_output_____
###Markdown
Notice that all of the above cells execute pretty much instantly. This is because of Dask's [lazy evaluation](https://tutorial.dask.org/01x_lazy.html). Calling `persist()` below tells Dask to run all the operations and keep the results in memory for faster computation. This cell takes some time to run because Dask needs to first parse all the CSV files.
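For instance (a minimal sketch, assuming the `ddf` defined above), an aggregation only builds a task graph until a result is explicitly requested:
```python
# Lazy: returns almost instantly, only the task graph is constructed
mean_tip = ddf["percent_tip"].mean()

# Eager: triggers the actual work on the cluster and returns a plain float
mean_tip_value = mean_tip.compute()
```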
###Code
%%time
ddf = ddf.persist()
_ = wait(ddf)
###Output
_____no_output_____
###Markdown
Timeseries datasetsWe'll resample to an hourly timestep so that we don't have to pass around so much data later on.
###Code
tip_ddf = ddf[["pickup_datetime", "percent_tip"]].set_index("pickup_datetime").dropna()
tips = tip_ddf.resample('1H').mean().compute()
tips.to_csv(f"{DATA_PATH}/pickup_average_percent_tip_timeseries.csv")
fare_ddf = ddf[["pickup_datetime", "fare_amount"]].set_index("pickup_datetime").dropna()
fare = fare_ddf.resample('1H').mean().compute()
fare.to_csv(f"{DATA_PATH}/pickup_average_fare_timeseries.csv")
###Output
_____no_output_____
###Markdown
Aggregate datasetsSince our data is rather large and will mostly be viewed in grouped aggregates, we can do some aggregation now and save it off for use in plots later.
###Code
for value in ["pickup", "dropoff"]:
data = (ddf
.groupby([
f"{value}_taxizone_id",
f"{value}_hour",
f"{value}_weekday",
])
.agg({
"fare_amount": ["mean", "count", "sum"],
"trip_distance": ["mean"],
"percent_tip": ["mean"],
})
.compute()
)
data.columns = data.columns.to_flat_index()
data = data.rename({
("fare_amount", "mean"): "average_fare",
("fare_amount", "count"): "total_rides",
("fare_amount", "sum"): "total_fare",
("trip_distance", "mean"): "average_trip_distance",
("percent_tip", "mean"): "average_percent_tip",
}, axis=1).reset_index(level=[1, 2])
data.to_csv(f"{DATA_PATH}/{value}_grouped_by_zone_and_time.csv")
grouped_zone_and_time = data
for value in ["pickup", "dropoff"]:
data = (ddf
.groupby([
f"{value}_taxizone_id",
])
.agg({
"fare_amount": ["mean", "count", "sum"],
"trip_distance": ["mean"],
"percent_tip": ["mean"],
})
.compute()
)
data.columns = data.columns.to_flat_index()
data = data.rename({
("fare_amount", "mean"): "average_fare",
("fare_amount", "count"): "total_rides",
("fare_amount", "sum"): "total_fare",
("trip_distance", "mean"): "average_trip_distance",
("percent_tip", "mean"): "average_percent_tip",
}, axis=1)
data.to_csv(f"{DATA_PATH}/{value}_grouped_by_zone.csv")
grouped_zone = data
value = "pickup"
data = (ddf
.groupby([
f"{value}_hour",
f"{value}_weekday"
])
.agg({
"fare_amount": ["mean", "count", "sum"],
"trip_distance": ["mean"],
"percent_tip": ["mean"],
})
.compute()
)
data.columns = data.columns.to_flat_index()
data = data.rename({
("fare_amount", "mean"): "average_fare",
("fare_amount", "count"): "total_rides",
("fare_amount", "sum"): "total_fare",
("trip_distance", "mean"): "average_trip_distance",
("percent_tip", "mean"): "average_percent_tip",
}, axis=1)
data.to_csv(f"{DATA_PATH}/{value}_grouped_by_time.csv")
grouped_time = data
###Output
_____no_output_____
###Markdown
Get shape files for dashboardThe shape files are stored in a zip on the public S3. Here we pull it down, unzip it, then place the files on our S3.
###Code
import zipfile
with fs.open('s3://nyc-tlc/misc/taxi_zones.zip') as f:
with zipfile.ZipFile(f) as zip_ref:
zip_ref.extractall(f'{DATA_PATH}/taxi_zones')
###Output
_____no_output_____
###Markdown
ExamplesTo make use of the new datasets we can visualize all the data at once using a grouped heatmap
###Code
grouped_zone_and_time.hvplot.heatmap(
x="dropoff_weekday",
y="dropoff_hour",
C="average_percent_tip",
groupby="dropoff_taxizone_id",
responsive=True, min_height=600, cmap="viridis", clim=(0, 20),
colorbar=False,
)
###Output
_____no_output_____
###Markdown
This dataset that is only grouped by zone can be paired with other information such as geography.
###Code
import geopandas as gpd
zones = gpd.read_file(f'{DATA_PATH}/taxi_zones/taxi_zones.shp').to_crs('epsg:4326')
joined = zones.join(grouped_zone, on="LocationID")
joined.hvplot(x="longitude", y="latitude", c="average_fare",
geo=True, tiles="CartoLight", cmap="fire", alpha=0.5,
hover_cols=["zone", "borough"],
title="Average fare by dropoff location",
height=600, width=800, clim=(0, 100))
###Output
_____no_output_____ |
LongitudinalPDHallucinations.ipynb | ###Markdown
Longitudinal cortical thickness and fixel based analysis in Visual Hallucinations- Author: A.Zarkali- Date last updated: 1/1/2021- Aim: Perform all demographics and tract comparisons for longitudinal VIPD data Load necessary libraries and data
###Code
#Import libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats
import statsmodels.api as sm
import seaborn as sns
from scipy.stats import shapiro
from statsmodels.formula.api import ols
import scikit_posthocs as sp
from pandas.api.types import is_numeric_dtype
# Enable inline plotting
%matplotlib inline
# Load database of all demographics
df1 = pd.read_excel(r"C:/Users/Angelika/Dropbox/PhD/EXPERIMENTS/02_StructuralMRI/02_ThalamicSegmentation/Visit1Data.xlsx")
df2 = pd.read_excel(r"C:/Users/Angelika/Dropbox/PhD/EXPERIMENTS/02_StructuralMRI/02_ThalamicSegmentation/Visit2Data.xlsx")
df = pd.concat((df1, df2), axis=1)
# Load Thickness data
thick1 = pd.read_table(r"C:/Users/Angelika/Dropbox/PhD/EXPERIMENTS/02_StructuralMRI/02_ThalamicSegmentation/Session1Aseg.txt")
thick2 = pd.read_table(r"C:\Users\Angelika\Dropbox\PhD\EXPERIMENTS\02_StructuralMRI\02_ThalamicSegmentation\Session2Aseg.txt")
# thickCortex1 = pd.read_csv(r"C:\Users\Angelika\Dropbox\PhD\EXPERIMENTS\02_StructuralMRI\02_ThalamicSegmentation\Session1Thickness.csv")
# thickCortex2 = pd.read_csv(r"C:\Users\Angelika\Dropbox\PhD\EXPERIMENTS\02_StructuralMRI\02_ThalamicSegmentation\Session2Thickness.csv")
# Load thalamic data
thalVis1 = pd.read_csv(r"C:/Users/Angelika/Dropbox/PhD/EXPERIMENTS/02_StructuralMRI/02_ThalamicSegmentation/ThalamSegmVH_Visit1.csv", sep=",")
thalVis2 = pd.read_csv(r"C:/Users/Angelika/Dropbox/PhD/EXPERIMENTS/02_StructuralMRI/02_ThalamicSegmentation/ThalamSegmVH_Visit2.csv", sep=",")
thalDif = pd.read_csv(r"C:/Users/Angelika/Dropbox/PhD/EXPERIMENTS/02_StructuralMRI/02_ThalamicSegmentation/ThalamusDif.csv", sep=",")
# Load tract data
fcVis1 = pd.read_table(r"C:/Users/Angelika/Dropbox/PhD/EXPERIMENTS/02_StructuralMRI/02_ThalamicSegmentation/MRI_DATA/Mean_ThalamicTracts/IndividualNuclei/Baseline_fc_Thresholded.txt")
fcDif = pd.read_table(r"C:/Users/Angelika/Dropbox/PhD/EXPERIMENTS/02_StructuralMRI/02_ThalamicSegmentation/MRI_DATA/Mean_ThalamicTracts/IndividualNuclei/VisitDif_fc_Thresholded.txt")
fcVis2 = fcVis1 + fcDif
#### Extract FS mean TIV
participants = df.Participant.values
indices = []
for i in range(len(participants)):
    for k in range(len(thick1)):
        if participants[i] in thick1["Measure:volume"].iloc[k]:
            indices.append(k)  # keep the index of the matching FreeSurfer row
thick1.EstimatedTotalIntraCranialVol[indices].to_csv("ETIV_Session1.csv")
# Correct the thalamic volumes by ETIV
nuclei_cols = thalVis2.drop(columns=["Participant"]).columns
thalVis2[nuclei_cols] = thalVis2[nuclei_cols].div(df.EstimatedTotalIntraCranialVol.values, axis=0)
thalVis2.to_csv("ThalamicVolumesPercentageTIV_Session2.csv")
# Combine clinical and thalamic segmentation data to a single dataframe
thalamusVis1 = pd.concat([thalVis1,df], axis=1)
thalamusVis2 = pd.concat([thalVis2,df], axis=1)
thalamusDif = pd.concat([thalDif,df], axis=1)
# Combine clinical and tract data to a single dataframe
tractVis1 = pd.concat([fcVis1, df], axis=1)
tractVis2 = pd.concat([fcVis2, df], axis=1)
tractDif = pd.concat([fcDif, df], axis=1)
#### Create a long dataframe format
#### Thalamus
thalVis1["Session"] = 1
thalVis2["Session"] = 2
demographics = ["Age", "Gender", "PD_VHAny", "PD", "IntracranialVolume", "Time2Scan"]
for col in demographics:
thalVis1[col] = df[col]
thalVis1["Time2Scan"] = 0
thalVis2[col] = df[col]
longthal = pd.concat([thalVis1, thalVis2], axis=0)
#### Tracts
fcVis1["Session"] = 1
fcVis2["Session"] = 2
demographics = ["Age", "Gender", "PD_VHAny", "PD", "IntracranialVolume", "Time2Scan", "Participant"]
for col in demographics:
fcVis1[col] = df[col]
fcVis1["Time2Scan"] = 0
fcVis2[col] = df[col]
longFC = pd.concat([fcVis1, fcVis2], axis=0)
###Output
_____no_output_____
###Markdown
Group thalamic nuclei to categories
###Code
### Lists to Group thalamic nuclei according to Iglesias et al. A probabilistic atlas of the human thalamic nuclei combining ex vivo MRI and histology. Neuroimage 2018 Dec; 183: 314–326
LeftAnterior = ["LeftAV"]
RightAnterior = ["RightAV"]
LeftLateral = ["LeftLD", "LeftLP"]
RightLateral = ["RightLD", "RightLP"]
LeftVentral = ["LeftVA", "LeftVAmc", "LeftVLa", "LeftVLp", "LeftVPL", "LeftVM"]
RightVentral = ["RightVA", "RightVAmc", "RightVLa", "RightVLp", "RightVPL", "RightVM"]
LeftIntralaminar = ["LeftCeM", "LeftCL", "LeftPc", "LeftCM", "LeftPf"]
RightIntralaminar = ["RightCeM", "RightCL", "RightPc", "RightCM", "RightPf"]
LeftMedial = ["LeftPt", "LeftMV_Re", "LeftMDm", "LeftMDl"]
RightMedial = ["RightPt", "RightMV_Re", "RightMDm", "RightMDl"]
LeftMedialGN = ["LeftMGN", "LeftL_SG"]
RightMedialGN = ["RightMGN", "RightL_SG"]
LeftPulvinar = ["LeftPuA", "LeftPuM", "LeftPuL", "LeftPuI"]
RightPulvinar = ["RightPuA", "RightPuM", "RightPuL", "RightPuI"]
LeftPosterior = LeftMedialGN + LeftPulvinar
RightPosterior = RightMedialGN + RightPulvinar
## List of the columns
SumColumns = ["LeftAnterior", "RightAnterior", "LeftLateral", "RightLateral", "LeftVentral", "RightVentral", "LeftIntralaminar", "RightIntralaminar", "LeftMedial", "RightMedial", "LeftPulvinar", "RightPulvinar", "LeftMedialGN", "RightMedialGN"]
### Create Columns of sum thalamic volumes per nuclei category
databases = thalamusVis1, thalamusVis2, thalamusDif
for base in databases:
base["LeftAnterior"] = base["LeftAV"]
base["RightAnterior"] = base["RightAV"]
base["LeftLateral"] = base["LeftLD"] + base["LeftLP"]
base["RightLateral"] = base["RightLD"] + base["RightLP"]
base["LeftVentral"] = base["LeftVA"] + base["LeftVAmc"] + base["LeftVLa"] + base["LeftVLp"] + base["LeftVPL"] + base["LeftVM"]
base["RightVentral"] = base["RightVA"] + base["RightVAmc"] + base["RightVLa"] + base["RightVLp"] + base["RightVPL"] + base["RightVM"]
base["LeftIntralaminar"] = base["LeftCeM"] + base["LeftCL"] + base["LeftPc"] + base["LeftCM"] + base["LeftPf"]
base["RightIntralaminar"] = base["RightCeM"] + base["RightCL"] + base["RightPc"] + base["RightCM"] + base["RightPf"]
#base["LeftMedial"] = base["LeftMDl"] + ["LeftMDm"] + base["LeftPt"] + base["LeftMV_Re"]
#base["RightMedial"] = base["RightPt"] + base["RightMV_Re"] + ["RightMDm"] + base["RightMDl"]
base["LeftMedialGN"] = base["LeftMGN"] + base["LeftL_Sg"]
base["RightMedialGN"] = base["RightMGN"] + base["RightL_Sg"]
base["LeftPulvinar"] = base["LeftPuA"] + base["LeftPuM"] + base["LeftPuL"] + base["LeftPuI"]
base["RightPulvinar"] = base["RightPuA"] + base["RightPuM"] + base["RightPuL"] + base["RightPuI"]
### Medial nuclei - computed outside the loop (the commented-out in-loop version above doesn't work as written: it is missing base[...] around the MDm terms)
thalamusVis1["LeftMedial"] = thalamusVis1.LeftMDl + thalamusVis1.LeftMDm + thalamusVis1.LeftPt + thalamusVis1.LeftMV_Re
thalamusVis1["RightMedial"] = thalamusVis1.RightMDl + thalamusVis1.RightMDm + thalamusVis1.RightPt + thalamusVis1.RightMV_Re
thalamusVis2["LeftMedial"] = thalamusVis2.LeftMDl + thalamusVis2.LeftMDm + thalamusVis2.LeftPt + thalamusVis2.LeftMV_Re
thalamusVis2["RightMedial"] = thalamusVis2.RightMDl + thalamusVis2.RightMDm + thalamusVis2.RightPt + thalamusVis2.RightMV_Re
thalamusDif["LeftMedial"] = thalamusDif.LeftMDl + thalamusDif.LeftMDm + thalamusDif.LeftPt + thalamusDif.LeftMV_Re
thalamusDif["RightMedial"] = thalamusDif.RightMDl + thalamusDif.RightMDm + thalamusDif.RightPt + thalamusDif.RightMV_Re
###Output
_____no_output_____
###Markdown
Check for normality
###Code
## Normality check for all columns together
### Declare empty variables to hold column names
NormallyDistributed = []
NonNormallyDistributed = []
### Loop through all columns
for col in df.columns:
if is_numeric_dtype(df[col]) == True: ## Numeric check
data = df[np.isfinite(df[col])] ## Drop NAs (the shapiro will not calculate statistic if NAs present)
r, p = stats.shapiro(data[col]) ### If less than 0.05 non normally distributed
if p < 0.05:
NonNormallyDistributed.append(col)
else:
NormallyDistributed.append(col)
### Save in text files the names of the variables
with open('NormallyDistributedVariables.txt', 'w') as filehandle:
for listitem in NormallyDistributed:
filehandle.write('%s\n' % listitem)
with open('NonNormallyDistributedVariables.txt', 'w') as filehandle:
for listitem in NonNormallyDistributed:
filehandle.write('%s\n' % listitem)
#### Visualise distribution - if want to for individual columns
col = "HADSdepression"
sns.distplot(df[df.PD==1][col])
stats.shapiro(df[df.PD==1][col])
###Output
_____no_output_____
###Markdown
Demographics Extract group means and std
###Code
## Save all group mean and std for each column
### Group by PD_VHAny: 0 controls, 1 PD non VH, 2 PD VH
dfMean = df.groupby("PD_VHAny").mean()
dfMean["Type"] = "Mean"
dfSTD = df.groupby("PD_VHAny").std()
dfSTD["Type"] = "STD"
pd.concat([dfMean,dfSTD], axis=0).to_csv("GroupedDemographics.csv")
###Output
_____no_output_____
###Markdown
Comparisons between 3 groups- Step 1 - Loop through all variables and run ANOVA for normally distributed and Kruskal Wallis for non normally distributed ones- Step 2 - Post hoc testing for those who are significantly different
###Code
def groupCompare(variables, group, dataframe, number_groups):
### Declare empty variables to hold column names
NormallyDistributed = []
NonNormallyDistributed = []
statistic = []
p_value = []
types = []
### Loop through all columns of a dataframe and check normality
for col in dataframe.columns:
if is_numeric_dtype(dataframe[col]) == True: ## Numeric check
data = dataframe[np.isfinite(dataframe[col])] ## Drop NAs (the shapiro will not calculate statistic if NAs present)
r, p = stats.shapiro(data[col]) ### If less than 0.05 non normally distributed
if p < 0.05:
NonNormallyDistributed.append(col)
else:
NormallyDistributed.append(col)
for var in variables:
if number_groups > 2:
if var in NormallyDistributed: ## Normally distributed then do ANOVA
data=dataframe[np.isfinite(dataframe[var])]
variable = data[var].dropna()
comp = data[group] ### comparison of interest
anova = ols("variable ~ C(comp)", data=data).fit() ### run anova
r = anova.rsquared_adj ## extract overall model adjusted r statistic
p = anova.f_pvalue ## extract overall model p-value
statistic.append(r)
p_value.append(p)
types.append("ANOVA")
elif var in NonNormallyDistributed: ### Non normally distributed then do Kruskal Wallis
data = dataframe[np.isfinite(dataframe[var])]
### declare the three series
v1 = data[data[group] == 0][var]
v2 = data[data[group] == 1][var]
v3 = data[data[group] == 2][var]
r,p = stats.kruskal(v1, v2, v3) ### run Kruskal wallis
statistic.append(r)
p_value.append(p)
types.append("Kruskal-Wallis")
else: ### In case any variables were labelled incorrectly
statistic.append("NA")
p_value.append("NA")
types.append("NA")
elif number_groups == 2:
if var in NormallyDistributed: ## Normally distributed then do ttest
data=dataframe[np.isfinite(dataframe[var])]
                v1 = data[data[group] == 1][var]
                v2 = data[data[group] == 2][var]
r, p = stats.ttest_ind(v1, v2)
statistic.append(r)
p_value.append(p)
types.append("t-test")
elif var in NonNormallyDistributed: ### Non normally distributed then do Mann-Whitney
data = dataframe[np.isfinite(dataframe[var])]
                v1 = data[data[group] == 1][var]
                v2 = data[data[group] == 2][var]
                r,p = stats.mannwhitneyu(v1, v2) ### run Mann-Whitney U test
statistic.append(r)
p_value.append(p)
types.append("Mann-Whitney")
else: ### In case any variables were labelled incorrectly
statistic.append("NA")
p_value.append("NA")
types.append("NA")
### Combine results on dataframe
results = pd.DataFrame(data=np.zeros((len(variables), 0))) # empty dataframe
results["Variable"] = variables # variable names
results["Statistic"] = statistic # statistic
results["Pvalue"] = p_value # p_value
results["Type"] = types # type of statistical test used
return(results)
filename = r"C:\Users\Angelika\Dropbox\PhD\EXPERIMENTS\02_StructuralMRI\02_ThalamicSegmentation\DemographicsAll3groups.txt"
variables = [line.rstrip('\n') for line in open(filename)]
result = groupCompare(variables,"PD_VHAny",df,2)
## Perform comparisons between all 3 groups at once
### Load variables for 3 group comparison
filename = r"C:\Users\Angelika\Dropbox\PhD\EXPERIMENTS\02_StructuralMRI\02_ThalamicSegmentation\DemographicsAll3groups.txt"
variables = [line.rstrip('\n') for line in open(filename)]
## Empty variables to hold results
statistic = []
p_value = []
types = []
### Loop through all the variables
for var in variables:
if var in NormallyDistributed: ## Normally distributed then do ANOVA
data=df[np.isfinite(df[var])]
variable = data[var].dropna()
group = data.PD_VHAny ### comparison of interest
anova = ols("variable ~ C(group)", data=data).fit() ### run anova
r = anova.rsquared_adj ## extract overall model adjusted r statistic
p = anova.f_pvalue ## extract overall model p-value
statistic.append(r)
p_value.append(p)
types.append("ANOVA")
elif var in NonNormallyDistributed: ### Non normally distributed then do Kruskal Wallis
data = df[np.isfinite(df[var])]
### declare the three series
v1 = data[data.PD_VHAny == 0][var]
v2 = data[data.PD_VHAny == 1][var]
v3 = data[data.PD_VHAny == 2][var]
r,p = stats.kruskal(v1, v2, v3) ### run Kruskal wallis
statistic.append(r)
p_value.append(p)
types.append("Kruskal-Wallis")
else: ### In case any variables were labelled incorrectly
statistic.append("NA")
p_value.append("NA")
types.append("NA")
### Combine results on dataframe
results = pd.DataFrame(data=np.zeros((len(variables), 0))) # empty dataframe
results["Variable"] = variables # variable names
results["Statistic"] = statistic # statistic
results["Pvalue"] = p_value # p_value
results["Type"] = types # type of statistical test used
results.to_csv("GroupComparisons_3Groups.csv") # export to csv
### Post hoc testing
# Variables that are significantly different between groups
results[results.Pvalue <=0.05]
# Post hoc tukey test for ANOVA
var = "Stroop_colour_time"
data = df[np.isfinite(df[var])]
variable = data[var] # substitute variable sequentially to test all continuous variables following ANOVA
group = data.PD_VHAny
sp.posthoc_tukey_hsd(variable, group, alpha=0.05)
# The contrast appearing as 1 is significant / 0 is non significant / -1 for diagonals
## Post hoc dunn test for Kruskal Wallis
var = "Stroop_both_time"
X = [df[df.PD_VHAny == 0][var], df[df.PD_VHAny == 1][var], df[df.PD_VHAny == 2][var]]
sp.posthoc_dunn(X)
### returns the exact p-values for each comparison
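### The Dunn p-values can also be corrected for multiple comparisons in the same
### call (sketch; "fdr_bh" mirrors the FDR correction used elsewhere in this notebook):
# sp.posthoc_dunn(X, p_adjust="fdr_bh")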
###Output
_____no_output_____
###Markdown
Comparisons between PD groups only
###Code
### Continuous variables
### Load variables for PD group comparison
filename = r"C:\Users\Angelika\Dropbox\PhD\EXPERIMENTS\02_StructuralMRI\02_ThalamicSegmentation\DemographicsPDgroups.txt"
variables = [line.rstrip('\n') for line in open(filename)]
## Empty variables to hold results
statistic = []
p_value = []
types = []
### Loop through all the variables
for var in variables:
    if var in NormallyDistributed: ## Normally distributed then do t-test
data=df[np.isfinite(df[var])]
v1 = data[data.PD_VHAny == 1][var]
v2 = data[data.PD_VHAny == 2][var]
r, p = stats.ttest_ind(v1, v2)
statistic.append(r)
p_value.append(p)
types.append("t-test")
    elif var in NonNormallyDistributed: ### Non normally distributed then do Mann-Whitney
data = df[np.isfinite(df[var])]
v1 = data[data.PD_VHAny == 1][var]
v2 = data[data.PD_VHAny == 2][var]
        r,p = stats.mannwhitneyu(v1, v2) ### run Mann-Whitney U test
statistic.append(r)
p_value.append(p)
types.append("Mann-Whitney")
else: ### In case any variables were labelled incorrectly
statistic.append("NA")
p_value.append("NA")
types.append("NA")
### Combine results on dataframe
results = pd.DataFrame(data=np.zeros((len(variables), 0))) # empty dataframe
results["Variable"] = variables # variable names
results["Statistic"] = statistic # statistic
results["Pvalue"] = p_value # p_value
results["Type"] = types # type of statistical test used
results.to_csv("GroupComparisons_PDonly.csv") # export to csv
### Categorical variables
filename = r"C:/Users/Angelika/Dropbox/PhD/EXPERIMENTS/02_StructuralMRI/02_ThalamicSegmentation/DemographicsPDgroupsCategorical.txt"
variables = [line.rstrip('\n') for line in open(filename)]
## Empty variables to hold results
statistic = []
p_value = []
for var in variables:
data=df[np.isfinite(df[var])]
# data = data[data.PD==1] ## do only in PD
tab = pd.crosstab(data.PD_VHAny,data[var])
r, p, dof, exp = stats.chi2_contingency(tab)
statistic.append(r)
p_value.append(p)
### Combine results on dataframe
results = pd.DataFrame(data=np.zeros((len(variables), 0))) # empty dataframe
results["Variable"] = variables # variable names
results["ChiSquare"] = statistic # statistic
results["Pvalue"] = p_value # p_value
results.to_csv("GroupComparisons_PDonlyCategorical.csv") # export to csv
### Comparison of longitudinal difference
variablesVisit1 = ["MOCA", "MMSE", "DigitSpanF", "DigitSpanB", "Stroop_colour_time", "Stroop_both_time", "FluencyAnimal", "WordRec", "LogMem_Delayed", "GNT", "FluencyLetter", "JLO", "Hooper", "UPDRS", "UPDRSMotorScoreonly", "LEDD"]
variablesVisit2 = ["MOCA_Session2", "MMSE_Session2", "Digit_Span_Forward_Session2", "Digit_Span_Backward_Session2", "Stroop_colour_time_Session2", "Stroop_both_time_Session2", "Verbal_fluency_category_Session2", "Word_recognition_Session2", "Logical_Memory_Delayed_Session2", "GNT_Session2", "Verbal_fluency_letter_Session2", "JLO_Session2", "Hooper_Session2", "UPDRS_Session2", "UPDRSMotorScoreOnly_Session2", "LEDD_S2"]
variablesDif = []
dfPD = df[df.PD == 1].copy()
for x in range(len(variablesVisit1)):
variable = str(variablesVisit1[x] + "_Dif")
variablesDif.append(variable)
dfPD[variable] = dfPD[(variablesVisit2[x])] - dfPD[(variablesVisit1[x])]
resultsDif = groupCompare(variablesDif, "PD_VHAny", dfPD, 2)
resultsDif.to_csv("GroupDifferences_LongitudinalCognition_PDonly.csv")
###Output
_____no_output_____
###Markdown
Compare across groups 1. Compare individual nuclei or tracts
###Code
#### Loop through all the individual nuclei
## Declare the dataset
data = tractVis1
#data = data[data.PD == 1] ## Select PD only
#data = data[data.PD_VHAny!=1] ## Remove PD non VH
data["groups"] = 0
vc = {"IntracranialVolume": "IntracranialVolume", "Gender": "0 + C(Gender)", "Age": "Age"}
# # Declare empty lists
pvalues = []
coefficients = []
lowerCI = []
upperCI = []
### Loop through all nuclei
#for nucleus in thalVis1.drop(columns=["Participant", "LeftWhole_thalamus", "RightWhole_thalamus"]).columns: ### for nuclei
for nucleus in fcVis1.columns: ### for tracts
# Change the Y variable to each ROI name
formula = nucleus + " ~ PD"
md = sm.MixedLM.from_formula(formula, data=data, re_formula="1", vc_formula=vc, groups=data.groups)
mdf = md.fit() # fit the model
### Select the values I want from the model
p = mdf.pvalues[1]
coef = (mdf.conf_int().iloc[1]).mean()
lower = (mdf.conf_int().iloc[1])[0]
upper = (mdf.conf_int().iloc[1])[1]
### Append them to the empty lists
pvalues.append(p)
coefficients.append(coef)
lowerCI.append(lower)
upperCI.append(upper)
# FDR correct the p-values
FDR = sm.stats.multipletests(pvalues, is_sorted=False, alpha=0.05, method="fdr_bh", returnsorted=False)
# Merge to Dataframe and export as csv
# outdata = pd.DataFrame(data=np.zeros(((len(thalVis1.columns)-3),0))) ### for nuclei
# outdata["Nucleus"] = thalVis1.drop(columns=["Participant", "LeftWhole_thalamus", "RightWhole_thalamus"]).columns ### For nuclei
outdata = pd.DataFrame(data=np.zeros((len(fcVis1.columns),0))) ### for tracts
outdata["Tract"] = fcVis1.columns ### for tracts
outdata["Coef"] = coefficients
outdata["lowerCI"] = lowerCI
outdata["upperCI"] = upperCI
outdata["pValues"] = pvalues
outdata["FDR"] = FDR[1]
outdata.to_csv("FCVis1_PDvsControls.csv")
##### LONGITUDINAL
#### Loop through all the individual nuclei
## Declare the datase
data = thalamusDif
data = data[data.PD == 1]
# data = data[data.PD_VHAny!=1]
data["groups"] = 0
# # Declare empty lists
pvalues = []
coefficients = []
lowerCI = []
upperCI = []
### Loop through all nuclei
for nucleus in thalVis1.drop(columns=["Participant", "LeftWhole_thalamus", "RightWhole_thalamus"]).columns: ### for nuclei
#for nucleus in fcVis1.columns: ### for tracts
vis1volume = str(nucleus + "Vis_1")
data[vis1volume] = thalVis1[nucleus]
# Change the Y variable to each ROI name
vc = {"IntracranialVolume": "IntracranialVolume", "Gender": "0 + C(Gender)", "Age": "Age", "Visit1":vis1volume}
formula = nucleus + " ~ C(PD_VHAny)* Time2Scan"
# formula = nucleus + " ~ C(PD_VHAny) * Time + " + vis1volume
md = sm.MixedLM.from_formula(formula, data=data, re_formula="1", vc_formula=vc, groups=data.groups)
mdf = md.fit() # fit the model
### Select the values I want from the model
p = mdf.pvalues[3]
coef = (mdf.conf_int().iloc[3]).mean()
lower = (mdf.conf_int().iloc[3])[0]
upper = (mdf.conf_int().iloc[3])[1]
### Append them to the empty lists
pvalues.append(p)
coefficients.append(coef)
lowerCI.append(lower)
upperCI.append(upper)
# FDR correct the p-values
FDR = sm.stats.multipletests(pvalues, is_sorted=False, alpha=0.05, method="fdr_bh", returnsorted=False)
# Merge to Dataframe and export as csv
outdata = pd.DataFrame(data=np.zeros(((len(thalVis1.columns)-3),0))) ### for nuclei
outdata["Nucleus"] = thalVis1.drop(columns=["Participant", "LeftWhole_thalamus", "RightWhole_thalamus"]).columns ### For nuclei
#outdata = pd.DataFrame(data=np.zeros((len(fcVis1.columns),0))) ### for tracts
#outdata["Tract"] = fcVis1.columns ### for tracts
outdata["Coef"] = coefficients
outdata["lowerCI"] = lowerCI
outdata["upperCI"] = upperCI
outdata["pValues"] = pvalues
outdata["FDR"] = FDR[1]
outdata.to_csv("Thalamus_LongitudinalPD_VHAny.csv")
#Individual ones that fail
data = tractVis1
# data = data[data.PD == 1]
data = data[data.PD_VHAny != 1]
data["groups"] = 0
vc = {"IntracranialVolume": "IntracranialVolume", "Age": "Age"}
formula = "RightPuM" + " ~ PD_VHAny"
md = sm.MixedLM.from_formula(formula, data=data, re_formula="1", vc_formula=vc, groups=data.groups)
mdf = md.fit()
mdf.summary()
#### LONGITUDINAL LME4
import os
os.environ["R_HOME"] = r"C:/PROGRA~1/R/R-3.6.1"
os.environ["PATH"] = r"C:/PROGRA~1/R/R-3.6.1\bin\x64" + ";" + os.environ["PATH"]
os.environ['KMP_DUPLICATE_LIB_OK']='True'
from pymer4.models import Lmer
# Clear longitudinal data
data = longthal
#data = data[data.PD == 1] ## PD only
#data = data[data.PD_VHAny !=1] ## Drop PD non VH
data["subject"] = data.Participant.str.slice(start=-3).astype(int) ### Add a subject column as ints
dropColumns = demographics + ["Participant", "LeftWhole_thalamus", "RightWhole_thalamus", "Session"] ## drop columns to get only the thalami
#dropColumns = demographics + ["Session"] ## for Tracts
# # Declare empty lists
pvalues = []
T_stats = []
coefficients = []
lowerCI = []
upperCI = []
# nuclei = thalVis1.drop(columns="Participant").columns
### Loop through all nuclei
for nucleus in thalVis1.drop(columns=dropColumns).columns: ### for nuclei
formula = str(nucleus + " ~ PD * Time2Scan + Age + Gender + IntracranialVolume + (1 | subject)")
model = Lmer(formula, data=data)
m = model.fit()
# get the values for the Slope
mdf = m.loc["PD:Time2Scan"]
coef = mdf["Estimate"]
lower = mdf["2.5_ci"]
upper = mdf["97.5_ci"]
p = mdf["P-val"]
t = mdf["T-stat"]
### Append them to the empty lists
pvalues.append(p)
coefficients.append(coef)
lowerCI.append(lower)
upperCI.append(upper)
T_stats.append(t)
# FDR correct the p-values
FDR = sm.stats.multipletests(pvalues, is_sorted=False, alpha=0.05, method="fdr_bh", returnsorted=False)
# Merge to Dataframe and export as csv
outdata = pd.DataFrame(data=np.zeros((len(thalVis1.drop(columns=dropColumns).columns),0))) ### for nuclei
outdata["Nucleus"] = thalVis1.drop(columns=dropColumns).columns ### For nuclei
# outdata["Tract"] = fcVis1.drop(columns=dropColumns).columns ### for tracts
outdata["T-stat"] = T_stats
outdata["Coef"] = coefficients
outdata["lowerCI"] = lowerCI
outdata["upperCI"] = upperCI
outdata["pValues"] = pvalues
outdata["FDR"] = FDR[1]
outdata.to_csv("Thalami_LongitudinalPD.csv")
df.groupby("PD_VisPerf").Hallucinations_Session2.mean(), df.groupby("PD_VisPerf").Hallucinations_Session2.std()
v1 = df[df.PD_VisPerf == 1].Hallucinations_Session2
v2 = df[df.PD_VisPerf == 2].Hallucinations_Session2
data = df[df.PD_VisPerf != 0]
tab = pd.crosstab(data.PD_VisPerf, data.PD_VHAny)
stats.chi2_contingency(tab)
###Output
_____no_output_____
###Markdown
2. Compare categories of nuclei
###Code
### Loop through all the nuclei categories
## Declare the datase
data = tractDif
data = data[data.PD == 1]
data["groups"] = 0
#vc = {"IntracranialVolume": "IntracranialVolume", "Age": "Age"}
vc = {"IntracranialVolume": "IntracranialVolume", "Gender": "0 + C(Gender)", "Age": "Age"}
# # Declare empty lists
pvalues = []
coefficients = []
lowerCI = []
upperCI = []
### Loop through all nuclei
for nucleus in fcVis1.columns:
# Change the Y variable to each ROI name
formula = nucleus + " ~ VH_Any"
md = sm.MixedLM.from_formula(formula, data=data, re_formula="1", vc_formula=vc, groups=data.groups)
mdf = md.fit()
p = mdf.pvalues[1]
# coef = (mdf.conf_int().loc["PD"]).mean()
# lower = (mdf.conf_int().loc["PD"])[0]
# upper = (mdf.conf_int().loc["PD"])[1]
coef = (mdf.conf_int().iloc[1]).mean()
lower = (mdf.conf_int().iloc[1])[0]
upper = (mdf.conf_int().iloc[1])[1]
pvalues.append(p)
coefficients.append(coef)
lowerCI.append(lower)
upperCI.append(upper)
# FDR correct
FDR = sm.stats.multipletests(pvalues, is_sorted=False, alpha=0.05, method="fdr_bh", returnsorted=False)
# Merge to Dataframe and export as csv
outdata = pd.DataFrame(data=np.zeros((len(fcVis1.columns),0)))
outdata["Nucleus"] = fcVis1.columns
outdata["Coef"] = coefficients
outdata["lowerCI"] = lowerCI
outdata["upperCI"] = upperCI
outdata["pValues"] = pvalues
outdata["FDR"] = FDR[1]
outdata.to_csv("VisitDifThalamus_VHvsNonVH_Categories.csv")
###Output
_____no_output_____
###Markdown
3. Compare Whole thalamus
###Code
### Repeat with whole thalamus
data = thalamusDif
data = data[data.PD_VHAny != 1]
data["groups"] = 0
data["Whole_thalamus"] = data.LeftWhole_thalamus + data.RightWhole_thalamus
md = sm.MixedLM.from_formula("Whole_thalamus ~ VH_Any", data=data, re_formula="1", vc_formula=vc, groups=data.groups)
mdf = md.fit()
mdf.summary()
###Output
_____no_output_____
###Markdown
Visualisations Individual Nuclei Barplots
###Code
### Barplots and swarmplot with all thalamic volumes between Controls, PD-VH, PD-nonVH
### The database you want to loop through (aka Baseline Visit, Visit 2 etc)
data = thalamusVis1
### Set the style parameters for the graph
colors = ["grey", "salmon", "brown"] # colour palette
sns.set_palette(sns.color_palette(colors))
sns.set_style("white")
sns.set_context("poster", rc={"font.size":18,"axes.titlesize":18,"axes.labelsize":18})
#### Loop through all the nuclei
#for nucleus in thalVis1.drop(columns=["Participant", "LeftWhole_thalamus", "RightWhole_thalamus"]).columns: ## If looping through individual nuclei
for nucleus in SumColumns: ### If looping through categories
fig, ax = plt.subplots(figsize=(10,10)) # set size
sns.boxplot(x="PD_VHAny", y=nucleus, data=data) # boxplot
sns.swarmplot(x="PD_VHAny", y=nucleus, data=data, color="black", size=6) # superimposed swarmplot
# filename to save output - this needs to be unique
outname = str(nucleus + "_Visit1_Barplot.png")
# Make labels pretty
ax.set_xticklabels(["Controls", "PD non VH", "PD VH"], fontsize=20)
plt.xlabel(" ")
plt.ylabel(nucleus, fontsize=20)
# Save file, set dpi to 300
plt.savefig(outname, dpi=300)
### Create longitudinal plots for each nucleus
# Set style for the graphs
colors= ["salmon", "brown"]
palette=sns.color_palette(colors)
sns.set_style("white")
nuc = ["RightMDm", "LeftPc"]
## Loop through all nuclei
for nucleus in nuc:
#for nucleus in thalVis1.drop(columns=["Participant", "LeftWhole_thalamus", "RightWhole_thalamus"]).columns: ## If looping through individual nuclei
#for nucleus in SumColumns: ### If looping through categories
# Clear the data
#### Need to select the specific nucleus I want from visit 1 and visit 2 and then melt the database
data = thalVis1[["Participant",nucleus]]
data[str(nucleus + "_S2")] = thalVis2[nucleus]
data["PD_VHAny"] = df.PD_VHAny ### This is the group of interest
data = data[data.PD_VHAny !=0]
dataMelt = data.melt(id_vars="PD_VHAny", value_vars=[nucleus, (str(nucleus + "_S2"))])
#### Initiate graph pointplot, errorbars:95CI
fig, ax = plt.subplots(figsize=(10,10))
fig = sns.pointplot(y="value", x="variable",hue="PD_VHAny", data=dataMelt, palette=palette, dodge=True, scale=0.5, errwidth=2, legend=True)
#### Make the labels pretty
ax.set_xticklabels(["Baseline", "Session 2"], fontsize=20)
plt.yticks(fontsize=18)
plt.xlabel(" ")
plt.ylabel(nucleus, fontsize=20)
### Add legend with the correct colours
l = plt.legend(title="", loc="upper right", labels= ["PD non VH", "PD VH"], fontsize=16)
l.legendHandles[0].set_color(colors[0])
l.legendHandles[1].set_color(colors[1])
#l.legendHandles[2].set_color(colors[2])
### Filename to save output
outname = str(nucleus + "_Longitudinal.png")
### Save
plt.savefig(outname, dpi=300)
## Create longitudinal plots for each thalamic tract
# Set style for the graphs
colors= ["grey", "salmon", "brown"]
palette=sns.color_palette(colors)
sns.set_style("white")
## Loop through all nuclei
for nucleus in fcVis1.columns: ## If looping through individual nuclei
#for nucleus in SumColumns: ### If looping through categories
# Clear the data
#### Need to select the specific nucleus I want from visit 1 and visit 2 and then melt the database
data = fcVis1[[nucleus]]
data[str(nucleus + "_S2")] = fcVis2[nucleus]
data["PD_VHAny"] = df.PD_VHAny ### This is the group of interest
dataMelt = data.melt(id_vars="PD_VHAny", value_vars=[nucleus, (str(nucleus + "_S2"))])
#### Initiate graph pointplot, errorbars:95CI
fig, ax = plt.subplots(figsize=(10,10))
fig = sns.pointplot(y="value", x="variable",hue="PD_VHAny", data=dataMelt, palette=palette, dodge=True, scale=0.5, errwidth=2, legend=True)
#### Make the labels pretty
ax.set_xticklabels(["Baseline", "Session 2"], fontsize=20)
plt.yticks(fontsize=18)
plt.xlabel(" ")
plt.ylabel(nucleus, fontsize=20)
### Add legend with the correct colours
l = plt.legend(title="", loc="upper right", labels= ["Controls", "PD non VH", "PD VH"], fontsize=16)
l.legendHandles[0].set_color(colors[0])
l.legendHandles[1].set_color(colors[1])
l.legendHandles[2].set_color(colors[2])
### Filename to save output
outname = str(nucleus + "_Longitudinal.png")
### Save
plt.savefig(outname, dpi=300)
###Output
_____no_output_____
###Markdown
Miami and Nuclei correlation
###Code
#### Thalamic tracts
### Set Style
sns.set_style("white")
sns.set_context("poster", rc={"font.size":28,"axes.titlesize":18,"axes.labelsize":18})
fig, ax = plt.subplots(figsize=(20,20))
colors = ["salmon", "firebrick"]
palette = sns.color_palette(colors)
cols = ["MiamiAny", "PD_VHAny", "Participant"]
exclude = ["RightVM", "LeftVM", "RightPt", "LeftPt", "RightPc", "LeftPc"]
for col in fcVis1.columns:
if col not in exclude:
cols.append(col)
data = tractDif[tractDif.PD == 1]
data = pd.melt(data[cols], id_vars=["PD_VHAny", "Participant", "MiamiAny"])
sns.scatterplot (y="value", x="MiamiAny", data=data, hue="PD_VHAny", ax=ax, legend=False, palette=palette)
sns.regplot (y="value", x="MiamiAny", data=data, scatter=False, ax=ax, color="black", line_kws={"lw":2})
# Make labels pretty
plt.xlabel("UM-PDHQ score")
plt.ylabel("Thalamic tracts Mean FC change", fontsize=20)
stats.spearmanr(data.value, data.MiamiAny)
plt.savefig("FCtracts_Miami.png")
#### Significant Nuclei
#### Thalamic tracts
### Set Style
sns.set_style("white")
sns.set_context("poster", rc={"font.size":28,"axes.titlesize":18,"axes.labelsize":18})
fig, ax = plt.subplots(figsize=(20,20))
colors = ["salmon", "firebrick"]
palette = sns.color_palette(colors)
cols = ["MiamiAny", "PD_VHAny", "RightMDm", "LeftPc"]
data = thalamusDif[thalamusDif.PD == 1]
data = pd.melt(data[cols], id_vars=["PD_VHAny", "MiamiAny"])
data = data[data.variable == "LeftPc"]
sns.scatterplot (y="value", x="MiamiAny", data=data, hue="PD_VHAny", ax=ax, legend=False, palette=palette)
sns.regplot (y="value", x="MiamiAny", data=data, scatter=False, ax=ax, color="black", line_kws={"lw":2})
# Make labels pretty
plt.xlabel("UM-PDHQ score")
plt.ylabel("LeftPc", fontsize=20)
stats.spearmanr(data.value, data.MiamiAny)
#plt.savefig("LeftPc.png")
###Output
_____no_output_____
###Markdown
Visualise as percentage change
###Code
###### Clean Data
data = fcVis2.copy()
data["PD_VHAny"] = df.PD_VHAny
## Separate Low and High vis datasets
dataHigh = data[data.PD_VHAny == 1]
dataLow = data[data.PD_VHAny == 2]
## One column for each nucleus
nuclei = []
for col in fcVis1.columns:
nuclei.append(col)
## Declare empty lists to hold results
resHigh_mean = np.zeros(len(nuclei))
resHigh_lower = np.zeros(len(nuclei))
resHigh_upper = np.zeros(len(nuclei))
resLow_mean = np.zeros(len(nuclei))
resLow_lower = np.zeros(len(nuclei))
resLow_upper = np.zeros(len(nuclei))
for i in range(len(nuclei)):
## High Vis
meanHigh = dataHigh[nuclei[i]].mean()
stdHigh = dataHigh[nuclei[i]].std()
resHigh_mean[i] = meanHigh
resHigh_lower[i] = meanHigh - (2*stdHigh)
resHigh_upper[i] = meanHigh + (2*stdHigh)
## Low Vis
meanLow = dataLow[nuclei[i]].mean()
stdLow = dataLow[nuclei[i]].std()
resLow_mean[i] = meanLow
resLow_lower[i] = meanLow - (2*stdLow)
resLow_upper[i] = meanLow + (2*stdLow)
resultsMean = pd.DataFrame(data = nuclei)
resultsUpper = pd.DataFrame(data = nuclei)
resultsLower = pd.DataFrame(data = nuclei)
resultsMean["LowVis"] = resLow_mean
resultsUpper["LowVis"] = resLow_upper
resultsLower["LowVis"] = resLow_lower
resultsMean["HighVis"] = resHigh_mean
resultsUpper["HighVis"] = resHigh_upper
resultsLower["HighVis"] = resHigh_lower
resultsMean.to_csv("MeanFC.csv")
resultsUpper.to_csv("UpperFC.csv")
resultsLower.to_csv("LowerFC.csv")
sns.set_style("ticks")
sns.set_context("poster", rc={"font.size":12,"axes.titlesize":12,"axes.labelsize":12})
results = pd.read_csv(r"C:\Users\Angelika\Dropbox\PhD\EXPERIMENTS\02_StructuralMRI\02_ThalamicSegmentation\Visit2_FC_Perc.csv")
### Select the Left or Right only
results = results[results["0"].str.contains("Right")]
colorsSig = ["lightgray", "brown", "royalblue"]
palette = sns.color_palette(colorsSig)
fig = sns.catplot(x="VH", y="0", data=results, hue="Significant", kind="box", orient="h", legend=False, fliersize=0, whis=0, linewidth=1, height=20, aspect=0.6, sharey=True, palette=palette)
fig.set_axis_labels("", "")
plt.xlim(-1,2)
#plt.xlim(-0.25,0)
fig.savefig("Visit2_TractFC_Right", dpi=300)
###Output
_____no_output_____ |
Pipeline-Videos.ipynb | ###Markdown
Pipeline: Lane finding on videosWe show an end-to-end processing pipeline from raw, uncalibrated videos to detected lanes, including lane curvature. Read a video file, dummy-process it, and save it as outputWe start with a pipeline which does nothing but read the video, dummy-process single images, and save the new output.
###Code
def process_image_dummy(image):
return image
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import os
#input_name = 'challenge_video.mp4'
input_name = 'project_video.mp4'
output_dir = 'output_videos'
# .subclip for quick test, args are start and stop in seconds
#input_clip = VideoFileClip(input_name).subclip(0,5)
input_clip = VideoFileClip(input_name)
output_clip = input_clip.fl_image(process_image_dummy)
%time output_clip.write_videofile(os.path.join(output_dir,input_name), audio=False)
###Output
t: 0%| | 0/1260 [00:00<?, ?it/s, now=None]
###Markdown
Distortion correctionRead in raw uncalibrated images and correct for camera lens distortions
###Code
import cv2
import pickle, pprint
# Camera matrix and distortion coefficients obtained in 'Camera Calibration' notebook
calib_file = open('camera_calib.pickle','rb')
calib_dict = pickle.load(calib_file)
print('Read camera calibration objects:')
pprint.pprint(calib_dict)
calib_file.close()
#cameraMatrix = [[1.15777942e+03, 0.00000000e+00, 6.67111049e+02],
# [0.00000000e+00, 1.15282305e+03, 3.86129069e+02],
# [0.00000000e+00, 0.00000000e+00, 1.00000000e+00]]
#distCoeffs = [[-0.24688833, -0.02372814, -0.00109843, 0.00035105, -0.00259138]]
def cal_undistort(img, mtx=calib_dict['cameraMatrix'], dist=calib_dict['distCoeffs']):
undist = cv2.undistort(img, mtx, dist)
return undist
# Undistort video
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import os
#input_name = 'challenge_video'
input_name = 'project_video'
output_dir = 'output_videos'
# .subclip for quick test, args are start and stop in seconds
#input_clip = VideoFileClip(input_name+'.mp4').subclip(0,5)
input_clip = VideoFileClip(input_name+'.mp4')
output_clip = input_clip.fl_image(cal_undistort)
%time output_clip.write_videofile(os.path.join(output_dir,input_name+'_undist'+'.mp4'), audio=False)
%%bash
ls output_videos
ls -lh output_videos/challenge_video.mp4
%%HTML
<video width="320" height="240" controls>
<!-- <source src="output_videos/challenge_video.mp4" type="video/mp4"> -->
<source src="output_videos/project_video.mp4" type="video/mp4">
</video>
<video width="320" height="240" controls>
<!-- <source src="output_videos/challenge_video_undist.mp4" type="video/mp4"> -->
<source src="output_videos/project_video_undist.mp4" type="video/mp4">
</video>
###Output
_____no_output_____
###Markdown
Lane PixelsWe use a combination of various gradients, color transforms, et cetera to identify pixels which belong to lane markings
###Code
import numpy as np
import cv2
# Function to select pixels belonging to lanes
def lane_pixels(img, s_thresh=(170, 255), sx_thresh=(20, 100)):
    # Convert to HLS color space and separate the L and S channels
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
l_channel = hls[:,:,1]
s_channel = hls[:,:,2]
# Sobel x
sobelx = cv2.Sobel(l_channel, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= sx_thresh[0]) & (scaled_sobel <= sx_thresh[1])] = 1
# Threshold color channel
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
# Stack each channel
color_binary = np.dstack(( np.zeros_like(sxbinary), sxbinary, s_binary)) * 255
# Combine the two binary thresholds
combined_binary = np.zeros_like(sxbinary)
combined_binary[(s_binary == 1) | (sxbinary == 1)] = 1
return color_binary,combined_binary
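# Minimal single-frame sketch of the function above (the image path is borrowed
# from the warp-visualisation cell further below and is an assumption here).
# 'sep' stacks the gradient (green) and colour (blue) selections for inspection,
# while 'combined' is the single binary mask used for lane fitting:
# test_img = cv2.cvtColor(cv2.imread('output_images/straight_lines1_cal.jpg'), cv2.COLOR_BGR2RGB)
# sep_test, combined_test = lane_pixels(test_img)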
# Apply on video
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import os
def process_image(img):
# Undistort
undistorted = cal_undistort(img, calib_dict['cameraMatrix'], calib_dict['distCoeffs'])
# Lane pixels
sep, combined = lane_pixels(undistorted,s_thresh=(180, 255), sx_thresh=(30, 120))
return sep
#input_name = 'challenge_video'
input_name = 'project_video'
output_dir = 'output_videos'
# .subclip for quick test, args are start and stop in seconds
#input_clip = VideoFileClip(input_name+'.mp4').subclip(0,5)
input_clip = VideoFileClip(input_name+'.mp4')
output_clip = input_clip.fl_image(process_image)
%time output_clip.write_videofile(os.path.join(output_dir,input_name+'_lanepixels'+'.mp4'), audio=False)
%%bash
ls -lh output_videos
%%HTML
<video width="320" height="240" controls>
<!-- <source src="output_videos/challenge_video.mp4" type="video/mp4"> -->
<source src="output_videos/project_video.mp4" type="video/mp4">
</video>
<video width="320" height="240" controls>
<!-- <source src="output_videos/challenge_video_lanepixels.mp4" type="video/mp4"> -->
<source src="output_videos/project_video_lanepixels.mp4" type="video/mp4">
</video>
###Output
_____no_output_____
###Markdown
Warp image: bird's eye viewWarp the image such that it is represented in a bird's eye view
###Code
import matplotlib.pyplot as plt
import matplotlib.patches as patches
# Looking at one of the undistorted images, we find this trapezoid
# 249 690
# 1058 690
# 606 444
# 674 444
#
# Visualize our trapezoid
#
# Read image
img = cv2.imread('output_images/straight_lines1_cal.jpg')
#img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
trapezoid = np.float32([(249,690),(1058,690),(674,444),(606,444)])
# Show
fig, ax = plt.subplots(nrows=1, ncols=1,sharex=True,sharey=True,figsize=(15,30))
ax.imshow(img)
ax.add_patch(patches.Polygon(xy=trapezoid, fill=False,linewidth=2,color='red'))
def get_warp_matrix():
'''
Our warp matrix
'''
# Image properties
height = 720
width = 1280
    # Our trapezoid in the source image
src = np.float32([[249,690],[1058,690],[674,444],[606,444]])
# Define destination for the source trapezoid
dst = np.copy(src)
# .. only lower half of image
dst[:,1] = (dst[:,1] - height/2)*2
# .. bird's eye
dst[3,0] = dst[0,0]
dst[2,0] = dst[1,0]
# # Squeeze all points by 50px or so in lateral direction
# squeeze = 200
# #print('Before squeeze: ',dst)
# # left points
# dst[np.array([0,3]), np.array([0,0])] = dst[np.array([0,3]), np.array([0,0])] + squeeze
# # right points
# dst[np.array([1,2]), np.array([0,0])] = dst[np.array([1,2]), np.array([0,0])] - squeeze
# #print('After squeeze: ',dst)
    # Squeeze all points laterally (by `squeeze` = 200 px here),
    # but squeeze while respecting the camera center :)
squeeze = 200
#print('Before squeeze: ',dst)
# Get the squeeze wrt the center
img_center = width//2
left_dist = np.abs(img_center - dst[0,0])
right_dist = np.abs(img_center- dst[1,0])
mean_dist = (left_dist + right_dist)/2.0
left_squeeze = int(squeeze *left_dist/mean_dist)
right_squeeze = int(squeeze *right_dist/mean_dist)
# left points
dst[np.array([0,3]), np.array([0,0])] = dst[np.array([0,3]), np.array([0,0])] + left_squeeze
# right points
dst[np.array([1,2]), np.array([0,0])] = dst[np.array([1,2]), np.array([0,0])] - right_squeeze
#print('After squeeze: ',dst)
    # Move the bottom dest points down.
    # Since we do not fit a linear term, the longitudinal zero position has to
    # be right: no downward shift is too little, shifting to height-5 is too
    # much, so height-20 below is used as a compromise.
#print('Before shift:',dst[np.array([0,1]), np.array([1,1])])
dst[np.array([0,1]), np.array([1,1])] = height - 20
#print('After shift:',dst[np.array([0,1]), np.array([1,1])])
# Get the transformation matrix
M = cv2.getPerspectiveTransform(src, dst)
return M
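# To project the detected lane area back onto the original camera image later
# on, the inverse transform (bird's eye -> camera view) is needed as well.
# Minimal sketch (not used elsewhere in this notebook): the inverse of the 3x3
# perspective matrix plays that role, equivalent to calling
# cv2.getPerspectiveTransform with src and dst swapped.
def get_unwarp_matrix():
    '''
    Inverse of the bird's eye transform
    '''
    return np.linalg.inv(get_warp_matrix())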
def birds_eye(img):
'''
Warps an image to birds eye view
'''
# Image properties
height = img.shape[0]
width = img.shape[1]
# Get warp matrix
M = get_warp_matrix()
# Do warp
warped = cv2.warpPerspective(img, M, (width,height))
return warped
# Apply on video
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import os
# The camera image in bird's eye view
def process_image_birds_pic(img):
# Undistort
undistorted = cal_undistort(img, calib_dict['cameraMatrix'], calib_dict['distCoeffs'])
# Lane pixels
sep, combined = lane_pixels(undistorted,s_thresh=(180, 255), sx_thresh=(30, 120))
# warp
warp = birds_eye(undistorted)
#warp_lanes = birds_eye(combined)
return warp
# The lane pixels in bird's eye view
def process_image_birds_lane(img):
# Undistort
undistorted = cal_undistort(img, calib_dict['cameraMatrix'], calib_dict['distCoeffs'])
# Lane pixels
sep, combined = lane_pixels(undistorted,s_thresh=(180, 255), sx_thresh=(30, 120))
# warp
#warp = birds_eye(undistorted)
warp_lanes = birds_eye(sep)
return warp_lanes
#input_name = 'challenge_video'
input_name = 'project_video'
output_dir = 'output_videos'
# .subclip for quick test, args are start and stop in seconds
#input_clip = VideoFileClip(input_name+'.mp4').subclip(0,5)
input_clip = VideoFileClip(input_name+'.mp4')
output_clip_pic = input_clip.fl_image(process_image_birds_pic)
%time output_clip_pic.write_videofile(os.path.join(output_dir,input_name+'_birds_pic'+'.mp4'), audio=False)
output_clip_lane = input_clip.fl_image(process_image_birds_lane)
%time output_clip_lane.write_videofile(os.path.join(output_dir,input_name+'_birds_lane'+'.mp4'), audio=False)
%%HTML
<video width="320" height="240" controls>
<!-- <source src="output_videos/challenge_video_birds_pic.mp4" type="video/mp4"> -->
<source src="output_videos/project_video_birds_pic.mp4" type="video/mp4">
</video>
<video width="320" height="240" controls>
<!-- <source src="output_videos/challenge_video_birds_lane.mp4" type="video/mp4"> -->
<source src="output_videos/project_video_birds_lane.mp4" type="video/mp4">
</video>
###Output
_____no_output_____
###Markdown
Fit lanes - no priorWe fit a polynomial to selected lane pixels
###Code
def get_pixels_no_prior(img,x_window=100,n_y_windows=5):
'''
Find the lane markings without prior
'''
# For videos, we pass separate lane pixels, i.e. rgb.
# Need to combine them here
# Turn RGB into binary grayscale
# print('img.shape: ',img.shape) #--> (720, 1280, 3)
img_bin = np.zeros(shape=img.shape[0:2])
# print('img_bin.shape: ',img_bin.shape) #--> (720, 1280)
img_bin[(img[:,:,0]==255) | (img[:,:,1]==255) | (img[:,:,2]==255)] = 1
img=img_bin
    # Build a weighted column histogram of lane pixels in the lower half of the
    # image (rows closer to the vehicle get a larger weight)
img_height = img.shape[0]
y_half = img_height//2
weights = np.linspace(0.,1.,y_half)
histogram = np.dot(img[y_half:,:].T,weights)
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
    midpoint = int(histogram.shape[0]//2)
# With narrow projection
#leftx_base = np.argmax(histogram[:midpoint])
#rightx_base = np.argmax(histogram[midpoint:]) + midpoint
# With wide projection, i.e. we see left and right lanes too
leftx_base = np.argmax(histogram[midpoint//2:midpoint]) + midpoint//2
rightx_base = np.argmax(histogram[midpoint:midpoint*3//2]) + midpoint
# Create an output image to draw on and visualize the result
out_img = np.dstack((img, img, img))
out_img = out_img*255
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin
margin = x_window
# Set minimum number of pixels found to recenter window
minpix = 50
# Set height of windows - based on nwindows above and image shape
    window_height = int(img.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = img.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base
rightx_current = rightx_base
# Momentum of the update
leftx_momentum = 0
rightx_momentum = 0
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = []
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = img.shape[0] - (window+1)*window_height
win_y_high = img.shape[0] - window*window_height
# Don't simply use the previous position but use
# the local curvature as well
leftx_current = leftx_current + np.mean((leftx_momentum,rightx_momentum),dtype=int)
rightx_current = rightx_current + np.mean((leftx_momentum,rightx_momentum),dtype=int)
### The four boundaries of the window ###
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
### Identify the nonzero pixels in x and y within the window ###
good_left_inds = ((nonzeroy>win_y_low) & (nonzeroy<=win_y_high) &
(nonzerox>win_xleft_low) & (nonzerox<=win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy>win_y_low) & (nonzeroy<=win_y_high) &
(nonzerox>win_xright_low) & (nonzerox<=win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
### If we found > minpix pixels, recenter next window ###
### (`right` or `leftx_current`) on their mean position ###
if(len(good_left_inds) > minpix):
old = leftx_current
leftx_current = np.mean(nonzerox[good_left_inds],dtype=int)
leftx_momentum = leftx_momentum + leftx_current - old
#print('leftx_current: ',leftx_current)
else:
leftx_momentum = int(leftx_momentum*1.2) # empiric, should do some math here
if(len(good_right_inds) > minpix):
old = rightx_current
rightx_current = np.mean(nonzerox[good_right_inds],dtype=int)
rightx_momentum = rightx_momentum + rightx_current - old
#print('rightx_current: ',rightx_current)
else:
rightx_momentum = int(rightx_momentum*1.2) # empiric, should do some math here
# Concatenate the arrays of indices (previously was a list of lists of pixels)
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty, out_img
from scipy.optimize import curve_fit
def fit_fct(y,a1,a2,c):
'''
Takes as input an array of values and fits
the first part with
x = a1 + c*y**2
and the second
x = a1+a2 + c*y**2
    The two parts are distinguished by a bool array.
    This allows fitting two lines at once.
'''
y,b = y
b = b.astype(dtype=bool)
x = a1 + c*(y**2)
x[b] = x[b] + a2
return x
def fit_polynomial(leftx, lefty, rightx, righty, img):
'''
    simultaneously fits two polynomials with shared parameters:
leftx = a1 + c*lefty**2
rightx = a1+a2 + c*lefty**2
and returns
a1, a2, c
'''
#
# Our assumption is: the road only has one curvature
#
# curve fit takes x as (k,M)-shaped array,
# we combine left and right x and y
# we add to y whether it's left or right
x = np.concatenate((leftx,rightx))
y = np.concatenate((lefty,righty))
y = img.shape[0] - y
b = np.zeros_like(x,dtype=bool)
b[len(leftx):] = True
yb = np.vstack((y,b))
# Do the fit
p = np.array((300,800,0))
# Standard fit without error
#p = curve_fit(fit_fct,y,x,p)
# Fit with error increasing longitudinally
p = curve_fit(fit_fct,yb,x,p,sigma=y+1)
#print(p)
return p
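# A minimal synthetic sanity check of the shared-parameter fit above (my own illustration,
# not part of the original pipeline): two "lanes" offset by a2 and sharing the same
# curvature c should be recovered by fit_fct / curve_fit.
def check_shared_fit():
    y_syn = np.linspace(0., 700., 50)
    y_both = np.concatenate((y_syn, y_syn))
    b_syn = np.zeros(100)
    b_syn[50:] = 1.
    x_both = 300. + 1e-4 * y_both**2
    x_both[b_syn.astype(bool)] += 800.  # right lane offset a2
    p_check, _ = curve_fit(fit_fct, np.vstack((y_both, b_syn)), x_both, p0=(0., 0., 0.))
    return p_check  # expected to be roughly (300., 800., 1e-4)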
# Apply on video
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import os
# The lane pixels in bird's eye view
def process_image(img):
# Undistort
undistorted = cal_undistort(img, calib_dict['cameraMatrix'], calib_dict['distCoeffs'])
# Lane pixels
sep, combined = lane_pixels(undistorted,s_thresh=(180, 255), sx_thresh=(30, 120))
# warp
#warp = birds_eye(undistorted)
warp_lanes = birds_eye(sep)
#
# TODO use fit if it exists
#
# Get pixels without fit
leftx, lefty, rightx, righty, img_windows = get_pixels_no_prior(warp_lanes,x_window=100)
# fit
params,params_e = fit_polynomial(leftx, lefty, rightx, righty, warp_lanes)
# visualize lane pixels
img_windows[lefty, leftx] = [255, 0, 0]
img_windows[righty, rightx] = [0, 0, 255]
# Generate x and y values for plotting
ploty = np.linspace(0, warp_lanes.shape[0]-1, warp_lanes.shape[0] )
left_fitx = params[0] + params[2]*(warp_lanes.shape[0]-ploty)**2
right_fitx = params[0]+params[1] + params[2]*(warp_lanes.shape[0]-ploty)**2
# Plots the left and right polynomials on the lane lines
left_points = np.stack((left_fitx, ploty), axis=1)
right_points = np.stack((right_fitx, ploty), axis=1)
cv2.polylines(img_windows, np.int32([left_points]), isClosed=False, color=(255, 255, 0)) # color = yellow
cv2.polylines(img_windows, np.int32([right_points]), isClosed=False, color=(255, 255, 0)) # color = yellow
return img_windows
#input_name = 'challenge_video'
input_name = 'project_video'
output_dir = 'output_videos'
# .subclip for quick test, args are start and stop in seconds
#input_clip = VideoFileClip(input_name+'.mp4').subclip(0,5)
input_clip = VideoFileClip(input_name+'.mp4')
output_clip = input_clip.fl_image(process_image)
%time output_clip.write_videofile(os.path.join(output_dir,input_name+'_fitlanes_noprior'+'.mp4'), audio=False)
%%HTML
<video width="320" height="240" controls>
<!-- <source src="output_videos/challenge_video_birds_pic.mp4" type="video/mp4"> -->
<source src="output_videos/project_video_birds_pic.mp4" type="video/mp4">
</video>
<video width="320" height="240" controls>
<!-- <source src="output_videos/challenge_video_fitlanes_noprior.mp4" type="video/mp4"> -->
<source src="output_videos/project_video_fitlanes_noprior.mp4" type="video/mp4">
</video>
###Output
_____no_output_____
###Markdown
Fit lanes - with priorWe fit a polynomial to selected lane pixels using the previous fit to select pixels
###Code
# Get pixels with a fit
# leftx, lefty, rightx, righty, img_windows = get_pixels_no_prior(warp_lanes,x_window=100)
def get_pixels_prior(img, params, x_window=50):
#
# Select all pixels within x_window px around the existing fit
#
# For videos, we pass separate lane pixels, i.e. rgb.
# Need to combine them here
# Turn RGB into binary grayscale
# print('img.shape: ',img.shape) #--> (720, 1280, 3)
img_bin = np.zeros(shape=img.shape[0:2])
# print('img_bin.shape: ',img_bin.shape) #--> (720, 1280)
img_bin[(img[:,:,0]==255) | (img[:,:,1]==255) | (img[:,:,2]==255)] = 1
img=img_bin
# Create an output image to draw on and visualize the result
out_img = np.dstack((img, img, img))
out_img = out_img*255
# Get all non-zero pixels
nonzero = img.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
# Coordinate transformation: our y is zero at the bottom
y = img.shape[0] - nonzeroy
# For all y, derive the corresponding x of the lane fits
left_fitx = params[0] + params[2]*y**2
right_fitx = params[0]+params[1] + params[2]*y**2
# Identify the nonzero pixels in x and y within the window ###
left_lane_inds = ((nonzerox>left_fitx-x_window)
& (nonzerox<=left_fitx+x_window)).nonzero()[0]
right_lane_inds = ((nonzerox>right_fitx-x_window)
& (nonzerox<=right_fitx+x_window)).nonzero()[0]
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
#
# Visualize
#
# Visualize our search window
ploty = np.linspace(0, out_img.shape[0]-1, out_img.shape[0] )
left_fitx_lo = params[0] +params[2]*(out_img.shape[0]-ploty)**2 -x_window
left_fitx_hi = params[0] +params[2]*(out_img.shape[0]-ploty)**2 +x_window
right_fitx_lo = params[0]+params[1] +params[2]*(out_img.shape[0]-ploty)**2 -x_window
right_fitx_hi = params[0]+params[1] +params[2]*(out_img.shape[0]-ploty)**2 +x_window
# Plots the left and right polynomials on the lane lines
left_points_lo = np.stack((left_fitx_lo, ploty), axis=1)
left_points_hi = np.stack((left_fitx_hi, ploty), axis=1)
right_points_lo = np.stack((right_fitx_lo, ploty), axis=1)
right_points_hi = np.stack((right_fitx_hi, ploty), axis=1)
cv2.polylines(out_img, np.int32([left_points_lo]), isClosed=False, color=(0, 255, 0)) # color = green
cv2.polylines(out_img, np.int32([left_points_hi]), isClosed=False, color=(0, 255, 0)) # color = green
cv2.polylines(out_img, np.int32([right_points_lo]), isClosed=False, color=(0, 255, 0)) # color = green
cv2.polylines(out_img, np.int32([right_points_hi]), isClosed=False, color=(0, 255, 0)) # color = green
return leftx, lefty, rightx, righty, out_img
# Apply on video
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import os
# Whether we have a successful fit
has_fit = False
# Parameters of the fit
params = [999.,999.,999.]
# The lane pixels in bird's eye view
def process_image(img):
# reference global var
global has_fit
global params
# Undistort
undistorted = cal_undistort(img, calib_dict['cameraMatrix'], calib_dict['distCoeffs'])
# Lane pixels
sep, combined = lane_pixels(undistorted,s_thresh=(180, 255), sx_thresh=(30, 120))
# warp
#warp = birds_eye(undistorted)
warp_lanes = birds_eye(sep)
#
# Select pixels to fit
#
if(has_fit==True):
# Get pixels with fit
leftx, lefty, rightx, righty, img_windows = get_pixels_prior(warp_lanes, params, x_window=50)
        # Reset fit if left or right is bad.. ..say fewer than 10 pixels for a bad frame
        if(leftx.shape[0]<10 or rightx.shape[0]<10):
            print('Reset: leftx.shape:',leftx.shape,', rightx.shape:',rightx.shape)
has_fit=False
if(has_fit==False):
# Get pixels without fit
leftx, lefty, rightx, righty, img_windows = get_pixels_no_prior(warp_lanes,x_window=100)
has_fit = True
# fit
params,params_e = fit_polynomial(leftx, lefty, rightx, righty, warp_lanes)
# visualize lane pixels
img_windows[lefty, leftx] = [255, 0, 0]
img_windows[righty, rightx] = [0, 0, 255]
# Generate x and y values for plotting
ploty = np.linspace(0, warp_lanes.shape[0]-1, warp_lanes.shape[0] )
left_fitx = params[0] + params[2]*(warp_lanes.shape[0]-ploty)**2
right_fitx = params[0]+params[1] + params[2]*(warp_lanes.shape[0]-ploty)**2
# Plots the left and right polynomials on the lane lines
left_points = np.stack((left_fitx, ploty), axis=1)
right_points = np.stack((right_fitx, ploty), axis=1)
cv2.polylines(img_windows, np.int32([left_points]), isClosed=False, color=(255, 255, 0)) # color = yellow
cv2.polylines(img_windows, np.int32([right_points]), isClosed=False, color=(255, 255, 0)) # color = yellow
return img_windows
#input_name = 'challenge_video'
input_name = 'project_video'
output_dir = 'output_videos'
# .subclip for quick test, args are start and stop in seconds
#input_clip = VideoFileClip(input_name+'.mp4').subclip(0,5)
input_clip = VideoFileClip(input_name+'.mp4')
output_clip = input_clip.fl_image(process_image)
%time output_clip.write_videofile(os.path.join(output_dir,input_name+'_fitlanes'+'.mp4'), audio=False)
%%HTML
<video width="320" height="240" id="vid1" controls>
<!-- <source src="output_videos/challenge_video_birds_pic.mp4" type="video/mp4"> -->
<source src="output_videos/project_video_birds_pic.mp4" type="video/mp4">
</video>
<video width="320" height="240" id="vid2" controls>
<!-- <source src="output_videos/challenge_video_fitlanes.mp4" type="video/mp4"> -->
<source src="output_videos/project_video_fitlanes_noprior.mp4" type="video/mp4">
</video>
<video width="320" height="240" id="vid3" controls>
<!-- <source src="output_videos/challenge_video_fitlanes.mp4" type="video/mp4"> -->
<source src="output_videos/project_video_fitlanes.mp4" type="video/mp4">
</video>
###Output
_____no_output_____
###Markdown
Determine Curve RadiusWe use https://en.wikipedia.org/wiki/Radius_of_curvature with the formula $\rho = \frac{|1+(f'(t))^2|^{3/2}}{|f''(t)|}$, where $f(t) = a_1 (+a_2) + c t^2$, $f'(t) = 2ct$, $f''(t) = 2c$ → $\rho(t) = \frac{|1+(2ct)^2|^{3/2}}{|2c|}$ → $\rho(t=0) = \frac{1}{|2c|}$. Unit conversion of c: $f(t) [xpx] = \ldots + c [???] \cdot (t [ypx])^2$, so $[xpx] = [???] \cdot [ypx]^2$ and $[???] = [xpx]/[ypx]^2$ → the unit of c is xpx/ypx^2, and c' [m/m^2] = c [xpx/ypx^2] * m_per_xpixel / m_per_ypixel^2
###Code
def get_radius(params):
'''
takes in params a1, a2, c
returns radius rho = 1/|2c|
'''
_,_,c=params
# pixels to meters:
# lane markings
# - length 10feet = 3.048m
# - gap 30feet = 9.144m
# lane width
# - 12 feet = 3.658m
#
#
# This is on the narrow bird's eye
#
# we measure
# * straight_lines1_bird *
# lane width 1057-251=806; 1061-243=818
# marker 568-530=38; 445-406=39
# gap 531-444=87; 655-568=87
#
# <-10ft-><-----30ft----->
# - length: 38 and 39 |||||||| ||||||||
# - gap: 87 and 87
# --> 30feet = (38*3 +39*3 +87+87)/4 = (114+117+87+87)/4 = 405/4 = 101
# 9.144m = 101 pixel
# Above must be wrong!! Totally inconsistent values. How about?
# This would make sense looking at the left lane markings of straight_lines1_bird.jpg
# <-10ft->
# <---------30ft--------->
# <5f><------20ft----><5f>
# |||||||| ||||||||
# --> 20feet = (38*2 +39*2 + 87 +87)/4 = (76+78+87+87)/4=328/4 =82
# 6.096m = 82 pixel
#
# * straight_lines2_bird *
# marker 10ft: 523-478=45; 397-351=46
#
# udacity: 30m/720 pixel
#
# --> Overall, sth seems off. let's go with average of the markers
# 10ft = (45+46+38+39)/4 ypixels = 168/4 ypixels = 42 ypixels = 3.048m
# --> m_per_ypixel = 3.048/42 m/ypixel
m_per_ypixel = 3.048/42
# - width
# 12ft = (806+818)/2 xpixel = 812 xpixel = 3.658m
m_per_xpixel = 3.658/812
#This is on the wide bird's eye
# with a sqeeze of 200px left and right
# * straight lines 1 bird
# marker length 568-531 = 37
# lane width 858-450 = 408
m_per_ypixel = 3.048/37
m_per_xpixel = 3.658/408
# This is on birds eye
# with a sqeeze of weighted 200px left and right
# and a shift down to height -5
# * straight lines 1 bird
# marker length 613-572 = 41
# lane width 882-443 = 449
m_per_ypixel = 3.048/41
m_per_xpixel = 3.658/449
# with a sqeeze of weighted 200px left and right
# and a shift down to height -20
# * straight lines 1 bird
# marker length 600-559 = 41
# lane width 852-442 = 410
m_per_ypixel = 3.048/41
m_per_xpixel = 3.658/410
cm = c * m_per_xpixel / m_per_ypixel**2
# This evaluates at t = 0
#return 1/(2*abs(cm))
# This evaluates in the center
# no difference, gives factor 1.005
#print('factor is ',(1+(2*cm*360*m_per_ypixel)**2)**(3/2))
return (1+(2*cm*360*m_per_ypixel)**2)**(3/2)/(2*abs(cm))
def get_position(params,img):
'''
retrieves the position of the car in the lane
with respect to its center in meters
| |
| |
| . |
-1 0 1
'''
a1,a2,_=params
# The midpoint of the image is the car's position
car_pos_px = img.shape[1]/2
# The center of the lane is mean of the two lane
# demarcations
lane_center_px = a1+a2/2
# Position of the car in the lane in pixels
pos_px = car_pos_px - lane_center_px
# Convert x pixels in meters
#rel_pos_m = pos_px * 3.658/408
#rel_pos_m = pos_px *3.658/449
rel_pos_m = pos_px *3.658/410
return rel_pos_m, car_pos_px, lane_center_px
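# Hypothetical worked example (my own addition, not from the original notebook):
# with c = 1e-4 xpx/ypx^2, m_per_xpixel = 3.658/410 and m_per_ypixel = 3.048/41,
# the converted curvature is cm = 1e-4 * (3.658/410) / (3.048/41)**2 ≈ 1.6e-4 1/m,
# so rho ≈ 1/(2*|cm|) ≈ 3.1e3 m.
example_radius = get_radius((300., 800., 1e-4))  # expected to be on the order of 3 km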
###Output
_____no_output_____
###Markdown
Draw on original imageThe `cv2.warpPerspective` works easiest with images. We 1) draw our lane in warped space as an image, 2) unwarp the lane image, 3) stack the calibrated original image and the unwarped lane inpainting
###Code
def paint_lane_warped(img,params):
# Work with an image copy
img_painted = np.zeros_like(img)
# Draw red lane demarcations
#ploty = np.linspace(0, warp_lanes.shape[0]-1, warp_lanes.shape[0])
ploty = np.linspace(0, img.shape[0]-1, 10)
left_fitx = params[0] + params[2]*(img.shape[0]-ploty)**2
right_fitx = params[0]+params[1] + params[2]*(img.shape[0]-ploty)**2
left_line = np.int64(np.stack((left_fitx, ploty),axis=1)) # (720,2)
right_line = np.int64(np.stack((right_fitx, ploty),axis=1)) # (720,2)
#cv2.polylines(img_painted,[left_line,right_line],isClosed=False,color=(255,0,0,0),thickness=3)
# Draw a green lane
lane=np.concatenate((left_line, right_line[::-1]))
cv2.fillPoly(img_painted, [lane] ,color=(0, 255, 0))
return img_painted
def paint_lane_unwarped(img,params):
'''
Paints the lane in unwarped
'''
height = img.shape[0]
width = img.shape[1]
# Get the green lane in birds eye
lane_birds = paint_lane_warped(img,params)
# Get warp matrix
M = get_warp_matrix()
# unwarp
lane_image = cv2.warpPerspective(lane_birds, M, (width,height),flags=cv2.WARP_INVERSE_MAP+cv2.INTER_LINEAR+cv2.WARP_FILL_OUTLIERS)
# stack
stacked = cv2.addWeighted(img, 0.8, lane_image, 0.5, 0.)
# Add text
cv2.putText(stacked, 'Curve radius: {0:05.0f}m, lane position: {1:.1f}m'.format(get_radius(params),get_position(params,img)[0]),
org=(100,100), fontFace=cv2.FONT_HERSHEY_PLAIN, fontScale=3, color=(255,0,0),thickness=5)
return stacked
# Apply on video
from moviepy.editor import VideoFileClip
from IPython.display import HTML
import os
# Whether we have a successful fit
has_fit = False
# Parameters of the fit
params = [999.,999.,999.]
# The lane pixels in bird's eye view
def process_image(img):
# reference global var
global has_fit
global params
# Undistort
undistorted = cal_undistort(img, calib_dict['cameraMatrix'], calib_dict['distCoeffs'])
# Lane pixels
sep, combined = lane_pixels(undistorted,s_thresh=(180, 255), sx_thresh=(30, 120))
# warp
#warp = birds_eye(undistorted)
warp_lanes = birds_eye(sep)
#
# Select pixels to fit
#
if(has_fit==True):
# Get pixels with fit
leftx, lefty, rightx, righty, img_windows = get_pixels_prior(warp_lanes, params, x_window=50)
        # Reset fit if left or right is bad.. ..say fewer than 10 pixels for a bad frame
        if(leftx.shape[0]<10 or rightx.shape[0]<10):
            print('Reset: leftx.shape:',leftx.shape,', rightx.shape:',rightx.shape)
has_fit=False
if(has_fit==False):
# Get pixels without fit
leftx, lefty, rightx, righty, img_windows = get_pixels_no_prior(warp_lanes,x_window=100)
has_fit = True
# fit
params,params_e = fit_polynomial(leftx, lefty, rightx, righty, warp_lanes)
# radius
radius = get_radius(params)
# position
position = get_position(params,warp_lanes)
# Paint lane
paint_lane = paint_lane_unwarped(undistorted,params)
return paint_lane
#input_name = 'project_video'
#input_name = 'challenge_video'
input_name = 'harder_challenge_video'
output_dir = 'output_videos'
# .subclip for quick test, args are start and stop in seconds
#input_clip = VideoFileClip(input_name+'.mp4').subclip(0,5)
input_clip = VideoFileClip(input_name+'.mp4')
output_clip = input_clip.fl_image(process_image)
%time output_clip.write_videofile(os.path.join(output_dir,input_name+'_origlanes'+'.mp4'), audio=False)
%%HTML
<video width="640" height="480" id="vid1" controls>
<!-- <source src="output_videos/project_video_origlanes.mp4" type="video/mp4"> -->
<!-- <source src="output_videos/challenge_video_origlanes.mp4" type="video/mp4"> -->
<source src="output_videos/harder_challenge_video_origlanes.mp4" type="video/mp4">
</video>
###Output
_____no_output_____ |
deeplearning_basics/MNISTClassification.ipynb | ###Markdown
###Code
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
def mnist_grid(x, y):
plt.style.use("dark_background")
fig, axs = plt.subplots(10, 10, figsize=(20, 20))
for i in range(10):
for j in range(10):
axs[i, j].imshow(x[y == i][j], cmap="gray")
axs[i, j].axis("off")
plt.subplots_adjust(wspace=0, hspace=-0.1)
###Output
_____no_output_____
###Markdown
Download the dataset
###Code
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
assert x_train.shape == (60000, 28, 28)
assert x_test.shape == (10000, 28, 28)
assert y_train.shape == (60000,)
assert y_test.shape == (10000,)
mnist_grid(x_train, y_train)
###Output
_____no_output_____
###Markdown
Classification ProblemGiven an image $\mathbf{x} \in \mathcal{X}$, we want to retrieve the label $\mathbf{y} \in \mathcal{Y}$. We define $\mathcal{Y}$ to be the space of one-hot encoded vector of labels in the set $\{0,\dots,9\}$.We define $\mathbf{\hat{y}} = f_\varphi(\mathbf{x})$ with a CNN architecture. We use softmax activation to output the predicted label.The loss function is the cross entropy$$ \mathcal{L}_\varphi = -\sum_{i=0}^{9} \mathbf{y}_i \log \mathbf{\hat{y}}_i$$
###Code
# preprocessing
y_train_onehot = tf.cast(tf.one_hot(y_train, 10), tf.float32)
y_test_onehot = tf.cast(tf.one_hot(y_test, 10), tf.float32)
# normalize images in the range [0, 1] and add channel dimension
x_train = tf.cast(x_train[..., tf.newaxis], tf.float32) / 255
x_test = tf.cast(x_test[..., tf.newaxis], tf.float32) / 255
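# Small illustrative check (my addition, not part of the original notebook): the categorical
# cross-entropy of the one-hot target [0, 1, 0] against predictions [0.1, 0.8, 0.1]
# is -log(0.8) ≈ 0.223.
example_ce = tf.keras.losses.CategoricalCrossentropy()(
    tf.constant([[0., 1., 0.]]), tf.constant([[0.1, 0.8, 0.1]])
)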
# Define model architecture
model = tf.keras.Sequential(
[
tf.keras.layers.Conv2D(filters=16, kernel_size=3, strides=2, padding="same", activation="relu"), # in=[batch, 28, 28, 1], out=[batch, 14, 14, 16]
tf.keras.layers.Conv2D(filters=16, kernel_size=3, strides=2, padding="same", activation="relu"), # in=[batch, 14, 14, 16], out=[batch, 7, 7, 16]
tf.keras.layers.Flatten(), # in=[batch, 7, 7, 16], out=[batch, 784]
tf.keras.layers.Dense(units=50, activation="relu"), # in=[batch, 784], out=[batch, 50],
tf.keras.layers.Dense(units=10, activation="softmax") # output layer
]
)
# Define optimizer, loss function and metric to track
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3)
model.compile(
optimizer=optimizer,
loss=tf.keras.losses.CategoricalCrossentropy(),
metrics=[
tf.keras.metrics.Precision(),
tf.keras.metrics.AUC()
]
)
# Optimize the network
history = model.fit(
x=x_train,
y=y_train_onehot,
batch_size=32,
epochs=5,
validation_split=0.1
)
# Loss curve
plt.style.use("default")
plt.plot(history.history["loss"], color="k", label="Train")
plt.plot(history.history["val_loss"], "k--", label="Val")
plt.legend(loc=5)
plt.ylabel("Loss")
plt.xlabel("Epochs")
ax2 = plt.gca().twinx()
ax2.set_ylabel("Precision")
plt.plot(history.history["precision"], color="r", label="Train loss")
plt.plot(history.history["val_precision"], "r--", label="Val loss")
# showcase prediction on test set
plt.style.use("dark_background")
fig, axs = plt.subplots(10, 10, figsize=(20, 20))
for i in range(10):
for j in range(10):
example = x_test[y_test == i][j]
axs[i, j].imshow(example[..., 0], cmap="gray")
prediction = np.argmax(model(example[None, ...]))
axs[i, j].annotate(f"{prediction}", xy=(0.9, 0.9), xycoords="axes fraction", fontsize=20)
axs[i, j].axis("off")
plt.subplots_adjust(wspace=0, hspace=-0.1)
###Output
_____no_output_____ |
Advanced Deployment Scenarios with TensorFlow/week 2/text_classification.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Text ClassificationIn this notebook we will classify movie reviews as being either `positive` or `negative`. We'll use the [IMDB dataset](https://www.tensorflow.org/datasets/catalog/imdb_reviews) that contains the text of 50,000 movie reviews from the [Internet Movie Database](https://www.imdb.com/). These are split into 25,000 reviews for training and 25,000 reviews for testing. The training and testing sets are *balanced*, meaning they contain an equal number of positive and negative reviews. Run in Google Colab View source on GitHub Setup
###Code
try:
%tensorflow_version 2.x
except:
pass
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
print("\u2022 Using TensorFlow Version:", tf.__version__)
###Output
• Using TensorFlow Version: 2.2.0
###Markdown
Download the IMDB DatasetWe will download the [IMDB dataset](https://www.tensorflow.org/datasets/catalog/imdb_reviews) using TensorFlow Datasets. We will use a training set, a validation set, and a test set. Since the IMDB dataset doesn't have a validation split, we will use the first 60\% of the training set for training, and the last 40\% of the training set for validation.
###Code
splits = ['train[:60%]', 'train[-40%:]', 'test']
splits, info = tfds.load(name="imdb_reviews", with_info=True, split=splits, as_supervised=True)
train_data, validation_data, test_data = splits
###Output
[1mDownloading and preparing dataset imdb_reviews/plain_text/1.0.0 (download: 80.23 MiB, generated: Unknown size, total: 80.23 MiB) to /root/tensorflow_datasets/imdb_reviews/plain_text/1.0.0...[0m
Shuffling and writing examples to /root/tensorflow_datasets/imdb_reviews/plain_text/1.0.0.incompleteFCZ8UJ/imdb_reviews-train.tfrecord
Shuffling and writing examples to /root/tensorflow_datasets/imdb_reviews/plain_text/1.0.0.incompleteFCZ8UJ/imdb_reviews-test.tfrecord
Shuffling and writing examples to /root/tensorflow_datasets/imdb_reviews/plain_text/1.0.0.incompleteFCZ8UJ/imdb_reviews-unsupervised.tfrecord
[1mDataset imdb_reviews downloaded and prepared to /root/tensorflow_datasets/imdb_reviews/plain_text/1.0.0. Subsequent calls will reuse this data.[0m
###Markdown
Explore the Data Let's take a moment to look at the data.
###Code
num_train_examples = info.splits['train'].num_examples
num_test_examples = info.splits['test'].num_examples
num_classes = info.features['label'].num_classes
print('The Dataset has a total of:')
print('\u2022 {:,} classes'.format(num_classes))
print('\u2022 {:,} movie reviews for training'.format(num_train_examples))
print('\u2022 {:,} movie reviews for testing'.format(num_test_examples))
###Output
The Dataset has a total of:
• 2 classes
• 25,000 movie reviews for training
• 25,000 movie reviews for testing
###Markdown
The labels are either 0 or 1, where 0 is a negative review, and 1 is a positive review. We will create a list with the corresponding class names, so that we can map labels to class names later on.
###Code
class_names = ['negative', 'positive']
###Output
_____no_output_____
###Markdown
Each example consists of a sentence representing the movie review and a corresponding label. The sentence is not preprocessed in any way. Let's take a look at the first example of the training set.
###Code
for review, label in train_data.take(1):
review = review.numpy()
label = label.numpy()
print('\nMovie Review:\n\n', review)
print('\nLabel:', class_names[label])
###Output
Movie Review:
b"This was an absolutely terrible movie. Don't be lured in by Christopher Walken or Michael Ironside. Both are great actors, but this must simply be their worst role in history. Even their great acting could not redeem this movie's ridiculous storyline. This movie is an early nineties US propaganda piece. The most pathetic scenes were those when the Columbian rebels were making their cases for revolutions. Maria Conchita Alonso appeared phony, and her pseudo-love affair with Walken was nothing but a pathetic emotional plug in a movie that was devoid of any real meaning. I am disappointed that there are movies like this, ruining actor's like Christopher Walken's good name. I could barely sit through it."
Label: negative
###Markdown
Load Word EmbeddingsIn this example, the input data consists of sentences. The labels to predict are either 0 or 1.One way to represent the text is to convert sentences into word embeddings. Word embeddings are an efficient way to represent words using dense vectors, where semantically similar words have similar vectors. We can use a pre-trained text embedding as the first layer of our model, which will have two advantages:* We don't have to worry about text preprocessing.* We can benefit from transfer learning.For this example we will use a model from [TensorFlow Hub](https://tfhub.dev/) called [google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1). We'll create a `hub.KerasLayer` that uses the TensorFlow Hub model to embed the sentences. We can choose to fine-tune the TF hub module weights during training by setting the `trainable` parameter to `True`.
###Code
embedding = "https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1"
hub_layer = hub.KerasLayer(embedding, input_shape=[], dtype=tf.string, trainable=True)
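# Quick illustrative check (my addition): this module maps each raw sentence to a
# 20-dimensional embedding, so a batch of two strings yields a tensor of shape (2, 20).
example_embeddings = hub_layer(tf.constant(["what a great movie", "a terrible waste of time"]))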
###Output
_____no_output_____
###Markdown
Build Pipeline
###Code
batch_size = 512
train_batches = train_data.shuffle(num_train_examples // 4).batch(batch_size).prefetch(1)
validation_batches = validation_data.batch(batch_size).prefetch(1)
test_batches = test_data.batch(batch_size)
###Output
_____no_output_____
###Markdown
Build the ModelIn the code below we will build a Keras `Sequential` model with the following layers:1. The first layer is a TensorFlow Hub layer. This layer uses a pre-trained SavedModel to map a sentence into its embedding vector. The model that we are using ([google/tf2-preview/gnews-swivel-20dim/1](https://tfhub.dev/google/tf2-preview/gnews-swivel-20dim/1)) splits the sentence into tokens, embeds each token and then combines the embedding. The resulting dimensions are: `(num_examples, embedding_dimension)`.2. This fixed-length output vector is piped through a fully-connected (`Dense`) layer with 16 hidden units.3. The last layer is densely connected with a single output node. Using the `sigmoid` activation function, this value is a float between 0 and 1, representing a probability, or confidence level.
###Code
model = tf.keras.Sequential([
hub_layer,
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')])
###Output
_____no_output_____
###Markdown
Train the ModelSince this is a binary classification problem and the model outputs a probability (a single-unit layer with a sigmoid activation), we'll use the `binary_crossentropy` loss function.
###Code
model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
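# Illustrative check (my addition): for a positive review (label 1) predicted with
# probability 0.9, the binary cross-entropy is -log(0.9) ≈ 0.105.
example_bce = tf.keras.losses.BinaryCrossentropy()(tf.constant([1.0]), tf.constant([0.9]))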
history = model.fit(train_batches,
epochs=20,
validation_data=validation_batches)
###Output
Epoch 1/20
30/30 [==============================] - 4s 122ms/step - loss: 1.4808 - accuracy: 0.4449 - val_loss: 0.9655 - val_accuracy: 0.4533
Epoch 2/20
30/30 [==============================] - 4s 120ms/step - loss: 0.8495 - accuracy: 0.4870 - val_loss: 0.7417 - val_accuracy: 0.5269
Epoch 3/20
30/30 [==============================] - 4s 119ms/step - loss: 0.6684 - accuracy: 0.6077 - val_loss: 0.6259 - val_accuracy: 0.6567
Epoch 4/20
30/30 [==============================] - 4s 120ms/step - loss: 0.6042 - accuracy: 0.6802 - val_loss: 0.5923 - val_accuracy: 0.6854
Epoch 5/20
30/30 [==============================] - 3s 116ms/step - loss: 0.5763 - accuracy: 0.7009 - val_loss: 0.5724 - val_accuracy: 0.7072
Epoch 6/20
30/30 [==============================] - 4s 118ms/step - loss: 0.5538 - accuracy: 0.7263 - val_loss: 0.5505 - val_accuracy: 0.7263
Epoch 7/20
30/30 [==============================] - 3s 114ms/step - loss: 0.5298 - accuracy: 0.7463 - val_loss: 0.5294 - val_accuracy: 0.7456
Epoch 8/20
30/30 [==============================] - 4s 123ms/step - loss: 0.5044 - accuracy: 0.7642 - val_loss: 0.5075 - val_accuracy: 0.7612
Epoch 9/20
30/30 [==============================] - 4s 120ms/step - loss: 0.4777 - accuracy: 0.7839 - val_loss: 0.4839 - val_accuracy: 0.7779
Epoch 10/20
30/30 [==============================] - 4s 123ms/step - loss: 0.4492 - accuracy: 0.8006 - val_loss: 0.4601 - val_accuracy: 0.7939
Epoch 11/20
30/30 [==============================] - 4s 119ms/step - loss: 0.4184 - accuracy: 0.8185 - val_loss: 0.4356 - val_accuracy: 0.8050
Epoch 12/20
30/30 [==============================] - 3s 117ms/step - loss: 0.3878 - accuracy: 0.8381 - val_loss: 0.4114 - val_accuracy: 0.8201
Epoch 13/20
30/30 [==============================] - 4s 120ms/step - loss: 0.3579 - accuracy: 0.8543 - val_loss: 0.3911 - val_accuracy: 0.8335
Epoch 14/20
30/30 [==============================] - 4s 118ms/step - loss: 0.3305 - accuracy: 0.8683 - val_loss: 0.3740 - val_accuracy: 0.8398
Epoch 15/20
30/30 [==============================] - 3s 115ms/step - loss: 0.3055 - accuracy: 0.8807 - val_loss: 0.3559 - val_accuracy: 0.8493
Epoch 16/20
30/30 [==============================] - 3s 115ms/step - loss: 0.2823 - accuracy: 0.8927 - val_loss: 0.3431 - val_accuracy: 0.8542
Epoch 17/20
30/30 [==============================] - 4s 120ms/step - loss: 0.2613 - accuracy: 0.9021 - val_loss: 0.3326 - val_accuracy: 0.8598
Epoch 18/20
30/30 [==============================] - 4s 120ms/step - loss: 0.2433 - accuracy: 0.9108 - val_loss: 0.3244 - val_accuracy: 0.8633
Epoch 19/20
30/30 [==============================] - 4s 117ms/step - loss: 0.2262 - accuracy: 0.9178 - val_loss: 0.3191 - val_accuracy: 0.8653
Epoch 20/20
30/30 [==============================] - 4s 119ms/step - loss: 0.2126 - accuracy: 0.9236 - val_loss: 0.3136 - val_accuracy: 0.8679
###Markdown
Evaluate the ModelWe will now see how well our model performs on the testing set.
###Code
eval_results = model.evaluate(test_batches, verbose=0)
for metric, value in zip(model.metrics_names, eval_results):
print(metric + ': {:.3}'.format(value))
###Output
loss: 0.325
accuracy: 0.86
|
webinar/jupyterhub_webinar.ipynb | ###Markdown
Outline* Project Jupyter* What you can do with Jupyter?* Jupyter/IPython basics * Introduction to markdown, magic, widgets* Introduction to ALCF JupyterHub* Live Demos * New kernel installation * ezCobalt: how to submit jobs * ezBalsam: how to use Balsam Disclaimer* This webinar will not cover: * low level details about queuing or ensembling jobs or creating Balsam workflows, etc. covered in a [previous webinar](https://alcf.anl.gov/events/best-practices-queueing-and-running-jobs-theta) * using Jupyter through an ssh tunnel, reverse proxy, or remote kernels * using Dask, Spark, Kubernetes, or a container for distributed computing * accessing compute nodes directly* ALCF JupyterHub is a new service and improving rapidly. You can send an email to [email protected] (cc: [email protected]) for problems and suggestions. Project Jupyter* Started in 2014, as an IPython spin-off project led by Fernando Perez to “develop open-source software, open-standards, and services for interactive computing”.* Inspired by Galileo’s notebooks and languages used in scientific software: Julia, Python, and R. Jupyter X What you can do?* Interactive development environment * Fast code prototyping, test new ideas easily * Most languages are supported through [Jupyter kernels](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels)* Learn or teach with notebooks * Prepare tutorials, run demos* Data analysis and visualization* Presentations with Reveal.js* Interactive work on HPC centers or cloud * JupyterHub * [Google Colab](https://colab.research.google.com) * [Binder](https://mybinder.org/) Basics (Shortcuts)* `Esc/Enter` get in command/edit mode| Command mode | Edit mode|| :------------- | :------------------------ || `h` show (edit) all shortcuts | `shift enter` Run cell, select below || `a/b` insert cell above/below |`cmd/ctrl enter` Run cell|| `c/x` copy/cut selected cell |`tab` completion or indent|| `V/v` paste cell above/below |`shift tab` tooltip|| `d,d` delete cell |`cmd/ctrl d` delete line|| `y/m/r` code/markdown/raw mode |`cmd/ctrl a` select all|| `f` search, replace |`cmd/ctrl z` undo|| `p` open the command palette | `cmd/ctrl /` comment|
###Code
import os
os.getenv??
#help('modules')
#help('modules mpi4py')
###Output
_____no_output_____
###Markdown
Markdown* bullet list * subbullet* equation: $E=mc^2$* inline code `echo hello jupyter````* A [link](https://alcf.anl.gov/events/towards-interactive-high-performance-computing-alcf-jupyterhub)* Table| Col 1 | Col 2| Col 3|| :-----| :---:|----: || 1, 1 | 1,2 | 1,3 || 2, 1 | 2,2 | 2,3 || 3, 1 | 3,2 | 3,3 |* A kitten IPython Magic* Magic functions are prefixed by `%` (line magic) or `%%` (cell magic)* Cell magic `%%` should be at the first line* Shell commands are prefixed by `!`* `%quickref`: Quick reference card for IPython* `%magic`: Info on IPython magic functions* `%debug`: Interactive debugger* `%timeit`: Report time execution* `%prun`: Profile (%lprun is better, `pip install lprun` and `%load_ext line_profiler`)
###Code
%magic
import numpy as np
a = [1]*1000
%timeit sum(a)
b = np.array(a)
%timeit np.sum(a)
%timeit np.sum(b)
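# Illustrative note (my addition): cell magics apply to a whole cell and must sit on its
# first line, e.g. a separate cell containing
# %%timeit
# total = 0
# for value in a:
#     total += value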
###Output
10000 loops, best of 5: 7.51 µs per loop
The slowest run took 145.08 times longer than the fastest. This could mean that an intermediate result is being cached.
10000 loops, best of 5: 106 µs per loop
The slowest run took 5.23 times longer than the fastest. This could mean that an intermediate result is being cached.
100000 loops, best of 5: 7.14 µs per loop
###Markdown
Jupyter Widgets (ipywidgets)* Widgets are basic GUI elements that can enhance interactivity on a Jupyter notebook* Enables using sliders, text boxes, buttons, and more that can link input and output.
###Code
import ipywidgets
ipywidgets.IntSlider()
ipywidgets.Text(value='Hello Jupyter!', disabled=False)
ipywidgets.ToggleButton(value=False, description="Don't click",
button_style='danger', tooltip='Description',)
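# Illustrative example of linking input and output (my addition): interact builds a slider
# for x and re-runs the function whenever the slider moves.
ipywidgets.interact(lambda x: x ** 2, x=(0, 10))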
###Output
_____no_output_____ |
ceral/new_analysis.ipynb | ###Markdown
These are statistical data analysis methods using NumPy
###Code
import numpy as np
rainfall = np.array([5.21, 3.76, 3.27, 2.35, 1.89, 1.55, 0.65, 1.06, 1.72, 3.35, 4.82, 5.11])
rain_mean = np.mean(rainfall)
rain_median = np.median(rainfall)
first_quarter = np.percentile(rainfall, 25)
third_quarter = np.percentile(rainfall, 75)
interquartile_range = third_quarter - first_quarter
rain_std = np.std(rainfall)
print ("The Mean is " , rain_mean)
print ("The Median is ", rain_median)
print ("The First quarter is ", first_quarter)
print ("The Third quarter is ",third_quarter)
print ( "The IQR is ",interquartile_range)
print ("Std is ", rain_std)
print("Hello world")
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Normal DistributionA normal distribution is symmetric about its mean. It is a unimodal distribution, i.e. the values are distributed equally on both sides of the mean, to the left and to the right.
###Code
a = np.random.normal(loc = 0, scale = 1, size = 1000000)
print(a.shape)
print(a.ndim)
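# Illustrative check (my addition): with loc=0 and scale=1 the sample mean and standard
# deviation should be close to 0 and 1 respectively.
print(np.mean(a), np.std(a))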
import numpy as np
from matplotlib import pyplot as plt
# Brachiosaurus
b_data = np.random.normal( 6.7, 0.7, 1000)
# Fictionosaurus
f_data = np.random.normal(7.7, 0.3, 1000)
plt.hist(b_data,
bins=30, range=(5, 8.5), histtype='step',
label='Brachiosaurus')
plt.hist(f_data,
bins=30, range=(5, 8.5), histtype='step',
label='Fictionosaurus')
plt.xlabel('Femur Length (ft)')
plt.legend(loc=2)
plt.show()
mystery_dino = "Brachiosaurus"
answer = False
###Output
_____no_output_____ |
EDAConclusions.ipynb | ###Markdown
Exploratory Data Analysis Findings I. Data Collection 1. Importing packages and libraries
###Code
# In this EDA findings project, I used various data analysis visualizations to answer three
# important questions: What does the imdb scores and gross revenues say about certain types of films?,
# How does the duration and the budget play a significant role in the types of movies to produce?,
# and Does the directors and/or actors/actresses impact the return on investments (ROI)
# from the gross revenues, budget, and profits from the box office outcomes?
# Packages / libraries
import os #provides functions for interacting with the operating system
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# Loading the data
movies = pd.read_csv("films.csv")
# runs all the data
movies
print(movies.shape)
movies.head(31)
# "How did you pick the question(s) that you did?"
# "Why are these questions important from a business perspective?"
# "How did you decide on the data cleaning options you performed?"
# "Why did you choose a given method or library?"
# "Why did you select those visualizations and what did you learn from each of them?"
# "Why did you pick those features as predictors?"
# "How would you interpret the results?"
# "How confident are you in the predictive quality of the results?"
# "What are some of the things that could cause the results to be wrong?"
###Output
(29, 28)
###Markdown
Data Visualization from EDA
###Code
# This first data visualization is a scatterplot. It shows the relationships between IMDB scores and box
# office revenues.
plt.figure(figsize=(10,10))
ax = sns.scatterplot(x='imdb_score', y='gross', data=movies, color='green')
# I used this chart because the scattered points allow me to analyze the patterns
# of the cash inflow streams from moviegoers. Also, the graph helps to better understand what types
# of film content and genres moviegoers like to watch, using the dataset.
# This chart also provides a business perspective through the sales of movie tickets and the pricing of
# movie tickets key performance indicators. This helps Microsoft to better
# strategize in predicting the trends and any contingencies in producing certain types of films to
# meet its own business objectives, future endeavors, and to expand its own markets in the industry.
# I learned from this scatterplot, the essentials of the movie industry. For example,
# how to find ways to predict the types of films moviegoers will watch,
# how the categories of movies influence box office, and more importantly how many theaters are playing a specific film.
# The second data visualization method that was used is the line graph.
# The graph basically reveals the identities of the duration and the budgeting of the top
# 30 grossing films in the datasets.
plt.figure(figsize=(10,5))
ax = sns.lineplot(x='duration', y='budget', hue = 'imdb_score', style = 'color', ci=None, data=movies)
movies = pd.read_csv("films.csv")
# runs all the data
movies
print(movies.shape)
movies.head(31)
# After thorough analysis using this line graph, I realized that the duration and the budgeting costs
# are on the high end. So that means that both variables are important factors in deciding which genres
# of films to produce. The line graph shows the corresponding features between the duration and
# budgeting among the top 30 films. This shows that the production
# of movies is somewhat of a risk in terms of how long the movie runs.
# Moreover, there is the question of what a movie costs over time prior to its release relative to production
# time. The business perspective here is the actual costs in producing a movie and how the capital
# should be distributed during filming. Benchmarking other films released during or in the past
# is a way to come up with strategies to capitalize on profits while reaching a wider audience. Another
# factor is to utilize predictors on the performance of other films playing both current and past.
# In this line plot, I acknowledged that budgeting is one of the key significant aspects of movie production.
# For instance, if a company only has about thirty million to produce a film, then production will be
# in a risky position. Additionally, there are the costs of hiring a film studio to produce a movie,
# difficulties in finding directors and actors/actresses who are willing to forgo salary due to low
# budget.
plt.figure(figsize=(10,10))
ax = sns.lineplot(x='duration', y='gross', data=movies, hue='aspect_ratio', palette='Set2', markers=True)
movies = pd.read_csv("films.csv")
# runs all the data
movies
print(movies.shape)
movies.head(31)
# I also utilized the budget feature in correlation to the box office gross revenues to look deeper into
# the aspects of preferences by moviegoers. As it is shown in the graph, the gross revenues are very
# volatile as the time of the movie gets longer. This means that the audiences prefer movies that are
# longer in duration compared to shorter films. Moreover, on the business side, it can be concluded that
# the longer the movie lasts, the more costly it is to produce a movie as the movie needs to have a
# longer story line in the script. The other key predictors here include budgeting costs and
# genres of films where both can impact profits or is a risk for negative ratings/loss. I learned that
# gross revenues and duration for a film are absolutely unpredictable factors to movie production.
# For instance, if I chose to produce a movie such as Titanic, would the movie do as well as the one
# that director James Cameron made back in the 90s? The answer here is no because of the actors/actresses,
# the screenplay of the film, and the intangibility of the director's vision.
# In this last chart, I decided to use the scatterplot graph as a way to look deeper into
# the items for profits and return on investments (ROI).
# Compute the profit and ROI columns first so they can be plotted
profits = []
roi_vals = []
for index, row in movies.iterrows():
    profit = row['gross']- row['budget']
    budget = row['budget']
    num = profit - budget
    den = budget
    # convert roi to percentage
    roi = (num / den) * 100
    profits.append(profit)
    roi_vals.append(roi)
movies['profit'] = profits
movies['roi'] = roi_vals
plt.figure(figsize=(10,10))
ax = sns.scatterplot(x="profit", y="roi", data=movies, hue = 'aspect_ratio')
movies.head(31)
# This graph is definitely useful in comparing the aspects of gross revenues, budgeting, profits, and
# return on investments. For example, for the top 30 movies in the dataset, those variables in the below
# dataset conveys the benefits that can come from film production along with other
# strategic business objectives. On the other hand, the scatterplot shows that certain categories
# of films had a lower budget compared to the other films that had a higher grossing box office success.
# The business perspectives relate to the features such as which actors/actresses, directors,
# and IMDB ratings. How each of those elements can contribute to generating revenues and profits,
# how they can impact the return on investments, and the types of ratings that would benefit
# the movie's success. There are key predictors here that come in to play,
# for example, ROI can have an effect on the financial results of a business operations
# such as the balance sheets. Another predictor is in accordance to the long-term endeavors
# relating to film production. What are the other strategies that should be used to
# continue the growth of that business segment? I learned that in predicting what
# genres of movies to produce deals with the scope of the business operations other than
# moviegoers movie preferences. Whether it is action, thriller, comedy, or sci-fi films, the other
# relevant factors include the business objectives, financials, long-term investment, etc.
###Output
_____no_output_____ |
notebooks/EDA New.ipynb | ###Markdown
Univariate Analysis
###Code
cat_cols=['Pclass','Name','Sex','Ticket','Cabin','Embarked','Survived']
num_cols=['Age','Fare','SibSp','Parch']
###Output
_____no_output_____
###Markdown
Let's construct Histograms for the Numerical Variables
###Code
import matplotlib.pyplot as plt
for i in num_cols:
df[i].plot(kind='hist')
plt.xlabel(i)
plt.show()
for i in num_cols:
df[i].plot(kind='box')
plt.show()
for i in cat_cols:
df[i].value_counts().plot(kind='bar')
plt.xlabel(i)
plt.show()
###Output
_____no_output_____
###Markdown
Bivariate Analysis
###Code
import seaborn as sns
for i in num_cols:
sns.boxplot(x=df['Survived'],y=df[i])
plt.show()
for i in ['Pclass','Sex','Embarked']:
pd.crosstab(df['Survived'],df[i]).plot(kind='bar')
plt.show()
a=[1,2,3,4,5]
a[:-1][1:2]
###Output
_____no_output_____ |
notebooks/Numba_GPU_KDE.ipynb | ###Markdown
Simulate Data
###Code
import numpy as np
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
n_features = 6
blob, _ = make_blobs(n_samples=10000, n_features=n_features, random_state=0)
eval_points = np.linspace(-10, 10)
grid = np.meshgrid(eval_points, eval_points)
x_grid, y_grid = grid
grid = np.stack([g.flat for g in grid], axis=1)
b = np.ones((n_features,))
blob = blob.astype(np.float32)
b = b.astype(np.float32)
plt.scatter(blob[:, 0], blob[:, 1], s=1)
eval_points = np.concatenate((grid, blob[0, 2:] * np.ones((grid.shape[0], 1))), axis=1)
eval_points = eval_points.astype(np.float32)
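# Note (added): eval_points is a (2500, 6) array -- the first two columns sweep the 50x50
# spatial grid, the remaining four are held fixed at the first sample's values.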
###Output
_____no_output_____
###Markdown
Cupy KDE
###Code
import math
import cupy as cp
def gaussian(x):
return cp.exp(-0.5 * x ** 2) / cp.sqrt(2 * cp.pi)
@cp.fuse()
def cupy_kde(eval_points, samples, bandwidths):
eval_points = cp.asarray(eval_points)
samples = cp.asarray(samples)
bandwidths = cp.asarray(bandwidths)
return cp.asnumpy(
cp.mean(
cp.prod(
gaussian(
(
(
cp.expand_dims(eval_points, axis=0)
- cp.expand_dims(samples, axis=1)
)
/ bandwidths
)
),
axis=-1,
),
axis=0,
)
/ cp.prod(bandwidths)
)
result = cupy_kde(eval_points, blob, b)
plt.pcolormesh(x_grid, y_grid, result.reshape((50, 50)))
%timeit cp.cuda.stream.get_current_stream().synchronize(); cupy_kde(eval_points, blob, b); cp.cuda.stream.get_current_stream().synchronize();
from sklearn.neighbors import KernelDensity
np.allclose(
np.exp(KernelDensity(bandwidth=b[0]).fit(blob).score_samples(eval_points)),
cupy_kde(eval_points, blob, b),
)
###Output
_____no_output_____
###Markdown
NUMBA CPU
###Code
import numba
import math
SQRT_2PI = np.float32(math.sqrt(2.0 * math.pi))
@numba.vectorize(
["float32(float32, float32, float32)"], nopython=True, target="cpu", fastmath=True
)
def gaussian_pdf(x, mean, sigma):
"""Compute the value of a Gaussian probability density function at x with given mean and sigma."""
return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * SQRT_2PI)
@numba.njit(
"float32[:](float32[:, :], float32[:, :], float32[:])",
parallel=True,
fastmath=True,
nogil=True,
)
def numba_kde_multithread2(eval_points, samples, bandwidths):
n_eval_points = len(eval_points)
result = np.zeros((n_eval_points,), dtype=np.float32)
n_samples = len(samples)
denom = n_samples * np.prod(bandwidths)
for i in numba.prange(n_eval_points):
for j in range(n_samples):
result[i] += np.prod(gaussian_pdf(eval_points[i], samples[j], bandwidths))
result[i] /= denom
return result
result = numba_kde_multithread2(eval_points, blob, b)
%timeit numba_kde_multithread2(eval_points, blob, b)
###Output
309 ms ± 10.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
###Markdown
NUMBA CUDA GPU+ One eval point only
###Code
from numba import cuda
import math
SQRT_2PI = np.float32(math.sqrt(2.0 * math.pi))
@cuda.jit(device=True, inline=True)
def gaussian_pdf(x, mean, sigma):
"""Compute the value of a Gaussian probability density function at x with given mean and sigma."""
return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * SQRT_2PI)
@cuda.jit
def numba_kde_cuda(eval_points, samples, bandwidths, out):
thread_id = cuda.grid(1)
stride = cuda.gridsize(1)
n_samples = samples.shape[0]
for sample_ind in range(thread_id, n_samples, stride):
diff = 1.0
for bandwidth_ind in range(bandwidths.shape[0]):
diff *= (
gaussian_pdf(
eval_points[0, bandwidth_ind],
samples[sample_ind, bandwidth_ind],
bandwidths[bandwidth_ind],
)
/ bandwidths[bandwidth_ind]
)
diff /= n_samples
cuda.atomic.add(out, 0, diff)
out = np.zeros((1,), dtype=np.float32)
threads_per_block = 64
n_eval_points = 1
n_train, n_test = blob.shape[0], n_eval_points
blocks_per_grid = ((n_train * n_test) + (threads_per_block - 1)) // threads_per_block
numba_kde_cuda[blocks_per_grid, threads_per_block](eval_points[[0]], blob, b, out)
print(out)
numba_kde_multithread2(eval_points, blob, b)[0]
###Output
_____no_output_____
###Markdown
NUMBA CUDA GPU2+ All eval points (one spike, all position bins)+ Using atomic add
###Code
@cuda.jit
def numba_kde_cuda2(eval_points, samples, bandwidths, out):
thread_id1, thread_id2 = cuda.grid(2)
stride1, stride2 = cuda.gridsize(2)
(n_eval, n_bandwidths), n_samples = eval_points.shape, samples.shape[0]
for eval_ind in range(thread_id1, n_eval, stride1):
for sample_ind in range(thread_id2, n_samples, stride2):
diff = 1.0
for bandwidth_ind in range(n_bandwidths):
diff *= (
gaussian_pdf(
eval_points[eval_ind, bandwidth_ind],
samples[sample_ind, bandwidth_ind],
bandwidths[bandwidth_ind],
)
/ bandwidths[bandwidth_ind]
)
diff /= n_samples
cuda.atomic.add(out, eval_ind, diff)
n_eval_points = eval_points.shape[0]
out = np.zeros((n_eval_points,), dtype=np.float32)
n_train, n_test = blob.shape[0], n_eval_points
threads_per_block = 8, 8
blocks_per_grid_x = math.ceil(n_test / threads_per_block[0])
blocks_per_grid_y = math.ceil(n_train / threads_per_block[1])
blocks_per_grid = blocks_per_grid_x, blocks_per_grid_y
numba_kde_cuda2[blocks_per_grid, threads_per_block](eval_points, blob, b, out)
np.allclose(numba_kde_multithread2(eval_points, blob, b), out)
from sklearn.neighbors import KernelDensity
np.allclose(
np.exp(KernelDensity(bandwidth=b[0]).fit(blob).score_samples(eval_points)), out
)
result = np.exp(KernelDensity(bandwidth=b[0]).fit(blob).score_samples(eval_points))
plt.pcolormesh(x_grid, y_grid, result.reshape((50, 50)))
plt.pcolormesh(x_grid, y_grid, out.reshape((50, 50)))
plt.plot(out - numba_kde_multithread2(eval_points, blob, b))
plt.plot(
out - np.exp(KernelDensity(bandwidth=b[0]).fit(blob).score_samples(eval_points))
)
%timeit cuda.synchronize(); numba_kde_cuda2[blocks_per_grid, threads_per_block](eval_points, blob, b, out); cuda.synchronize()
%timeit KernelDensity(bandwidth=b[0]).fit(blob).score_samples(eval_points)
%timeit numba_kde_multithread2(eval_points, blob, b)
%timeit cp.cuda.stream.get_current_stream().synchronize(); cupy_kde(eval_points, blob, b); cp.cuda.stream.get_current_stream().synchronize();
###Output
131 ms ± 17.5 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
NUMBA CUDA GPU3+ Loop over all spikes and covariate bins+ Using atomic add
###Code
@cuda.jit
def numba_kde_cuda3(covariate_bins, marks, samples, bandwidths, out):
thread_id1, thread_id2, thread_id3 = cuda.grid(3)
stride1, stride2, stride3 = cuda.gridsize(3)
n_bins, n_cov = covariate_bins.shape
n_test, n_features = marks.shape
n_samples = samples.shape[0]
for test_ind in range(thread_id1, n_test, stride1):
for bin_ind in range(thread_id2, n_bins, stride2):
for sample_ind in range(thread_id3, n_samples, stride3):
diff = 1.0
for cov_ind in range(n_cov):
diff *= (
gaussian_pdf(
covariate_bins[bin_ind, cov_ind],
samples[sample_ind, cov_ind],
bandwidths[cov_ind],
)
/ bandwidths[cov_ind]
)
for feature_ind in range(n_features):
diff *= (
gaussian_pdf(
marks[test_ind, feature_ind],
samples[sample_ind, n_cov + feature_ind],
bandwidths[n_cov + feature_ind],
)
/ bandwidths[n_cov + feature_ind]
)
diff /= n_samples
cuda.atomic.add(out, (test_ind, bin_ind), diff)
# n_samples, n_test = blob.shape[0], blob.shape[0]
# n_bins, n_cov = grid.shape
# n_marks = 4
# out = np.zeros((n_test, n_bins))
# threads_per_block = 4, 8, 4
# blocks_per_grid_x = math.ceil(n_test / threads_per_block[0])
# blocks_per_grid_y = math.ceil(n_bins / threads_per_block[1])
# blocks_per_grid_z = math.ceil(n_samples / threads_per_block[2])
# blocks_per_grid = blocks_per_grid_x, blocks_per_grid_y, blocks_per_grid_z
# numba_kde_cuda3[blocks_per_grid, threads_per_block](
# grid, np.ascontiguousarray(blob[:n_test, n_cov:]), blob, b, out
# )
# np.allclose(out[0], np.exp(KernelDensity(bandwidth=b[0]).fit(blob).score_samples(eval_points)))
# plt.pcolormesh(x_grid, y_grid, out[0].reshape((50, 50)))
# %timeit cuda.synchronize(); numba_kde_cuda3[blocks_per_grid, threads_per_block](grid, np.ascontiguousarray(blob[:n_test, n_cov:]), blob, b, out); cuda.synchronize()
# import numba
# import math
# SQRT_2PI = np.float64(math.sqrt(2.0 * math.pi))
# @numba.vectorize(['float64(float64, float64, float64)'], nopython=True, target='cpu')
# def gaussian_pdf(x, mean, sigma):
# '''Compute the value of a Gaussian probability density function at x with given mean and sigma.'''
# return math.exp(-0.5 * ((x - mean) / sigma)**2) / (sigma * SQRT_2PI)
# @numba.njit('float64[:, :](float64[:, :], float64[:, :], float64[:, :], float64[:])', parallel=True, fastmath=True)
# def numba_kde_multithread3(covariate_bins, marks, samples, bandwidths):
# n_bins, n_cov = covariate_bins.shape
# n_test, n_features = marks.shape
# n_samples = samples.shape[0]
# result = np.zeros((n_test, n_bins))
# for test_ind in numba.prange(n_test):
# for bin_ind in range(n_bins):
# for sample_ind in range(n_samples):
# diff = 1.0
# for cov_ind in range(n_cov):
# diff *= (gaussian_pdf(covariate_bins[bin_ind, cov_ind],
# samples[sample_ind, cov_ind],
# bandwidths[cov_ind])
# / bandwidths[cov_ind])
# for feature_ind in range(n_features):
# diff *= (gaussian_pdf(marks[test_ind, feature_ind],
# samples[sample_ind,
# n_cov + feature_ind],
# bandwidths[n_cov + feature_ind])
# / bandwidths[n_cov + feature_ind])
# result[test_ind, bin_ind] += diff / n_samples
# return result
# n_samples, n_test = blob.shape[0], blob.shape[0]
# n_bins, n_cov = grid.shape
# result = numba_kde_multithread3(grid, np.ascontiguousarray(blob[:2, n_cov:]), blob, b)
# np.allclose(result[0], np.exp(KernelDensity(bandwidth=b[0]).fit(blob).score_samples(eval_points)))
# plt.plot(result[0] - np.exp(KernelDensity(bandwidth=b[0]).fit(blob).score_samples(eval_points)))
# fig, axes = plt.subplots(1, 2)
# axes[0].pcolormesh(x_grid, y_grid, result[0].reshape((50, 50)))
# axes[1].pcolormesh(x_grid, y_grid, np.exp(KernelDensity(bandwidth=b[0]).fit(blob).score_samples(eval_points)).reshape((50, 50)))
# %timeit numba_kde_multithread3(grid, np.ascontiguousarray(blob[:n_test, n_cov:]), blob, b)
###Output
_____no_output_____
###Markdown
NUMBA CUDA GPU 2a+ All eval points (one spike, all covariate bins)+ Avoid atomic add
###Code
from numba import cuda
import math
SQRT_2PI = np.float32(math.sqrt(2.0 * math.pi))
@cuda.jit(device=True, inline=True)
def gaussian_pdf(x, mean, sigma):
"""Compute the value of a Gaussian probability density function at x with given mean and sigma."""
return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * SQRT_2PI)
@cuda.jit
def numba_kde_cuda2a(eval_points, samples, bandwidths, out):
    n_samples, n_bandwidths = samples.shape
    thread_id = cuda.grid(1)
    # Guard against the extra threads launched beyond the number of eval points
    if thread_id >= eval_points.shape[0]:
        return
    sum_kernel = 0.0
for sample_ind in range(n_samples):
product_kernel = 1.0
for bandwidth_ind in range(n_bandwidths):
product_kernel *= (
gaussian_pdf(
eval_points[thread_id, bandwidth_ind],
samples[sample_ind, bandwidth_ind],
bandwidths[bandwidth_ind],
)
/ bandwidths[bandwidth_ind]
)
sum_kernel += product_kernel
out[thread_id] = sum_kernel / n_samples
n_eval_points = eval_points.shape[0]
threads_per_block = 64
blocks_per_grid_x = math.ceil(n_eval_points / threads_per_block)
blocks_per_grid = blocks_per_grid_x
out = np.zeros((n_eval_points,), dtype=np.float32)
numba_kde_cuda2a[blocks_per_grid, threads_per_block](eval_points, blob, b, out)
np.allclose(numba_kde_multithread2(eval_points, blob, b), out)
%timeit cuda.synchronize(); numba_kde_cuda2a[blocks_per_grid, threads_per_block](eval_points, blob, b, out); cuda.synchronize()
###Output
124 ms ± 67.5 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
###Markdown
NUMBA CUDA GPU 2b+ All eval points (one spike, all covariate bins)+ Avoid atomic add+ Use tiling
###Code
from numba import cuda
from numba.types import float64, float32
import math
SQRT_2PI = np.float32(math.sqrt(2.0 * math.pi))
@cuda.jit(device=True, inline=True)
def gaussian_pdf(x, mean, sigma):
"""Compute the value of a Gaussian probability density function at x with given mean and sigma."""
return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * SQRT_2PI)
TILE_SIZE = 64
@cuda.jit
def numba_kde_cuda2c(eval_points, samples, bandwidths, out, out2):
"""
Parameters
----------
eval_points : ndarray, shape (n_eval, n_bandwidths)
samples : ndarray, shape (n_samples, n_bandwidths)
out : ndarray, shape (n_eval,)
"""
n_eval, n_bandwidths = eval_points.shape
n_samples = samples.shape[0]
thread_id1 = cuda.grid(1)
relative_thread_id1 = cuda.threadIdx.x
n_threads = cuda.blockDim.x
samples_tile = cuda.shared.array((TILE_SIZE, 6), float32)
sum_kernel = 0.0
for tile_ind in range(0, n_samples, TILE_SIZE):
        tile_index = tile_ind * TILE_SIZE + relative_thread_id1
for i in range(0, TILE_SIZE, n_threads):
for bandwidth_ind in range(n_bandwidths):
                samples_tile[relative_thread_id1 + i, bandwidth_ind] = samples[
tile_index + i, bandwidth_ind
]
out2[tile_index + i, bandwidth_ind] = samples[
tile_index + i, bandwidth_ind
]
cuda.syncthreads()
if tile_index < n_samples:
product_kernel = 1.0
for bandwidth_ind in range(n_bandwidths):
product_kernel *= (
gaussian_pdf(
eval_points[thread_id1, bandwidth_ind],
                        samples_tile[relative_thread_id1, bandwidth_ind],
bandwidths[bandwidth_ind],
)
/ bandwidths[bandwidth_ind]
)
sum_kernel += product_kernel
cuda.syncthreads()
out[thread_id1] = sum_kernel / n_samples
# Allocate shared memory dynamically by setting shape = 0 and
# . kernel[grid_dim, block_dim, stream, dyn_shared_size](*kernel_args)
# If not using streams use 0
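# A minimal sketch of that dynamic-shared-memory pattern (my assumption, not code from this notebook;
# it reuses the cuda / float32 imports above):
@cuda.jit
def dyn_shared_demo(out):
    tile = cuda.shared.array(0, float32)  # shape 0 -> storage supplied at launch time
    tile[cuda.threadIdx.x] = cuda.threadIdx.x
    cuda.syncthreads()
    if cuda.grid(1) < out.shape[0]:
        out[cuda.grid(1)] = tile[cuda.threadIdx.x]
# Hypothetical launch: 64 threads per block, stream 0, and 64 float32 values (64 * 4 bytes) of shared memory:
# dyn_shared_demo[blocks_per_grid, 64, 0, 64 * 4](out)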
###Output
_____no_output_____
###Markdown
Numba KDE CUDA GPU3a
###Code
from numba import cuda
from numba.types import float64, float32
import math
SQRT_2PI = np.float32(math.sqrt(2.0 * math.pi))
@cuda.jit(device=True, inline=True)
def gaussian_pdf(x, mean, sigma):
"""Compute the value of a Gaussian probability density function at x with given mean and sigma."""
return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * SQRT_2PI)
TILE_SIZE = 64
@cuda.jit
def numba_kde_cuda2c(eval_points, samples, bandwidths, out, out2):
"""
Parameters
----------
eval_points : ndarray, shape (n_eval, n_bandwidths)
samples : ndarray, shape (n_samples, n_bandwidths)
out : ndarray, shape (n_eval,)
"""
n_eval, n_bandwidths = eval_points.shape
n_samples = samples.shape[0]
thread_id1, thread_id2 = cuda.grid(2)
relative_thread_id1 = cuda.threadIdx.x
relative_thread_id2 = cuda.threadIdx.y
n_threads = cuda.blockDim.x
samples_tile = cuda.shared.array((TILE_SIZE, 6), float32)
sum_kernel = 0.0
for tile_ind in range(0, n_samples, TILE_SIZE):
tile_index = tile_ind * TILE_SIZE + relative_thread_id2
for i in range(0, TILE_SIZE, n_threads):
for bandwidth_ind in range(n_bandwidths):
samples_tile[relative_thread_id2 + i, bandwidth_ind] = samples[
tile_index + i, bandwidth_ind
]
out2[tile_index + i, bandwidth_ind] = samples[
tile_index + i, bandwidth_ind
]
cuda.syncthreads()
if tile_index < n_samples:
product_kernel = 1.0
for bandwidth_ind in range(n_bandwidths):
product_kernel *= (
gaussian_pdf(
eval_points[thread_id1, bandwidth_ind],
samples_tile[relative_thread_id2, bandwidth_ind],
bandwidths[bandwidth_ind],
)
/ bandwidths[bandwidth_ind]
)
sum_kernel += product_kernel
cuda.syncthreads()
out[thread_id1] = sum_kernel / n_samples
# Allocate shared memory dynamically by setting shape = 0 and
# . kernel[grid_dim, block_dim, stream, dyn_shared_size](*kernel_args)
# If not using streams use 0
n_eval_points = eval_points.shape[0]
out = np.zeros((n_eval_points,), dtype=np.float32)
n_train, n_test = blob.shape[0], n_eval_points
out2 = np.zeros((n_train, 6), dtype=np.float32)
threads_per_block = 16, 16
blocks_per_grid_x = math.ceil(n_test / threads_per_block[0])
blocks_per_grid_y = math.ceil(n_train / threads_per_block[1])
blocks_per_grid = blocks_per_grid_x, blocks_per_grid_y
numba_kde_cuda2c[blocks_per_grid, threads_per_block](eval_points, blob, b, out, out2)
result = numba_kde_multithread2(eval_points, blob, b)
np.allclose(result, out)
plt.plot(result - out)
%timeit cuda.synchronize(); numba_kde_cuda2c[blocks_per_grid, threads_per_block](eval_points, blob, b, out); cuda.synchronize()
np.nonzero(out2.sum(axis=1))[0]
plt.plot(out2[:128, 0])
plt.plot(out2[:128, 1])
out2[:128]
blocks_per_grid
###Output
_____no_output_____ |
raspberry/line_follower_cv/basic python.ipynb | ###Markdown
Hough transform
###Code
# # Edge detection
img = cv2.imread('images/lines_4.jpeg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray,100,150,apertureSize = 3)
plt.imshow(edges)
# Detect and draw
lines = cv2.HoughLines(edges,1,np.pi/180,50)
print(len(lines), 'lines found')
img2 = img.copy()
for line in lines:
rho,theta = line[0]
#print('Theta =', theta*180/np.pi)
#print('rho =', rho)
    # theta near 0 or 180 degrees means the line is close to vertical: keep going straight
    # theta between 15 and 75 degrees: turn right
if theta*180/np.pi>15 and theta*180/np.pi<75: print('turn right')
    # theta between 115 and 165 degrees: turn left
elif theta*180/np.pi>115 and theta*180/np.pi<165: print('turn left')
a = np.cos(theta)
b = np.sin(theta)
x0 = a*rho
y0 = b*rho
x1 = int(x0 + 1000*(-b))
y1 = int(y0 + 1000*(a))
x2 = int(x0 - 1000*(-b))
y2 = int(y0 - 1000*(a))
cv2.line(img2,(x1,y1),(x2,y2),(0,0,255),5)
plt.imshow(img2)
print(img.shape)
'''
Theta = 61.000002835844505
rho = 428.0
Theta = 61.99999717184774
rho = 390.0
Theta = 0.9999999922536332
rho = 219.0
Theta = 62.99999833804013
rho = 387.0
Theta = 178.00000267657154
rho = -257.0
Theta = 61.99999717184774
rho = 424.0
'''
'''
Theta = 118.99999534292266
rho = 145.0
Theta = 1.9999999845072665
rho = 325.0
Theta = 178.99999701257477
rho = -364.0
Theta = 116.99999301053786
rho = 122.0
Theta = 118.00000100691943
rho = 150.0
'''
###Output
_____no_output_____ |
Web-Scraping Scripts and Data/Accredited Canadian English Undergrad MechEng Programs/OntarioTechU/WS_OntarioTechU_MechEng_Core_and_Electives_(AllYears).ipynb | ###Markdown
1. Collect course link texts for driver to click on
###Code
from bs4 import BeautifulSoup as soup

page_soup = soup(driver.page_source, 'lxml')
containers = page_soup.findAll("div", {"class": "acalog-core"})[5:10]
containers
len(containers)
lists_of_links = [container.findAll("a") for container in containers]
link_texts = [link.text for list_of_links in lists_of_links for link in list_of_links]
link_texts
#take out empty strings and the co-op program course
link_texts = [link.strip() for link in link_texts if link != "" and "ENGR 0998U – Engineering Internship Program" not in link]
link_texts
###Output
_____no_output_____
###Markdown
2. Script to click open all course info boxes
###Code
import time

from selenium.common.exceptions import NoSuchElementException
from selenium.webdriver.common.keys import Keys
driver.get(url)
for link_text in link_texts:
link = driver.find_element_by_link_text(link_text)
time.sleep(2)
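    # sending DOWN first scrolls the link into view so the click below is not intercepted by other elements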
link.send_keys(Keys.DOWN)
time.sleep(2)
link.click()
time.sleep(3)
print("clicked {}".format(link_text))
page_soup = soup(driver.page_source, 'lxml')
container = page_soup.find("div", {"class": "custom_leftpad_20"})
container
###Output
clicked COMM 1050U – Technical Communications
clicked ENGR 1015U – Introduction to Engineering
clicked MATH 1010U – Calculus I
clicked MATH 1850U – Linear Algebra for Engineers
clicked PHY 1010U – Physics I
clicked CHEM 1800U – Chemistry for Engineers
clicked ENGR 1025U – Engineering Design
clicked ENGR 1200U – Introduction to Programming for Engineers
clicked MATH 1020U – Calculus II
clicked PHY 1020U – Physics II
clicked SSCI 1470U – Impact of Science and Technology on Society
clicked MANE 2220U – Structure and Properties of Materials
clicked MATH 2860U – Differential Equations for Engineers
clicked MECE 2230U – Statics
clicked MECE 2310U – Concurrent Engineering and Design
clicked MECE 2320U – Thermodynamics
clicked ELEE 2790U – Electric Circuits
clicked MATH 2070U – Numerical Methods
clicked MECE 2420U – Solid Mechanics I
clicked MECE 2430U – Dynamics
clicked MECE 2860U – Fluid Mechanics
clicked STAT 2800U – Statistics and Probability for Engineers
clicked MANE 3190U – Manufacturing and Production Processes
clicked MECE 3030U – Computer-Aided Design
clicked MECE 3270U – Kinematics and Dynamics of Machines
clicked MECE 3350U – Control Systems
clicked MECE 3420U – Solid Mechanics II
clicked ENGR 3360U – Engineering Economics
clicked MECE 3210U – Mechanical Vibrations
clicked MECE 3220U – Machine Design
clicked MECE 3230U – Thermodynamic Applications
clicked MECE 3390U – Mechatronics
clicked MECE 3930U – Heat Transfer
clicked ENGR 4760U – Ethics, Law and Professionalism for Engineers
clicked ENGR 4950U – Capstone Systems Design for Mechanical, Automotive, Mechatronics and Manufacturing Engineering I
clicked MECE 4210U – Advanced Solid Mechanics and Stress Analysis
clicked MECE 4290U – Finite Element Methods
clicked ENGR 4951U – Capstone Systems Design for Mechanical, Automotive, Mechatronics and Manufacturing Engineering II
clicked AUTE 3010U – Introduction to Automotive Engineering
clicked ENGR 3160U – Engineering Operations and Project Management
clicked MANE 3120U – Thermo-mechanical Processing of Materials
clicked MANE 3300U – Integrated Manufacturing Systems
clicked MANE 3460U – Industrial Ergonomics
clicked MANE 4045U – Quality Control
clicked MANE 4160U – Artificial Intelligence in Engineering
clicked MANE 4190U – Principles of Material Removal Processes
clicked MANE 4280U – Robotics and Automation
clicked MANE 4380U – Life Cycle Engineering
clicked MANE 4600U – Additive Manufacturing
clicked MANE 4700U – Introduction to Tribology: Friction, Wear and Lubrication
clicked MECE 3260U – Introduction to Energy Systems
clicked MECE 3410U – Electro-Mechanical Energy Conversion
clicked MECE 4000U – Special Topics in Mechanical Engineering
clicked MECE 4250U – Advanced Materials Engineering
###Markdown
3. Scrape course codes, names, descriptions from the updated page html
###Code
containers = container.findAll("li", {"class": "acalog-course acalog-course-open"})
len(containers)
len(link_texts)
course_descs = [cont.find("div", {"class": None}).text for cont in containers]
course_descs
course_descs = [course_desc.split(" ")[1].split("Credit hours:")[0] for course_desc in course_descs]
course_descs
course_descs = [desc.replace("\xa0", " ") for desc in course_descs]
course_descs
course_names = [text.split(" – ")[1] for text in link_texts]
course_names
course_codes = [text.split(" – ")[0] for text in link_texts]
course_codes
###Output
_____no_output_____
###Markdown
4. Write to CSV
###Code
import pandas as pd
df = pd.DataFrame({
"Course Number": course_codes,
"Course Name": course_names,
"Course Description": course_descs
})
df
df.to_csv('OntarioTechU_MechEng_Core_and_Electives_(AllYears).csv', index = False)
###Output
_____no_output_____ |
psm/sandbox.ipynb | ###Markdown
Retrieve ontology
###Code
from owlready2 import *
import pandas as pd
import json
import math
# change path below to match your local copy of the .owl file you want to use
onto = get_ontology("file:///Users/kevinlin/Documents/classes/cs270/final-project/cs270-final-project/ontology/business.owl").load()
###Output
_____no_output_____
###Markdown
Retrieve & prepare dataset
###Code
business_pkl = '../yelp_dataset/business.pkl'
business_df = pd.read_pickle(business_pkl)
# retrieve businesses that have 'Restaurant' and 'Food' in 'categories'
df = business_df[business_df['categories'].notnull()]
df = df[df['categories'].str.contains('Restaurants')]
df = df[df['categories'].str.contains('Food')]
# parse 'attributes.DietaryRestrictions'
df['attributes.DietaryRestrictions'] = df['attributes.DietaryRestrictions'].replace([float('nan'), 'None'], "{'dairy-free': False, 'gluten-free': False, 'vegan': False, 'kosher': False, 'halal': False, 'soy-free': False, 'vegetarian': False}")
df['attributes.DietaryRestrictions'] = df['attributes.DietaryRestrictions'].str.replace("\'", "\"").str.replace("False", "\"False\"").str.replace("True", "\"True\"")
df = df.join(df['attributes.DietaryRestrictions'].apply(json.loads).apply(pd.Series))
# parse 'attributes.Ambience' attribute
df['attributes.Ambience'] = df['attributes.Ambience'].replace(float('nan'), "{'touristy': False, 'hipster': False, 'romantic': False, 'divey': False, 'intimate': False, 'trendy': False, 'upscale': False, 'classy': False, 'casual': False}")
df['attributes.Ambience'] = df['attributes.Ambience'].str.replace("\'", "\"").str.replace("False", "\"False\"").str.replace("True", "\"True\"").str.replace("None", "\"False\"")
df = df.join(df['attributes.Ambience'].apply(json.loads).apply(pd.Series))
# ... add more as necessary
#df.columns
df
###Output
_____no_output_____
###Markdown
Validate dataset properties
###Code
# unique values of 'stars'
print(business_df['stars'].unique())
print(type(business_df['stars'].unique()[0]))
# check types of all 'review_count' values
print(set([type (x) for x in business_df['review_count'].unique()]))
# check types of all 'name' values
print(set([type (x) for x in business_df['name'].unique()]))
#print(business_df['hours.Monday'].unique())
print(set([type (x) for x in business_df['city'].unique()]))
print(set([type (x) for x in business_df['latitude'].unique()]))
print(set([type (x) for x in business_df['longitude'].unique()]))
print(set([type (x) for x in business_df['categories'].unique()]))
for x in business_df['attributes.Ambience'].unique():
if isinstance(x, float):
print(x)
print(set([type (x) for x in business_df['attributes.Ambience'].unique()]))
print(business_df['attributes.Ambience'][0])
# check types of all 'name' values
print(set([type (x) for x in business_df['name'].unique()]))
# check values and types of all 'RestaurantsPriceRange2' values
print(business_df['attributes.RestaurantsPriceRange2'].unique())
print(type(business_df['attributes.RestaurantsPriceRange2'].unique()[-1])) # float or str
# note: not every business has a price range!
###Output
['2' '1' nan '3' '4' 'None' 1.0 2.0 3.0 4.0]
<class 'float'>
###Markdown
Create instances
###Code
onto.Business
onto['CajunRestaurant']
# loop through dataset and create instances
i = 0
for _, row in df.iterrows():
individual = onto.Business(row['business_id'])
# fill 'characteristic' data properties
individual.businessName = row['name']
individual.stars = row['stars']
individual.reviewCount = row['review_count']
# fill 'location' data properties
individual.city = row['city']
individual.latitude = row['latitude']
individual.longitude = row['longitude']
## min/max lat/long for places within 100km of business
r = 100 / 6371
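    # r is the 100 km search radius expressed as an angle in radians (Earth radius ~ 6371 km);
    # d_lon below widens the longitude half-width by ~1/cos(latitude) so the box stays ~100 km wide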
lat_rad = math.radians(row['latitude'])
long_rad = math.radians(row['longitude'])
individual.minLat = math.degrees(lat_rad - r)
individual.maxLat = math.degrees(lat_rad + r)
d_lon = math.asin(math.sin(r) / math.cos(lat_rad))
individual.minLon = math.degrees(long_rad - d_lon)
individual.maxLon = math.degrees(long_rad + d_lon)
# fill in 'operations' data properties
hourAttributes = ['hours.Monday', 'hours.Tuesday', 'hours.Wednesday', 'hours.Thursday', 'hours.Friday', 'hours.Saturday', 'hours.Sunday']
dayPrefixes = ['mon', 'tues', 'wed', 'thurs', 'fri', 'sat', 'sun']
openProperties = [dayPrefix + 'OpenTime' for dayPrefix in dayPrefixes]
closeProperties = [dayPrefix + 'CloseTime' for dayPrefix in dayPrefixes]
for hourAttr, openProp, closeProp in zip(hourAttributes, openProperties, closeProperties):
hours = row[hourAttr]
if isinstance(hours, str):
openTime, closeTime = hours.split('-')
openHour, openMinute = [int(i) for i in openTime.split(':')]
closeHour, closeMinute = [int(i) for i in closeTime.split(':')]
setattr(individual, openProp, openHour + (openMinute * 0.01))
# handle next day scenario
if closeHour < openHour:
setattr(individual, closeProp, 23.59)
else:
setattr(individual, closeProp, closeHour + (closeMinute * 0.01))
# make multi-class individual (assign relevant parent classes)
## categories + dietary restriction (specialization & restaurant type)
categories = row['categories']
if isinstance(categories, str):
categories = categories.split(', ')
# American restaurants
if 'American (Traditional)' in categories:
individual.is_a.append(onto.TraditionalAmericanRestaurant)
if 'American (New)' in categories:
individual.is_a.append(onto.NewAmericanRestaurant)
if 'Cajun/Creole' in categories:
individual.is_a.append(onto.CajunRestaurant)
if 'Tex-Mex' in categories:
individual.is_a.append(onto.TexMexRestaurant)
if 'Southern' in categories:
individual.is_a.append(onto.SouthernRestaurant)
if 'Hawaiian' in categories:
individual.is_a.append(onto.HawaiianRestaurant)
# Asian restaurants
if 'Pan Asian' in categories:
individual.is_a.append(onto.PanAsianRestaurant)
if 'Taiwanese' in categories:
individual.is_a.append(onto.TaiwaneseRestaurant)
if 'Hakka' in categories:
individual.is_a.append(onto.HakkaRestaurant)
if 'Singaporean' in categories:
individual.is_a.append(onto.SingaporeanRestaurant)
if 'Korean' in categories:
individual.is_a.append(onto.KoreanRestaurant)
if 'Japanese' in categories:
individual.is_a.append(onto.JapaneseRestaurant)
if 'Chinese' in categories:
individual.is_a.append(onto.ChineseRestaurant)
if 'Shanghainese' in categories:
individual.is_a.append(onto.ShanghaineseRestaurant)
if 'HongKongStyleCafe' in categories:
individual.is_a.append(onto.HongKongStyleCafe)
if 'Cantonese' in categories:
individual.is_a.append(onto.CantoneseRestaurant)
if 'Asian Fusion' in categories:
individual.is_a.append(onto.AsianFusionRestaurant)
# Specializations
if 'Dumplings' in categories:
individual.specializesIn.append(onto.Dumplings)
if 'Dim Sum' in categories:
individual.specializesIn.append(onto.Dimsum)
diet = row['attributes.DietaryRestrictions']
if 'Vegetarian' in categories or row['vegetarian'] == 'True':
individual.specializesIn.append(onto.Vegetarian)
if 'Vegan' in categories or row['vegan'] == 'True':
individual.specializesIn.append(onto.Vegetarian)
## ambience
if row['casual'] == 'True':
individual.hasAmbience.append(onto.CasualAmbience)
if row['classy'] == 'True':
individual.hasAmbience.append(onto.ClassyAmbience)
if row['divey'] == 'True':
individual.hasAmbience.append(onto.DiveyAmbience)
if row['hipster'] == 'True':
individual.hasAmbience.append(onto.HipsterAmbience)
if row['intimate'] == 'True':
individual.hasAmbience.append(onto.IntimateAmbience)
if row['romantic'] == 'True':
individual.hasAmbience.append(onto.RomanticAmbience)
if row['touristy'] == 'True':
individual.hasAmbience.append(onto.TouristyAmbience)
if row['trendy'] == 'True':
individual.hasAmbience.append(onto.TrendyAmbience)
if row['upscale'] == 'True':
individual.hasAmbience.append(onto.UpscaleAmbience)
# debug
i += 1
if i > 1000:
break # full run takes a long time... save for later
# print(individual.__class__)
# print(row)
# break
###Output
_____no_output_____
###Markdown
Save
###Code
#onto.save(file = '../ontology/businessWithInstances.owl', format = 'rdfxml')
###Output
_____no_output_____
###Markdown
Querying. Goal: input -> best restaurants. Generate [SPARQL queries](https://owlready2.readthedocs.io/en/latest/sparql.html) ([SPARQL documentation](https://www.w3.org/TR/sparql11-query/)). Input format: `{ "lat": double, "lon": double, "day": str ('mon'...'sun'), "time": double (format: hour.minute e.g. 11.45), "categories": list of categories, "ambiences": list of Ambience classes, "minStars": double, "minReviewCount": int}` Querying Experimentation
###Code
list(default_world.sparql("""
SELECT ?x
WHERE {
?x rdfs:subClassOf* business:RomanticAmbience
}
"""))
list(default_world.sparql("""
SELECT ?x
WHERE {
?x rdf:type ?type
?type rdfs:subClassOf* business:Restaurant
?type rdfs:subClassOf* business:TaiwaneseRestaurant
}
"""))
# example template
list(default_world.sparql("""
SELECT ?x ?businessName
WHERE {
?x rdf:type ?type
?type rdfs:subClassOf* business:TaiwaneseRestaurant
?x business:businessName ?businessName
?x business:minLat ?minLat
?x business:maxLat ?maxLat
?x business:minLon ?minLon
?x business:maxLon ?maxLon
FILTER (49 < ?maxLat && 49 > ?minLat && -123 < ?maxLon && -123 > ?minLon)
?x business:monOpenTime ?openTime
?x business:monCloseTime ?closeTime
FILTER (11.45 > ?openTime && 11.45 < ?closeTime)
?x business:stars ?stars
?x business:reviewCount ?reviewCount
FILTER (?stars > 2.0 && ?reviewCount > 0)
?x business:hasAmbience business:CasualAmbience
?x business:hasAmbience business:RomanticAmbience
}
"""))
###Output
_____no_output_____
###Markdown
Querying Template
###Code
# lat = 42.0
# lon = -123.0
lat = None
lon = None
day = 'mon'
time = 11.45
categories = ['TaiwaneseRestaurant']
ambiences = []
minStars = 2.0
minReviewCount = 1
# json = parse(json that nicholas gives us)
# lat, lon ,day ... = json
# lat = json['lat']
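# A hedged sketch of that parsing step (the request-dict shape is my assumption from the markdown above):
# request = {"lat": 42.0, "lon": -123.0, "day": "mon", "time": 11.45,
#            "categories": ["TaiwaneseRestaurant"], "ambiences": [],
#            "minStars": 2.0, "minReviewCount": 1}
# lat, lon = request.get("lat"), request.get("lon")
# day, time = request.get("day"), request.get("time")
# categories = request.get("categories", [])
# ambiences = request.get("ambiences", [])
# minStars, minReviewCount = request.get("minStars"), request.get("minReviewCount")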
query = "SELECT ?x\nWHERE {\n\t?x rdf:type ?type\n\t?x business:businessName ?businessName\n"
if lat is not None and lon is not None:
query += "\t?x business:minLat ?minLat\n\t?x business:maxLat ?maxLat\n\t?x business:minLon ?minLon\n\t?x business:maxLon ?maxLon\n"
query += "\tFILTER (" + str(lat) + " < ?maxLat && " + str(lat) + " > ?minLat && " + str(lon) + " < ?maxLon && " + str(lon) + " > ?minLon)\n"
if day is not None and time is not None:
query += "\t?x business:" + day + "OpenTime ?openTime\n\t?x business:" + day + "CloseTime ?closeTime\n"
query += "\tFILTER (" + str(time) + " > ?openTime && " + str(time) + " < ?closeTime)\n"
if minStars is not None:
query += "\t?x business:stars ?stars\n\tFILTER(?stars > " + str(minStars) + ")\n"
if minReviewCount is not None:
query += "\t?x business:reviewCount ?reviewCount\n\tFILTER(?reviewCount > " + str(minReviewCount) + ")\n"
if len(categories) > 0:
for cat in categories:
"""
if cat in dishes:
query += "\t?x business:hasDish" + cat + "\n"
elif cat in
"""
query += "\t?type rdfs:subClassOf* business:" + cat + "\n"
if len(ambiences) > 0:
for amb in ambiences:
query += "\t?x business:hasAmbience business:" + amb + "\n"
query += "}"
print(query)
results = list(default_world.sparql(query))
print(results)
"""
while len(results) < 10:
# run the query
# before: modify the constraints (e.g. categories.append(children of ancestors))
# lower minStars
# lower minReviewCount
# etc.
results.append(# run the query)
"""
# TODO: develop specific algorithms / strategies for PSM
# (e.g. what happens when user wants to find more results or there are no results at all)
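# One possible relaxation loop (a sketch under assumptions, not the final PSM strategy):
def relax_and_requery(run_query, categories, min_stars, min_review_count, min_results=10):
    """Re-run `run_query` with progressively looser constraints until enough results are found."""
    results = run_query(categories, min_stars, min_review_count)
    while len(results) < min_results and (min_stars > 0 or min_review_count > 0):
        min_stars = max(0.0, min_stars - 0.5)            # loosen the rating constraint
        min_review_count = max(0, min_review_count - 1)  # loosen the review-count constraint
        for r in run_query(categories, min_stars, min_review_count):
            if r not in results:
                results.append(r)
    return results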
# get direct parent classes
parents = []
targetClass = onto.HakkaRestaurant
for ancestor in targetClass.ancestors():
if targetClass in list(ancestor.subclasses()):
parents.append(ancestor)
print(parents)
# get children of direct parent classes
for parent in parents:
print(list(parent.subclasses()))
###Output
[business.CantoneseRestaurant, business.HakkaRestaurant, business.HongKongStyleCafe, business.ShanghaineseRestaurant]
[business.HakkaRestaurant]
###Markdown
Debugging / Scratch Work
###Code
# check inconsistent classes
list(default_world.inconsistent_classes())
###Output
_____no_output_____ |
LeetCode/LeetCode_520DetectCapital.ipynb | ###Markdown
LeetCode 520. Detect Capital Question: https://leetcode.com/problems/detect-capital/ Given a word, you need to judge whether the usage of capitals in it is right or not. We define the usage of capitals in a word to be right when one of the following cases holds: All letters in this word are capitals, like "USA". All letters in this word are not capitals, like "leetcode". Only the first letter in this word is capital if it has more than one letter, like "Google". Otherwise, we define that this word doesn't use capitals in a right way. Example 1: Input: "USA" Output: True Example 2: Input: "FlaG" Output: False Note: The input will be a non-empty word consisting of uppercase and lowercase latin letters. My Solution
###Code
def detectCapitalUse(word):
if word == word.upper() or word == word.lower() or word == word[0].upper()+word[1:].lower():
return True
else:
return False
# test code
w1 = "USA"
w2 = "leetcode"
w3 = "Google"
w4 = "FlaG"
detectCapitalUse(w1)
detectCapitalUse(w2)
detectCapitalUse(w3)
detectCapitalUse(w4)
###Output
_____no_output_____
###Markdown
My Result__Runtime__ : 20 ms, faster than 96.89% of Python online submissions for Detect Capital.__Memory Usage__ : 11.9 MB, less than 9.95% of Python online submissions for Detect Capital. @StefanPochmann's Solution
###Code
def detectCapitalUse(word):
return word.isupper() or word.islower() or word.istitle()
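# str.istitle() is True only when the first cased character is uppercase and the rest are lowercase,
# which is exactly the "Google"-style third rule; isupper()/islower() cover the "USA" and "leetcode" cases.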
detectCapitalUse(w1)
detectCapitalUse(w2)
detectCapitalUse(w3)
detectCapitalUse(w4)
###Output
_____no_output_____ |
notebooks/mysql_tutorial.ipynb | ###Markdown
MySQL 8.0 Reference Manual Basic Steps for MySQL Server Deployment with Docker Starting a MySQL Server Instance MySQL docker official image
###Code
!docker run --name=mysql1 -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysqlpass -d mysql
###Output
_____no_output_____
###Markdown
MySQL docker Oracle image```docker run --name=mysql1 -p 3306:3306 -e MYSQL_ROOT_PASSWORD=mysqlpass -d mysql/mysql-server``` Connecting to MySQL Server from within the Container ```docker exec -it mysql1 mysql -uroot -pmysqlpass``` Container Shell Access ```docker exec -it mysql1 bash ``` Stopping and Deleting a MySQL Container ```docker stop mysql1``` ```docker restart mysql1``` ```docker rm mysql1``` Tutorial Connecting to and Disconnecting from the Server ```mysql -u root -p```
###Code
from sqlalchemy import create_engine, inspect, Table, Column, String, Integer, ForeignKey, MetaData, CHAR, VARCHAR, DATE, select, desc, func, text
from sqlalchemy.sql import and_, or_, not_, alias
from sqlalchemy.dialects.mysql import INTEGER, DECIMAL
import pandas as pd
engine = create_engine('mysql+pymysql://root:mysqlpass@localhost', pool_recycle=3600)
conn = engine.connect()
def query(s):
result = conn.execute(s)
for row in result:
print(row)
###Output
_____no_output_____
###Markdown
Creating and Selecting a Database ```sqlSHOW DATABASES;```
###Code
def get_dbs():
insp = inspect(engine)
dbs = insp.get_schema_names()
print(dbs)
get_dbs()
###Output
_____no_output_____
###Markdown
```sqlCREATE DATABASE menagerie;```
###Code
def set_queries(queries):
for query in queries:
result = conn.execute(query)
queries = [
'DROP DATABASE IF EXISTS menagerie;',
'CREATE DATABASE menagerie;'
]
set_queries(queries)
get_dbs()
conn.close()
engine = create_engine('mysql+pymysql://root:mysqlpass@localhost/menagerie', pool_recycle=3600)
conn = engine.connect()
###Output
_____no_output_____
###Markdown
Creating a Table ```sqlSHOW TABLES;```
###Code
def show_tables():
print(engine.table_names())
show_tables()
###Output
_____no_output_____
###Markdown
```sqlCREATE TABLE pet ( name VARCHAR(20), owner VARCHAR(20), species VARCHAR(20), sex CHAR(1), birth DATE, death DATE);```
###Code
meta = MetaData()
pet = Table('pet', meta,
Column('name', VARCHAR(20)),
Column('owner', VARCHAR(20)),
Column('species', VARCHAR(20)),
Column('sex', CHAR(1)),
Column('birth', DATE),
Column('death', DATE)
)
pet.create(engine)
show_tables()
###Output
_____no_output_____
###Markdown
```sqlDESCRIBE pet;```
###Code
print(meta.tables)
###Output
_____no_output_____
###Markdown
Loading Data into a Table ```sqlINSERT INTO pet VALUES ('Fluffy', 'Harold', 'cat', 'f', '1993-02-04', NULL), ('Claws', 'Diane', 'cat', 'm', '1994-03-17', NULL), ('Buffy', 'Harold', 'dog', 'f', '1989-05-13', NULL), ('Fang', 'Benny', 'dog', 'm', '1990-08-27', NULL), ('Bowser', 'Diane', 'dog', 'm', '1979-08-31', '1995-07-29'), ('Chirpy', 'Gwen', 'bird', 'f', '1998-09-11', NULL), ('Whistler', 'Gwen', 'bird', NULL, '1997-12-09', NULL), ('Slim', 'Benny', 'snake', 'm', '1996-04-29', NULL), ('Puffball', 'Diane', 'hamster', 'f', '1999-03-30', NULL);```
###Code
cols = [str(col).split('.')[1] for col in pet.columns]
ins_data = [
['Fluffy', 'Harold', 'cat', 'f', '1993-02-04', None],
['Claws', 'Gwen', 'cat', 'm', '1994-03-17', None],
['Buffy', 'Harold', 'dog', 'f', '1989-05-13', None],
['Fang', 'Benny', 'dog', 'm', '1990-08-27', None],
['Bowser', 'Diane', 'dog', 'm', '1979-08-31', '1995-07-29'],
['Chirpy', 'Gwen', 'bird', 'f', '1998-09-11', None],
['Whistler', 'Gwen', 'bird', None, '1997-12-09', None],
['Slim', 'Benny', 'snake', 'm', '1996-04-29', None],
['Puffball', 'Diane', 'hamster', 'f', '1999-03-30', None]
]
data = [{col: d for col, d in zip(cols, ds)} for ds in ins_data]
result = conn.execute(pet.insert(), data)
###Output
_____no_output_____
###Markdown
Selecting All Data ```sqlSELECT * FROM pet;```
###Code
s = select([pet])
query(s)
###Output
_____no_output_____
###Markdown
```sqlUPDATE pet SET birth = '1989-08-31' WHERE name = 'Bowser';```
###Code
stmt = pet.update().\
where(pet.c.name == 'Bowser').\
values(birth = '1989-08-31')
result = conn.execute(stmt)
###Output
_____no_output_____
###Markdown
Selecting Particular Rows ```sqlSELECT * FROM pet WHERE name = 'Bowser';```
###Code
s = select([pet]).where(pet.c.name == 'Bowser')
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT * FROM pet WHERE birth >= '1998-1-1';```
###Code
s = select([pet]).where(pet.c.birth >= '1998-1-1')
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT * FROM pet WHERE species = 'dog' AND sex = 'f';```
###Code
s = select([pet]).\
where(
and_(pet.c.species == 'dog', pet.c.sex == 'f')
)
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT * FROM pet WHERE species = 'snake' OR species = 'bird';```
###Code
s = select([pet]).\
where(
or_(pet.c.species == 'snake', pet.c.species == 'bird')
)
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT * FROM pet WHERE (species = 'cat' AND sex = 'm') OR (species = 'dog' AND sex = 'f');```
###Code
s = select([pet]).\
where(
or_(
and_(pet.c.species == 'cat', pet.c.sex == 'm'
),
and_(pet.c.species == 'dog', pet.c.sex == 'f'
)
)
)
query(s)
###Output
_____no_output_____
###Markdown
Selecting Particular Columns ```sqlSELECT name, birth FROM pet;```
###Code
s = select([pet.c.name, pet.c.birth])
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT owner FROM pet;```
###Code
s = select([pet.c.owner])
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT DISTINCT owner FROM pet;```
###Code
s = select([pet.c.owner]).distinct()
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT name, species, birth FROM pet WHERE species = 'dog' OR species = 'cat';```
###Code
s = select([pet.c.name, pet.c.species, pet.c.birth]).\
where(
or_(pet.c.species == 'dog', pet.c.species == 'cat')
)
query(s)
###Output
_____no_output_____
###Markdown
Sorting Rows ```sqlSELECT name, birth FROM pet ORDER BY birth;```
###Code
s = select([pet.c.name, pet.c.birth]).order_by('birth')
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT name, birth FROM pet ORDER BY birth DESC;```
###Code
s = select([pet.c.name, pet.c.birth]).order_by(desc('birth'))
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT name, species, birth FROM pet ORDER BY species, birth DESC;```
###Code
s = select([pet.c.name, pet.c.species, pet.c.birth]).order_by('species', desc('birth'))
query(s)
###Output
_____no_output_____
###Markdown
Date Calculations ```sqlSELECT name, birth, CURDATE(), TIMESTAMPDIFF(YEAR,birth,CURDATE()) AS age FROM pet;```
###Code
s = select([pet.c.name, pet.c.birth, func.curdate(), func.timestampdiff(text('YEAR'), pet.c.birth, func.curdate())])
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT name, birth, CURDATE(), TIMESTAMPDIFF(YEAR,birth,CURDATE()) AS age FROM petORDER BY name;```
###Code
s = select([pet.c.name, pet.c.birth, func.curdate(), func.timestampdiff(text('YEAR'), pet.c.birth, func.curdate()).label('age')]).order_by('name')
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT name, birth, CURDATE(), TIMESTAMPDIFF(YEAR,birth,CURDATE()) AS age FROM petORDER BY age;```
###Code
s = select([pet.c.name, pet.c.birth, func.curdate(), func.timestampdiff(text('YEAR'), pet.c.birth, func.curdate()).label('age')]).order_by('age')
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT name, birth, death, TIMESTAMPDIFF(YEAR,birth,death) AS age FROM petWHERE death IS NOT NULLORDER BY age;```
###Code
s = select([pet.c.name, pet.c.birth, pet.c.death, func.timestampdiff(text('YEAR'), pet.c.birth, pet.c.death).label('age')]).\
where(pet.c.death != None).order_by('age')
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT name, birth, MONTH(birth) FROM pet;```
###Code
s = select([pet.c.name, pet.c.birth, func.month(pet.c.birth)])
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT name, birth FROM pet WHERE MONTH(birth) = 5;```
###Code
s = select([pet.c.name, pet.c.birth]).\
where(func.month(pet.c.birth) == 5)
query(s)
###Output
_____no_output_____
###Markdown
Pattern Matching To find names beginning with b:```sqlSELECT * FROM pet WHERE name LIKE 'b%';```
###Code
s = select([pet]).\
where(pet.c.name.like('b%'))
query(s)
###Output
_____no_output_____
###Markdown
To find names ending with fy:```sqlSELECT * FROM pet WHERE name LIKE '%fy';```
###Code
s = select([pet]).\
where(pet.c.name.like('%fy'))
query(s)
###Output
_____no_output_____
###Markdown
To find names containing a w```sqlSELECT * FROM pet WHERE name LIKE '%w%';```
###Code
s = select([pet]).\
where(pet.c.name.like('%w%'))
query(s)
###Output
_____no_output_____
###Markdown
To find names containing exactly five characters, use five instances of the _ pattern character:```sqlSELECT * FROM pet WHERE name LIKE '%w%';```
###Code
s = select([pet]).\
where(pet.c.name.like('_____'))
query(s)
###Output
_____no_output_____
###Markdown
To find names beginning with b, use ^ to match the beginning of the name:```sqlSELECT * FROM pet WHERE REGEXP_LIKE(name, '^b');```
###Code
s = select([pet]).\
where(func.regexp_like(pet.c.name, '^b'))
query(s)
###Output
_____no_output_____
###Markdown
To find names ending with `fy`, use `$` to match the end of the name:```sqlSELECT * FROM pet WHERE REGEXP_LIKE(name, 'fy%');```
###Code
s = select([pet]).\
where(func.regexp_like(pet.c.name, 'fy$'))
query(s)
###Output
_____no_output_____
###Markdown
To find names containing a `w`, use this query:```sqlSELECT * FROM pet WHERE REGEXP_LIKE(name, 'w');```
###Code
s = select([pet]).\
where(func.regexp_like(pet.c.name, 'w'))
query(s)
###Output
_____no_output_____
###Markdown
To find names containing exactly five characters, use `^` and `$` to match the beginning and end of the name, and five instances of . in between:```sqlSELECT * FROM pet WHERE REGEXP_LIKE(name, '^.....$');```
###Code
s = select([pet]).\
where(func.regexp_like(pet.c.name, '^.....$'))
query(s)
###Output
_____no_output_____
###Markdown
You could also write the previous query using the {**n**} (“*repeat-**n**-times*”) operator:```sqlSELECT * FROM pet WHERE REGEXP_LIKE(name, '^.....$');```
###Code
s = select([pet]).\
where(func.regexp_like(pet.c.name, '^.{5}$'))
query(s)
###Output
_____no_output_____
###Markdown
Counting Rows ```sqlSELECT COUNT(*) FROM pet;```
###Code
s = select([func.count()]).select_from(pet)
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT owner, COUNT(*) FROM pet GROUP BY owner;```
###Code
s = select([pet.c.owner, func.count()]).group_by('owner')
query(s)
###Output
_____no_output_____
###Markdown
Number of animals per species:```sqlSELECT species, COUNT(*) FROM pet GROUP BY species;```
###Code
s = select([pet.c.species, func.count()]).group_by('species')
query(s)
###Output
_____no_output_____
###Markdown
Number of animals per sex:```sqlSELECT sex, COUNT(*) FROM pet GROUP BY sex;```
###Code
s = select([pet.c.sex, func.count()]).group_by('sex')
query(s)
###Output
_____no_output_____
###Markdown
Number of animals per combination of species and sex:```sqlSELECT species, sex, COUNT(*) FROM pet GROUP BY species, sex;```
###Code
s = select([pet.c.species, pet.c.sex, func.count()]).group_by('species', 'sex')
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT species, sex, COUNT(*) FROM petWHERE species = 'dog' OR species = 'cat'GROUP BY species, sex;```
###Code
s = select([pet.c.species, pet.c.sex, func.count()]).\
where(or_(pet.c.species == 'dog', pet.c.species == 'cat')).\
group_by('species', 'sex')
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT species, sex, COUNT(*) FROM petWHERE sex IS NOT NULLGROUP BY species, sex;```
###Code
s = select([pet.c.species, pet.c.sex, func.count()]).\
where(pet.c.sex != None).\
group_by('species', 'sex')
query(s)
###Output
_____no_output_____
###Markdown
Using More Than one Table ```sqlCREATE TABLE event ( name VARCHAR(20), date DATE, type VARCHAR(15), remark VARCHAR(255));```
###Code
meta = MetaData()
event = Table('event', meta,
Column('name', VARCHAR(20)),
Column('date', DATE),
Column('type', VARCHAR(15)),
Column('remark', VARCHAR(255))
)
event.create(engine)
show_tables()
###Output
_____no_output_____
###Markdown
```sqlINSERT INTO event VALUES ('Fluffy', '1995-05-15', 'litter', '4 kittens, 3 female, 1 male'), ('Buffy', '1993-06-23', 'litter', '5 puppies, 2 female, 3 male'), ('Buffy', '1994-06-19', 'litter', '3 puppies, 3 female'), ('Chirpy', '1999-03-21', 'vet', 'needed beak straightened'), ('Slim', '1997-08-03', 'vet', 'broken rib'), ('Bowser', '1991-10-12', 'kennel', NULL), ('Fang', '1991-10-12', 'kennel', NULL), ('Fang', '1998-08-28', 'birthday', 'Gave him a new chew toy'), ('Claws', '1998-03-17', 'birthday', 'Gave him a new flea collar'), ('Whistler', '1998-12-09', 'birthday', 'First birthday');```
###Code
cols = [str(col).split('.')[1] for col in event.columns]
ins_data = [
['Fluffy', '1995-05-15', 'litter', '4 kittens, 3 female, 1 male'],
['Buffy', '1993-06-23', 'litter', '5 puppies, 2 female, 3 male'],
['Buffy', '1994-06-19', 'litter', '3 puppies, 3 female'],
['Chirpy', '1999-03-21', 'vet', 'needed beak straightened'],
['Slim', '1997-08-03', 'vet', 'broken rib'],
['Bowser', '1991-10-12', 'kennel', None],
['Fang', '1991-10-12', 'kennel', None],
['Fang', '1998-08-28', 'birthday', 'Gave him a new chew toy'],
['Claws', '1998-03-17', 'birthday', 'Gave him a new flea collar'],
['Whistler', '1998-12-09', 'birthday', 'First birthday']
]
data = [{col: d for col, d in zip(cols, ds)} for ds in ins_data]
result = conn.execute(event.insert(), data)
s = select([event])
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT pet.name, TIMESTAMPDIFF(YEAR,birth,date) AS age, remarkFROM pet INNER JOIN event ON pet.name = event.nameWHERE event.type = 'litter';```
###Code
s = select([pet.c.name, func.timestampdiff(text('YEAR'), pet.c.birth, text('date')).label('age'), event.c.remark]).\
select_from(pet.join(event, pet.c.name == event.c.name)).\
where(event.c.type == 'litter')
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT p1.name, p1.sex, p2.name, p2.sex, p1.speciesFROM pet AS p1 INNER JOIN pet AS p2ON p1.species = p2.speciesAND p1.sex = 'f' AND p1.death IS NULLAND p2.sex = 'm' AND p2.death IS NULL;```
###Code
p1 = pet.alias('p1')
p2 = pet.alias('p2')
s = select([p1.c.name, p1.c.sex, p2.c.name, p2.c.sex, p1.c.species]).\
select_from(p1.join(p2, and_(
p1.c.species == p2.c.species, and_(
p1.c.sex == 'f', and_(
p1.c.death == None, and_(
p2.c.sex == 'm', p2.c.death == None))))))
query(s)
###Output
_____no_output_____
###Markdown
Examples of Common Queries ```sqlCREATE TABLE shop ( article INT UNSIGNED DEFAULT '0000' NOT NULL, dealer CHAR(20) DEFAULT '' NOT NULL, price DECIMAL(16,2) DEFAULT '0.00' NOT NULL, PRIMARY KEY(article, dealer));INSERT INTO shop VALUES (1,'A',3.45),(1,'B',3.99),(2,'A',10.99),(3,'B',1.45), (3,'C',1.69),(3,'D',1.25),(4,'D',19.95);```
###Code
meta = MetaData()
shop = Table('shop', meta,
Column('article', INTEGER(unsigned=True), server_default='0000', primary_key=True),
Column('dealer', CHAR(20), server_default='', primary_key=True),
Column('price', DECIMAL(16,2), server_default='0.00', nullable=False)
)
shop.create(engine)
show_tables()
raw_data = [
[1, 'A', 3.45], [1, 'B', 3.99], [2, 'A', 10.99], [3, 'B', 1.45],
[3, 'C', 1.69], [3, 'D', 1.25], [4, 'D', 19.95]
]
ins_data = [{col: data for col, data in zip(shop.c.keys(), row)} for row in raw_data]
result = conn.execute(shop.insert(), ins_data)
s = select([shop])
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT * FROM shop ORDER BY article;```
###Code
s = select([shop]).order_by('article')
query(s)
###Output
_____no_output_____
###Markdown
**The Maximum Value for a Column** ```sqlSELECT MAX(article) AS article FROM shop;```
###Code
s = select([func.max(shop.c.article).label('article')])
query(s)
###Output
_____no_output_____
###Markdown
**The Row Holding the Maximum of a Certain Column** ```sqlSELECT article, dealer, priceFROM shopWHERE price=(SELECT MAX(price) FROM shop);```
###Code
s = select([shop.c.article, shop.c.dealer, shop.c.price]).\
where(shop.c.price == (select([func.max(shop.c.price)])))
query(s)
###Output
_____no_output_____
###Markdown
```sqlSELECT article, dealer, priceFROM shopORDER BY price DESCLIMIT 1;```
###Code
s = select([shop.c.article, shop.c.dealer, shop.c.price]).\
order_by(desc(shop.c.price)).limit(1)
query(s)
###Output
_____no_output_____
###Markdown
**Maximum of Column per Group** ```sqlSELECT article, MAX(price) AS priceFROM shopGROUP BY articleORDER BY article;```
###Code
s = select([shop.c.article, func.max(shop.c.price).label('price')]).\
group_by(shop.c.article).\
order_by(shop.c.article)
query(s)
conn.close()
engine = create_engine('mysql+pymysql://root:mysqlpass@localhost', pool_recycle=3600)
# engine = create_engine('mysql+pymysql://root:mysqlpass@localhost/menagerie', pool_recycle=3600)
conn = engine.connect()
insert_list = [(43, 'F', 2.37),(22, 'O', 2.35)]
tablename = 'menagerie.shop'
schema, table_name = tablename.split('.')
meta = MetaData()
shop = Table(table_name, meta, autoload=True, autoload_with=engine, schema=schema)
insert_list = [{col: data for col, data in zip(shop.c.keys(), row)} for row in insert_list]
print(insert_list)
#result = conn.execute(shop.insert(), insert_list)
tablename = 'menagerie.kmeans_us_arrests_data'
schema, table_name = tablename.split('.')
df_train = pd.read_sql_table(table_name, engine, schema=schema)
print(df_train)
print("\nHead(10) of Train data:")
print(df_train.head(10))
# A dictionary to get 'sno' to 'state' mapping. Required for plotting.
df1 = dict(zip(df_train['sno'], df_train['state']))
insp = inspect(engine)
dbs = insp.get_schema_names()
print(dbs)
insp.get_table_names(schema='menagerie')
insp.get_table_comment('kmeans_us_arrests_data', schema='menagerie')
insp.get_columns('kmeans_us_arrests_data', schema='menagerie')
###Output
_____no_output_____ |
notebooks/Mean Decrease in Impurity Importances.ipynb | ###Markdown
How to use MeanDecreaseImpurity class 0. Load packages
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from eml.importances.mdi import MeanDecreaseImpurity
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Load dataset
###Code
iris = load_iris()
###Output
_____no_output_____
###Markdown
2. Train model
###Code
estimator = RandomForestClassifier()
estimator.fit(iris.data, iris.target)
###Output
_____no_output_____
###Markdown
3. Compute importances
###Code
mdi = MeanDecreaseImpurity(use_precompute=False) # mdi importances are already computed in sklearn
mdi.fit(estimator)
importances = mdi.interpret()
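# For reference (my reading of the inline comment above): scikit-learn already exposes these MDI
# scores as estimator.feature_importances_, so the two arrays should agree closely, e.g.:
# np.allclose(importances, estimator.feature_importances_)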
###Output
_____no_output_____
###Markdown
4. Display the results
###Code
features = [iris.feature_names[idx] for idx in np.argsort(importances)]
sorted_importances = np.sort(importances)
plt.barh(y=features, width=sorted_importances)
plt.show()
###Output
_____no_output_____ |
src/RandomForest/.ipynb_checkpoints/jf-model-6-checkpoint.ipynb | ###Markdown
New Model
###Code
import numpy as np, pandas as pd, matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
# `values` (train features) and `labels` (damage_grade) are assumed to be loaded in earlier cells.
important_values = values\
.merge(labels, on="building_id")
important_values.drop(columns=["building_id"], inplace = True)
important_values["geo_level_1_id"] = important_values["geo_level_1_id"].astype("category")
important_values
important_values.shape
X_train, X_test, y_train, y_test = train_test_split(important_values.drop(columns = 'damage_grade'),
important_values['damage_grade'], test_size = 0.2, random_state = 123)
#OneHotEncoding
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status", "age_range_superstructure"]
for feature in features_to_encode:
X_train = encode_and_bind(X_train, feature)
X_test = encode_and_bind(X_test, feature)
X_train
# Use the best parameters according to the grid search
rf_model = RandomForestClassifier(n_estimators = 150,
max_depth = None,
max_features = 50,
min_samples_split = 15,
min_samples_leaf = 1,
criterion = "gini",
verbose=True)
rf_model.fit(X_train, y_train)
rf_model.score(X_train, y_train)
# Compute the F1 score on the held-out test split.
y_preds = rf_model.predict(X_test)
f1_score(y_test, y_preds, average='micro')
importances = pd.DataFrame({"column_name": X_train.columns, "importance": rf_model.feature_importances_})
less_important_values = importances.loc[importances['importance'] < 0.0005]
less_important_columns = less_important_values["column_name"].to_list()
less_important_columns
fig, ax = plt.subplots(figsize = (20,20))
plt.bar(X_train.columns[101:], rf_model.feature_importances_[101:])
plt.xlabel("Features")
plt.xticks(rotation = 90)
plt.ylabel("Importance")
plt.show()
X_train.drop(columns = less_important_columns, inplace = True)
X_test.drop(columns = less_important_columns, inplace = True)
X_train.shape
# # Search for the best values of the three parameters indicated below.
# n_estimators = [65, 100, 135]
# max_features = [0.2, 0.5, 0.8]
# max_depth = [None, 2, 5]
# min_samples_split = [5, 15, 25]
# # min_impurity_decrease = [0.0, 0.01, 0.025, 0.05, 0.1]
# # min_samples_leaf
# hyperF = {'n_estimators': n_estimators,
# 'max_features': max_features,
# 'max_depth': max_depth,
# 'min_samples_split': min_samples_split
# }
# gridF = GridSearchCV(estimator = RandomForestClassifier(random_state = 123),
# scoring = 'f1_micro',
# param_grid = hyperF,
# cv = 3,
# verbose = 1,
# n_jobs = -1)
# bestF = gridF.fit(X_train, y_train)
# res = pd.DataFrame(bestF.cv_results_)
# res.loc[res['rank_test_score'] <= 10]
rf_model_2 = RandomForestClassifier(n_estimators = 135,
max_depth = None,
max_features = 0.2,
min_samples_split = 15,
min_samples_leaf = 1,
criterion = "gini",
verbose=True)
rf_model_2.fit(X_train, y_train)
rf_model_2.score(X_train, y_train)
# Compute the F1 score on the held-out test split.
y_preds = rf_model_2.predict(X_test)
f1_score(y_test, y_preds, average='micro')
test_values = pd.read_csv('../../csv/test_values.csv', index_col = "building_id")
test_values
test_values_subset = test_values
test_values_subset["geo_level_1_id"] = test_values_subset["geo_level_1_id"].astype("category")
test_values_subset
# Average height per floor
test_values_subset['height_percentage_per_floor_pre_eq'] = test_values_subset['height_percentage']/test_values_subset['count_floors_pre_eq']
test_values_subset['volume_percentage'] = test_values_subset['area_percentage'] * test_values_subset['height_percentage']
# Some per-location averages
test_values_subset['avg_age_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['age'].transform('mean')
test_values_subset['avg_area_percentage_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['area_percentage'].transform('mean')
test_values_subset['avg_height_percentage_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['height_percentage'].transform('mean')
test_values_subset['avg_count_floors_for_geo_level_2_id'] = test_values_subset.groupby('geo_level_2_id')['count_floors_pre_eq'].transform('mean')
test_values_subset['avg_age_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['age'].transform('mean')
test_values_subset['avg_area_percentage_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['area_percentage'].transform('mean')
test_values_subset['avg_height_percentage_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['height_percentage'].transform('mean')
test_values_subset['avg_count_floors_for_geo_level_3_id'] = test_values_subset.groupby('geo_level_3_id')['count_floors_pre_eq'].transform('mean')
# Relationship between material (the most important ones according to model 5) and age
test_values_subset['20_yr_age_range'] = test_values_subset['age'] // 20 * 20
test_values_subset['20_yr_age_range'] = test_values_subset['20_yr_age_range'].astype('str')
test_values_subset['superstructure'] = ''
test_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_mud_mortar_stone'], test_values_subset['superstructure'] + 'b', test_values_subset['superstructure'])
test_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_cement_mortar_brick'], test_values_subset['superstructure'] + 'e', test_values_subset['superstructure'])
test_values_subset['superstructure'] = np.where(test_values_subset['has_superstructure_timber'], test_values_subset['superstructure'] + 'f', test_values_subset['superstructure'])
test_values_subset['age_range_superstructure'] = test_values_subset['20_yr_age_range'] + test_values_subset['superstructure']
del test_values_subset['20_yr_age_range']
del test_values_subset['superstructure']
test_values_subset
def encode_and_bind(original_dataframe, feature_to_encode):
dummies = pd.get_dummies(original_dataframe[[feature_to_encode]])
res = pd.concat([original_dataframe, dummies], axis=1)
res = res.drop([feature_to_encode], axis=1)
return(res)
features_to_encode = ["geo_level_1_id", "land_surface_condition", "foundation_type", "roof_type",\
"position", "ground_floor_type", "other_floor_type",\
"plan_configuration", "legal_ownership_status", "age_range_superstructure"]
for feature in features_to_encode:
test_values_subset = encode_and_bind(test_values_subset, feature)
test_values_subset
test_values_subset.columns
less_important_columns_new = filter(lambda c: c in test_values_subset.columns.to_list(), less_important_columns)
test_values_subset.drop(columns = less_important_columns_new, inplace = True)
test_values_subset.drop(columns = list(filter(lambda col: col not in X_train.columns.to_list() , test_values_subset.columns.to_list())), inplace = True)
test_values_subset.shape
# Generate the predictions for the test set.
preds = rf_model_2.predict(test_values_subset)
submission_format = pd.read_csv('../../csv/submission_format.csv', index_col = "building_id")
my_submission = pd.DataFrame(data=preds,
columns=submission_format.columns,
index=submission_format.index)
my_submission.head()
my_submission.to_csv('../../csv/predictions/jf-model-6-submission.csv')
!head ../../csv/predictions/jf-model-6-submission.csv
###Output
_____no_output_____ |
day_03/02_airbnb_data_cleaning.ipynb | ###Markdown
###Code
import pandas as pd
# anchorage_entire_homes_2021-01-04.csv
df = pd.read_csv('https://github.com/gumdropsteve/intro_to_python/raw/main/day_03/anchorage_entire_homes_2021-01-04.csv')
# give me a link
df['a'][0]
# look for similar columns (we want to extract data)
df.head()
# the price column looks like it follows a similar format
df['e'].head()
# ${what we want} / night
type(df['e'][0])
df['e'][0]
first_value = df['e'][0]
# first_value.split(" ")[0]
float(first_value.split(" ")[0][1:]) # took this and added it to base code
# old
def get_room_price(listing):
"""
returns the nightly rate (price) of given listing
"""
price_text = listing.find('div', {'class':'_ls0e43'}).text
price = price_text.split('Price:')
return price[1]
# new (now returns float value of price per night)
def get_room_price(listing):
"""
returns the nightly rate (price) of given listing
"""
price_text = listing.find('div', {'class':'_ls0e43'}).text
price = price_text.split('Price:')
price = price[1]
    price = float(price.split(" ")[0][1:])  # skip the $ and cast to a float
    return price
###Output
_____no_output_____
###Markdown
Extracting Reviews```Rating 4.67 out of 5;4.67333 reviews (333)```
###Code
df['g'][0]
type(df['g'][0])
df['g'].head()
df['g'].tail()
df['g'][0].split(';')
# check string logic (to make sure it looks how you wanted)
'out of 5' in df['g'][0].split(';')[0] # not going to do this now, just something to think about
df['g'][0].split(';')[1]
right_side = df['g'][0].split(';')[1]
right_split = right_side.split(' ')
right_split
detailed_score = right_split[0]
detailed_score = float(detailed_score)
detailed_score
number_of_reviews = right_split[1]
# just a precaution
number_of_reviews = number_of_reviews.strip()
number_of_reviews = number_of_reviews[:-1]
number_of_reviews = number_of_reviews.split('(')
number_of_reviews
number_of_reviews = number_of_reviews[1]
number_of_reviews = int(number_of_reviews)
number_of_reviews
###Output
_____no_output_____
###Markdown
All together
###Code
df['g'][0]
data = df['g'][0]
right_side = data.split(';')[1]
right_split = right_side.split(' ')
# find the detailed average review score (x.xxxxx)
detailed_score = right_split[0]
detailed_score = float(detailed_score)
# find the number of reviews
number_of_reviews = right_split[1]
number_of_reviews = number_of_reviews.strip() # just a precaution
number_of_reviews = number_of_reviews[:-1]
number_of_reviews = number_of_reviews.split('(')
number_of_reviews = number_of_reviews[1]
number_of_reviews = int(number_of_reviews)
# did it work?
detailed_score, number_of_reviews
# before
def get_n_reviews(listing):
'''
Returns the number of reviews
'''
try: # Not all listings have reviews // extraction failed
output = listing.findAll("span", {"class":"_krjbj"})[1].text
except:
output = None # Indicate that the extraction failed -> can indicate no reviews or a mistake in scraping
return output
# after
def get_n_reviews(listing):
'''
Returns the number of reviews
'''
try:
output = listing.findAll("span", {"class":"_krjbj"})[1].text
# focus the right side of the data
right_side = output.split(';')[1]
right_split = right_side.split(' ')
# find the detailed average review score (x.xxxxx)
detailed_score = right_split[0]
detailed_score = float(detailed_score)
# find the number of reviews
number_of_reviews = right_split[1]
number_of_reviews = number_of_reviews.strip() # just a precaution
number_of_reviews = number_of_reviews[:-1]
number_of_reviews = number_of_reviews.split('(')
number_of_reviews = number_of_reviews[1]
number_of_reviews = int(number_of_reviews)
output = (detailed_score, number_of_reviews)
# Not all listings have reviews // extraction failed
except:
output = None # Indicate that the extraction failed -> can indicate no reviews or a mistake in scraping
return output
###Output
_____no_output_____ |
WEEK9IPNaiveBayesClassifier.ipynb | ###Markdown
Python Programming: Naive Bayes Classifier ***Defining the Question*** a) ***Specifying the Data Analytic Question*** Conduct experiments on the dataset by building a Naive Bayes classifier, then calculate the resulting metrics. b) ***Defining the metrics for success*** Implementing a Naive Bayes classifier on the provided dataset. ***c) Understanding the context*** Implementing a Naive Bayes classifier on the provided dataset. i) Dataset 2: The "spam" concept is diverse: advertisements for products/web sites, make-money-fast schemes, chain letters, pornography... The dataset is a collection of spam e-mails that came from our postmaster and individuals who had filed spam. The collection of non-spam e-mails came from filed work and personal e-mails. Data Features: Dataset 2 has the following features: Data Set Characteristics: Multivariate; Number of Instances: 4601; Industry source: Computing; Attribute Characteristics: Integer, Real; Number of Attributes: 57; Missing Values?: Yes. ***d) Recording the Experimental Design*** Steps to implement: - Data pre-processing - EDA - Fitting the model (Naive Bayes) to the training set - Predicting the test results - Testing the accuracy of the results (creation of a confusion matrix) - Visualizing the test set results. ***e) Data Relevance*** The data used for this project is necessary for building a model that implements the Naive Bayes classifier [https://archive.ics.uci.edu/ml/datasets/Spambase]. ***Importing the required libraries***
###Code
#importing the neccessary libraries
# Importing libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
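###Markdown
A minimal sketch (an assumption about the later modelling step, not one of the notebook's own cells) of the Naive Bayes fit-and-evaluate step listed in the experimental design; it assumes a cleaned feature matrix `X` and the `class` target `y`:
###Code
# Hedged sketch: `X` and `y` are placeholders for the cleaned features and the 'class' column.
def fit_and_score_naive_bayes(X, y):
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB
    from sklearn.metrics import confusion_matrix, accuracy_score
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    model = GaussianNB().fit(X_train, y_train)
    preds = model.predict(X_test)
    return confusion_matrix(y_test, preds), accuracy_score(y_test, preds)
###Output
_____no_output_____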
###Markdown
***Loading Data***
###Code
#reading datasets
df = pd.read_csv('/content/spambase.data')
###Output
_____no_output_____
###Markdown
***Checking Data***
###Code
# Previewing the top of our dataset
#
df.head()
# Previewing the bottom of our dataset
#
df.tail()
#Checking information about our dataset
df.info()
#checking the shape for the datasets
df.shape
#Checking our columns
df.columns
#Summary of descriptive statistics
df.describe()
###Output
_____no_output_____
###Markdown
***Data Cleaning*** ***Validity***
###Code
df.columns
#Checking for outliers
check=['0', '0.64', '0.64.1', '0.1', '0.32', '0.2', '0.3', '0.4', '0.5', '0.6']
plt.subplots(figsize=(10,10))
df.boxplot(check)
plt.show()
#Checking for outliers
check = ['0.7', '0.64.2', '0.8', '0.9', '0.10', '0.32.1', '0.11', '1.29', '1.93',
'0.12']
plt.subplots(figsize=(10,10))
df.boxplot(check)
plt.show()
#Checking for outliers
check=['0.96', '0.13', '0.14', '0.15', '0.16', '0.17', '0.18', '0.19',
'0.20', '0.21', '0.22']
plt.subplots(figsize=(10,10))
df.boxplot(check)
plt.show()
#Checking for outliers
check=['0.23', '0.24', '0.25', '0.26', '0.27', '0.28',
'0.29', '0.30', '0.31', '0.32.2', '0.33']
plt.subplots(figsize=(10,10))
df.boxplot(check)
plt.show()
#Checking for outliers
check=['0.34', '0.35', '0.36','0.37', '0.38', '0.39', '0.40', '0.41', '0.42', '0.778', '0.43', '0.44',
'3.756', '61', '278', '1']
plt.subplots(figsize=(10,10))
df.boxplot(check)
plt.show()
# checking for anomalies
q11 = df['1'].quantile(.25)
q31 = df['1'].quantile(.75)
iqr11 = q31 - q11
iqr11
##
q11, q31 = np.percentile(df['1'], [25, 75])
iqr = q31 - q11
l_bound = q11 - (1.5*iqr)
u_bound = q31 + (1.5 * iqr)
print(iqr11, iqr)
# there are no anomalies in the data
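# (added) explicit check of the IQR bounds computed above: count values of the
# class column that fall outside [l_bound, u_bound]; this was asserted, but
# never actually computed, in the original cell
n_outside_bounds = ((df['1'] < l_bound) | (df['1'] > u_bound)).sum()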
###Output
1.0 1.0
###Markdown
***Completeness***
###Code
#Checking for null values
df.isnull().sum()
###Output
_____no_output_____
###Markdown
***Consistency***
###Code
#Checking for duplicates
df.duplicated().sum()
#Dropping duplicates
df.drop_duplicates(inplace=True)
###Output
_____no_output_____
###Markdown
***Uniformity***
###Code
df.shape
df.columns
df
df.rename(columns={'0':'word_freq_make','0.64':'word_freq_address','0.64.1':'word_freq_all', '0.1':'word_freq_3d',
'0.32':'word_freq_our','0.2':'word_freq_over','0.3':'word_freq_remove','0.4':'word_freq_internet',
'0.5':'word_freq_order','0.6':'word_freq_mail','0.7':'word_freq_receive','0.64.2':'word_freq_will',
'0.8':'word_freq_people','0.9':'word_freq_report','0.10':'word_freq_address',
'0.32.1':'word_freq_free','0.11':'word_freq_business', '1.29':'word_freq_email',
'1.93':'word_freq_you','0.12':'word_freq_credit','0.96':'word_freq_your','0.13':'word_freq_font',
'0.14':'word_freq_000','0.15':'word_freq_money','0.16':'word_freq_hp','0.17':'word_freq_hpl',
'0.18':'word_freq_george','0.19':'word_freq_650','0.20':'word_freq_lab','0.21':'word_freq_labs',
'0.22':'word_freq_telnet','0.23':'word_freq_857','0.24':'word_freq_data','0.25':'word_freq_415',
'0.26':'word_freq_85','0.27':'word_freq_technology','0.28':'word_freq_1999','0.29':'word_freq_parts',
'0.30':'word_freq_pm','0.31':'word_freq_direct','0.32.2':'word_freq_cs','0.33':'word_freq_meeting',
'0.34':'word_freq_original','0.35':'word_freq_project', '0.36':'word_freq_re','0.37':'word_freq_edu',
'0.38':'word_freq_table','0.39':'word_freq_conference','0.40':'char_freq_%3B','0.41':'char_freq_%28',
'0.42':'char_freq_%5B','0.778':'char_freq_%21','0.43':'char_freq_%24','0.44':'char_freq_%23',
'3.756':'capital_run_length_average','61':'capital_run_length_longest','278':'capital_run_length_total', '1':'class'},inplace=True)
#Converting columns to lowercase
df.columns = df.columns.str.strip().str.lower()
df.columns
###Output
_____no_output_____
###Markdown
***Exploratory Data Analysis*** ***Univariate Data Analysis***
###Code
df.describe()
import seaborn as sns
import seaborn as sns
import matplotlib.pyplot as plt
# Bar plot
plt.figure(figsize=(10,5))
sns.countplot(x='class', data=df)
plt.title('Barchart for class',fontweight='bold',fontsize=15)
plt.xlabel('Class',fontweight='bold',fontsize=15)
plt.ylabel('count',fontweight='bold',fontsize=15)
plt.show()
# Histogram
plt.figure(figsize=(10,5))
sns.distplot(df['class'])
plt.title('Histogram for class',fontweight='bold',fontsize=15)
plt.xlabel('Class',fontweight='bold',fontsize=15)
plt.ylabel('Density',fontweight='bold',fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
> We can see that most of the emails were considered not to be spam ***Bivariate Analysis***
###Code
#heatmap
#checking for correlation using spearman method
plt.figure(figsize=(40,40))
correlation_matrix=df.corr(method = 'spearman')
sns.heatmap(correlation_matrix, xticklabels=correlation_matrix.columns, yticklabels=correlation_matrix.columns, annot = False)
plt.xticks( rotation=45)
plt.title('Correlation between variables')
plt.show()
df.corr()
###Output
_____no_output_____
###Markdown
***Implementing the Solution*** ***Naive Bayes Classifier***
###Code
# the exercise expects us to implement a Naive Bayes classifier.
# it is an experiment that demands the metrics be calculated carefully and
# all observations noted.
# therefore, after splitting the dataset into two parts, i.e. an 80-20 split,
# we have to further make conclusions based on second and third experiments with
# different partitioning schemes: 70-30 and 60-40.
# this experiment expects a computation of the accuracy metric, which is the
# percentage of correct classifications.
# it is then required that the confusion matrix be calculated and
# optimization done on the models.
# the whole process is as below
# gaussian
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from scipy.stats import norm
x = np.linspace(-5, 5)
y = norm.pdf(x)
plt.plot(x, y)
plt.vlines(ymin=0, ymax=0.4, x=1, colors=['red'])
###Output
_____no_output_____
###Markdown
***PART 1: 80:20 partition***
###Code
# importing the required libraries
from sklearn.model_selection import train_test_split
from sklearn import feature_extraction, model_selection, naive_bayes, metrics, svm
import numpy as np
from sklearn.naive_bayes import BernoulliNB
###Output
_____no_output_____
###Markdown
***Preprocessing***
###Code
# preprocessing
X = df.drop('class',axis=1).values
y = df['class'].values
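# --- added sketch (not part of the original notebook) ---
# Compact preview of Parts 1-3 below: one GaussianNB fit per partitioning
# scheme (80-20, 70-30, 60-40), returning the held-out accuracy.
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

def run_partition_experiment(test_size, random_state=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=test_size,
                                              random_state=random_state)
    clf = GaussianNB().fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

partition_accuracies = {size: run_partition_experiment(size) for size in (0.2, 0.3, 0.4)}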
###Output
_____no_output_____
###Markdown
***Splitting Data***
###Code
# Splitting our data into a training set and a test set
#
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# check the shapes of the train and test sets
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
###Output
(3367, 57)
(3367,)
(842, 57)
(842,)
###Markdown
***Training our Algorithm***
###Code
# Training our model
#
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
model = clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
***Making prediction***
###Code
import numpy as np
# Predicting our test predictors
predicted = model.predict(X_test)
print(np.mean(predicted == y_test))
# Predicting the Test set results
y_pred = clf.predict(X_test)
y_pred
# evaluating the algorithm
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
# Compute the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# Classification metrics
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.97 0.71 0.82 495
1 0.70 0.97 0.81 347
accuracy 0.81 842
macro avg 0.83 0.84 0.81 842
weighted avg 0.86 0.81 0.82 842
###Markdown
***PART 2: 70:30 partition*** ***Splitting Data***
###Code
# Splitting our data into a training set and a test set
#
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
# check the shapes of the train and test sets
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
###Output
(2946, 57)
(2946,)
(1263, 57)
(1263,)
###Markdown
***Training our Algorithm***
###Code
# Training our model
#
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
model = clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
***Making prediction***
###Code
import numpy as np
# Predicting our test predictors
predicted = model.predict(X_test)
print(np.mean(predicted == y_test))
# Predicting the Test set results
y_pred = clf.predict(X_test)
y_pred
# evaluating the algorithm
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
# Compute the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# Classification metrics
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.96 0.73 0.83 737
1 0.72 0.96 0.82 526
accuracy 0.83 1263
macro avg 0.84 0.84 0.82 1263
weighted avg 0.86 0.83 0.83 1263
###Markdown
***PART 3: 60:40 partition*** ***Splitting Data***
###Code
# Splitting our data into a training set and a test set
#
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4, random_state=0)
# check the shapes of the train and test sets
print(X_train.shape)
print(y_train.shape)
print(X_test.shape)
print(y_test.shape)
###Output
(2525, 57)
(2525,)
(1684, 57)
(1684,)
###Markdown
***Training our Algorithm***
###Code
# Training our model
#
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
model = clf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
***Making prediction***
###Code
import numpy as np
# Predicting our test predictors
predicted = model.predict(X_test)
print(np.mean(predicted == y_test))
# Predicting the Test set results
y_pred = clf.predict(X_test)
y_pred
# evaluating the algorithm
from sklearn.metrics import classification_report, confusion_matrix
print(confusion_matrix(y_test, y_pred))
# Compute the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_test, y_pred)
# Classification metrics
print(classification_report(y_test, y_pred))
###Output
precision recall f1-score support
0 0.96 0.74 0.84 994
1 0.72 0.96 0.82 690
accuracy 0.83 1684
macro avg 0.84 0.85 0.83 1684
weighted avg 0.86 0.83 0.83 1684
|
test_peakfit.ipynb | ###Markdown
Test peak fit Implemented peak functions
###Code
x = np.linspace(-3, 3, 123)
f = Gauss()
plt.plot(x, f(x, 0, 1, 1), label=f.name);
f = Lorentzian()
plt.plot(x, f(x, 0, 1, 1), label=f.name);
f = PseudoVoigt()
plt.plot(x, f(x, 0, 1, 1, 0.5), label=f.name);
plt.plot([-.5, .5], [.5, .5], 'o-k', label='FWHM'); # test FWHM
plt.xlabel('x'); plt.ylabel('y'); plt.legend();
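# --- added check (not in the original notebook) ---
# The peaks above are parameterised as f(x, x0, fwhm, amplitude), so each curve
# should reach half its amplitude at x0 +/- fwhm/2 (the black 'FWHM' segment).
# Independent numeric check for the Gaussian case (fwhm = 1, amplitude = 1):
sigma = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))           # sigma corresponding to fwhm = 1
assert np.isclose(np.exp(-0.5 * (0.5 / sigma) ** 2), 0.5)  # half maximum at x = fwhm/2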
###Output
_____no_output_____
###Markdown
Simple fit
###Code
# Generate random data
x = np.linspace(-5, 5, 123)
y = 7 + 0.1*np.random.randn(*x.shape)
y += Gauss()(x, 0.5, 1, 1)
# Fit using automatic estimation of initial parameters:
results, fit = peakfit(x, y, Gauss())
# _note:_ a linear slope is by default included
# set background=None to prevent this
for r in results:
print(r)
# Graph
plot_results(x, y, results, fit,
save_path='./example/',
save_name='simple_fit');
###Output
{'function': 'Gaussian', 'x0': 0.5118165702655286, 'x0_std': 0.01919764032426884, 'fwhm': 1.0669309886475926, 'fwhm_std': 0.047420959053825144, 'amplitude': 0.9555028940163733, 'amplitude_std': 0.03555326587415687}
{'function': 'Linear', 'slope': 0.0006902393525260128, 'slope_std': 0.0027897680827435943, 'intercept': 7.0042950253252085, 'intercept_std': 0.009236314095164573}
###Markdown
With a linear background
###Code
# Generate random data data
x = np.linspace(-5, 5, 123)
y = 7 + 0.1*x + 0.1*np.random.randn(*x.shape)
y += Gauss()(x, 0.5, 1, 1)
# Fit using manual estimation of initial parameters:
results, fit = peakfit(x, y, Gauss(0, 1, 1))
for r in results:
print(r)
# Graph
plot_results(x, y, results, fit);
# Generate data
x = np.linspace(-5, 5, 123)
y = 0.1*x + 0.1*np.random.randn(*x.shape)
y += Gauss()(x, 0.5, 1, 1)
# Fit without the linear background:
results, fit = peakfit(x, y, Gauss(0.6, 1, 1), background=None)
for r in results:
print(r)
# Graph
plot_results(x, y, results, fit);
###Output
{'function': 'Gaussian', 'x0': 0.5374215110604311, 'x0_std': 0.06410456664179134, 'fwhm': 1.0976572764111254, 'fwhm_std': 0.1509547191299359, 'amplitude': 0.9896408861979163, 'amplitude_std': 0.11786081800219353}
###Markdown
Multi-peak
###Code
# Generate random data
x = np.linspace(-6.5, 4.5, 234)
y = 0.1*np.random.randn(*x.shape)
y += Gauss()(x, 0.5, 1, 0.8)
y += Gauss()(x, -1.5, 1.5, 1.)
# Fit using automatic estimation of initial parameters:
results, fit = peakfit(x, y, Sum(Gauss(-2, 1, 1), Gauss(1, 1, 1)))
for r in results:
print(r)
# Graph
plot_results(x, y, results, fit);
###Output
{'function': 'Gaussian', 'x0': -1.5342658570922274, 'x0_std': 0.01938369604706734, 'fwhm': 1.4940701179437883, 'fwhm_std': 0.05129276877774841, 'amplitude': 1.0077303029190006, 'amplitude_std': 0.0260197815261413}
{'function': 'Gaussian', 'x0': 0.48911148847371666, 'x0_std': 0.021199879104210197, 'fwhm': 1.0831820847647569, 'fwhm_std': 0.05290493921205561, 'amplitude': 0.7985987496009366, 'amplitude_std': 0.029712063330130437}
{'function': 'Linear', 'slope': -0.0038231893464169254, 'slope_std': 0.002090543914741856, 'intercept': -0.023058286473366822, 'intercept_std': 0.008830983067596176}
###Markdown
Pseudo Voigt
###Code
# Generate random data
x = np.linspace(-5, 5, 211)
y = 0.02*np.random.randn(*x.shape)
y += PseudoVoigt()(x, 0.4, 1, 1, 0.4)
# Fit using automatic estimation of initial parameters:
results, fit = peakfit(x, y, PseudoVoigt(), background=None)
for r in results:
print(r)
# Graph
f = plot_results(x, y, results, fit, save_path='./example');
###Output
{'function': 'PseudoVoigt', 'x0': 0.3967750860645064, 'x0_std': 0.003137110021380574, 'fwhm': 1.0089968349406078, 'fwhm_std': 0.010269330634502786, 'amplitude': 1.000651448113745, 'amplitude_std': 0.006742408843577464, 'eta': 0.40667823791617225, 'eta_std': 0.026832559976803713}
|
examples/toymodels.ipynb | ###Markdown
Logistic Regression
###Code
# We first create a toy dataset that is linear separable
# We chose a simple classification model with decision boundary being 4x1 - 3x2 > 1
np.random.seed(42)
x = np.random.randn(200, 2)
y = ((4 * x[:, 0] - 3 * x[:, 1]) > 1).astype(int)
x = x + np.random.randn(200, 2) * 0.1
plot_df = {
'x1': x[:, 0],
'x2': x[:, 1],
'y': y
}
print(x.shape, y.shape)
sns.set()
plt.figure(figsize=(8, 7))
sns.scatterplot(data=plot_df, x='x1', y='x2', hue=y)
x_plot = np.arange(-2, 3)
y_plot = (4 * x_plot - 1) / 3
sns.lineplot(x=x_plot, y=y_plot, color='green', linestyle='--', label='boundary')
plt.title('toy data and decision boundary')
plt.show()
# define a simple logistic regression model
class MyModel(str.Module):
def __init__(self):
super().__init__()
self.register_param(w1=str.tensor(np.random.randn()))
self.register_param(w2=str.tensor(np.random.randn()))
self.register_param(b=str.tensor(np.random.randn()))
def forward(self, x):
w1 = self.params['w1'].repeat(x.shape[0])
w2 = self.params['w2'].repeat(x.shape[0])
b = self.params['b'].repeat(x.shape[0])
y = w1 * str.tensor(x[:, 0]) + w2 * str.tensor(x[:, 1]) + b
return y
# define loss function and optimizer
model = MyModel()
criterion = str.BCELoss()
opt = SGD(model.parameters(), lr=0.1, momentum=0.9)
# training using SGD with momentum
losses = []
accs = []
for epoch in trange(100):
outputs = model(x)
targets = str.tensor(y.astype(float))
loss = criterion(targets, outputs)
preds = (outputs.data > 0.5).astype(int)
acc = (preds == y).mean()
opt.zero_grad()
loss.backward()
opt.step()
losses.append(float(loss.data))
accs.append(acc)
loss_df = {
'epoch': np.arange(1, 101),
'BCE Loss': losses,
'Accuracy': accs
}
sns.lineplot(data=loss_df, x='epoch', y='BCE Loss', label='BCE Loss')
sns.lineplot(data=loss_df, x='epoch', y='Accuracy', label='Accuracy')
plt.title('Training Metrics vs. Epoch')
plt.show()
plot_df = {
'x1': x[:, 0],
'x2': x[:, 1],
'y': y
}
w1 = float(model.params['w1'].data)
w2 = float(model.params['w2'].data)
b = float(model.params['b'].data)
sns.set()
plt.figure(figsize=(8, 7))
sns.scatterplot(data=plot_df, x='x1', y='x2', hue=y)
x_plot = np.arange(-2, 3)
y_plot = (4 * x_plot - 1) / 3
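# (added note) the predicted boundary below solves w1*x1 + w2*x2 + b = 0 for x2,
# i.e. x2 = -(w1*x1 + b) / w2, using the fitted parameters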
pred_plot = -(w1 * x_plot + b) / w2
sns.lineplot(x=x_plot, y=y_plot, color='green', linestyle='--', label='boundary')
sns.lineplot(x=x_plot, y=pred_plot, color='red', linestyle='--', label='predicted boundary')
plt.title('toy data and decision boundary')
plt.show()
###Output
_____no_output_____
###Markdown
Polynomial Regression
###Code
x = np.random.randn(500)
x_sq = (x ** 2)
x_cb = (x ** 3)
y = 4 * x - 5 * x_sq + 3 * x_cb + np.random.randn(500) * 3 - 2
sns.set()
plt.figure(figsize=(8, 7))
sns.scatterplot(x=x, y=y, label='data points')
x_plot = np.arange(-3, 3, 0.01)
y_plot = 4 * x_plot - 5 * (x_plot ** 2) + 3 * (x_plot ** 3)
sns.lineplot(x=x_plot, y=y_plot, color='red', linestyle='--', label='ideal curve')
plt.title('toy data')
plt.show()
# define a polynomial regression model
class PolyModel(str.Module):
def __init__(self):
super().__init__()
self.register_param(w1=str.tensor(np.random.randn()))
self.register_param(w2=str.tensor(np.random.randn()))
self.register_param(w3=str.tensor(np.random.randn()))
self.register_param(b=str.tensor(np.random.randn()))
def forward(self, x):
w1 = self.params['w1'].repeat(x.shape[0])
w2 = self.params['w2'].repeat(x.shape[0])
w3 = self.params['w3'].repeat(x.shape[0])
b = self.params['b'].repeat(x.shape[0])
y = w1 * str.tensor(x) + w2 * str.tensor(x ** 2) + w3 * str.tensor(x ** 3) + b
return y
model = PolyModel()
criterion = str.MSELoss()
opt = SGD(model.parameters(), lr=0.001, momentum=0.9)
# training using SGD with momentum
losses = []
for epoch in trange(100):
outputs = model(x)
targets = str.tensor(y.astype(float))
loss = criterion(targets, outputs)
opt.zero_grad()
loss.backward()
opt.step()
losses.append(float(loss.data))
loss_df = {
'epoch': np.arange(1, 101),
'MSE Loss': losses,
}
sns.lineplot(data=loss_df, x='epoch', y='MSE Loss', label='MSE Loss')
plt.title('Training MSE vs. Epoch')
plt.show()
w1 = model.params['w1'].data
w2 = model.params['w2'].data
w3 = model.params['w3'].data
b = model.params['b'].data
sns.set()
plt.figure(figsize=(8, 7))
sns.scatterplot(x=x, y=y, label='data points')
x_plot = np.arange(-3, 3, 0.01)
y_plot = 4 * x_plot - 5 * (x_plot ** 2) + 3 * (x_plot ** 3)
y_pred = w1 * x_plot + w2 * (x_plot ** 2) + w3 * (x_plot ** 3) + b
sns.lineplot(x=x_plot, y=y_plot, color='red', linestyle='--', label='ideal curve')
sns.lineplot(x=x_plot, y=y_pred, color='green', linestyle='--', label='predicted curve')
plt.title('toy data')
plt.show()
###Output
_____no_output_____ |
Freezing_A_Model/Freezing a TensorFlow Model.ipynb | ###Markdown
Freezing a TensorFlow Model
###Code
import tensorflow as tf
import numpy as np
from tensorflow.python.tools import freeze_graph
###Output
_____no_output_____
###Markdown
Required files: 1) Saved model 2) Saved model's graph
###Code
# Freeze the graph
save_path="/Users/Enkay/Documents/Viky/python/lecture/freeze/model_files/" #directory to model files
MODEL_NAME = 'Sample_model' #name of the model optional
input_graph_path = save_path+'savegraph.pbtxt'#complete path to the input graph
checkpoint_path = save_path+'model.ckpt' #complete path to the model's checkpoint file
input_saver_def_path = ""
input_binary = False
output_node_names = "output" #output node's name. Should match to that mentioned in your code
restore_op_name = "save/restore_all"
filename_tensor_name = "save/Const:0"
output_frozen_graph_name = save_path+'frozen_model_'+MODEL_NAME+'.pb' # the name of .pb file you would like to give
clear_devices = True
freeze_graph.freeze_graph(input_graph_path, input_saver_def_path,
input_binary, checkpoint_path, output_node_names,
restore_op_name, filename_tensor_name,
output_frozen_graph_name, clear_devices, "")
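# --- added sketch (not in the original notebook) ---
# Loading the frozen .pb back into a graph, assuming the same TF 1.x API used above:
with tf.gfile.GFile(output_frozen_graph_name, 'rb') as f:
    frozen_graph_def = tf.GraphDef()
    frozen_graph_def.ParseFromString(f.read())
with tf.Graph().as_default() as frozen_graph:
    tf.import_graph_def(frozen_graph_def, name='')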
###Output
INFO:tensorflow:Restoring parameters from /Users/Enkay/Documents/Viky/python/lecture/freeze/model_files/model.ckpt
INFO:tensorflow:Froze 2 variables.
Converted 2 variables to const ops.
|
test_nb/1PN_ecc_burst.ipynb | ###Markdown
Definitions
###Code
GMsun = 1.32712440018e20 # m^3/s^2
c = 299792458 # m/s
Rsun = GMsun / c**2
Tsun = GMsun / c**3
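# (added) sanity check on the geometric-unit constants above:
# GM_sun/c^2 ~ 1476.6 m and GM_sun/c^3 ~ 4.93 microseconds
assert abs(Rsun - 1476.6) < 1.0
assert abs(Tsun - 4.925e-6) < 1e-8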
###Output
_____no_output_____
###Markdown
Newtonian orbital parameters
###Code
def dvp_N(vp, de, eta):
"""Newtonian change in pericenter speed
eq. 71 of L&Y
"""
return -13*np.pi/96 * eta * vp**6 * (1 + 44/65*de)
def dde_N(vp, de, eta):
"""Newtonian change in eccentricity
eq. 72 of L&Y
"""
return 85*np.pi/48 * eta * vp**5 * (1 + 791/850*de)
def T_N(vp, de, Mtot=1):
"""Newtonian orbital period (time between bursts)
eq. 76 of L&Y
"""
return 2*np.pi * Mtot / vp**3 * ((2-de)/de)**(3/2)
def f_N(vp, de, Mtot=1):
"""Newtonian GW frequency of burst
eq. 79 of L&Y
"""
return vp**3 / (2*np.pi*Mtot * (2-de))
###Output
_____no_output_____
###Markdown
second order ($\frac{v^2}{c^2}$) corrections for orbits
###Code
def V2(de, eta):
"""1PN correction to change in pericenter speed
order (v/c)^2
eq. 73 of L&Y
"""
return -251/104*eta + 8321/2080 + de*(14541/6760*eta - 98519/135200)
def D2(de, eta):
"""1PN correction to change in eccentricity
order (v/c)^2
eq. 74 of L&Y
"""
return -4017/680*eta + 4773/800 + de*(225393/144500*eta - 602109/340000)
def P2(de, eta):
"""1PN correction to orbital period (time between bursts)
order (v/c)^2
eq. 77 of L&Y
"""
return 3/2*eta - 3/4 + de*(-5/8*eta + 3/4)
def R2(de, eta):
"""1PN correction to GW frequency
order (v/c)^2
eq. 80 of L&Y
"""
return 7/4*eta - 5/2 - 5/8*de*eta
###Output
_____no_output_____
###Markdown
1PN orbital parameters (next given prev)
###Code
def dvp_1PN(vp0, de0, eta):
"""1PN change in pericenter speed
eq. 69 of L&Y
"""
return dvp_N(vp0, de0, eta) * (1 + V2(de0, eta)*vp0**2)
def dde_1PN(vp0, de0, eta):
"""1PN change in eccentricity
eq. 70 of L&Y
"""
return dde_N(vp0, de0, eta) * (1 + D2(de0, eta)*vp0**2)
def T_PN(vp0, de0, eta, Mtot=1):
"""1PN orbital period (time between bursts)
eq. 75 of L&Y
"""
vp1 = vp0 + dvp_1PN(vp0, de0, eta)
de1 = de0 + dde_1PN(vp0, de0, eta)
return T_N(vp1, de1, Mtot) * (1 + P2(de1, eta)*vp1**2)
def f_PN(vp0, de0, eta, Mtot=1):
"""1PN GW frequency of burst
eq. 78 of L&Y
"""
vp1 = vp0 + dvp_1PN(vp0, de0, eta)
de1 = de0 + dde_1PN(vp0, de0, eta)
return f_N(vp1, de1, Mtot) * (1 - R2(de1, eta)*vp1**2)
###Output
_____no_output_____
###Markdown
testing
###Code
tf_correct = [[-1754.0, 0.021160242920888948],
[-614.0, 0.023075408530304677],
[-203.0, 0.027441440005832807],
[0.0, 0.05255602595335716]
]
Mtot = 30
q = 0.25
eta = q / (1+q)**2
t_star, f_star = tf_correct[2]
t_next, f_next = tf_correct[3]
def diff_for(de):
vp = (2*np.pi*f_star * (2-de))**(1/3)
t = t_star + T_PN(vp, de, eta)
f = f_PN(vp, de, eta)
return np.abs(t-t_next)
des = np.arange(0.01, 0.99, 1e-2)
diff = [diff_for(x) for x in des]
plt.plot(des, diff)
de + dde_1PN(vp, de, eta)
de = 0.95
vp = (2*np.pi*f_star * (2-de))**(1/3)
t = t_star + T_PN(vp, de, eta)
f = f_PN(vp, de, eta)
t,f
f_next
vps = np.arange(0.05, 0.95, 0.01)
des = np.arange(0.01, 0.50, 0.01)
period = np.zeros([len(vps), len(des)])
for ii,vp in enumerate(vps):
for jj,de in enumerate(des):
period[ii,jj] = T_N(vp, de)
np.log10(np.diff(np.array(tf_correct).T[0]))
dT = np.log10(np.diff(np.array(tf_correct).T[0]))
#plt.pcolormesh(des, vps, np.log10(period))
plt.contourf(des, vps, np.log10(period))#, colors='k')
plt.colorbar()
plt.scatter([0.25, 0.3, 0.35, 0.4], [0.83, 0.75, 0.68, 0.63], marker='x', s=100, color='r')
plt.ylabel(r"$v/c$")
plt.xlabel(r"$\delta e = 1-e$")
np.diff(np.array(tf_correct).T[0])
de = 0.4
vp = (2*np.pi*f_star * (2-de))**(1/3)
print("de = {:.2f}, vp = {:.3f}".format(de, vp))
vp=0.53
print(T_N(vp, de))
print(T_PN(vp, de, eta))
from scipy.optimize import minimize
Mtot = 30
q = 0.25
eta = q / (1+q)**2
t_star, f_star = tf_correct[2]
t_next, f_next = tf_correct[3]
sigT = 5 # hit 203 +/- 5
sigF = 0.005 # hit 0.052 +/- 0.005
def diff_for(x):
vp, de = x
t = t_star + T_PN(vp, de, eta)
f = f_PN(vp, de, eta)
return (t-t_next)**2/sigT**2 + (f-f_next)**2/sigF**2
guess = [0.50, 0.40]
cnstrnt = [{'type':'ineq', 'fun':lambda x: x[0]},
{'type':'ineq', 'fun':lambda x: x[1]}] # both params must be non-negative
result = minimize(diff_for, guess, method='SLSQP', constraints=cnstrnt)
print(result.x)
vp, de = result.x
T_PN(vp, de, eta)
f_PN(vp, de, eta)
tf_correct
fun = lambda x: x[1]
fun(result.x)
###Output
_____no_output_____ |
test_data/visualizeInputData.ipynb | ###Markdown
visualize Input Data This is expected to be run with the csv-file generated in importTestData.ipynb
###Code
import pandas as pd
import matplotlib.pyplot as plt
import math
import numpy as np
path = "beam1/beam1-0.csv"
dtype = {
'time': 'int64',
'type': 'category',
'vehicle': 'int64',
'parkingTaz': 'category', #
'chargingPointType': 'category',
'primaryFuelLevel': 'float64', #
'mode': 'category',
'currentTourMode': 'category',
'vehicleType': 'category',
'arrivalTime': 'float64', #
'departureTime': 'float64', #
'linkTravelTime': 'string',
'primaryFuelType': 'category',
'parkingZoneId': 'category',
'duration': 'float64' #
}
df_sim_new = pd.read_csv(path, dtype=dtype, index_col="time")
df_sim_new.head(3)
df = df_sim_new.sort_index()
df.head(20)
df.tail(10)
i = 18406
slice = df["vehicle"].iloc[1000]
slice
idx = df["type"] == "RefuelSessionEvent"
idx.iloc[1]
plugin = 1.490243e+08
delta = 92700000
out = 2.417243e+08
capacity = 302234052
print (out - plugin == delta)
print (out/capacity)
# print taz
a = df.parkingTaz.cat.categories.to_numpy().astype("int64")
a = np.sort(a)
print(a)
# perform data cleaning. make sure that every Charging Event consits of
# ChargingPlugInEvent, RefuelSessionEvent, ChargingPlugOutEvent.
# did not finished coding, did not test.
PlugIn = np.array([], dtype=int)
RefuelSession = np.array([], dtype=int)
PlugOut = np.array([],dtype = int)
VehicleId = np.array([], dtype = int)
completeSequence = np.array([], dtype=bool)
for i in range(0, len(df)):
row = df.iloc[i,:]
if np.isin( row["vehicle"], VehicleId):
if row["type"] == "ChargingPlugInEvent":
PlugIn.append(i)
if row["type"] == "RefuelSessionEvent":
RefuelSession.append(i)
if row["type"] == "ChargingPlugOutEvent":
PlugOut.append(i)
else:
if row["type"] == "ChargingPlugInEvent" or row["type"] == "RefuelSessionEvent":
VehicleId.append(row["vehicle"])
if row["type"] == "ChargingPlugInEvent":
PlugIn.append(i)
else:
PlugIn.append(0)
if row["type"] == "RefuelSessionEvent":
RefuelSession.append(i)
else:
RefuelSession.append(0)
PlugOut.append(0)
# read out charging demand
#set this, if you want to do this for a special TAZ
setTAZ = False
taz = 103
time = []
startTime = []
demand = []
duration = []
power = []
for i in range(0, len(df)):
if df["type"].iloc[i] == "RefuelSessionEvent":
if not setTAZ or df["parkingTaz"].iloc[i] == str(taz):
time.append(df.index[i])
startTime.append(df.index[i] - df["duration"].iloc[i])
demand.append(df["fuel"].iloc[i])
duration.append(df["duration"].iloc[i])
#calculate power
x = df["fuel"].iloc[i] / df["duration"].iloc[i]
if math.isnan(x):
x = 0
power.append(x)
#discretize / resample
t_start = 18000
t_end = max(time)
step = 60
power_new = []
time_new = []
no_active =[]
t_act = t_start
time = np.array(time)
startTime = np.array(startTime)
power = np.array(power)
while t_act < t_end + step:
idx1 = (time >= t_act)
idx2 = (startTime < t_act)
idx = idx1 & idx2
power_new.append( power[idx].sum())
no_active.append(idx.sum())
time_new.append( t_act )
t_act += step
#print power demand
fig, ax = plt.subplots(2,1)
ax[0].plot(time_new, np.array(power_new)/1e3)
ax[0].set_ylabel("power in kW")
ax[1].step(time_new, no_active)
ax[1].set_ylabel("number active charges")
ax[1].set_xlabel("time in sec.")
###Output
C:\Users\akaju\anaconda3\envs\py_btms_controller\lib\site-packages\ipykernel_launcher.py:20: RuntimeWarning: invalid value encountered in double_scalars
|
notebooks/seir/[STABLE] generate_report.ipynb | ###Markdown
Perform Fit
###Code
predictions_dict = single_fitting_cycle(**copy.deepcopy(config['fitting']))
###Output
_____no_output_____
###Markdown
Loss Dataframe
###Code
from viz.fit import plot_histogram
fig,axs = plt.subplots(4,2,figsize = (10,20))
plot_histogram(predictions_dict,fig = fig,axs=axs)
predictions_dict['df_loss']
###Output
_____no_output_____
###Markdown
Plot Best Forecast
###Code
predictions_dict['forecasts'] = {}
predictions_dict['forecasts']['best'] = predictions_dict['trials']['predictions'][0]
predictions_dict['plots']['forecast_best'] = plot_forecast(predictions_dict,
which_compartments=config['fitting']['loss']['loss_compartments'],
error_bars=False, **config['plotting'])
###Output
_____no_output_____
###Markdown
Process trials + Find best beta
###Code
uncertainty_args = {'predictions_dict': predictions_dict, "variable_param_ranges" :config['fitting']['variable_param_ranges'],
**config['uncertainty']['uncertainty_params']}
uncertainty = config['uncertainty']['method'](**uncertainty_args)
predictions_dict['plots']['beta_loss'], _ = plot_beta_loss(uncertainty.dict_of_trials)
uncertainty_forecasts = uncertainty.get_forecasts()
for key in uncertainty_forecasts.keys():
predictions_dict['forecasts'][key] = uncertainty_forecasts[key]['df_prediction']
predictions_dict['forecasts']['ensemble_mean'] = uncertainty.ensemble_mean_forecast
###Output
_____no_output_____
###Markdown
Plot Top k Trials
###Code
kforecasts = plot_top_k_trials(predictions_dict, k=config['plotting']['num_trials_to_plot'],
which_compartments=config['plotting']['plot_topk_trials_for_columns'])
predictions_dict['plots']['forecasts_topk'] = {}
for column in config['plotting']['plot_topk_trials_for_columns']:
predictions_dict['plots']['forecasts_topk'][column.name] = kforecasts[column]
predictions_dict['beta'] = uncertainty.beta
predictions_dict['beta_loss'] = uncertainty.beta_loss
predictions_dict['deciles'] = uncertainty_forecasts
###Output
_____no_output_____
###Markdown
Plot Deciles Forecasts
###Code
for fits_to_plot in config['plotting']['pair_fits_to_plot']:
predictions_dict['plots'][f'forecast_{fits_to_plot[0]}_{fits_to_plot[1]}'] = plot_forecast(
predictions_dict, which_compartments=config['fitting']['loss']['loss_compartments'],
fits_to_plot=fits_to_plot, **config['plotting'])
ptiles_plots = plot_ptiles(predictions_dict, which_compartments=config['plotting']['plot_ptiles_for_columns'])
predictions_dict['plots']['forecasts_ptiles'] = {}
for column in config['plotting']['plot_ptiles_for_columns']:
predictions_dict['plots']['forecasts_ptiles'][column.name] = ptiles_plots[column]
###Output
_____no_output_____
###Markdown
Create Report
###Code
save_dict_and_create_report(predictions_dict, config, ROOT_DIR=output_folder, config_filename=config_filename)
###Output
_____no_output_____
###Markdown
Create Output CSV
###Code
df_output = create_decile_csv_new(predictions_dict)
df_output.to_csv(f'{output_folder}/deciles.csv')
###Output
_____no_output_____
###Markdown
Create All Trials Output
###Code
df_all = create_all_trials_csv(predictions_dict)
df_all.to_csv(f'{output_folder}/all_trials.csv')
###Output
_____no_output_____
###Markdown
Log on W&B
###Code
wandb.init(project="covid-modelling", config=wandb_config)
log_wandb(predictions_dict)
###Output
_____no_output_____
###Markdown
Log on MLFlow
###Code
log_mlflow(config['logging']['experiment_name'], run_name=config['logging']['run_name'], artifact_dir=output_folder)
###Output
_____no_output_____ |
notebooks/nowcast/Mean sea level of nowcast vs nowcast-green.ipynb | ###Markdown
Explore mean sea level in nowcast and nowcast-green
###Code
import datetime
import numpy as np
import os
import netCDF4 as nc
import matplotlib.pyplot as plt
import numpy as np
import glob
from dateutil import tz
from nowcast.figures import shared, figures
from nowcast import analyze, residuals
from salishsea_tools import viz_tools
%matplotlib inline
def load_model_ssh(grid):
ssh = grid.variables['sossheig'][:]
time = grid.variables['time_counter']
dates=nc.num2date(time[:], time.units)
return ssh, dates
nowcast = '/results/SalishSea/nowcast/'
nowcast_green = '/results/SalishSea/nowcast-green/'
location = 'PointAtkinson'
tides_path = '/data/nsoontie/MEOPAR/tools/SalishSeaNowcast/tidal_predictions/'
labels={nowcast: 'nowcast', nowcast_green: 'nowcast-green'}
grid_B = {}
grid_B[nowcast] = nc.Dataset('/data/nsoontie/MEOPAR/NEMO-forcing/grid/bathy_meter_SalishSea2.nc')
grid_B[nowcast_green] = nc.Dataset('/data/nsoontie/MEOPAR/NEMO-forcing/grid/bathy_downonegrid2.nc')
mesh_mask = {}
mesh_mask[nowcast] = nc.Dataset('/data/nsoontie/MEOPAR/NEMO-forcing/grid/mesh_mask_SalishSea2.nc')
mesh_mask[nowcast_green] = nc.Dataset('/data/nsoontie/MEOPAR/NEMO-forcing/grid/mesh_mask_downbyone2.nc')
runs = [nowcast, nowcast_green]
def load_ssh_time_series(d1,d2, runs):
numdays = (d2-d1).total_seconds()/86400
dates = [d1 + datetime.timedelta(days=d) for d in np.arange(numdays+1) ]
ssh = {}
time = {}
for run in runs:
d = dates[0]
fname = glob.glob(os.path.join(run, d.strftime('%d%b%y').lower(), '*_1d_*_grid_T.nc'))[0]
grid = nc.Dataset(fname)
ssh[run], time[run] = load_model_ssh(grid)
for d in dates[1:]:
fname = glob.glob(os.path.join(run, d.strftime('%d%b%y').lower(), '*_1d_*_grid_T.nc'))[0]
grid = nc.Dataset(fname)
s,t = load_model_ssh(grid)
ssh[run]=np.concatenate((ssh[run],s))
time[run]=np.concatenate((time[run],t))
tmask = mesh_mask[run].variables['tmask'][:,0,:,:]
ssh[run] = np.ma.array(ssh[run], mask = np.ones(ssh[run].shape) - tmask)
return ssh, time
###Output
_____no_output_____
###Markdown
Sept 11 to 26, 2016
###Code
sdt = datetime.datetime(2016,9,11)
edt = datetime.datetime(2016,9,26)
ssh, time = load_ssh_time_series(sdt, edt, runs)
fig,axs = plt.subplots(1,3,figsize=(20,8))
for run, ax in zip(runs, axs):
mesh=ax.pcolormesh(ssh[run].mean(axis=0),vmin=-.5,vmax=.5)
plt.colorbar(mesh,ax=ax)
ax.set_title('mean ssh for {}'.format(labels[run]))
viz_tools.plot_coastline(ax, grid_B[run])
ax=axs[-1]
diff = ssh[nowcast_green].mean(axis=0) - ssh[nowcast].mean(axis=0)
mesh=ax.pcolormesh(diff,vmin=-.1,vmax=.1,cmap='bwr')
plt.colorbar(mesh,ax=ax)
ax.set_title('difference ssh for {} - {}'.format(labels[nowcast_green], labels[nowcast]))
viz_tools.plot_coastline(ax, grid_B[run])
for run in runs:
print('{} mean ssh whole domain: {} m'.format(labels[run], np.mean(ssh[run])))
###Output
nowcast mean ssh whole domain: -0.05068201194654997 m
nowcast-green mean ssh whole domain: -0.14778545136372168 m
###Markdown
Dec 12, 2015 to Oct 11, 2016
###Code
sdt = datetime.datetime(2015,12,12)
edt = datetime.datetime(2016,10,11)
ssh, time = load_ssh_time_series(sdt, edt, runs)
fig,axs = plt.subplots(1,3,figsize=(20,8))
for run, ax in zip(runs, axs):
mesh=ax.pcolormesh(ssh[run].mean(axis=0),vmin=-.5,vmax=.5)
plt.colorbar(mesh,ax=ax)
ax.set_title('mean ssh for {}'.format(labels[run]))
viz_tools.plot_coastline(ax, grid_B[run])
ax=axs[-1]
diff = ssh[nowcast_green].mean(axis=0) - ssh[nowcast].mean(axis=0)
mesh=ax.pcolormesh(diff,vmin=-.1,vmax=.1,cmap='bwr')
plt.colorbar(mesh,ax=ax)
ax.set_title('difference ssh for {} - {}'.format(labels[nowcast_green], labels[nowcast]))
viz_tools.plot_coastline(ax, grid_B[run])
for run in runs:
print('{} mean ssh whole domain: {} m'.format(labels[run], np.mean(ssh[run])))
###Output
nowcast mean ssh whole domain: 0.04306697046398426 m
nowcast-green mean ssh whole domain: -0.023014445287752542 m
###Markdown
Compare mean ssh to Neah Bay forcing anamoly
###Code
def NeahBay_forcing_time_series(d1, d2, results_path):
"""Create a time series of forcing ssh from Neah Bay between dates d1 and d2"""
numdays = (d2-d1).days
dates = [d1 + datetime.timedelta(days=i) for i in range(numdays)]
tides_path = '/data/nsoontie/MEOPAR/tools/SalishSeaNowcast/tidal_predictions/'
surges = np.array([])
times = np.array([])
for d in dates:
file_path = os.path.join(results_path, d.strftime('%d%b%y').lower())
filename = glob.glob(os.path.join(file_path,'ssh*.txt'))
if filename:
time, surge, fflag = residuals.NeahBay_forcing_anom(filename[0], d, tides_path)
surge_t, time_t = analyze.truncate_data(np.array(surge),
np.array(time), d, d+datetime.timedelta(days=1))
surges = np.concatenate((surges, surge_t))
times = np.concatenate((times, time_t))
return surges, times
forcing, forcing_time = NeahBay_forcing_time_series(sdt.replace(tzinfo=tz.tzutc()),
edt.replace(tzinfo=tz.tzutc()),
'/results/SalishSea/nowcast/')
colors=['b','g']
fig,ax = plt.subplots(1,1,figsize=(20,6))
for run, c in zip(runs, colors):
ax.plot(time[run],ssh[run].mean(axis=-1).mean(axis=-1),label=labels[run],color=c)
ax.plot(forcing_time, forcing, '--', label='Neah Bay anomaly',
color='k')
ax.legend()
ax.grid()
ax.set_ylabel('ssh [m]')
ax.set_title('Full domain mean ssh compared to Neah Bay forcing anomaly')
ax.set_ylim([-.5,1])
###Output
_____no_output_____ |
notebooks/sine_ja_esp32.ipynb | ###Markdown
Let's play with "tflite micro"! Original notebook: [@dansitu](https://twitter.com/dansitu) Japanese version: [@proppy](https://twitter.com/proppy) github.com/proppy/TfLiteMicroArduino ← PRs welcome What is "tflite micro"? - It is "tflite" running on a microcontroller - https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/micro
###Code
! python -m pip install tensorflow==2.0.0-beta1
import tensorflow as tf
print(tf.version.VERSION)
! python -m pip install matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.rcParams['figure.dpi'] = 200
###Output
_____no_output_____
###Markdown
Let's build the simplest possible model! 1000 samples of sin()
###Code
import numpy as np
import math
import matplotlib.pyplot as plt
x_values = np.random.uniform(low=0, high=2*math.pi, size=1000)
np.random.shuffle(x_values)
y_values = np.sin(x_values)
plt.plot(x_values, y_values, 'b.')
print(plt.show())
###Output
_____no_output_____
###Markdown
Add some noise
###Code
y_values += 0.1 * np.random.randn(*y_values.shape)
plt.plot(x_values, y_values, 'b.')
plt.show()
###Output
_____no_output_____
###Markdown
Split the dataset properly
###Code
x_train, x_test, x_validate = x_values[:600], x_values[600:800], x_values[800:]
y_train, y_test, y_validate = y_values[:600], y_values[600:800], y_values[800:]
plt.plot(x_train, y_train, 'b.', label="Train")
plt.plot(x_test, y_test, 'r.', label="Test")
plt.plot(x_validate, y_validate, 'y.', label="Validate")
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Train it with Keras for about 2 minutes
###Code
from tensorflow.keras import layers
import tensorflow as tf
model = tf.keras.Sequential()
model.add(layers.Dense(16, activation='relu', input_shape=(1,)))
model.add(layers.Dense(16, activation='relu'))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
history = model.fit(x_train, y_train, epochs=1000, batch_size=16,
validation_data=(x_validate, y_validate), verbose=1)
###Output
_____no_output_____
###Markdown
Try out the model
###Code
predictions = model.predict(x_test)
plt.clf()
plt.plot(x_test, y_test, 'bo', label='Test')
plt.plot(x_test, predictions, 'ro', label='Keras')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Convert it to tflite
###Code
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.OPTIMIZE_FOR_SIZE]
tflite_model = converter.convert()
open("sine_model_data.tflite", "wb").write(tflite_model)
###Output
_____no_output_____
###Markdown
One last check before putting it on the microcontroller
###Code
interpreter = tf.lite.Interpreter('sine_model_data.tflite')
interpreter.allocate_tensors()
input = interpreter.tensor(interpreter.get_input_details()[0]["index"])
output = interpreter.tensor(interpreter.get_output_details()[0]["index"])
lite_predictions = np.empty(x_test.size)
for i in range(x_test.size):
input()[0] = x_test[i]
interpreter.invoke()
lite_predictions[i] = output()[0]
plt.plot(x_test, y_test, 'bo', label='Test')
plt.plot(x_test, predictions, 'ro', label='Keras')
plt.plot(x_test, lite_predictions, 'kx', label='TFLite')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Convert it to "ANSI C" so it can run on the microcontroller
###Code
_ = ! which xxd || apt-get install xxd
! xxd -i sine_model_data.tflite > sine_model_data.h
try:
from google.colab import files
files.download('sine_model_data.h')
except Exception as e:
from IPython.display import FileLink
display(FileLink('sine_model_data.h'))
###Output
_____no_output_____ |
HW1/HadISST/HadISST.ipynb | ###Markdown
A simple play with HadISST by Tianxiang Gao
###Code
import xarray as xr
import cmocean.cm as cmo
import matplotlib.pyplot as plt
import numpy as np
import urllib
# read HadISST monthly data
sst = xr.open_dataset('HadISST_sst.nc')
lon0 = -170
lon1 = -120
lat0 = -5
lat1 = 5
fig = plt.figure(figsize=(16,4))
ax = fig.add_subplot(111)
# get anomaly from climatology (1981-2010)
clim = sst.sst.sel(time=slice('1981-01-01','2011-01-01')).isel(latitude=slice(lat0+90, lat1+90), longitude=slice(10,60)).mean(axis=0)
data = sst.sst.isel(time=-1,latitude=slice(lat0+90, lat1+90), longitude=slice(10,60))
nino = data - clim
nino.name = 'SST anomaly ($^\circ C$)'
m = nino.plot(cmap=cmo.balance, vmin= -2, vmax=2)
ax.set_aspect('equal')
ax.set_xticklabels([170, 160, 150, 140, 130, 120], fontsize=12)
plt.xlabel(r'Longitude ($^\circ $W)')
plt.ylabel(r'Latitude ($^\circ $N)')
plt.title('Sea Surface Temperature (SST) Anomaly of Nino3.4 Region, ' + str(nino.time.values)[:7], fontsize=16)
###Output
_____no_output_____
###Markdown
Nino 3.4 Comparison Fetch data from NOAA
###Code
data = urllib.request.urlopen('https://psl.noaa.gov/gcos_wgsp/Timeseries/Data/nino34.long.anom.data')
noaa = []
for line in data: # files are iterable
noaa.append(line)
noaa = noaa[1:-7]
nino_noaa = []
for line in noaa:
temp = line.split()
for month in temp[1:]:
if float(month) == -99.99:
pass
else:
nino_noaa.append(float(month))
nino_noaa = xr.DataArray(data=nino_noaa)
nino_noaa['dim_0'] = sst.time.values
nino_noaa.rename({'dim_0': 'time'});
###Output
_____no_output_____
###Markdown
Calculate area weighted anomaly
###Code
nino_had = ((sst.sst-clim)*np.cos(np.deg2rad(clim.latitude))).mean(axis=1).mean(axis=1)
mappable0 = nino_had.plot(figsize=(16,8))
mappable1 = nino_noaa.plot()
plt.legend(['Nino3.4 by HadISST1.1','Nino3.4 by NOAA (HadISST1)']);
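# --- added sketch (not in the original notebook) ---
# A 3-month running mean (ONI-style smoothing) of the HadISST index, assuming
# 'time' is the time dimension of the HadISST dataset (as used above):
nino_had_smooth = nino_had.rolling(time=3, center=True).mean()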
###Output
_____no_output_____ |
docs/secretsanta_example.ipynb | ###Markdown
Generating the list List of names for the draw and associated contacts (email, skype, slack...)
###Code
names = ["Bill","Bob","Alice","Charlie A","Timmy","Charlie B","Dave"]
contacts = ["[email protected]","[email protected]","[email protected]","[email protected]","[email protected]","[email protected]","[email protected]"]
###Output
_____no_output_____
###Markdown
Any excluded pairings? (spouses, siblings, people already giving gifts...)
###Code
exclusions = [["Bill","Bob"],["Charlie A","Charlie B"],["Bob","Dave"]]
###Output
_____no_output_____
###Markdown
Tell Santa all this information
###Code
persons = make_dict_of_persons(names,contacts,exclusions)
santa = Santa(persons)
###Output
_____no_output_____
###Markdown
Let's have a look at this data
###Code
print(persons['Alice'].contact)
print(persons['Bob'].exclusions)
###Output
[email protected]
[Bill, Dave]
###Markdown
Santa makes a list
###Code
random.seed(123456789) # Optionally seed the RNG with an integer
gifts = santa.make_a_list()
santa.display() # If you want to keep it secret from yourself, don't run this!
###Output
Assigned gifts:
Bill ==> Charlie B
Bob ==> Charlie A
Alice ==> Dave
Charlie A ==> Timmy
Timmy ==> Bob
Charlie B ==> Alice
Dave ==> Bill
###Markdown
Email each person the recipient for their gift Custom body for email
###Code
message = """\
Subject: Secret santa - your giftee
Hi {name},
Your secret santa has been drawn! You have been assigned to give a gift to: {gift_reciever}
This secret santa has been organised by a computer https://github.com/gdold/secretsanta
Have a great solstice!"""
###Output
_____no_output_____
###Markdown
Send messages
###Code
smtp_server = 'smtp.gmail.com'
elves = Elves(persons,gifts,message,smtp_server)
elves.send_email()
###Output
This will send the assigned secret santas to the gifters via email.
You will be asked to confirm before the messages are sent.
|
test/testdata/pytorch/first_neural_network/first_neural_network.ipynb | ###Markdown
First Neural Network: Image Classification Objectives: - Train a minimal image classifier on [MNIST](https://paperswithcode.com/dataset/mnist) using PyTorch - Uses PyTorch and torchvision
###Code
# The usual imports
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
# load the data
class ReshapeTransform:
def __init__(self, new_size):
self.new_size = new_size
def __call__(self, img):
return torch.reshape(img, self.new_size)
transformations = transforms.Compose([
transforms.ToTensor(),
transforms.ConvertImageDtype(torch.float32),
ReshapeTransform((-1,))
])
trainset = torchvision.datasets.MNIST(root='./data', train=True,
download=True, transform=transformations)
testset = torchvision.datasets.MNIST(root='./data', train=False,
download=True, transform=transformations)
# check shape of data
trainset.data.shape, testset.data.shape
# data loader
BATCH_SIZE = 128
train_dataloader = torch.utils.data.DataLoader(trainset,
batch_size=BATCH_SIZE,
shuffle=True,
num_workers=0)
test_dataloader = torch.utils.data.DataLoader(testset,
batch_size=BATCH_SIZE,
shuffle=False,
num_workers=0)
# model
model = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
# training preparation
trainer = torch.optim.RMSprop(model.parameters())
loss = nn.CrossEntropyLoss()
def get_accuracy(output, target, batch_size):
# Obtain accuracy for training round
corrects = (torch.max(output, 1)[1].view(target.size()).data == target.data).sum()
accuracy = 100.0 * corrects/batch_size
return accuracy.item()
# train
for ITER in range(5):
train_acc = 0.0
train_running_loss = 0.0
model.train()
for i, (X, y) in enumerate(train_dataloader):
output = model(X)
l = loss(output, y)
# update the parameters
l.backward()
trainer.step()
trainer.zero_grad()
# gather metrics
train_acc += get_accuracy(output, y, BATCH_SIZE)
train_running_loss += l.detach().item()
print('Epoch: %d | Train loss: %.4f | Train Accuracy: %.4f' \
%(ITER+1, train_running_loss / (i+1),train_acc/(i+1)))
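# --- added sketch (not in the original notebook) ---
# The test_dataloader created earlier is never used above; a minimal held-out
# evaluation pass could look like this:
model.eval()
correct, total = 0, 0
with torch.no_grad():
    for X, y in test_dataloader:
        preds = model(X).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.size(0)
print('Test Accuracy: %.4f' % (100.0 * correct / total))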
###Output
_____no_output_____ |
01 - general/05 - Histograms.ipynb | ###Markdown
Histograms are a great way to visualize individual color components
###Code
import cv2
import numpy as np
# We need to import matplotlib to create our histogram plots
from matplotlib import pyplot as plt
image = cv2.imread('../images/input.jpg')
print (image.shape[0]*image.shape[1])
histogram = cv2.calcHist([image], [0], None, [256], [0, 256])
# We plot a histogram, ravel() flattens our image array
plt.hist(image.ravel(), 256, [0, 256]);
plt.show()
# Viewing Separate Color Channels
color = ('b', 'g', 'r')
# We now separate the colors and plot each in the Histogram
for i, col in enumerate(color):
histogram2 = cv2.calcHist([image], [i], None, [256], [0, 256])
plt.plot(histogram2, color = col)
plt.xlim([0,256])
plt.show()
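# --- added example (not in the original notebook) ---
# calcHist also accepts a mask, restricting the histogram to a region of interest;
# here a rectangular mask over the centre of the image:
mask = np.zeros(image.shape[:2], np.uint8)
mask[image.shape[0]//4 : 3*image.shape[0]//4,
     image.shape[1]//4 : 3*image.shape[1]//4] = 255
masked_hist = cv2.calcHist([image], [0], mask, [256], [0, 256])
plt.plot(masked_hist)
plt.xlim([0, 256])
plt.show()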
###Output
10077696
|
1.DeepLearning/02.VanillaNN/mnist_one_hidden_layer_tf.ipynb | ###Markdown
MNIST-Neural Network-Single Hidden Layer with Tensorflow
###Code
import numpy as np
import matplotlib.pyplot as plt
from tensorflow.examples.tutorials.mnist import input_data
import math
import tensorflow as tf
print(tf.__version__)
%matplotlib inline
###Output
1.0.1
###Markdown
1. MNIST handwritten digits image set- Note1: http://yann.lecun.com/exdb/mnist/- Note2: https://www.tensorflow.org/versions/r0.11/tutorials/mnist/beginners/index.html
###Code
mnist = input_data.read_data_sets("/Users/yhhan/git/deeplink/0.Common/data/MNIST_data/", one_hot=True)
###Output
Extracting /Users/yhhan/git/deeplink/0.Common/data/MNIST_data/train-images-idx3-ubyte.gz
Extracting /Users/yhhan/git/deeplink/0.Common/data/MNIST_data/train-labels-idx1-ubyte.gz
Extracting /Users/yhhan/git/deeplink/0.Common/data/MNIST_data/t10k-images-idx3-ubyte.gz
Extracting /Users/yhhan/git/deeplink/0.Common/data/MNIST_data/t10k-labels-idx1-ubyte.gz
###Markdown
- Each image is 28 pixels by 28 pixels. We can interpret this as a big array of numbers: a flattened 1-D tensor of size 28x28 = 784. - Each entry in the tensor is a pixel intensity between 0 and 1, for a particular pixel in a particular image. $$[0, 0, 0, ..., 0.6, 0.7, 0.7, 0.5, ... 0.8, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.9, 0.3, ..., 0.4, 0.4, 0.4, ... 0, 0, 0]$$ 1) Training Data
###Code
print(type(mnist.train.images), mnist.train.images.shape)
print(type(mnist.train.labels), mnist.train.labels.shape)
###Output
<class 'numpy.ndarray'> (55000, 784)
<class 'numpy.ndarray'> (55000, 10)
###Markdown
- Number of train images is 55000.- **mnist.train.images** is a tensor with a shape of [55000, 784]. - A one-hot vector is a vector which is 0 in most entries, and 1 in a single entry.- In this case, the $n$th digit will be represented as a vector which is 1 in the nth entry. - For example, 3 would be $[0,0,0,1,0,0,0,0,0,0]$. - **mnist.train.labels** is a tensor with a shape of [55000, 10].
###Code
fig = plt.figure(figsize=(20, 5))
for i in range(5):
img = np.array(mnist.train.images[i])
img.shape = (28, 28)
plt.subplot(150 + (i+1))
plt.imshow(img, cmap='gray')
###Output
_____no_output_____
###Markdown
2) Validation Data
###Code
print(type(mnist.validation.images), mnist.validation.images.shape)
print(type(mnist.validation.labels), mnist.validation.labels.shape)
###Output
<class 'numpy.ndarray'> (5000, 784)
<class 'numpy.ndarray'> (5000, 10)
###Markdown
3) Test Data
###Code
print(type(mnist.test.images), mnist.test.images.shape)
print(type(mnist.test.labels), mnist.test.labels.shape)
###Output
<class 'numpy.ndarray'> (10000, 784)
<class 'numpy.ndarray'> (10000, 10)
###Markdown
2. Simple Neural Network Model (No Hidden Layer) 1) Tensor Operation and Shape - Input Layer to Output Layer - $i=1...784$ - $j=1...10$$$ u_j = \sum_i W_{ji} x_i + b_j $$ - Presentation of Matrix and Vector - Shape of ${\bf W}: (10, 784)$ - Shape of ${\bf x}: (784, 1)$ - Shape of ${\bf b}: (10,)$ - Shape of ${\bf u}: (10,)$$$ {\bf u} = {\bf Wx + b} $$ - **Transposed Matrix** Operation in Tensorflow - Shape of ${\bf W}: (784, 10)$ - Shape of ${\bf x}: (1, 784)$ - Shape of ${\bf b}: (10,)$ - Shape of ${\bf u}: (10,)$$$ {\bf u} = {\bf xW + b} $$ - Small Sized Example
###Code
W_ = np.array([[1, 2, 3], [4, 5, 6]]) #shape of W: (2, 3)
x_ = np.array([[1, 2]]) #shape of x: (1, 2)
xW_ = np.dot(x_, W_) #shape of xW: (1, 3)
print(W_.shape, x_.shape, xW_.shape)
print(xW_)
print()
b_ = np.array([10, 20, 30]) #shape of b: (3,)
u_ = xW_ + b_ #shape of u: (1, 3)
print(b_.shape, u_.shape)
print(u_)
###Output
(2, 3) (1, 2) (1, 3)
[[ 9 12 15]]
(3,) (1, 3)
[[19 32 45]]
###Markdown
2) Mini Batch
###Code
batch_images, batch_labels = mnist.train.next_batch(100)
print(batch_images.shape)
#print batch_images
print
print(batch_labels.shape)
#print batch_labels
###Output
(100, 784)
(100, 10)
###Markdown
- Mini Batch (ex. batch size = 100) - Shape of ${\bf W}: (784, 10)$ - Shape of ${\bf x}: (100, 784)$ - Shape of ${\bf b}: (10,)$ - Shape of ${\bf u}: (100, 10)$$$ {\bf U} = {\bf XW + B} $$ - Small Sized Example
###Code
W_ = np.array([[1, 2, 3], [4, 5, 6]]) #shape of W: (2, 3)
x_ = np.array([[1, 2], [1, 2], [1, 2], [1, 2], [1, 2]]) #shape of x: (5, 2)
xW_ = np.dot(x_, W_) #shape of xW: (5, 3)
print(W_.shape, x_.shape, xW_.shape)
print(xW_)
print()
b_ = np.array([10, 20, 30]) #shape of b: (3,)
u_ = xW_ + b_ #shape of u: (5, 3)
print(b_.shape, u_.shape)
print(u_)
###Output
(2, 3) (5, 2) (5, 3)
[[ 9 12 15]
[ 9 12 15]
[ 9 12 15]
[ 9 12 15]
[ 9 12 15]]
(3,) (5, 3)
[[19 32 45]
[19 32 45]
[19 32 45]
[19 32 45]
[19 32 45]]
###Markdown
3) Model Construction - The placeholder to store the training data:
###Code
x = tf.placeholder(tf.float32, [None, 784])
print("x -", x.get_shape())
###Output
x - (?, 784)
###Markdown
- The placeholder to store the correct answers (ground truth):
###Code
y_target = tf.placeholder(tf.float32, [None, 10])
###Output
_____no_output_____
###Markdown
- A single (output) layer neural network model
###Code
weight_init_std = 0.01
W = tf.Variable(weight_init_std * tf.random_normal([784, 10]))
b = tf.Variable(tf.zeros([10]))
print("W -", W.get_shape())
print("b -", b.get_shape())
u = tf.matmul(x, W) + b
print("u -", u.get_shape())
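# --- added sketch (not in the original notebook) ---
# The notebook title mentions a single hidden layer; a hidden-layer variant of
# the same graph (kept under separate names so the rest of the notebook,
# which uses W, b and u, is unchanged) could look like:
W_h = tf.Variable(weight_init_std * tf.random_normal([784, 256]))
b_h = tf.Variable(tf.zeros([256]))
h = tf.nn.relu(tf.matmul(x, W_h) + b_h)
W_o = tf.Variable(weight_init_std * tf.random_normal([256, 10]))
b_o = tf.Variable(tf.zeros([10]))
u_hidden = tf.matmul(h, W_o) + b_o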
###Output
u - (?, 10)
###Markdown
4) Target Setup - softmax $$ {\bf z} = softmax({\bf u}) $$ - Error functions: Cross entropy - Suppose you have two tensors, where $u$ contains the computed scores for each class (for example, from `u = tf.matmul(X, W) + b`) and `y_target` contains the one-hot encoded true labels. - We call $u$ **logits** (if you interpret the scores in $u$ as unnormalized log probabilities). - The total cross-entropy loss computed in this manner: `z = tf.nn.softmax(u)`; `total_loss = tf.reduce_mean(-tf.reduce_sum(y_target * tf.log(z), [1]))` - is essentially equivalent to the total cross-entropy loss computed with the function softmax_cross_entropy_with_logits(): `total_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=u, labels=y_target))`
###Code
learning_rate = 0.1
error = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=u, labels=y_target))
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(error)
###Output
_____no_output_____
###Markdown
4. Learning (Training) & Evaluation
###Code
prediction_and_ground_truth = tf.equal(tf.argmax(u, 1), tf.argmax(y_target, 1))
accuracy = tf.reduce_mean(tf.cast(prediction_and_ground_truth, tf.float32))
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
batch_size = 100
total_batch = int(math.ceil(mnist.train.num_examples/float(batch_size)))
print("Total batch: %d" % total_batch)
training_epochs = 50
epoch_list = []
train_error_list = []
validation_error_list = []
test_accuracy_list = []
for epoch in range(training_epochs):
epoch_list.append(epoch)
# Train Error Value
train_error_value = sess.run(error, feed_dict={x: mnist.train.images, y_target: mnist.train.labels})
train_error_list.append(train_error_value)
validation_error_value = sess.run(error, feed_dict={x: mnist.validation.images, y_target: mnist.validation.labels})
validation_error_list.append(validation_error_value)
test_accuracy_value = sess.run(accuracy, feed_dict={x: mnist.test.images, y_target: mnist.test.labels})
test_accuracy_list.append(test_accuracy_value)
print("Epoch: {0:2d}, Train Error: {1:0.5f}, Validation Error: {2:0.5f}, Test Accuracy: {3:0.5f}".format(epoch, train_error_value, validation_error_value, test_accuracy_value))
for i in range(total_batch):
batch_images, batch_labels = mnist.train.next_batch(batch_size)
sess.run(optimizer, feed_dict={x: batch_images, y_target: batch_labels})
###Output
Total batch: 550
Epoch: 0, Train Error: 2.30572, Validation Error: 2.30698, Test Accuracy: 0.12840
Epoch: 1, Train Error: 0.39198, Validation Error: 0.37061, Test Accuracy: 0.90180
Epoch: 2, Train Error: 0.34676, Validation Error: 0.32587, Test Accuracy: 0.90890
Epoch: 3, Train Error: 0.32534, Validation Error: 0.30698, Test Accuracy: 0.91660
Epoch: 4, Train Error: 0.31445, Validation Error: 0.29831, Test Accuracy: 0.91660
Epoch: 5, Train Error: 0.30418, Validation Error: 0.28912, Test Accuracy: 0.91790
Epoch: 6, Train Error: 0.29866, Validation Error: 0.28601, Test Accuracy: 0.91850
Epoch: 7, Train Error: 0.29390, Validation Error: 0.28150, Test Accuracy: 0.91900
Epoch: 8, Train Error: 0.29029, Validation Error: 0.27967, Test Accuracy: 0.91980
Epoch: 9, Train Error: 0.28661, Validation Error: 0.27622, Test Accuracy: 0.92200
Epoch: 10, Train Error: 0.28457, Validation Error: 0.27407, Test Accuracy: 0.92190
Epoch: 11, Train Error: 0.28074, Validation Error: 0.27155, Test Accuracy: 0.92230
Epoch: 12, Train Error: 0.27949, Validation Error: 0.27210, Test Accuracy: 0.92440
Epoch: 13, Train Error: 0.27634, Validation Error: 0.26903, Test Accuracy: 0.92230
Epoch: 14, Train Error: 0.27572, Validation Error: 0.26896, Test Accuracy: 0.92290
Epoch: 15, Train Error: 0.27364, Validation Error: 0.26740, Test Accuracy: 0.92340
Epoch: 16, Train Error: 0.27223, Validation Error: 0.26725, Test Accuracy: 0.92270
Epoch: 17, Train Error: 0.27102, Validation Error: 0.26595, Test Accuracy: 0.92290
Epoch: 18, Train Error: 0.27071, Validation Error: 0.26719, Test Accuracy: 0.92330
Epoch: 19, Train Error: 0.26938, Validation Error: 0.26507, Test Accuracy: 0.92190
Epoch: 20, Train Error: 0.26838, Validation Error: 0.26431, Test Accuracy: 0.92210
Epoch: 21, Train Error: 0.26776, Validation Error: 0.26610, Test Accuracy: 0.92200
Epoch: 22, Train Error: 0.26633, Validation Error: 0.26555, Test Accuracy: 0.92400
Epoch: 23, Train Error: 0.26622, Validation Error: 0.26398, Test Accuracy: 0.92470
Epoch: 24, Train Error: 0.26345, Validation Error: 0.26242, Test Accuracy: 0.92270
Epoch: 25, Train Error: 0.26297, Validation Error: 0.26230, Test Accuracy: 0.92380
Epoch: 26, Train Error: 0.26315, Validation Error: 0.26254, Test Accuracy: 0.92380
Epoch: 27, Train Error: 0.26165, Validation Error: 0.26105, Test Accuracy: 0.92450
Epoch: 28, Train Error: 0.26111, Validation Error: 0.26239, Test Accuracy: 0.92470
Epoch: 29, Train Error: 0.25995, Validation Error: 0.26142, Test Accuracy: 0.92350
Epoch: 30, Train Error: 0.25970, Validation Error: 0.26080, Test Accuracy: 0.92450
Epoch: 31, Train Error: 0.25900, Validation Error: 0.26151, Test Accuracy: 0.92420
Epoch: 32, Train Error: 0.26040, Validation Error: 0.26189, Test Accuracy: 0.92400
Epoch: 33, Train Error: 0.25777, Validation Error: 0.25990, Test Accuracy: 0.92350
Epoch: 34, Train Error: 0.25738, Validation Error: 0.26107, Test Accuracy: 0.92430
Epoch: 35, Train Error: 0.25743, Validation Error: 0.26133, Test Accuracy: 0.92400
Epoch: 36, Train Error: 0.25644, Validation Error: 0.26034, Test Accuracy: 0.92460
Epoch: 37, Train Error: 0.25596, Validation Error: 0.26001, Test Accuracy: 0.92440
Epoch: 38, Train Error: 0.25591, Validation Error: 0.26088, Test Accuracy: 0.92520
Epoch: 39, Train Error: 0.25558, Validation Error: 0.26054, Test Accuracy: 0.92540
Epoch: 40, Train Error: 0.25539, Validation Error: 0.26121, Test Accuracy: 0.92550
Epoch: 41, Train Error: 0.25400, Validation Error: 0.25911, Test Accuracy: 0.92520
Epoch: 42, Train Error: 0.25396, Validation Error: 0.25937, Test Accuracy: 0.92360
Epoch: 43, Train Error: 0.25311, Validation Error: 0.25945, Test Accuracy: 0.92460
Epoch: 44, Train Error: 0.25310, Validation Error: 0.25960, Test Accuracy: 0.92520
Epoch: 45, Train Error: 0.25337, Validation Error: 0.26013, Test Accuracy: 0.92450
Epoch: 46, Train Error: 0.25380, Validation Error: 0.26053, Test Accuracy: 0.92570
Epoch: 47, Train Error: 0.25286, Validation Error: 0.25917, Test Accuracy: 0.92560
Epoch: 48, Train Error: 0.25228, Validation Error: 0.26081, Test Accuracy: 0.92410
Epoch: 49, Train Error: 0.25221, Validation Error: 0.26051, Test Accuracy: 0.92410
###Markdown
5. Analysis with Graph
###Code
# Draw Graph about Error Values & Accuracy Values
def draw_error_values_and_accuracy(epoch_list, train_error_list, validation_error_list, test_accuracy_list):
# Draw Error Values and Accuracy
fig = plt.figure(figsize=(20, 5))
plt.subplot(121)
plt.plot(epoch_list[1:], train_error_list[1:], 'r', label='Train')
plt.plot(epoch_list[1:], validation_error_list[1:], 'g', label='Validation')
plt.ylabel('Total Error')
plt.xlabel('Epochs')
plt.grid(True)
plt.legend(loc='upper right')
plt.subplot(122)
plt.plot(epoch_list[1:], test_accuracy_list[1:], 'b', label='Test')
plt.ylabel('Accuracy')
plt.xlabel('Epochs')
plt.yticks(np.arange(0.0, 1.0, 0.05))
plt.grid(True)
plt.legend(loc='lower right')
plt.show()
draw_error_values_and_accuracy(epoch_list, train_error_list, validation_error_list, test_accuracy_list)
def draw_false_prediction(diff_index_list):
fig = plt.figure(figsize=(20, 5))
for i in range(5):
j = diff_index_list[i]
print("False Prediction Index: %s, Prediction: %s, Ground Truth: %s" % (j, prediction[j], ground_truth[j]))
img = np.array(mnist.test.images[j])
img.shape = (28, 28)
plt.subplot(150 + (i+1))
plt.imshow(img, cmap='gray')
with tf.Session() as sess:
init = tf.global_variables_initializer()
sess.run(init)
# False Prediction Profile
prediction = sess.run(tf.argmax(u, 1), feed_dict={x:mnist.test.images})
ground_truth = sess.run(tf.argmax(y_target, 1), feed_dict={y_target:mnist.test.labels})
print(prediction)
print(ground_truth)
diff_index_list = []
for i in range(mnist.test.num_examples):
if (prediction[i] != ground_truth[i]):
diff_index_list.append(i)
print("Number of False Prediction:", len(diff_index_list))
draw_false_prediction(diff_index_list)
###Output
_____no_output_____ |
src2/.ipynb_checkpoints/45 > Determystick Nets-checkpoint.ipynb | ###Markdown
It is necessary to get deterministic computation before moving ahead with trying stuff out. Refactoring some stuff too for future ease.
###Code
# python imports, check requirements.txt for version
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import os
import requests
import io
import torch
# loading the data, check readme.md for download
PATH = '../data/'
files = os.listdir(PATH)
dfs = {f[:-4] : pd.read_csv(PATH + f)
for f in files if f[-3:] == 'csv'
}
###Output
_____no_output_____
###Markdown
The graph formulation: **Nodes**: Members (total 763). **Node Features** (the node label for classification): Reputation Value. **Edges** (both can be treated as directed edges): Notifications (total 47066), Private Messages (total 3101).
###Code
def create_nodes(dfs):
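    # (added note) "reputation" here is taken to be the number of entries in
    # orig_reputation_index attributed to each member; the binary node label is
    # 1 if that count exceeds the median over all members, and 0 otherwise.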
members = sorted(list(dfs['orig_members'].member_id))
reputation = [[m, sum(dfs['orig_reputation_index'].member_id == m)] for m in members]
just_reps = [reputation[z][1] for z in range(reputation.__len__())]
med_reps = np.median(just_reps)
rep_cutoff = med_reps # change to whatever cutoff to keep
labels = [int(rep > rep_cutoff) for rep in just_reps]
return labels
def create_edges(dfs, edge_type):
members = sorted(list(dfs['orig_members'].member_id))
notifs_raw = [[dfs['orig_inline_notifications'].notify_from_id[z],
dfs['orig_inline_notifications'].notify_to_id[z]]
for z in range(dfs['orig_inline_notifications'].shape[0])]
notifs = [n for n in notifs_raw if (n[0] in members and n[1] in members)]
messages_raw = [[dfs['orig_message_topics'].mt_starter_id[z],
dfs['orig_message_topics'].mt_to_member_id[z]]
for z in range(dfs['orig_message_topics'].shape[0])]
messages = [n for n in messages_raw if (n[0] in members and n[1] in members)]
if edge_type == 'notifs':
edges_raw = notifs
elif edge_type == 'messages':
edges_raw = messages
member_index = {members[i] : i+1 for i in range(len(members))}
edges = [
[member_index[node] for node in edge]
for edge in edges_raw
]
return edges
def create_adjacency(dfs, edge_type):
members = sorted(list(dfs['orig_members'].member_id))
num_nodes = len(members)
adjacency = [[0 for _ in range(num_nodes)] for _ in range(num_nodes)]
edges = create_edges(dfs, edge_type)
# add edges symmetrically
for edge in edges:
adjacency[edge[0] - 1][edge[1] - 1] = 1
adjacency[edge[1] - 1][edge[0] - 1] = 1
# add self loops
for i in range(len(adjacency)):
adjacency[i][i] = 1
from sklearn.preprocessing import normalize
# normalize the adjacency matrix
adjacency = np.array(adjacency)
adjacency = normalize(adjacency, norm='l1', axis=1)
return adjacency
def seed(SEED=42):
# numpy
np.random.seed(SEED)
# torch
torch.manual_seed(SEED)
# cuda
if torch.cuda.is_available():
torch.cuda.manual_seed(SEED)
def train_val_test_split(num_nodes, train, val):
idx_train = np.random.choice(range(num_nodes), int(train * num_nodes), replace=False)
idx_vt = list(set(range(num_nodes)) - set(idx_train))
idx_val = np.random.choice(idx_vt, int(val * num_nodes), replace=False)
idx_test = list(set(idx_vt) - set(idx_val))
return np.array(idx_train), np.array(idx_val), np.array(idx_test)
def featurize(labels, ft_type='random', size=50):
if ft_type == 'random':
return np.random.rand(len(labels), size)
if ft_type == 'uniform':
        return np.ones((len(labels), size))
def dataloader(dfs, edge_type, trainf=0.30, valf=0.20):
labels = np.array(create_nodes(dfs))
features = featurize(labels)
adjacency = create_adjacency(dfs, edge_type)
idx_train, idx_val, idx_test = train_val_test_split(adjacency.shape[0], trainf, valf)
data = (adjacency, features, labels, idx_train, idx_val, idx_test)
data = tuple(map(lambda z: torch.from_numpy(z), data))
return data
def accuracy(output, labels):
# print(output, labels)
preds = output.max(1)[1].type_as(labels)
correct = preds.eq(labels).double()
correct = correct.sum()
return correct / len(labels)
import math
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import matplotlib.pyplot as plt
%matplotlib inline
import time
class GraphConv(nn.Module):
def __init__(self, in_features, out_features):
super(GraphConv, self).__init__()
self.in_features = in_features
self.out_features = out_features
self.weight = nn.Parameter(torch.Tensor(in_features, out_features))
self.bias = nn.Parameter(torch.Tensor(out_features))
self.reset_parameters()
def reset_parameters(self):
stdv = 1. / math.sqrt(self.weight.size(1))
self.weight.data.uniform_(-stdv, stdv)
self.bias.data.uniform_(-stdv, stdv)
def forward(self, input, adj):
# print(input.dtype, adj.dtype)
input = input.float()
adj = adj.float()
# print(input.dtype, adj.dtype)
support = torch.mm(input, self.weight)
output = torch.mm(adj, support) # permutation inv sum of all neighbor features
return output + self.bias
def __repr__(self):
return self.__class__.__name__ +' ('+str(self.in_features)+' -> '+str(self.out_features)+')'
class VanillaGCN(nn.Module):
def __init__(self, nfeat, nhid, nclass, dropout):
super(VanillaGCN, self).__init__()
self.gc1 = GraphConv(nfeat, nhid)
self.gc2 = GraphConv(nhid, nclass)
self.dropout = dropout
def forward(self, x, adj):
x = F.relu(self.gc1(x, adj))
x = F.dropout(x, self.dropout, training=self.training)
x = self.gc2(x, adj)
return F.log_softmax(x, dim=1)
# TODO: try different activation functions
# hyperparameters
lr = 0.03
epochs = 200
wd = 5e-4
hidden = 16
dropout = 0.5
fastmode = False
def train(epoch, model, optimizer, features, adj, idx_train, idx_val, labels):
t = time.time()
model.train()
optimizer.zero_grad()
output = model(features, adj)
loss_train = F.nll_loss(output[idx_train], labels[idx_train])
acc_train = accuracy(output[idx_train], labels[idx_train])
loss_train.backward()
optimizer.step()
if not fastmode:
model.eval()
output = model(features, adj)
loss_val = F.nll_loss(output[idx_val], labels[idx_val])
acc_val = accuracy(output[idx_val], labels[idx_val])
if epoch % 10 == 0:
print('Epoch: {:04d}'.format(epoch+1),
'loss_train: {:.4f}'.format(loss_train.item()),
'loss_val: {:.4f}'.format(loss_val.item()),
'acc_val: {:.4f}'.format(acc_val.item()))
return loss_train.item(), loss_val.item()
def test(model, features, adj, idx_test, labels):
model.eval()
output = model(features, adj)
loss_test = F.nll_loss(output[idx_test], labels[idx_test])
acc_test = accuracy(output[idx_test], labels[idx_test])
print("Test set results:",
"loss= {:.4f}".format(loss_test.item()),
"accuracy= {:.4f}".format(acc_test.item()))
def expt_loop(edge_type):
seed()
adj, features, labels, idx_train, idx_val, idx_test = dataloader(dfs, edge_type)
model = VanillaGCN(
nfeat = features.shape[1],
nhid = hidden,
nclass = labels.max().item() + 1,
dropout = dropout
)
optimizer = optim.Adam(model.parameters(), lr=lr, weight_decay=wd)
train_losses, val_losses = [], []
for epoch in range(epochs):
loss_train, loss_val = train(epoch, model, optimizer,
features, adj, idx_train, idx_val, labels)
train_losses.append(loss_train)
val_losses.append(loss_val)
test(model, features, adj, idx_test, labels)
plt.plot(train_losses, label='Train Loss')
plt.plot(val_losses, label='Val Loss')
plt.grid()
plt.xlabel('Epochs')
plt.ylabel('NLLLoss')
plt.legend()
plt.show()
expt_loop('notifs')
###Output
Epoch: 0001 loss_train: 0.7523 loss_val: 0.9276 acc_val: 0.4868
Epoch: 0011 loss_train: 0.6858 loss_val: 0.6900 acc_val: 0.5724
Epoch: 0021 loss_train: 0.6597 loss_val: 0.6817 acc_val: 0.5921
Epoch: 0031 loss_train: 0.6114 loss_val: 0.6776 acc_val: 0.6184
Epoch: 0041 loss_train: 0.5808 loss_val: 0.6880 acc_val: 0.6184
Epoch: 0051 loss_train: 0.5711 loss_val: 0.6908 acc_val: 0.6053
Epoch: 0061 loss_train: 0.5450 loss_val: 0.7209 acc_val: 0.6382
Epoch: 0071 loss_train: 0.5441 loss_val: 0.7264 acc_val: 0.6184
Epoch: 0081 loss_train: 0.5341 loss_val: 0.7363 acc_val: 0.6382
Epoch: 0091 loss_train: 0.4927 loss_val: 0.7714 acc_val: 0.6382
Epoch: 0101 loss_train: 0.5316 loss_val: 0.7708 acc_val: 0.6316
Epoch: 0111 loss_train: 0.4839 loss_val: 0.7886 acc_val: 0.6053
Epoch: 0121 loss_train: 0.5095 loss_val: 0.8441 acc_val: 0.6382
Epoch: 0131 loss_train: 0.4896 loss_val: 0.8417 acc_val: 0.6316
Epoch: 0141 loss_train: 0.4825 loss_val: 0.8755 acc_val: 0.6447
Epoch: 0151 loss_train: 0.4888 loss_val: 0.9177 acc_val: 0.6447
Epoch: 0161 loss_train: 0.5069 loss_val: 0.8578 acc_val: 0.6250
Epoch: 0171 loss_train: 0.4971 loss_val: 0.8830 acc_val: 0.6316
Epoch: 0181 loss_train: 0.5231 loss_val: 0.9300 acc_val: 0.6053
Epoch: 0191 loss_train: 0.4758 loss_val: 0.9546 acc_val: 0.6447
Test set results: loss= 0.9962 accuracy= 0.5692
|
python_basics/Closures and Decorators.ipynb | ###Markdown
Local Functions
###Code
def sort_by_last_letter(strings):
def last_letter(string):
return string[-1]
return sorted(strings, key = last_letter)
words = ["deepak", "gaurav", "jai"]
sort_by_last_letter(words)
sort_by_last_letter.last_letter
# the new function(last_letter()) is created after each time def is executed
g = "global"
def outer(p = "params"):
l = "local"
def local():
print(l, g, p)
local()
outer()
x = "global"
print(x)
def outer(x = "params"):
print(x)
x = "local"
def local():
print(x)
local()
outer()
###Output
global
params
local
###Markdown
Returning local functions
###Code
def outer():
def inner():
print("inner")
return inner
inner_func = outer()
inner_func()
# How?
# Because of CLOSURES property of python. which maintains reference to objects from local scopes
###Output
inner
###Markdown
Closures
###Code
def enclosing():
x = "closed over"
b = True
d = {"name": "jai", "age": 21}
    y = 2 # not in closure because not used in inner function
def inner_function():
print(x, b, d)
return inner_function
lf = enclosing()
lf()
lf.__closure__
def raise_to(exp):
def raise_to_exp(x):
return exp**x
return raise_to_exp
x = raise_to(6)
print(x(2))
print(x(0))
print(x(6))
print(x(1))
x.__closure__
###Output
36
1
46656
6
###Markdown
nonlocal vs global
###Code
message = "global"
def enclosing():
message = "enclosing"
def local():
message = "local"
print("enclosing function: ", message)
local()
print("enclosing function: ", message)
print("global message: ", message)
enclosing()
print("global message: ", message)
message = "global"
def enclosing():
message = "enclosing"
def local():
global message
message = "local"
print("enclosing function: ", message)
local()
print("enclosing function: ", message)
print("global message: ", message)
enclosing()
print("global message: ", message)
message = "global"
def enclosing():
message = "enclosing"
def local():
nonlocal message
message = "local"
print("enclosing function: ", message)
local()
print("enclosing function: ", message)
print("global message: ", message)
enclosing()
print("global message: ", message)
###Output
global message: global
enclosing function: enclosing
enclosing function: local
global message: global
###Markdown
Decorators modify or enhance functions without changing their definitions. They are implemented as callables that take and return another callable. - Replace, enhance, or modify an existing function - Does not change the original function definition - Calling code does not need to change - The decorator mechanism uses the modified function's original name. Python first compiles the base function (creating a function object) and then passes this function object to the decorator. A decorator, by definition, takes the function object as its only argument and returns a new callable object, which is simply a new local function defined inside the decorator. That returned callable is then bound to the original function's name.
###Code
def my_decorator(f):
def wrap(*args, **kwargs): # args of the my_fun
x = f(*args, **kwargs)
return x
return wrap
@my_decorator
def my_fun(*args, **kwargs):
pass
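# (added illustration) the "@my_decorator" syntax above is shorthand for
# my_fun = my_decorator(my_fun): Python compiles my_fun, passes the function
# object to my_decorator, and rebinds the name my_fun to the returned wrapper.
print(my_fun.__name__)   # prints 'wrap' -> my_fun is now bound to the wrapper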
###Output
_____no_output_____
###Markdown
Function as Callables
###Code
def escape_unicode(f):
print("2")
def wrap(*args, **kwargs):
print("3")
x = f(*args, **kwargs)
print("4")
return ascii(x)
print("5")
return wrap
def northen_city():
return "aα"
@escape_unicode
def northen_city1():
print("1")
return "aα"
northen_city()
northen_city1()
###Output
3
1
4
###Markdown
Classes as Callables A class can serve as a decorator if it defines a \__call__ method.
###Code
# Call count, Class as Decorator(only if it as __Call__ method)
class CallCount:
def __init__(self, f):
self.count = 0
self.f = f
def __call__(self, *args, **kwargs):
self.count += 1
        return self.f(*args, **kwargs)
@CallCount
def hello(name):
return "Hello, {}".format(name)
hello("Jai")
hello("Deepak")
hello.count
###Output
_____no_output_____
###Markdown
Instance as Decorator
###Code
class Trace:
def __init__(self):
self.enabled = True
def __call__(self, f):
def wrap(*args, **kwargs):
if self.enabled:
print("Calling {}".format(f))
return f(*args, **kwargs)
return wrap
tracer = Trace()
@tracer
def rotate_list(l):
return l[1:] + [l[0]]
l = [1, 2, 3]
m = rotate_list(l)
m = rotate_list(l)
m = rotate_list(l)
tracer.enabled = False
m = rotate_list(l)
###Output
_____no_output_____
###Markdown
Multiple Decorator
###Code
@decorator1
@decorator2
@decorator3
def my_function():
...
###Output
_____no_output_____
###Markdown
It first applies decorator3, and the returned function is then passed to decorator2 and then to decorator1.
###Code
tracer.enabled = True
@tracer
@escape_unicode
def nor_weign_island_maker(name):
return name + "deltaΔ"
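# (added note) the stacked decorators above are equivalent to writing
# nor_weign_island_maker = tracer(escape_unicode(nor_weign_island_maker))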
nor_weign_island_maker("Jai")
tracer.enabled = False
nor_weign_island_maker("Jai")
###Output
3
4
###Markdown
functools.wraps() When applying decorators, we lose the metadata of the function the decorator was applied to; it is overridden by the wrapper's metadata.
###Code
# FOr EXAMPLE
def noop(f):
def wrap(*args, **kwargs):
return f(*args, **kwargs)
return wrap
@noop
def hello():
"""Prints the hello world message"""
print("Hello world!!")
hello.__name__
hello.__doc__
help(hello)
###Output
Help on function wrap in module __main__:
wrap(*args, **kwargs)
###Markdown
To avoid losing this metadata, we use functools.wraps()
###Code
from functools import wraps
def noop(f):
@wraps(f)
def wrap(*args, **kwargs):
return f(*args, **kwargs)
return wrap
@noop
def hello():
"""Prints the hello world message"""
print("Hello world!!")
hello.__name__
hello.__doc__
help(hello)
###Output
Help on function hello in module __main__:
hello()
Prints the hello world message
|
author-analysis.ipynb | ###Markdown
Exploring Gender Bias in Ratings The following section addresses the question of whether there is a relationship between gender and ratings. Due to the nature of the library used to perform the analysis, gender will be assumed to be a binary gender system (male and female). This does not reflect our beliefs about gender identity. The null hypothesis states that the difference in ratings between male and female authors is due to chance only. The alternative hypothesis states that the difference is due to some other factors and not chance alone. The library [gender-guesser](https://pypi.org/project/gender-guesser/) takes in first names as input and outputs 6 possible outcomes: `(1) unknown (name not found)`, `(2) andy (androgynous)`, `(3) male`, `(4) female`, `(5) mostly_male`, `(6) mostly_female`. `Andy` is assigned when the probability of being male is equal to the probability of being female. `Unknown` is assigned to names that weren't found in the database. Installation: in a Jupyter notebook run `!pip install gender-guesser`; in the terminal run `pip install gender-guesser`.
###Code
%matplotlib inline
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
from scipy.stats import ttest_ind # statistics
import scipy.stats # needed later for scipy.stats.shapiro and scipy.stats.ks_2samp
import gender_guesser.detector as gender # gender analysis
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('seaborn-pastel')
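# (added illustration, not part of the original analysis) a quick look at what
# gender_guesser returns for a few example first names; possible values are
# 'male', 'female', 'mostly_male', 'mostly_female', 'andy' and 'unknown'
_demo_detector = gender.Detector(case_sensitive=False)
for _demo_name in ['John', 'Mary', 'Robin', 'Zxqw']:
    print(_demo_name, '->', _demo_detector.get_gender(_demo_name))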
# create a dataframe from the authors json file. This file contains authors data that goes beying our book genre data
authors = pd.read_json('../data/goodreads_book_authors.json', lines = True)
###Output
_____no_output_____
###Markdown
Since we have some repeated authors in the dataset, we will group by name and take the average of the rating for each author to perform our analysis. The average will be stored in a new column named 'rating'.
###Code
authors['rating'] = authors.groupby('name')['average_rating'].transform('mean')
authors.head()
# get authors' first name
first_names = authors['name'].str.split(' ',expand=True)[0]
# extract gender using gender-guesser
d = gender.Detector(case_sensitive=False)
#create list of genders from names
genders = [d.get_gender(name) for name in first_names]
# create a pandas series for gender. We will convert mostly_female and mostly_male to female and male respectively
genders = pd.Series([x.replace('mostly_female','female').replace('mostly_male','male') for x in genders])
# calculate the percentage of males and females represented in the dataset
gender_proportions = genders.value_counts().to_frame()
gender_proportions.reset_index(inplace = True)
gender_proportions.columns = ['gender', 'count']
total = sum (gender_proportions['count'])
gender_proportions['percentage'] = gender_proportions['count']/total*100
gender_proportions
#create a column for gender in the authors dataframe
authors['Gender'] = genders
# create a rating array for males and females for plotting purposes
male_scores = authors[authors['Gender'] == 'male']['rating'].values
female_scores= authors[authors['Gender'] == 'female']['rating'].values
###Output
_____no_output_____
###Markdown
PlottingNow, we are ready to plot the distribution of ratings for male vs. female authors
###Code
pal = sns.color_palette('pastel')
pal.as_hex()
# set the figure size
plt.rcParams['figure.figsize'] = [10, 6]
# Assign colors for gender
colors = ['#82CDFF', '#FFB482']
names = ['Male', 'Female']
# Make the histogram using a list of lists
plt.hist([male_scores,female_scores], bins = int(180/15),
color = colors, label=names)
# Plot formatting
plt.legend()
plt.xlabel('Average Rating')
plt.ylabel('Frequency')
plt.title('Side-by-side comparison of average ratings for male vs. female authors')
plt.savefig('../visualizations/gender-ratings.svg', format='svg', dpi=1200)
# basic exploration of the dataset
authors.shape
authors['name'].value_counts()
pal = sns.color_palette("Set2")
pal.as_hex()
fig, axes = plt.subplots(2,1)
axes[0].hist(male_scores, color='#f27d52', bins = 10, width = 0.3)
axes[0].set_xlabel('Male Average Ratings')
# Make the y-axis label, ticks and tick labels match the line color.
axes[0].set_ylabel('male scores')
axes[1].hist(female_scores, color='#2a89d1', bins = 10, width = 0.3)
axes[1].set_ylabel('Female Average Ratings')
fig.tight_layout()
###Output
_____no_output_____
###Markdown
Making sense of the results In order to determine whether the difference in ratings between male and female authors is significant, we need to determine the type of statistical tool to use. Since we are looking at differences between 2 groups, a Student's t-test or Kolmogorov-Smirnov test stand out as possible choices. An independent Student's t-test works best with normally distributed data. The KS test, on the other hand, is a non-parametric and distribution-free test: it makes no assumption about the distribution of the data. Let's examine the distribution of ratings:
###Code
plt.hist(authors['rating'], bins = 10, width = 0.3)
W, p_value = scipy.stats.shapiro(authors["rating"])
if p_value < 0.05:
print("Rejecting null hypothesis - data does not come from a normal distribution (p=%s)"%p_value)
else:
print("Cannot reject null hypothesis (p=%s)"%p_value)
###Output
Rejecting null hypothesis - data does not come from a normal distribution (p=0.0)
###Markdown
It appears that our sample isn't normally distributed. Based on this finding, we will go ahead and use the KS test to examine the difference between the two groups (male authors vs. female authors). Running the Kolmogorov-Smirnov test
###Code
male_scores = authors[authors['Gender'] == 'male']['rating'].values
female_scores = authors[authors['Gender'] == 'female']['rating'].values
scipy.stats.ks_2samp(male_scores, female_scores)
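# (added note) ks_2samp returns (statistic, pvalue): the statistic is the maximum
# distance between the two empirical CDFs, and a p-value below 0.05 would indicate
# that the male and female rating distributions differ significantly.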
###Output
_____no_output_____ |
data_insights.ipynb | ###Markdown
###Code
import pandas as pd
df = pd.read_csv('https://raw.githubusercontent.com/JimKing100/med_cab/master/data/cannabis.csv')
df.head()
df.shape
df.nunique()
df['Strain'].unique()
df['Type'].unique()
df2 = df['Effects'].str.split(',').apply(pd.Series)
df2.index = df['Strain']
df2 = df2.stack().reset_index('Strain')
df2 = df2.rename(columns={0: 'Effects'})
df2['Effects']
df2['Effects'].unique()
df3 = df['Flavor'].str.split(',').apply(pd.Series)
df3.index = df['Strain']
df3 = df3.stack().reset_index('Strain')
df3 = df3.rename(columns={0: 'Flavor'})
df3['Flavor']
df3['Flavor'].unique()
###Output
_____no_output_____ |
notebooks/20201002 - Experiments on 1D cellular automata with Pytorch.ipynb | ###Markdown
Experiments on 1D Cellular Automata with PyTorch
###Code
# Base Data Science snippet
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import os
import time
from tqdm import tqdm_notebook
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
sys.path.append("../")
import abio
###Output
_____no_output_____
###Markdown
Abio library development
###Code
from abio.cellular_automata import CellularAutomata1D
x = np.zeros(32)
x[16] = 1
def convert_rule_to_list(rule:int) -> list:
rule = np.binary_repr(rule)
rule = "0"*(8-len(rule))+rule
rule = list(map(int,list(rule)))
return rule
convert_rule_to_list(90)
def iterative_update(state,rule):
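    # (added note) each 3-cell neighbourhood is mapped to an integer 0-7 using the
    # weights [4, 2, 1]; that integer is then used to index the 8-entry rule table.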
state_pad = np.pad(state,pad_width = 1)
next_states = []
for i in range(len(state)):
window_state = state_pad[i:i+3]
window_state = window_state * np.array([4,2,1])
next_cell_index = int(np.sum(window_state))
next_cell_state = rule[next_cell_index]
next_states.append(next_cell_state)
return np.array(next_states)
rule = convert_rule_to_list(90)
init_state = np.random.binomial(1,p = 0.02,size = 1000)
rule
%%time
states = []
x = init_state
for i in range(1000):
x = iterative_update(x,rule)
states.append(x)
states = np.vstack(states)
6800 / 400
180 / 20
%%time
from abio.cellular_automata import CellularAutomata1D
ca = CellularAutomata1D()
x = ca.run_random(rule = 90,size = 1000,p_init = 0.01,n_steps = 1000)
x = ca.run_random(size = 100,p_init = 0.02,return_fig = True)
x = ca.run_random(size = 100,p_init = 0.02,return_fig = True)
###Output
_____no_output_____ |
docs/examples/analytic_short_time.ipynb | ###Markdown
According to the thesis of Audrey Cottet (http://www-drecam.cea.fr/drecam/spec/Pres/Quantro, equation 3.13, page 159) over a timescale much shorter that $1/f_\text{UV}$ the dephasing factor is gaussian:$$f_\phi(t) = \exp \big( -\frac{1}{2} (2 \pi D t)^2 S^\text{tot} \big)$$where $D = \frac{d f_{01}}{ d \lambda}$ is the derivative of the qubit gap in natural frequency units with respect to the noise parameter $\lambda$, and $S^\text{tot} = \int df S(f)$ is the total noise power.
###Code
A = 1.0
f_uv = 1.0
seed = 0
cutoff_time = None
n_traces = 2**11
n_frequencies = 10000
f_interval = 0.1
# Initialise the noise generator.
generator = noisegen.NoiseGenerator(n_frequencies, f_interval)
# Take samples of the power spectral density.
psd = psd_func(generator.fft_frequencies, A, f_uv)
generator.specify_psd(psd=psd)
# Generate noise samples.
generator.generate_trace_truncated(seed=seed, n_traces=n_traces, cutoff_time=cutoff_time)
# D = derivative of qubit gap in natural frequency units with respect to the noise.
D = 5e0
delta_gap = delta_gap_func(generator.samples, D=D)
phase = 2*np.pi*pd.DataFrame(integrate.cumtrapz(delta_gap,x=delta_gap.index,axis=0),index=delta_gap.index[1:])
coherence = np.exp(1j*phase).mean(axis=1).abs()
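# (added note) this is the numerically estimated dephasing factor
# f_phi(t) = |<exp(i*phi(t))>| averaged over the noise realisations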
# Calculate the total noise power.
def sinc(x):
return np.sinc(x/np.pi)
def integrand_func(f, t, psd_args=()):
return sinc(np.pi*f*t)**2 * psd_func(f, *psd_args)
psd_args = (A, f_uv)
integrator_args = (0.0, psd_args)
eps = 1e-9
limit = 50
f_0 = -f_uv
f_1 = f_uv
S_tot = scipy.integrate.quad(integrand_func, f_0, f_1, args=integrator_args, epsabs=eps, epsrel=eps, limit=limit)[0]
def short_time_coherence_func(t, D, S_tot):
return np.exp(-0.5*S_tot*(2*np.pi*D*t)**2)
short_time_coherence = pd.Series(short_time_coherence_func(coherence.index, D, S_tot), index=coherence.index)
matplotlib.rcParams.update({'font.size': 22})
fig, axes = plt.subplots(1, 1, figsize=(10, 6))
coherence.plot(ax=axes, label='Numerical')
short_time_coherence.plot(ax=axes, label='Analytical')
axes.set_xlim([0,0.1])
axes.set_ylim([0, 1])
legend = axes.legend()
axes.set_ylabel('Coherence')
axes.set_xlabel('Time')
###Output
_____no_output_____
###Markdown
The analytical formula relies on the fact that for a Gaussian-distributed variable we have$$\langle \exp(i \phi(t)) \rangle = \exp(- \frac{1}{2} \langle \phi(t)^2 \rangle)$$$$\phi(t) = 2 \pi D \int_0^t dt^\prime \lambda(t^\prime)$$According to equation 3.9 on page 158 this can be calculated as:$$\langle \phi(t)^2 \rangle = (2 \pi t D)^2 \int df S(f) \text{sinc}(\pi f t)^2$$At short times $t \ll 1/f_\text{UV}$ the sinc term is approximately uniform over the bulk of the power spectrum. Therefore the integral reduces to $S^\text{tot}$:$$\langle \phi(t)^2 \rangle = (2 \pi t D)^2 S^\text{tot}$$Below we can see close agreement between the analytically and numerically calculated mean square phase.
###Code
mean_square_phase = (phase**2).mean(axis=1)
mean_square_phase_analytical = (2*np.pi*mean_square_phase.index*D)**2 * S_tot
mean_square_phase_analytical= pd.Series(mean_square_phase_analytical, index=mean_square_phase.index)
fig, axes = plt.subplots(1, 1, figsize=(10, 6))
mean_square_phase.plot(ax=axes, label='Numerical')
mean_square_phase_analytical.plot(ax=axes, label='Analytical')
axes.set_ylabel(r'$\langle \phi(t)^2 \rangle$')
axes.set_xlabel('Time')
legend = axes.legend()
axes.set_xlim([0,0.1])
axes.set_ylim([0,20])
###Output
_____no_output_____ |
Coursera/Tools for Data Science/Week-1.ipynb | ###Markdown
***Data is Central to Data Science*** - Python, R, and SQL are the first and foremost languages for data science
###Code
!pip3 install request
###Output
Collecting request
Downloading request-2019.4.13.tar.gz (1.3 kB)
Collecting get
Downloading get-2019.4.13.tar.gz (1.3 kB)
Collecting post
Downloading post-2019.4.13.tar.gz (1.3 kB)
Requirement already satisfied: setuptools in /usr/lib/python3/dist-packages (from request) (45.2.0)
Collecting query_string
Downloading query-string-2019.4.13.tar.gz (1.6 kB)
Collecting public
Downloading public-2019.4.13.tar.gz (2.3 kB)
Building wheels for collected packages: request, get, post, query-string, public
Building wheel for request (setup.py) ... [?25ldone
[?25h Created wheel for request: filename=request-2019.4.13-py3-none-any.whl size=1675 sha256=aebca2215fc2cd8daed2a7cdec3273ce7749ffa7b723f4544340c077ac44b4bc
Stored in directory: /home/keshav/.cache/pip/wheels/29/c2/a5/f437d6a64263acb19182cfa42b4487a711d419d9fafb2db264
Building wheel for get (setup.py) ... [?25ldone
[?25h Created wheel for get: filename=get-2019.4.13-py3-none-any.whl size=1692 sha256=13f87dbb199cecf9d2b80b6b9215fb1fdaa4906ddad9663e80c05127a9087295
Stored in directory: /home/keshav/.cache/pip/wheels/48/69/49/aecdf882e5c150d577e2293f09a51bfdb276abb41742f67e75
Building wheel for post (setup.py) ... [?25ldone
[?25h Created wheel for post: filename=post-2019.4.13-py3-none-any.whl size=1659 sha256=f7e9667c29fa7d5d25fde7dc8b129026e005ff936afb3b6faaec6069fb2e54ae
Stored in directory: /home/keshav/.cache/pip/wheels/59/43/82/591be16e9747432311255e30008bf2bf0ffe68c7da30ec4374
Building wheel for query-string (setup.py) ... [?25ldone
[?25h Created wheel for query-string: filename=query_string-2019.4.13-py3-none-any.whl size=2048 sha256=9bedea1d2207336b7bad2a63fbaf4937e74760061479cc84e5625b18fca665da
Stored in directory: /home/keshav/.cache/pip/wheels/5b/05/a3/eef676684dc188f229b612c003a0eaf5cb9fbac8c50c0d4a42
Building wheel for public (setup.py) ... [?25ldone
[?25h Created wheel for public: filename=public-2019.4.13-py3-none-any.whl size=2534 sha256=5d9b3e4f4274eea09912e8eea11fa43a85e111b356a5643f521f7b012dc2cb57
Stored in directory: /home/keshav/.cache/pip/wheels/24/f6/53/2d8bc46a815b2c176056f714cfa2b4a9736913f38a14efb320
Successfully built request get post query-string public
Installing collected packages: public, query-string, get, post, request
Successfully installed get-2019.4.13 post-2019.4.13 public-2019.4.13 query-string-2019.4.13 request-2019.4.13
|
8. support vector machine/images_preprocess.ipynb | ###Markdown
Read the image
###Code
# Read the image
img = cv2.imread(r'.\images\mask_cor\00000_Mask.jpg')
img.shape
# Display the image
plt.imshow(cv2.cvtColor(img,cv2.COLOR_BGR2RGB))
###Output
_____no_output_____
###Markdown
Crop the face
###Code
# Load the SSD face detection model
face_detector = cv2.dnn.readNetFromCaffe('./weights/deploy.prototxt.txt','weights/res10_300x300_ssd_iter_140000.caffemodel')
face_detector
# Convert to a blob
# Preprocess the image (mean subtraction, scaling, cropping, channel swap, etc.) and return a 4-dimensional blob
img_blob = cv2.dnn.blobFromImage(img,1,(1024,1024),(104,177,123),swapRB=True)
img_blob.shape
# Set the input
face_detector.setInput(img_blob)
# Run inference
detections = face_detector.forward()
detections.shape
# Number of detected faces
person_count = detections.shape[2]
person_count
# Face detection function
def face_detect(img):
    # Convert to a blob
img_blob = cv2.dnn.blobFromImage(img,1,(300,300),(104,177,123),swapRB=True)
    # Set the input
face_detector.setInput(img_blob)
    # Run inference
detections = face_detector.forward()
    # Get the original image size
img_h,img_w = img.shape[:2]
    # Number of faces
person_count = detections.shape[2]
for face_index in range(person_count):
        # Confidence score
confidence = detections[0,0,face_index,2]
if confidence > 0.5:
locations = detections[0,0,face_index,3:7] * np.array([img_w,img_h,img_w,img_h])
            # Cast the coordinates to int
l,t,r,b = locations.astype('int')
# cv2.rectangle(img,(l,t),(r,b),(0,255,0),5)
return img[t:b,l:r]
return None
# Test image
img_new = cv2.imread(r'.\images\mask_cor\00000_Mask.jpg')
face_crop = face_detect(img_new)
face_crop.shape
# Display the image
plt.imshow(cv2.cvtColor(face_crop,cv2.COLOR_BGR2RGB))
###Output
_____no_output_____
###Markdown
3. Convert to a blob image
###Code
# Function to convert an image to a blob
def imgBlob(img):
    # Convert to a blob
img_blob = cv2.dnn.blobFromImage(img,1,(100,100),(104,177,123),swapRB=True)
    # Squeeze out the batch dimension
img_squeeze = np.squeeze(img_blob).T
    # Rotate
img_rotate = cv2.rotate(img_squeeze,cv2.ROTATE_90_CLOCKWISE)
    # Mirror (horizontal flip)
img_flip = cv2.flip(img_rotate,1)
    # Remove negative values and normalize
img_blob = np.maximum(img_flip,0) / img_flip.max()
return img_blob
from skimage.feature import hog
img_test = cv2.imread(r'.\images\mask_cor\00000_Mask.jpg')
img_g = cv2.cvtColor(img_test,cv2.COLOR_BGR2GRAY)
img_g.shape
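# (added note) HOG settings below: 8 orientation bins per 16x16-pixel cell,
# with 1x1 cells per block (no block-level grouping)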
fd = hog(img_g, orientations=8, pixels_per_cell=(16, 16),
cells_per_block=(1, 1))
fd.shape
###Output
C:\Users\haitao\Anaconda3\lib\site-packages\skimage\feature\_hog.py:150: skimage_deprecation: Default value of `block_norm`==`L1` is deprecated and will be changed to `L2-Hys` in v0.15. To supress this message specify explicitly the normalization method.
skimage_deprecation)
###Markdown
Process all images
###Code
# Get the image class labels
import os,glob
import tqdm
labels = os.listdir('images/')
labels
# Iterate over all classes
# Two lists to store the results
img_list1 = []
label_list1 = []
for label in labels:
    # Get the list of files for this class
file_list =glob.glob('images/%s/*.jpg' % (label))
    for img_file in tqdm.tqdm( file_list ,desc = "Processing %s " % (label)):
        # Read the file
img = cv2.imread(img_file)
        # Crop the face
img_crop = face_detect(img)
        # Handle the case where no face was found
if img_crop is not None:
            # Convert to a blob
img_blob = imgBlob(img_crop)
img_blobg = cv2.cvtColor(img_blob,cv2.COLOR_BGR2GRAY)
fd = hog(img_blobg, orientations=8, pixels_per_cell=(8, 8),
cells_per_block=(1, 1))
img_list1.append(fd)
label_list1.append(label)
###Output
Processing mask_cor : 100%|██████████████████████████████████████████████████████████████████| 950/950 [00:46<00:00, 20.39it/s]
Processing mask_uncor : 100%|████████████████████████████████████████████████████████████████| 928/928 [00:47<00:00, 19.40it/s]
Processing no_mask : 100%|█████████████████████████████████████████████████████████████████| 1000/1000 [00:50<00:00, 19.93it/s]
###Markdown
5. Save as a numpy file
###Code
# Convert to numpy arrays
X_g = np.asarray(img_list1)
Y_g = np.asarray(label_list1)
X_g.shape,Y_g.shape
# Save as a numpy (.npz) file
np.savez('./data/imageData_g.npz',X_g,Y_g)
from sklearn.model_selection import train_test_split
from sklearn import svm
from sklearn.metrics import accuracy_score
from sklearn.metrics import confusion_matrix
x_train,x_test,y_train,y_test = train_test_split(X_g,Y_g,
test_size=0.25,random_state=42)
cls = svm.SVC(kernel='rbf')
cls.fit(x_train,y_train)
predictLabels = cls.predict(x_test)
print ( "svm acc:%s" % accuracy_score(y_test,predictLabels))
cls1 = svm.SVC(kernel='linear')
cls1.fit(x_train,y_train)
predictLabels = cls1.predict(x_test)
print ( "svm 1 acc:%s" % accuracy_score(y_test,predictLabels))
cls2 = svm.SVC(kernel='poly')
cls2.fit(x_train,y_train)
predictLabels = cls2.predict(x_test)
print ( "svm 2 acc:%s" % accuracy_score(y_test,predictLabels))
from joblib import dump, load
dump(cls, './models/svc.joblib')
# Load the saved model
cls = load('./models/svc.joblib')
predictLabels = cls.predict(x_test)
print ( "svm acc:%s" % accuracy_score(y_test,predictLabels))
predictLabels[0]
###Output
_____no_output_____ |
notebooks/Miscellaneous/Wood's Anomaly.ipynb | ###Markdown
Numerical Singularities due to Wood's Anomaly Wood's anomaly is an age-old phenomenon observed in the early 20th century. It is basically the phenomenon that occurs when $k_z = 0$ in a layer. In RCWA, we define a $k_{zi}$ for every Fourier component:$$k_{zi}^2 = k_0^2n^2 - k_{xi}^2 -k_{yi}^2$$Assume for a second we are in the 1D case with $k_y = 0$ and $n=1$. Then we can make $k_{zi} = 0$ simply by asking $k_0 = k_{xi}$, or:$$\frac{2\pi}{\lambda} = k_x \pm \frac{2\pi m}{a}$$Further assuming normal incidence, we just have:$$\frac{2\pi}{\lambda} = \frac{2\pi m}{a}$$So we see we can get issues precisely when $\lambda$ is a rational fraction of the lattice constant. However, for a typical grating, it's not really obvious how we can get to this singularity, because the $k_z$ values we are interested in are extracted from an eigensolver. One case where we can force $k_z=0$ without a doubt is a uniform slab. However, as the script below will show, the 1D TE and TM cases with the Gaylord formulation seem to be safe, specifically because, unlike the scattering matrix formalism, they do not try to invert $K_z$ when they extract the eigenmodes in $H$. This is the link to the original paper: https://www.osapublishing.org/view_article.cfm?gotourl=https%3A%2F%2Fwww%2Eosapublishing%2Eorg%2FDirectPDFAccess%2FE451A175-DDA2-A95A-D5C67C8D01CB3352_33172%2Fjosaa-12-5-1068%2Epdf%3Fda%3D1%26id%3D33172%26seq%3D0%26mobile%3Dno&org=Stanford%20University%20Libraries
###Code
## same as the analytic case but with the fft
import numpy as np
import matplotlib.pyplot as plt
from numpy.linalg import cond
import cmath;
from scipy.fftpack import fft, fftfreq, fftshift, rfft
from scipy.fftpack import dst, idst
from scipy.linalg import expm
from scipy import linalg as LA
# Moharam et. al Formulation for stable and efficient implementation for RCWA
plt.close("all")
np.set_printoptions(precision = 4)
def grating_fourier_harmonics(order, fill_factor, n_ridge, n_groove):
""" function comes from analytic solution of a step function in a finite unit cell"""
#n_ridge = index of refraction of ridge (should be dielectric)
#n_ridge = index of refraction of groove (air)
#n_ridge has fill_factor
#n_groove has (1-fill_factor)
# there is no lattice constant here, so it implicitly assumes that the lattice constant is 1...which is not good
if(order == 0):
return n_ridge**2*fill_factor + n_groove**2*(1-fill_factor);
else:
#should it be 1-fill_factor or fill_factor?, should be fill_factor
return(n_ridge**2 - n_groove**2)*np.sin(np.pi*order*(fill_factor))/(np.pi*order);
def grating_fourier_array(num_ord, fill_factor, n_ridge, n_groove):
""" what is a convolution in 1D """
fourier_comps = list();
for i in range(-num_ord, num_ord+1):
fourier_comps.append(grating_fourier_harmonics(i, fill_factor, n_ridge, n_groove));
return fourier_comps;
def fourier_reconstruction(x, period, num_ord, n_ridge, n_groove, fill_factor = 0.5):
index = np.arange(-num_ord, num_ord+1);
f = 0;
for n in index:
coef = grating_fourier_harmonics(n, fill_factor, n_ridge, n_groove);
f+= coef*np.exp(cmath.sqrt(-1)*np.pi*n*x/period);
#f+=coef*np.cos(np.pi*n*x/period)
return f;
def fourier_reconstruction_general(x, period, num_ord, coefs):
'''
overloading odesn't work in python...fun fact, since it is dynamically typed (vs statically typed)
:param x:
:param period:
:param num_ord:
:param coefs:
:return:
'''
index = np.arange(-num_ord, num_ord+1);
f = 0; center = int(len(coefs)/2); #no offset
for n in index:
coef = coefs[center+n];
f+= coef*np.exp(cmath.sqrt(-1)*2*np.pi*n*x/period);
return f;
def grating_fft(eps_r):
assert len(eps_r.shape) == 2
assert eps_r.shape[1] == 1;
#eps_r: discrete 1D grid of the epsilon profile of the structure
fourier_comp = np.fft.fftshift(np.fft.fft(eps_r, axis = 0)/eps_r.shape[0]);
#ortho norm in fft will do a 1/sqrt(n) scaling
return np.squeeze(fourier_comp);
# plt.plot(x, np.real(fourier_reconstruction(x, period, 1000, 1,np.sqrt(12), fill_factor = 0.1)));
# plt.title('check that the analytic fourier series works')
# #'note that the lattice constant tells you the length of the ridge'
# plt.show()
L0 = 1e-6;
e0 = 8.854e-12;
mu0 = 4*np.pi*1e-7;
fill_factor = 0.3; # 50% of the unit cell is the ridge material
num_ord = 10; #INCREASING NUMBER OF ORDERS SEEMS TO CAUSE THIS THING TO FAIL, too many orders induce evanescence...particularly
# when there is a small fill factor
PQ = 2*num_ord+1;
indices = np.arange(-num_ord, num_ord+1)
n_ridge = 4; #3.48; # ridge
n_groove = 1; # groove (unit-less)
lattice_constant = 1; # SI units
# we need to be careful about what lattice constant means
# in the gaylord paper, lattice constant exactly means (0, L) is one unit cell
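# (added aside) at normal incidence with n = 1, the Wood-anomaly condition k_z = 0
# discussed above reduces to lambda = lattice_constant / m for integer order m:
for m_order in range(1, 4):
    print('Wood anomaly expected at lambda =', lattice_constant / m_order)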
d = 1; # thickness, SI units
Nx = 2*256;
eps_r = n_groove**2*np.ones((2*Nx, 1)); #put in a lot of points in eps_r
border = int(2*Nx*fill_factor);
eps_r[0:border] = n_ridge**2;
fft_fourier_array = grating_fft(eps_r);
x = np.linspace(-lattice_constant,lattice_constant,1000);
period = lattice_constant;
fft_reconstruct = fourier_reconstruction_general(x, period, num_ord, fft_fourier_array);
fourier_array_analytic = grating_fourier_array(Nx, fill_factor, n_ridge, n_groove);
analytic_reconstruct = fourier_reconstruction(x, period, num_ord, n_ridge, n_groove, fill_factor)
plt.figure();
plt.plot(np.real(fft_fourier_array[Nx-20:Nx+20]), linewidth=2)
plt.plot(np.real(fourier_array_analytic[Nx-20:Nx+20]));
plt.legend(('fft', 'analytic'))
plt.show()
plt.figure();
plt.plot(x,fft_reconstruct)
plt.plot(x,analytic_reconstruct);
plt.legend(['fft', 'analytic'])
plt.show()
## simulation parameters
theta = (0)*np.pi/180;
spectra = list();
spectra_T = list();
## construct permittivity harmonic components E
#fill factor = 0 is complete dielectric, 1 is air
##construct convolution matrix
E = np.zeros((2 * num_ord + 1, 2 * num_ord + 1)); E = E.astype('complex')
p0 = Nx; #int(Nx/2);
p_index = np.arange(-num_ord, num_ord + 1);
q_index = np.arange(-num_ord, num_ord + 1);
fourier_array = fft_fourier_array;#fourier_array_analytic;
detected_pffts = np.zeros_like(E);
for prow in range(2 * num_ord + 1):
# first term locates z plane, 2nd locates y coumn, prow locates x
row_index = p_index[prow];
for pcol in range(2 * num_ord + 1):
pfft = p_index[prow] - p_index[pcol];
detected_pffts[prow, pcol] = pfft;
E[prow, pcol] = fourier_array[p0 + pfft]; # fill conv matrix from top left to top right
## IMPORTANT TO NOTE: the indices for everything beyond this points are indexed from -num_ord to num_ord+1
## alternate construction of 1D convolution matrix
I = np.identity(2 * num_ord + 1)
wavelength_scan = [ 1.17118644067796620]
# E is now the convolution of fourier amplitudes
for wvlen in wavelength_scan:
j = cmath.sqrt(-1);
lam0 = wvlen; k0 = 2 * np.pi / lam0; #free space wavelength in SI units
print('wavelength: ' + str(wvlen));
## =====================STRUCTURE======================##
## Region I: reflected region (half space)
n1 = 1;#cmath.sqrt(-1)*1e-12; #apparently small complex perturbations are bad in Region 1, these shouldn't be necessary
## Region 2; transmitted region
n2 = 1;
#from the kx_components given the indices and wvln
kx_array = k0*(n1*np.sin(theta) + indices*(lam0 / lattice_constant)); #0 is one of them, k0*lam0 = 2*pi
k_xi = kx_array;
## IMPLEMENT SCALING: these are the fourier orders of the x-direction decomposition.
KX = np.diag(kx_array/k0);
KX2 = np.diag(np.power((k_xi/k0),2)); #singular since we have a n=0, m= 0 order and incidence is normal
## construct matrix of Gamma^2 ('constant' term in ODE):
A = KX2 - E; #conditioning of this matrix is not bad, A SHOULD BE SYMMETRIC
#sum of a symmetric matrix and a diagonal matrix should be symmetric;
##
# when we calculate eigenvals, how do we know the eigenvals correspond to each particular fourier order?
eigenvals, W = LA.eigh(A); #A should be symmetric or hermitian
#we should be gauranteed that all eigenvals are REAL
eigenvals = eigenvals.astype('complex');
Q = np.diag(np.sqrt(eigenvals)); #Q should only be positive square root of eigenvals
V = W@Q; #H modes
#print((np.linalg.cond(W), np.linalg.cond(V))) #well conditioned
## this is the great typo which has killed us all this time
X = np.diag(np.exp(-k0*np.diag(Q)*d)); #this is poorly conditioned because exponentiation
## pointwise exponentiation vs exponentiating a matrix
## observation: almost everything beyond this point is worse conditioned
k_I = k0**2*(n1**2 - (k_xi/k0)**2); #k_z in reflected region k_I,zi
k_II = k0**2*(n2**2 - (k_xi/k0)**2); #k_z in transmitted region
k_I = k_I.astype('complex'); k_I = np.sqrt(k_I);
k_II = k_II.astype('complex'); k_II = np.sqrt(k_II);
Y_I = np.diag(k_I/k0);
Y_II = np.diag(k_II/k0);
delta_i0 = np.zeros((len(kx_array),1));
delta_i0[num_ord] = 1;
n_delta_i0 = delta_i0*j*n1*np.cos(theta); #this is a VECTOR
## design auxiliary variables: SEE derivation in notebooks: RCWA_note.ipynb
# we want to design the computation to avoid operating with X, particularly with inverses
# since X is the worst conditioned thing
#print((np.linalg.cond(W), np.linalg.cond(V)))
# #Bo's solution
# O = np.block([
# [W, W],
# [V,-V]
# ]); #this is much better conditioned than S..
#print(np.linalg.cond(O))
Wi = np.linalg.inv(W);
Vi = np.linalg.inv(V);
Oi = 0.5*np.block([[Wi, Vi],[Wi, -Vi]])
f = I;
g = j*Y_II; #all matrices
fg = np.concatenate((f,g),axis = 0)
#ab = np.matmul(np.linalg.inv(O),fg);
# ab = np.matmul(Oi, fg);
# a = ab[0:PQ,:];
# b = ab[PQ:,:];
a = 0.5*(Wi+j*Vi@Y_II);
b = 0.5*(Wi-j*Vi@Y_II);
fbiX = np.matmul(np.linalg.inv(b),X)
#altTerm = (a@X@X@b); #not well conditioned and I-altTermis is also poorly conditioned.
#print(np.linalg.cond(I-np.linalg.inv(altTerm)))
#print(np.linalg.cond(X@b)); #not well conditioned.
term = X@a@fbiX; # THIS IS SHITTILY CONDITIONED
# print((np.linalg.cond(X), np.linalg.cond(term)))
# print(np.linalg.cond(I+term)); #but this is EXTREMELY WELL CONDITIONED.
f = np.matmul(W, I+term);
g = np.matmul(V,-I+term);
T = np.linalg.inv(j*np.matmul(Y_I,f)+g);
T = np.matmul(T,(np.matmul(j*Y_I,delta_i0)+n_delta_i0));
R = np.matmul(f,T)-delta_i0;
T = np.matmul(fbiX, T)
## calculate diffraction efficiencies
#I would expect this number to be real...
DE_ri = R*np.conj(R)*np.real(np.expand_dims(k_I,1))/(k0*n1*np.cos(theta));
DE_ti = T*np.conj(T)*np.real(np.expand_dims(k_II,1))/(k0*n1*np.cos(theta));
print(np.sum(DE_ri))
#print(np.sum(DE_ri))
spectra.append(np.sum(DE_ri)); #spectra_T.append(T);
spectra_T.append(np.sum(DE_ti))
print((np.sum(DE_ri), np.sum(DE_ti)))
from numpy.linalg import solve as bslash
## FFT of 1/e;
inv_fft_fourier_array = grating_fft(1/eps_r);
##construct convolution matrix
E_conv_inv = np.zeros((2 * num_ord + 1, 2 * num_ord + 1));
E_conv_inv = E_conv_inv.astype('complex')
p0 = Nx;
p_index = np.arange(-num_ord, num_ord + 1);
for prow in range(2 * num_ord + 1):
# first term locates z plane, 2nd locates y coumn, prow locates x
for pcol in range(2 * num_ord + 1):
pfft = p_index[prow] - p_index[pcol];
E_conv_inv[prow, pcol] = inv_fft_fourier_array[p0 + pfft]; # fill conv matrix from top left to top right
## IMPORTANT TO NOTE: the indices for everything beyond this points are indexed from -num_ord to num_ord+1
## alternate construction of 1D convolution matrix
spectra = [];
spectra_T = [];
I = np.eye(2 * num_ord + 1)
wavelength_scan = [0.5];
for wvlen in wavelength_scan:
j = cmath.sqrt(-1);
lam0 = wvlen; k0 = 2 * np.pi / lam0; #free space wavelength in SI units
print('wavelength: ' + str(wvlen));
## =====================STRUCTURE======================##
## Region I: reflected region (half space)
n1 = 1;#cmath.sqrt(-1)*1e-12; #apparently small complex perturbations are bad in Region 1, these shouldn't be necessary
## Region 2; transmitted region
n2 = 1;
#from the kx_components given the indices and wvln
kx_array = k0*(n1*np.sin(theta) + indices*(lam0 / lattice_constant)); #0 is one of them, k0*lam0 = 2*pi
k_xi = kx_array;
## IMPLEMENT SCALING: these are the fourier orders of the x-direction decomposition.
KX = np.diag((k_xi/k0)); #singular since we have a n=0, m= 0 order and incidence is normal
## construct matrix of Gamma^2 ('constant' term in ODE):
A = np.linalg.inv(E_conv_inv)@(KX@bslash(E, KX) - I); #conditioning of this matrix is not bad, A SHOULD BE SYMMETRIC
#sum of a symmetric matrix and a diagonal matrix should be symmetric;
##
# when we calculate eigenvals, how do we know the eigenvals correspond to each particular fourier order?
#eigenvals, W = LA.eigh(A); #A should be symmetric or hermitian, which won't be the case in the TM mode
eigenvals, W = LA.eig(A);
#we should be gauranteed that all eigenvals are REAL
eigenvals = eigenvals.astype('complex');
Q = np.diag(np.sqrt(eigenvals)); #Q should only be positive square root of eigenvals
V = E_conv_inv@(W@Q); #H modes
## this is the great typo which has killed us all this time
X = np.diag(np.exp(-k0*np.diag(Q)*d)); #this is poorly conditioned because exponentiation
## pointwise exponentiation vs exponentiating a matrix
## observation: almost everything beyond this point is worse conditioned
k_I = k0**2*(n1**2 - (k_xi/k0)**2); #k_z in reflected region k_I,zi
k_II = k0**2*(n2**2 - (k_xi/k0)**2); #k_z in transmitted region
k_I = k_I.astype('complex'); k_I = np.sqrt(k_I);
k_II = k_II.astype('complex'); k_II = np.sqrt(k_II);
Z_I = np.diag(k_I / (n1**2 * k0 ));
Z_II = np.diag(k_II /(n2**2 * k0));
delta_i0 = np.zeros((len(kx_array),1));
delta_i0[num_ord] = 1;
n_delta_i0 = delta_i0*j*np.cos(theta)/n1;
## design auxiliary variables: SEE derivation in notebooks: RCWA_note.ipynb
# we want to design the computation to avoid operating with X, particularly with inverses
# since X is the worst conditioned thing
O = np.block([
[W, W],
[V,-V]
]); #this is much better conditioned than S..
f = I;
g = j * Z_II; #all matrices
fg = np.concatenate((f,g),axis = 0)
ab = np.matmul(np.linalg.inv(O),fg);
a = ab[0:PQ,:];
b = ab[PQ:,:];
term = X @ a @ np.linalg.inv(b) @ X;
f = W @ (I+term);
g = V@(-I+term);
T = np.linalg.inv(np.matmul(j*Z_I, f) + g);
T = np.dot(T, (np.dot(j*Z_I, delta_i0) + n_delta_i0));
R = np.dot(f,T)-delta_i0; #shouldn't change
T = np.dot(np.matmul(np.linalg.inv(b),X),T)
## calculate diffraction efficiencies
#I would expect this number to be real...
DE_ri = R*np.conj(R)*np.real(np.expand_dims(k_I,1))/(k0*n1*np.cos(theta));
DE_ti = T*np.conj(T)*np.real(np.expand_dims(k_II,1)/n2**2)/(k0*np.cos(theta)/n1);
#print(np.sum(DE_ri))
spectra.append(np.sum(DE_ri)); #spectra_T.append(T);
spectra_T.append(np.sum(DE_ti))
print(np.sum(DE_ri))
print(np.sum(DE_ti))
###Output
wavelength: 0.5
(0.7389444455607868+0j)
(0.26105555443921635+0j)
|
spark_sql/sparksql_004.ipynb | ###Markdown
**DOCUMENTATION** https://spark.apache.org/docs/3.0.2/api/sql/
###Code
%fs ls "FileStore/tables/"
import pyspark.sql.functions as F
dados1 = [
(1, "Anderson", 1000.00),
(2, "Kennedy", 2000.00),
(3, "Bruno", 2300.00),
(4, "Maria", 2300.00),
(5, "Eduardo", 2400.00),
(6, "Mendes", 1900.00),
(7, "Kethlyn", 1500.00),
(8, "Thiago", 1800.00),
(9, "Carla", 2100.00)
]
schema1 = ["id", "Nome", "Salario"]
spark.createDataFrame(data=dados1, schema=schema1).createOrReplaceTempView("funcionarios")
dados2 = [
(1, "Delhi", "India"),
(2, "Tamil Nadu", "India"),
(3, "London", "UK"),
(4, "Sydney", "Australia"),
(8, "New York", "USA"),
(9, "California", "USA"),
(10, "New Jersey", "USA"),
(11, "Texas", "USA"),
(12, "Chicago", "USA")
]
schema2 = ["id", "local", "pais"]
spark.createDataFrame(data=dados2, schema=schema2).createOrReplaceTempView("localizacoes")
%sql
SHOW TABLES
###Output
_____no_output_____
###Markdown
SELECT columns FROM table1 INNER JOIN table2 ON table1.column = table2.column
###Code
%sql
SELECT * FROM funcionarios INNER JOIN localizacoes ON funcionarios.id = localizacoes.id
%sql
SELECT * FROM funcionarios LEFT OUTER JOIN localizacoes ON funcionarios.id = localizacoes.id
%sql
SELECT * FROM funcionarios RIGHT OUTER JOIN localizacoes ON funcionarios.id = localizacoes.id
###Output
_____no_output_____ |
Deep_Learning_with_PyTorch_Zero_to_GANs/Lesson3-Training_Deep_Neural_Networks_on_a_GPU/fashion-feedforward-minimal.ipynb | ###Markdown
Classifying images from Fashion MNIST using feedforward neural networks. Dataset source: https://github.com/zalandoresearch/fashion-mnist Detailed tutorial: https://jovian.ml/aakashns/04-feedforward-nn
###Code
# Uncomment and run the commands below if imports fail
# !conda install numpy pandas pytorch torchvision cpuonly -c pytorch -y
# !pip install matplotlib --upgrade --quiet
import torch
import torchvision
import numpy as np
import matplotlib.pyplot as plt
import torch.nn as nn
import torch.nn.functional as F
from torchvision.datasets import FashionMNIST
from torchvision.transforms import ToTensor
from torchvision.utils import make_grid
from torch.utils.data.dataloader import DataLoader
from torch.utils.data import random_split
%matplotlib inline
project_name='fashion-feedforward-minimal'
###Output
_____no_output_____
###Markdown
Preparing the Data
###Code
dataset = FashionMNIST(root='data/', download=True, transform=ToTensor())
test_dataset = FashionMNIST(root='data/', train=False, transform=ToTensor())
val_size = 10000
train_size = len(dataset) - val_size
train_ds, val_ds = random_split(dataset, [train_size, val_size])
len(train_ds), len(val_ds)
batch_size=128
train_loader = DataLoader(train_ds, batch_size, shuffle=True, num_workers=4, pin_memory=True)
val_loader = DataLoader(val_ds, batch_size*2, num_workers=4, pin_memory=True)
test_loader = DataLoader(test_dataset, batch_size*2, num_workers=4, pin_memory=True)
for images, _ in train_loader:
print('images.shape:', images.shape)
plt.figure(figsize=(16,8))
plt.axis('off')
plt.imshow(make_grid(images, nrow=16).permute((1, 2, 0)))
break
###Output
images.shape: torch.Size([128, 1, 28, 28])
###Markdown
Model
###Code
def accuracy(outputs, labels):
_, preds = torch.max(outputs, dim=1)
return torch.tensor(torch.sum(preds == labels).item() / len(preds))
class MnistModel(nn.Module):
"""Feedfoward neural network with 1 hidden layer"""
def __init__(self, in_size, out_size):
super().__init__()
# hidden layer
self.linear1 = nn.Linear(in_size, 16)
# hidden layer 2
self.linear2 = nn.Linear(16, 32)
# output layer
self.linear3 = nn.Linear(32, out_size)
def forward(self, xb):
# Flatten the image tensors
out = xb.view(xb.size(0), -1)
# Get intermediate outputs using hidden layer 1
out = self.linear1(out)
# Apply activation function
out = F.relu(out)
# Get intermediate outputs using hidden layer 2
out = self.linear2(out)
# Apply activation function
out = F.relu(out)
# Get predictions using output layer
out = self.linear3(out)
return out
def training_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
return loss
def validation_step(self, batch):
images, labels = batch
out = self(images) # Generate predictions
loss = F.cross_entropy(out, labels) # Calculate loss
acc = accuracy(out, labels) # Calculate accuracy
return {'val_loss': loss, 'val_acc': acc}
def validation_epoch_end(self, outputs):
batch_losses = [x['val_loss'] for x in outputs]
epoch_loss = torch.stack(batch_losses).mean() # Combine losses
batch_accs = [x['val_acc'] for x in outputs]
epoch_acc = torch.stack(batch_accs).mean() # Combine accuracies
return {'val_loss': epoch_loss.item(), 'val_acc': epoch_acc.item()}
def epoch_end(self, epoch, result):
print("Epoch [{}], val_loss: {:.4f}, val_acc: {:.4f}".format(epoch, result['val_loss'], result['val_acc']))
###Output
_____no_output_____
###Markdown
Using a GPU
###Code
torch.cuda.is_available()
def get_default_device():
"""Pick GPU if available, else CPU"""
if torch.cuda.is_available():
return torch.device('cuda')
else:
return torch.device('cpu')
device = get_default_device()
device
def to_device(data, device):
"""Move tensor(s) to chosen device"""
if isinstance(data, (list,tuple)):
return [to_device(x, device) for x in data]
return data.to(device, non_blocking=True)
class DeviceDataLoader():
"""Wrap a dataloader to move data to a device"""
def __init__(self, dl, device):
self.dl = dl
self.device = device
def __iter__(self):
"""Yield a batch of data after moving it to device"""
for b in self.dl:
yield to_device(b, self.device)
def __len__(self):
"""Number of batches"""
return len(self.dl)
train_loader = DeviceDataLoader(train_loader, device)
val_loader = DeviceDataLoader(val_loader, device)
test_loader = DeviceDataLoader(test_loader, device)
###Output
_____no_output_____
###Markdown
Training the model
###Code
def evaluate(model, val_loader):
outputs = [model.validation_step(batch) for batch in val_loader]
return model.validation_epoch_end(outputs)
def fit(epochs, lr, model, train_loader, val_loader, opt_func=torch.optim.SGD):
history = []
optimizer = opt_func(model.parameters(), lr)
for epoch in range(epochs):
# Training Phase
for batch in train_loader:
loss = model.training_step(batch)
loss.backward()
optimizer.step()
optimizer.zero_grad()
# Validation phase
result = evaluate(model, val_loader)
model.epoch_end(epoch, result)
history.append(result)
return history
input_size = 784
num_classes = 10
model = MnistModel(input_size, out_size=num_classes)
to_device(model, device)
history = [evaluate(model, val_loader)]
history
history += fit(5, 0.5, model, train_loader, val_loader)
history += fit(5, 0.1, model, train_loader, val_loader)
losses = [x['val_loss'] for x in history]
plt.plot(losses, '-x')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.title('Loss vs. No. of epochs');
accuracies = [x['val_acc'] for x in history]
plt.plot(accuracies, '-x')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.title('Accuracy vs. No. of epochs');
###Output
_____no_output_____
###Markdown
Prediction on Samples
###Code
def predict_image(img, model):
xb = to_device(img.unsqueeze(0), device)
yb = model(xb)
_, preds = torch.max(yb, dim=1)
return preds[0].item()
img, label = test_dataset[0]
plt.imshow(img[0], cmap='gray')
print('Label:', dataset.classes[label], ', Predicted:', dataset.classes[predict_image(img, model)])
evaluate(model, test_loader)
###Output
_____no_output_____
###Markdown
Save and upload
###Code
saved_weights_fname='fashion-feedforward.pth'
torch.save(model.state_dict(), saved_weights_fname)
!pip install jovian --upgrade --quiet
!pip install --upgrade pip
import jovian
jovian.commit(project=project_name, environment=None, outputs=[saved_weights_fname])
###Output
_____no_output_____ |
Verification/Verifying the results of Unique SuperSet.ipynb | ###Markdown
Verifying the results of Unique SuperSet in Spark
###Code
import numpy as np
import pandas as pd
import copy
row_items = []
with open("testInp.txt") as f_1:
for line in f_1:
row_items.append(line.strip("\n").split("\t")[1].replace(","," "))
row_items_df = pd.DataFrame(row_items, columns=["Trans"])
with open("testInp_n.txt", "w+") as f_2:
for x in row_items:
f_2.write(x+"\n")
###Output
_____no_output_____
###Markdown
Finding Unique SuperSet Trans
###Code
n = len(row_items)
unique_SS = []
for i in range(n):
temp_1 = set(row_items[i].split(" "))
# print("temp_1: ", temp_1)
flag = True
temp_uss = copy.deepcopy(unique_SS)
for temp_2 in temp_uss:
#temp_2 = set(temp_uss[j].split(" "))
# print("temp_2: ", temp_2)
if temp_1 == temp_2 or temp_1.issubset(temp_2):
# print("In Condition 1")
flag = False
break
elif temp_2.issubset(temp_1):
# print("In Condition 2")
unique_SS.remove(temp_2)
# break
else:
continue
if flag:
unique_SS.append(temp_1)
#temp = ""
#for y in sorted(list(map(int,list(temp_1)))):
# temp+=str(y)+" "
#unique_SS.append(temp.strip(" "))
# print(unique_SS)
len(unique_SS)
unique_SS = [" ".join(sorted(list(s))) for s in unique_SS]
unique_SS
unique_SS_df = pd.DataFrame(unique_SS, columns=["CodeUnique_SS"])
unique_SS_df
###Output
_____no_output_____
###Markdown
Importing the uniqueSuperSets results from Spark
###Code
spark_res = pd.read_csv("testInp_USS.csv", header=None, names=["SparkUnique_SS"])
spark_res
###Output
_____no_output_____
###Markdown
Comparing both the results
###Code
merged_df = pd.merge(unique_SS_df, spark_res, how="inner", left_on="CodeUnique_SS", right_on="SparkUnique_SS")
merged_df
###Output
_____no_output_____ |
PyTorch/.ipynb_checkpoints/Burgers-checkpoint.ipynb | ###Markdown
Import Libraries
###Code
import torch
import torch.autograd as autograd # computation graph
from torch import Tensor # tensor node in the computation graph
import torch.nn as nn # neural networks
import torch.optim as optim # optimizers e.g. gradient descent, ADAM, etc.
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.ticker
import numpy as np
import time
from pyDOE import lhs #Latin Hypercube Sampling
import scipy.io
#Set default dtype to float32
torch.set_default_dtype(torch.float)
#PyTorch random number generator
torch.manual_seed(1234)
# Random number generators in other libraries
np.random.seed(1234)
# Device configuration
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
if device == 'cuda':
print(torch.cuda.get_device_name())
###Output
cpu
###Markdown
*Data Prep*
Training and Testing data is prepared from the solution file
###Code
data = scipy.io.loadmat('Data/burgers_shock_mu_01_pi.mat') # Load data from file
x = data['x'] # 256 points between -1 and 1 [256x1]
t = data['t'] # 100 time points between 0 and 1 [100x1]
usol = data['usol'] # solution of 256x100 grid points
X, T = np.meshgrid(x,t) # makes 2 arrays X and T such that u(X[i],T[j])=usol[i][j] are a tuple
###Output
_____no_output_____
###Markdown
Test Data
We prepare the test data to compare against the solution produced by the PINN.
###Code
''' X_u_test = [X[i],T[i]] [25600,2] for interpolation'''
X_u_test = np.hstack((X.flatten()[:,None], T.flatten()[:,None]))
# Domain bounds
lb = X_u_test[0] # [-1. 0.]
ub = X_u_test[-1] # [1. 0.99]
'''
Fortran Style ('F') flatten,stacked column wise!
u = [c1
c2
.
.
cn]
u = [25600x1]
'''
u_true = usol.flatten('F')[:,None]
###Output
_____no_output_____
###Markdown
Training Data
###Code
def trainingdata(N_u,N_f):
'''Boundary Conditions'''
#Initial Condition -1 =< x =<1 and t = 0
leftedge_x = np.hstack((X[0,:][:,None], T[0,:][:,None])) #L1
leftedge_u = usol[:,0][:,None]
#Boundary Condition x = -1 and 0 =< t =<1
bottomedge_x = np.hstack((X[:,0][:,None], T[:,0][:,None])) #L2
bottomedge_u = usol[-1,:][:,None]
#Boundary Condition x = 1 and 0 =< t =<1
topedge_x = np.hstack((X[:,-1][:,None], T[:,0][:,None])) #L3
topedge_u = usol[0,:][:,None]
all_X_u_train = np.vstack([leftedge_x, bottomedge_x, topedge_x]) # X_u_train [456,2] (456 = 256(L1)+100(L2)+100(L3))
all_u_train = np.vstack([leftedge_u, bottomedge_u, topedge_u]) #corresponding u [456x1]
#choose random N_u points for training
idx = np.random.choice(all_X_u_train.shape[0], N_u, replace=False)
X_u_train = all_X_u_train[idx, :] #choose indices from set 'idx' (x,t)
u_train = all_u_train[idx,:] #choose corresponding u
'''Collocation Points'''
# Latin Hypercube sampling for collocation points
# N_f sets of tuples(x,t)
X_f_train = lb + (ub-lb)*lhs(2,N_f)
X_f_train = np.vstack((X_f_train, X_u_train)) # append training points to collocation points
return X_f_train, X_u_train, u_train
###Output
_____no_output_____
###Markdown
Physics Informed Neural Network
###Code
class Sequentialmodel(nn.Module):
def __init__(self,layers):
super().__init__() #call __init__ from parent class
'activation function'
self.activation = nn.Tanh()
'loss function'
self.loss_function = nn.MSELoss(reduction ='mean')
'Initialise neural network as a list using nn.Modulelist'
self.linears = nn.ModuleList([nn.Linear(layers[i], layers[i+1]) for i in range(len(layers)-1)])
self.iter = 0
'''
Alternatively:
*all layers are callable
Simple linear Layers
self.fc1 = nn.Linear(2,50)
self.fc2 = nn.Linear(50,50)
self.fc3 = nn.Linear(50,50)
self.fc4 = nn.Linear(50,1)
'''
'Xavier Normal Initialization'
# std = gain * sqrt(2/(input_dim+output_dim))
for i in range(len(layers)-1):
# weights from a normal distribution with
# Recommended gain value for tanh = 5/3?
nn.init.xavier_normal_(self.linears[i].weight.data, gain=1.0)
# set biases to zero
nn.init.zeros_(self.linears[i].bias.data)
    'forward pass'
def forward(self,x):
if torch.is_tensor(x) != True:
x = torch.from_numpy(x)
u_b = torch.from_numpy(ub).float().to(device)
l_b = torch.from_numpy(lb).float().to(device)
#preprocessing input
x = (x - l_b)/(u_b - l_b) #feature scaling
#convert to float
a = x.float()
'''
Alternatively:
a = self.activation(self.fc1(a))
a = self.activation(self.fc2(a))
a = self.activation(self.fc3(a))
a = self.fc4(a)
'''
for i in range(len(layers)-2):
z = self.linears[i](a)
a = self.activation(z)
a = self.linears[-1](a)
return a
def loss_BC(self,x,y):
loss_u = self.loss_function(self.forward(x), y)
return loss_u
def loss_PDE(self, x_to_train_f):
nu = 0.01/np.pi
x_1_f = x_to_train_f[:,[0]]
x_2_f = x_to_train_f[:,[1]]
g = x_to_train_f.clone()
g.requires_grad = True
u = self.forward(g)
u_x_t = autograd.grad(u,g,torch.ones([x_to_train_f.shape[0], 1]).to(device), retain_graph=True, create_graph=True)[0]
u_xx_tt = autograd.grad(u_x_t,g,torch.ones(x_to_train_f.shape).to(device), create_graph=True)[0]
u_x = u_x_t[:,[0]]
u_t = u_x_t[:,[1]]
u_xx = u_xx_tt[:,[0]]
f = u_t + (self.forward(g))*(u_x) - (nu)*u_xx
loss_f = self.loss_function(f,f_hat)
return loss_f
def loss(self,x,y,x_to_train_f):
loss_u = self.loss_BC(x,y)
loss_f = self.loss_PDE(x_to_train_f)
loss_val = loss_u + loss_f
return loss_val
'callable for optimizer'
def closure(self):
optimizer.zero_grad()
loss = self.loss(X_u_train, u_train, X_f_train)
loss.backward()
self.iter += 1
if self.iter % 100 == 0:
error_vec, _ = PINN.test()
print(loss,error_vec)
return loss
'test neural network'
def test(self):
u_pred = self.forward(X_u_test_tensor)
error_vec = torch.linalg.norm((u-u_pred),2)/torch.linalg.norm(u,2) # Relative L2 Norm of the error (Vector)
u_pred = u_pred.cpu().detach().numpy()
u_pred = np.reshape(u_pred,(256,100),order='F')
return error_vec, u_pred
###Output
_____no_output_____
###Markdown
*Solution Plot*
###Code
def solutionplot(u_pred,X_u_train,u_train):
fig, ax = plt.subplots()
ax.axis('off')
gs0 = gridspec.GridSpec(1, 2)
gs0.update(top=1-0.06, bottom=1-1/3, left=0.15, right=0.85, wspace=0)
ax = plt.subplot(gs0[:, :])
h = ax.imshow(u_pred, interpolation='nearest', cmap='rainbow',
extent=[T.min(), T.max(), X.min(), X.max()],
origin='lower', aspect='auto')
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
fig.colorbar(h, cax=cax)
ax.plot(X_u_train[:,1], X_u_train[:,0], 'kx', label = 'Data (%d points)' % (u_train.shape[0]), markersize = 4, clip_on = False)
line = np.linspace(x.min(), x.max(), 2)[:,None]
ax.plot(t[25]*np.ones((2,1)), line, 'w-', linewidth = 1)
ax.plot(t[50]*np.ones((2,1)), line, 'w-', linewidth = 1)
ax.plot(t[75]*np.ones((2,1)), line, 'w-', linewidth = 1)
ax.set_xlabel('$t$')
ax.set_ylabel('$x$')
ax.legend(frameon=False, loc = 'best')
ax.set_title('$u(x,t)$', fontsize = 10)
'''
Slices of the solution at points t = 0.25, t = 0.50 and t = 0.75
'''
####### Row 1: u(t,x) slices ##################
gs1 = gridspec.GridSpec(1, 3)
gs1.update(top=1-1/3, bottom=0, left=0.1, right=0.9, wspace=0.5)
ax = plt.subplot(gs1[0, 0])
ax.plot(x,usol.T[25,:], 'b-', linewidth = 2, label = 'Exact')
ax.plot(x,u_pred.T[25,:], 'r--', linewidth = 2, label = 'Prediction')
ax.set_xlabel('$x$')
ax.set_ylabel('$u(x,t)$')
ax.set_title('$t = 0.25s$', fontsize = 10)
ax.axis('square')
ax.set_xlim([-1.1,1.1])
ax.set_ylim([-1.1,1.1])
ax = plt.subplot(gs1[0, 1])
ax.plot(x,usol.T[50,:], 'b-', linewidth = 2, label = 'Exact')
ax.plot(x,u_pred.T[50,:], 'r--', linewidth = 2, label = 'Prediction')
ax.set_xlabel('$x$')
ax.set_ylabel('$u(x,t)$')
ax.axis('square')
ax.set_xlim([-1.1,1.1])
ax.set_ylim([-1.1,1.1])
ax.set_title('$t = 0.50s$', fontsize = 10)
ax.legend(loc='upper center', bbox_to_anchor=(0.5, -0.35), ncol=5, frameon=False)
ax = plt.subplot(gs1[0, 2])
ax.plot(x,usol.T[75,:], 'b-', linewidth = 2, label = 'Exact')
ax.plot(x,u_pred.T[75,:], 'r--', linewidth = 2, label = 'Prediction')
ax.set_xlabel('$x$')
ax.set_ylabel('$u(x,t)$')
ax.axis('square')
ax.set_xlim([-1.1,1.1])
ax.set_ylim([-1.1,1.1])
ax.set_title('$t = 0.75s$', fontsize = 10)
plt.savefig('Burgers.png',dpi = 500)
###Output
_____no_output_____
###Markdown
Main
###Code
'Generate Training data'
N_u = 100 #Total number of data points for 'u'
N_f = 10000 #Total number of collocation points
X_f_train_np_array, X_u_train_np_array, u_train_np_array = trainingdata(N_u,N_f)
'Convert to tensor and send to GPU'
X_f_train = torch.from_numpy(X_f_train_np_array).float().to(device)
X_u_train = torch.from_numpy(X_u_train_np_array).float().to(device)
u_train = torch.from_numpy(u_train_np_array).float().to(device)
X_u_test_tensor = torch.from_numpy(X_u_test).float().to(device)
u = torch.from_numpy(u_true).float().to(device)
f_hat = torch.zeros(X_f_train.shape[0],1).to(device)
layers = np.array([2,20,20,20,20,20,20,20,20,1]) #8 hidden layers
PINN = Sequentialmodel(layers)
PINN.to(device)
'Neural Network Summary'
print(PINN)
params = list(PINN.parameters())
'''Optimization'''
'L-BFGS Optimizer'
optimizer = torch.optim.LBFGS(PINN.parameters(), lr=0.1,
max_iter = 25000,
max_eval = None,
tolerance_grad = 1e-05,
tolerance_change = 1e-09,
history_size = 100,
line_search_fn = 'strong_wolfe')
start_time = time.time()
optimizer.step(PINN.closure)
'Adam Optimizer'
# optimizer = optim.Adam(PINN.parameters(), lr=0.001,betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False)
# max_iter = 20000
# start_time = time.time()
# for i in range(max_iter):
# loss = PINN.loss(X_u_train, u_train, X_f_train)
# optimizer.zero_grad() # zeroes the gradient buffers of all parameters
# loss.backward() #backprop
# optimizer.step()
# if i % (max_iter/10) == 0:
# error_vec, _ = PINN.test()
# print(loss,error_vec)
elapsed = time.time() - start_time
print('Training time: %.2f' % (elapsed))
''' Model Accuracy '''
error_vec, u_pred = PINN.test()
print('Test Error: %.5f' % (error_vec))
''' Solution Plot '''
solutionplot(u_pred,X_u_train.cpu().detach().numpy(),u_train.cpu().detach().numpy())
###Output
_____no_output_____ |
verification/Freyberg/verify_unc_results.ipynb | ###Markdown
verify pyEMU results with the Freyberg problem
###Code
%matplotlib inline
import os
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import pyemu
###Output
setting random seed
###Markdown
Instantiate a ```pyemu``` object and drop the prior information. Then reorder the Jacobian and save it as binary. This is needed because the PEST utilities require strict ordering between the control file and the Jacobian.
###Code
la = pyemu.Schur("freyberg.jcb",verbose=False,forecasts=[])
la.drop_prior_information()
jco_ord = la.jco.get(la.pst.obs_names,la.pst.par_names)
ord_base = "freyberg_ord"
jco_ord.to_binary(ord_base + ".jco")
la.pst.write(ord_base+".pst")
###Output
_____no_output_____
###Markdown
extract and save the forecast sensitivity vectors
###Code
pv_names = []
predictions = ["sw_gw_0","sw_gw_1","or28c05_0","or28c05_1"]
for pred in predictions:
pv = jco_ord.extract(pred).T
pv_name = pred + ".vec"
pv.to_ascii(pv_name)
pv_names.append(pv_name)
###Output
_____no_output_____
###Markdown
save the prior parameter covariance matrix as an uncertainty file
###Code
prior_uncfile = "pest.unc"
la.parcov.to_uncfile(prior_uncfile,covmat_file=None)
###Output
_____no_output_____
###Markdown
PREDUNC7: write a response file to feed ```stdin``` to ```predunc7```
###Code
post_mat = "post.cov"
post_unc = "post.unc"
args = [ord_base + ".pst","1.0",prior_uncfile,
post_mat,post_unc,"1"]
pd7_in = "predunc7.in"
f = open(pd7_in,'w')
f.write('\n'.join(args)+'\n')
f.close()
out = "pd7.out"
pd7 = os.path.join("i64predunc7.exe")
os.system(pd7 + " <" + pd7_in + " >"+out)
for line in open(out).readlines():
print(line)
###Output
PREDUNC7 Version 13.6. Watermark Numerical Computing.
Enter name of PEST control file: Enter observation reference variance:
Enter name of prior parameter uncertainty file:
Enter name for posterior parameter covariance matrix file: Enter name for posterior parameter uncertainty file:
Use which version of linear predictive uncertainty equation:-
if version optimized for small number of parameters - enter 1
if version optimized for small number of observations - enter 2
Enter your choice:
- reading PEST control file freyberg_ord.pst....
- file freyberg_ord.pst read ok.
- reading Jacobian matrix file freyberg_ord.jco....
- file freyberg_ord.jco read ok.
- reading parameter uncertainty file pest.unc....
- parameter uncertainty file pest.unc read ok.
- forming XtC-1(e)X matrix....
- inverting prior C(p) matrix....
- inverting [XtC-1(e)X + C-1(p)] matrix....
- writing file post.cov...
- file post.cov written ok.
- writing file post.unc...
- file post.unc written ok.
###Markdown
load the posterior matrix written by ```predunc7```
###Code
post_pd7 = pyemu.Cov.from_ascii(post_mat)
la_ord = pyemu.Schur(jco=ord_base+".jco",predictions=predictions)
post_pyemu = la_ord.posterior_parameter
#post_pyemu = post_pyemu.get(post_pd7.row_names)
###Output
_____no_output_____
###Markdown
The cumulative difference between the two posterior matrices:
###Code
post_pd7.x
post_pyemu.x
delta = (post_pd7 - post_pyemu).x
(post_pd7 - post_pyemu).to_ascii("delta.cov")
print(delta.sum())
print(delta.max(),delta.min())
delta = np.ma.masked_where(np.abs(delta) < 0.0001,delta)
plt.imshow(delta)
df = (post_pd7 - post_pyemu).to_dataframe().apply(np.abs)
df /= la_ord.pst.parameter_data.parval1
df *= 100.0
print(df.max())
delta
###Output
_____no_output_____
###Markdown
A few more metrics ...
###Code
print((delta.sum()/post_pyemu.x.sum()) * 100.0)
print(np.abs(delta).sum())
###Output
-1.29714474919e-05
1494.97671749
###Markdown
PREDUNC1: write a response file to feed ```stdin```. Then run ```predunc1``` for each forecast
###Code
args = [ord_base + ".pst", "1.0", prior_uncfile, None, "1"]
pd1_in = "predunc1.in"
pd1 = os.path.join("i64predunc1.exe")
pd1_results = {}
for pv_name in pv_names:
args[3] = pv_name
f = open(pd1_in, 'w')
f.write('\n'.join(args) + '\n')
f.close()
out = "predunc1" + pv_name + ".out"
os.system(pd1 + " <" + pd1_in + ">" + out)
f = open(out,'r')
for line in f:
if "pre-cal " in line.lower():
pre_cal = float(line.strip().split()[-2])
elif "post-cal " in line.lower():
post_cal = float(line.strip().split()[-2])
f.close()
pd1_results[pv_name.split('.')[0].lower()] = [pre_cal, post_cal]
###Output
_____no_output_____
###Markdown
organize the ```pyemu``` results into a structure for comparison
###Code
# save the results for verification testing
pd.DataFrame(pd1_results).to_csv("predunc1_results.dat")
pyemu_results = {}
for pname in la_ord.prior_prediction.keys():
pyemu_results[pname] = [np.sqrt(la_ord.prior_prediction[pname]),
np.sqrt(la_ord.posterior_prediction[pname])]
###Output
_____no_output_____
###Markdown
compare the results:
###Code
f = open("predunc1_textable.dat",'w')
for pname in pd1_results.keys():
print(pname)
f.write(pname+"&{0:6.5f}&{1:6.5}&{2:6.5f}&{3:6.5f}\\\n"\
.format(pd1_results[pname][0],pyemu_results[pname][0],
pd1_results[pname][1],pyemu_results[pname][1]))
print("prior",pname,pd1_results[pname][0],pyemu_results[pname][0])
print("post",pname,pd1_results[pname][1],pyemu_results[pname][1])
f.close()
###Output
or28c05_1
prior or28c05_1 104.0399 104.039856267
post or28c05_1 103.9241 103.924143736
sw_gw_1
prior sw_gw_1 480268.8 480268.818236
post sw_gw_1 479750.0 479750.032693
or28c05_0
prior or28c05_0 91.25489 91.2548977153
post or28c05_0 30.53031 30.5303084385
sw_gw_0
prior sw_gw_0 96600.42 96600.4203489
post sw_gw_0 87527.32 87527.3190459
###Markdown
PREDVAR1B: write the necessary files to run ```predvar1b```
###Code
f = open("pred_list.dat",'w')
out_files = []
for pv in pv_names:
out_name = pv+".predvar1b.out"
out_files.append(out_name)
f.write(pv+" "+out_name+"\n")
f.close()
args = [ord_base+".pst","1.0","pest.unc","pred_list.dat"]
for i in range(36):
args.append(str(i))
args.append('')
args.append("n")
args.append("n")
args.append("y")
args.append("n")
args.append("n")
f = open("predvar1b.in", 'w')
f.write('\n'.join(args) + '\n')
f.close()
os.system("predvar1b.exe <predvar1b.in")
pv1b_results = {}
for out_file in out_files:
pred_name = out_file.split('.')[0]
f = open(out_file,'r')
for _ in range(3):
f.readline()
arr = np.loadtxt(f)
pv1b_results[pred_name] = arr
###Output
_____no_output_____
###Markdown
now for pyemu
###Code
omitted_parameters = [pname for pname in la.pst.parameter_data.parnme if pname.startswith("wf")]
la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco",
predictions=predictions,
omitted_parameters=omitted_parameters,
verbose=False)
df = la_ord_errvar.get_errvar_dataframe(np.arange(36))
df
###Output
_____no_output_____
###Markdown
generate some plots to verify
###Code
fig = plt.figure(figsize=(6,8))
max_idx = 13
idx = np.arange(max_idx)
for ipred,pred in enumerate(predictions):
arr = pv1b_results[pred][:max_idx,:]
first = df[("first", pred)][:max_idx]
second = df[("second", pred)][:max_idx]
third = df[("third", pred)][:max_idx]
ax = plt.subplot(len(predictions),1,ipred+1)
#ax.plot(arr[:,1],color='b',dashes=(6,6),lw=4,alpha=0.5)
#ax.plot(first,color='b')
#ax.plot(arr[:,2],color='g',dashes=(6,4),lw=4,alpha=0.5)
#ax.plot(second,color='g')
#ax.plot(arr[:,3],color='r',dashes=(6,4),lw=4,alpha=0.5)
#ax.plot(third,color='r')
ax.scatter(idx,arr[:,1],marker='x',s=40,color='g',
label="PREDVAR1B - first term")
ax.scatter(idx,arr[:,2],marker='x',s=40,color='b',
label="PREDVAR1B - second term")
ax.scatter(idx,arr[:,3],marker='x',s=40,color='r',
label="PREVAR1B - third term")
ax.scatter(idx,first,marker='o',facecolor='none',
s=50,color='g',label='pyEMU - first term')
ax.scatter(idx,second,marker='o',facecolor='none',
s=50,color='b',label="pyEMU - second term")
ax.scatter(idx,third,marker='o',facecolor='none',
s=50,color='r',label="pyEMU - third term")
ax.set_ylabel("forecast variance")
ax.set_title("forecast: " + pred)
if ipred == len(predictions) -1:
ax.legend(loc="lower center",bbox_to_anchor=(0.5,-0.75),
scatterpoints=1,ncol=2)
ax.set_xlabel("singular values")
else:
ax.set_xticklabels([])
#break
plt.savefig("predvar1b_ver.eps")
###Output
_____no_output_____
###Markdown
Identifiability
###Code
cmd_args = [os.path.join("i64identpar.exe"),ord_base,"5",
"null","null","ident.out","/s"]
cmd_line = ' '.join(cmd_args)+'\n'
print(cmd_line)
print(os.getcwd())
os.system(cmd_line)
identpar_df = pd.read_csv("ident.out",delim_whitespace=True)
la_ord_errvar = pyemu.ErrVar(jco=ord_base+".jco",
predictions=predictions,
verbose=False)
df = la_ord_errvar.get_identifiability_dataframe(5)
df
###Output
_____no_output_____
###Markdown
cheap plot to verify
###Code
diff = identpar_df["identifiability"].values - df["ident"].values
diff.max()
fig = plt.figure()
ax = plt.subplot(111)
axt = plt.twinx()
ax.plot(identpar_df["identifiability"])
ax.plot(df.ident.values)
ax.set_xlim(-10,600)
diff = identpar_df["identifiability"].values - df["ident"].values
#print(diff)
axt.plot(diff)
axt.set_ylim(-1,1)
ax.set_xlabel("parameter")
ax.set_ylabel("identifiability")
axt.set_ylabel("difference")
###Output
_____no_output_____ |
Assignment/06 Exercises.ipynb | ###Markdown
Exercise 06.1 (selecting and passing data structures)
The task in Exercise 04 for computing the area of a triangle involved a function with six arguments ($x$ and $y$ components of each vertex). With six arguments, the likelihood of a user passing arguments in the wrong order is high. Use an appropriate data structure, e.g. a `list`, `tuple`, `dict`, etc, to develop a new version of the function with a simpler interface (the interface is the arguments that are passed to the function). Add appropriate checks inside your function to validate the input data.
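One possible shape for such an interface is sketched below (illustrative only, not the model solution): the three vertices are passed as a single sequence of `(x, y)` pairs, and the function validates its input before computing the area.
```python
# Sketch only: a simpler interface for the triangle-area function.
def triangle_area(vertices):
    "Return the area of a triangle given [(x0, y0), (x1, y1), (x2, y2)]."
    if len(vertices) != 3:
        raise ValueError("expected exactly three vertices")
    for v in vertices:
        if len(v) != 2:
            raise ValueError("each vertex must be an (x, y) pair")
    (x0, y0), (x1, y1), (x2, y2) = vertices
    return abs(x0*(y1 - y2) + x1*(y2 - y0) + x2*(y0 - y1)) / 2

print(triangle_area([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]))  # 0.5
```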
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 06.2 (selecting data structures)
For a simple (non-intersecting) polygon with $n$ vertices, $(x_0, y_0)$, $(x_1, y_1)$, ..., $(x_{n-1}, y_{n-1})$, the area $A$ is given by
$$A = \left| \frac{1}{2} \sum_{i=0}^{n-1} \left(x_{i} y_{i+1} - x_{i+1} y_{i} \right) \right|$$
and where $(x_n, y_n) = (x_0, y_0)$. The vertices should be ordered as you move around the polygon.
Write a function that computes the area of a simple polygon with an arbitrary number of vertices. Test your function for some simple shapes. Pay careful attention to the range of any loops.
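A minimal sketch of the formula above (illustrative only, not the model solution), assuming the vertices are supplied as an ordered list of `(x, y)` tuples:
```python
# Sketch only: shoelace formula for the area of a simple polygon.
def polygon_area(vertices):
    "Return the area of a simple polygon given ordered (x, y) vertices."
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]  # wraps back to vertex 0 when i = n - 1
        total += x0*y1 - x1*y0
    return abs(total) / 2

print(polygon_area([(0, 0), (2, 0), (2, 1), (0, 1)]))  # rectangle: 2.0
```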
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 06.3 (indexing)
Write a function that uses list indexing to add two vectors of arbitrary length, and returns the new vector. Include a check that the vector sizes match, and print a warning message if there is a size mismatch. The more error information you provide, the easier it would be for someone using your function to debug their code.
Add some tests of your code.
Hint: You can create a list of zeros of length `n` by
z = [0]*n
Optional (advanced)
Try writing a one-line version of this operation using list comprehension and the built-in function [`zip`](https://docs.python.org/3/library/functions.htmlzip).
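The idea is sketched below (illustrative only, not the model solution): check the lengths first, then fill a zero vector of the right size; a one-line `zip` variant is shown as a comment.
```python
# Sketch only: element-wise sum of two vectors with a size check.
def sum_vector_sketch(x, y):
    if len(x) != len(y):
        print("Size mismatch: len(x) =", len(x), ", len(y) =", len(y))
        return None
    z = [0]*len(x)
    for i in range(len(x)):
        z[i] = x[i] + y[i]
    return z

# One-line variant: [a + b for a, b in zip(x, y)]
```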
###Code
def sum_vector(x, y):
"Return sum of two vectors"
# YOUR CODE HERE
raise NotImplementedError()
a = [0, 4.3, -5, 7]
b = [-2, 7, -15, 1]
c = sum_vector(a, b)
assert c == [-2, 11.3, -20, 8]
###Output
_____no_output_____
###Markdown
Extension: list comprehension
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Exercise 06.4 (dictionaries)
Create a dictionary that maps college names (the key) to college abbreviations for at least 5 colleges (you can find abbreviations at https://en.wikipedia.org/wiki/Colleges_of_the_University_of_CambridgeColleges).
From the dictionary, produce and print
1. A dictionary from college abbreviation to name; and
1. A list of college abbreviations sorted into alphabetical order.
*Optional extension:* Create a dictionary that maps college names (the key) to dictionaries of:
- College abbreviation
- Year of foundation
- Total number students
for at least 5 colleges. Take the data from https://en.wikipedia.org/wiki/Colleges_of_the_University_of_CambridgeColleges. Using this dictionary,
1. Find the college with the greatest number of students and print the abbreviation; and
2. Find the oldest college, and print the number of students and the abbreviation for this college.
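A minimal sketch of the two basic operations (illustrative only, not the model solution; the college names and abbreviations below are examples, check the linked page for the official ones):
```python
# Sketch only: invert a name -> abbreviation mapping and sort the abbreviations.
colleges = {"Churchill": "CHU", "Clare": "CL", "Downing": "DOW",
            "Emmanuel": "EM", "Girton": "G"}
abbreviation_to_name = {abbrev: name for name, abbrev in colleges.items()}
sorted_abbreviations = sorted(colleges.values())
print(abbreviation_to_name)
print(sorted_abbreviations)
```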
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____
###Markdown
Optional extension
###Code
# YOUR CODE HERE
raise NotImplementedError()
###Output
_____no_output_____ |
Numpy-Pandas-Matplotlib/.ipynb_checkpoints/Matplot50-checkpoint.ipynb | ###Markdown
1. Scatter plot
###Code
# Import dataset
# midwest = pd.read_csv("https://raw.githubusercontent.com/selva86/datasets/master/midwest_filter.csv")
midwest = pd.read_csv("midwest_filter.csv")
# Prepare Data
# Create as many colors as there are unique midwest['category']
categories = np.unique(midwest['category'])
colors = [plt.cm.tab10(i/float(len(categories)-1)) for i in range(len(categories))]
# Draw Plot for Each Category
plt.figure(figsize=(16, 10), dpi= 80, facecolor='w', edgecolor='k')
for i, category in enumerate(categories):
plt.scatter('area', 'poptotal',
data=midwest.loc[midwest.category==category, :],
s=20, c=colors[i], label=str(category))
# Decorations
plt.gca().set(xlim=(0.0, 0.1), ylim=(0, 90000),
xlabel='Area', ylabel='Population')
plt.xticks(fontsize=12); plt.yticks(fontsize=12)
plt.title("Scatterplot of Midwest Area vs Population", fontsize=22)
plt.legend(fontsize=12)
plt.show()
###Output
_____no_output_____ |
Homework/.ipynb_checkpoints/HW5-checkpoint.ipynb | ###Markdown
**Homework 5** **Problem 1**
###Code
#Setup
%matplotlib inline
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import scipy
from scipy import stats
plt.rcParams["figure.figsize"] = (20,15)
#Creating Simulated Background
background_events = stats.norm.rvs(loc = 0, size = 1000000, scale = 3)
signal = stats.uniform.rvs(loc = 0, scale = 20, size = 1000000)
data = background_events + signal
signaledges = np.linspace(0,20,41)
dataedges = np.linspace(-7,27,69)
Psd, temp, temp2= np.histogram2d(data,signal, bins=[dataedges,signaledges], density=True)
datacenters = (dataedges[:-1] + dataedges[1:]) / 2
signalcenters = (signaledges[:-1] + signaledges[1:]) / 2
plt.pcolormesh(datacenters,signalcenters,Psd.T)
plt.ylabel('True signal, $P(s|d)$', fontsize = 24)
plt.xlabel('Observed data, $P(d|s)$', fontsize = 24)
true = signaledges[21]
observed = dataedges[25]
plt.axhline(true,color = 'red')
plt.axvline(observed,color = 'orange')
plt.show()
###Output
_____no_output_____
###Markdown
**b)** The true injected signal is shown by the red line; what is P(d|s)?
###Code
plt.step(temp[:-1],Psd[:,21],linewidth = 3, color = 'red')
plt.title('P(d|s) for a true signal of ' + str(true),fontsize = 24)
plt.xlabel('Observed Value',fontsize = 24)
plt.ylabel('P(d|s)',fontsize = 24)
plt.show()
###Output
_____no_output_____
###Markdown
**c)** The observed data value is shown by the orange line; what is P(s|d)?
###Code
plt.step(temp2[:-1],Psd[25,:],linewidth = 3, color = 'orange')
plt.title('P(s|d) for an observed signal of ' + str(np.round(observed,3)) ,fontsize = 24)
plt.xlabel('True Value',fontsize = 24)
plt.ylabel('P(s|d)',fontsize = 24)
plt.show()
###Output
_____no_output_____
###Markdown
**Problem 2** Now repeat the above, but with a background with non-zero mean.
###Code
background_events2 = stats.norm.rvs(loc = 9, size = 1000000, scale = 3)
signal2 = stats.uniform.rvs(loc = 0, scale = 20, size = 1000000)
data2 = background_events2 + signal2
signaledges2 = np.linspace(0,20,41)
dataedges2 = np.linspace(-4,41,91)
Psd2, temp_, temp2_= np.histogram2d(data2,signal2, bins=[dataedges2,signaledges2], density=True)
datacenters2 = (dataedges2[:-1] + dataedges2[1:]) / 2
signalcenters2 = (signaledges2[:-1] + signaledges2[1:]) / 2
plt.pcolormesh(datacenters2,signalcenters2,Psd2.T)
plt.ylabel('True signal, $P(s|d)$', fontsize = 24)
plt.xlabel('Observed data, $P(d|s)$', fontsize = 24)
true = signaledges2[21]
observed = dataedges2[19]
plt.axhline(true,color = 'red')
plt.axvline(observed,color = 'orange')
plt.show()
plt.step(temp[:-1],Psd[:,21],linewidth = 3, color = 'red', label = 'From 1b')
plt.step(temp_[:-1],Psd2[:,21],linewidth = 3, color = 'maroon', label = 'Non-zero background mean')
plt.title('P(d|s) for a true signal of '+str(true),fontsize = 24)
plt.xlabel('Observed Value',fontsize = 24)
plt.ylabel('P(d|s)',fontsize = 24)
plt.legend(fontsize = 18,loc = 0)
plt.show()
plt.step(temp2[:-1],Psd[25,:],linewidth = 3, color = 'orange', label = 'From 1c')
plt.step(temp2_[:-1],Psd2[19,:],linewidth = 3, color = 'purple',label = 'Non-zero background mean')
plt.title('P(s|d) for an observed signal of ' + str(np.round(observed,3)) ,fontsize = 24)
plt.xlabel('True Value',fontsize = 24)
plt.ylabel('P(s|d)',fontsize = 24)
plt.legend(fontsize = 18,loc = 0)
plt.show()
###Output
_____no_output_____ |
benchmarks/earthquake/mar2022/FFFFWNPFEARTHQ_newTFTv29-gregor-parameters.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors and Geoffrey Fox 2020
###Code
! which python
! python --version
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from cloudmesh.common.util import banner
import os
import sys
import socket
import pathlib
import humanize
from cloudmesh.common.console import Console
from cloudmesh.common.Shell import Shell
from cloudmesh.common.dotdict import dotdict
from cloudmesh.common.Printer import Printer
from cloudmesh.common.StopWatch import StopWatch
from cloudmesh.common.util import readfile
from cloudmesh.gpu.gpu import Gpu
from pprint import pprint
import sys
from IPython.display import display
import tensorflow_datasets as tfds
import tensorflow as tf
from tqdm.keras import TqdmCallback
from tqdm import tnrange
from tqdm import notebook
from tqdm import tqdm
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM
from tensorflow.keras.layers import GRU
from tensorflow.keras.layers import Dense
import os
import subprocess
import gc
from csv import reader
from csv import writer
import sys
import random
import math
import numpy as np
import matplotlib.pyplot as plt
from textwrap import wrap
import pandas as pd
import io as io
import string
import time
import datetime
from datetime import timedelta
from datetime import date
# TODO: better would be to distinguish them and not overwrite the one datetime with the other.
from datetime import datetime
import yaml
from typing import Dict
from typing import Tuple
from typing import Optional
from typing import List
from typing import Union
from typing import Callable
import matplotlib
import matplotlib.patches as patches
# import matplotlib.pyplot as plt
from matplotlib.figure import Figure
from matplotlib.path import Path
import matplotlib.dates as mdates
import scipy as sc
import scipy.linalg as solver
from matplotlib import colors
import enum
import pandas as pd
import abc
import json
import psutil
gregor = True
%load_ext autotime
StopWatch.start("total")
in_colab = 'google.colab' in sys.modules
in_rivanna = "hpc.virginia.edu" in socket.getfqdn()
in_nonstandard = not in_colab and not in_rivanna
config = dotdict()
content = readfile("config.yaml")
program_config = dotdict(yaml.safe_load(content))
config.update(program_config)
print(Printer.attribute(config))
if in_colab:
# Avoids scroll-in-the-scroll in the entire Notebook
#test
from IPython.display import Javascript
def resize_colab_cell():
display(Javascript('google.colab.output.setIframeHeight(0, true, {maxHeight: 20000})'))
get_ipython().events.register('pre_run_cell', resize_colab_cell)
from google.colab import drive
drive.mount('/content/gdrive')
if in_rivanna or gregor:
tf.config.set_soft_device_placement(config.set_soft_device_placement)
tf.debugging.set_log_device_placement(config.debugging_set_log_device_placement)
# tf.test.is_gpu_available
import re
print(str(sys.executable))
if in_rivanna:
test_localscratch = re.search('localscratch', str(sys.executable))
test_scratch = re.search('scratch', str(sys.executable))
test_project = re.search('project', str(sys.executable))
a100 = re.search('a100', str(sys.executable))
v100 = re.search('v100', str(sys.executable))
p100 = re.search('p100', str(sys.executable))
k80 = re.search('k80', str(sys.executable))
rtx2080 = re.search('rtx2080', str(sys.executable))
if test_localscratch != None:
in_localscratch = test_localscratch.group(0) == 'localscratch'
else:
in_localscratch = False
if test_scratch != None:
in_scratch = test_scratch.group(0) == 'scratch'
else:
in_scratch = False
if test_project != None:
in_project = test_project.group(0) == 'project'
else:
in_project = False
if a100 != None:
if a100.group(0) == 'a100':
gpu_rivanna = 'a100'
if v100 != None:
if v100.group(0) == 'v100':
gpu_rivanna = 'v100'
if p100 != None:
if p100.group(0) == 'p100':
gpu_rivanna = 'p100'
if k80 != None:
if k80.group(0) == 'k80':
gpu_rivanna = 'k80'
if rtx2080 != None:
if rtx2080.group(0) == 'rtx2080':
gpu_rivanna = 'rtx2080'
else:
in_localscratch = False
in_scratch = False
in_project = False
def TIME_start(name):
banner(f"Start timer {name}")
StopWatch.start(name)
def TIME_stop(name):
StopWatch.stop(name)
t = StopWatch.get(name)
h = humanize.naturaldelta(timedelta(seconds=t))
banner(f"Stop timer {name}: {t}s or {h}")
# who am i
config.user = Shell.run('whoami').strip()
try:
config.user_id = Shell.run('id -u').strip()
config.group_id = Shell.run('id -g').strip()
except subprocess.CalledProcessError:
print("The command <id> is not on your path.")
print(Printer.attribute(config))
StopWatch.benchmark()
r = Gpu().system()
try:
## Once Cloudmesh GPU PR2 is merged, the above block can be removed and the below be used.
config.gpuname = [x['product_name'] for x in r]
config.gpuvendor = [x.get('vendor', "Unknown Vendor") for x in r]
except:
pass
print (Printer.attribute(config))
def SAVEFIG(plt, filename):
    # Strip a trailing ".png" if present so we do not save "name.png.png"
    if ".png" in filename:
        _filename = filename.replace(".png", "")
    else:
        _filename = filename
    plt.savefig(f'{_filename}.png', format='png')
    plt.savefig(f'{_filename}.pdf', format='pdf')
# Set Runname
RunName = 'EARTHQ-newTFTv29'
RunComment = ' TFT Dev on EarthQuakes -- 2 weeks 3 months 6 months 1 year d_model 160 dropout 0.1 Location Based Validation BS 64 Simple 2 layers CUDA LSTM MSE corrected MHA Plot 28'
#
# StopWatch.benchmark(sysinfo=True)
# make sure we have free memory in it
# replace the following and if needed read from StopWatch
memory = StopWatch.get_sysinfo()["mem.available"]
print(f'Your runtime has {memory} of available RAM\n')
config.runname = RunName
###Output
_____no_output_____
###Markdown
Initial System Code
###Code
startbold = "\033[1m"
resetfonts = "\033[0m"
startred = '\033[31m'
startpurple = '\033[35m'
startyellowbkg = '\033[43m'
banner("System information")
r = StopWatch.systeminfo()
print (r)
banner("nvidia-smi")
gpu_info = Shell.run("nvidia-smi")
if gpu_info.find('failed') >= 0:
  print('Select the Runtime > "Change runtime type" menu to enable a GPU accelerator, and then re-execute this cell.')
else:
print(gpu_info)
###Output
_____no_output_____
###Markdown
Transformer model for science data, based on the original Transformer for language understanding.

Science Data Parameters and Sizes

Here is the structure of the science time series module. We will need several arrays that will need to be flattened at times. Note Python defaults to row major, i.e. the final index describes contiguous positions in memory.

At the highest level, data is labeled by Time and Location:
* Ttot is the total number of time steps
* Tseq is the length of each sequence in time steps
* Num_Seq is the number of sequences in time: Num_Seq = Ttot - Tseq + 1
* Nloc is the number of locations. The locations could be a 1D list or have array structure such as an image.
* Nsample is the number of data samples Nloc * Num_Seq

Input data at each location:
* Nprop time independent properties describing the location
* Nforcing is the number of time dependent forcing features INPUT at each time value

Output (predicted) data at each location and for each time sequence:
* Npred predicted time dependent values defined at every time step
* Recorded at Nforecast time values measured wrt the final time value of the sequence
* ForecastDay is an array of length Nforecast defining how many days into the future the prediction is. Typically ForecastDay[0] = 1 and Nforecast is often 1
* There is also a class of science problems that are more similar to classic Seq2Seq. Here Nforecast = Tseq and ForecastDay = [-Tseq+1 ... 0]
* We also support Nwishful predictions of events in the future, such as the probability of an earthquake of magnitude 6 in the next 3 years. These are defined by arrays EventType and Timestart, TimeInterval of length Nwishful. EventType is user defined and Timestart, TimeInterval is measured in time steps
* Any missing output values should be set to NaN, and the loss function must ensure that these points are ignored in derivative and value calculation

We have an input module that supports either LSTM or Transformer (multi-head attention) models.

Example Problem AICov:
* Ttot = 114
* Tseq = 9
* Num_Seq = 106
* Nloc = 110
* Nprop = 35
* Nforcing = 5 including infections, fatalities, plus 3 temporal position variables (last 3 not in current version)
* Npred = 2 (predicted infections and fatalities). Could be 5 if predicting the temporal position of the output
* Nforecast = 15
* ForecastDay = [1, 2, .......14, 15]
* Nwishful = 0

Science Data Arrays

Typical arrays: [ time, Location ] as a Pandas array with label [name of time-dependent variable] as an array, or just the name of the Pandas array; time labels rows indexed by datetime or the difference datetime - start. Non-windowed data is stored with property name as row index and location as column index [ static property, Location ].

Covid Input is [Sequence number 0..Num_Seq-1 ] [ Location 0..Nloc-1 ] [position in time sequence Tseq] [ Input Features]

Covid Output is [Sequence number Num_Seq ] [ Location Nloc ] [ Output Features]

Output Features are [ ipred = 0 ..Npred-1 ] [ iforecast = 0 ..Nforecast-1 ]

Input Features are static fields followed, if present, by the chosen dynamic system fields (cos-theta sin-theta linear), followed by cases, deaths.
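As a concrete illustration of these shapes, using the AICov example numbers above (the array names here are hypothetical and are not used later in this notebook):
```python
# Illustration only: array shapes implied by the AICov example above.
import numpy as np

Ttot, Tseq, Nloc = 114, 9, 110
Num_Seq = Ttot - Tseq + 1                      # 106 sequences
Nprop, Nforcing, Npred, Nforecast = 35, 5, 2, 15

# Input:  [sequence, location, position in sequence, input features]
X_example = np.zeros((Num_Seq, Nloc, Tseq, Nprop + Nforcing), dtype=np.float32)
# Output: [sequence, location, Npred * Nforecast output features], NaN = missing
Y_example = np.full((Num_Seq, Nloc, Npred * Nforecast), np.nan, dtype=np.float32)
print(X_example.shape, Y_example.shape)        # (106, 110, 9, 40) (106, 110, 30)
```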
In fact this is user chosen, as the user sets the static and dynamic system properties to use.

We will have various numpy and pandas arrays where we designate labels:
* [Ttot] is all time values
* [Num_Seq] is all sequences of window size ***Tseq***
* We can select time values or sequences [Ttot-reason] [Num_Seq-reason] for a given "reason"
* [Num_Seq][Tseq] is all time values in all sequences
* [Nloc] is all locations, while [Nloc-reason] is a subset of locations for a given "reason"
* [Model1] is the initial embedding of each data point
* [Model1+TrPosEnc] is the initial embedding of each data point with Transformer style positional encoding
* [Nforcing] is time dependent input parameters and [Nprop] static properties, while [ExPosEnc] are explicit positional (temporal) encodings
* [Nforcing+ExPosEnc+Nprop] are all possible inputs
* [Npred] is predicted values, with [Npred+ExPosEnc] as predictions plus encodings; the actually used set is [Predvals] = [Npred+ExPosEnc-Selout]
* [Predtimes] = [Forecast time range] are times forecasted, with "time range" separately defined

Define Basic Control parameters
###Code
def wraptotext(textinput,size=None):
if size is None:
size = 120
textlist = wrap(textinput,size)
textresult = textlist[0]
for itext in range(1,len(textlist)):
textresult += '\n'+textlist[itext]
return textresult
def timenow():
now = datetime.now()
return now.strftime("%m/%d/%Y, %H:%M:%S") + " UTC"
def float32fromstrwithNaN(instr):
if instr == 'NaN':
return NaN
return np.float32(instr)
def printexit(exitmessage):
print(exitmessage)
sys.exit()
def strrnd(value):
return str(round(value,4))
NaN = np.float32("NaN")
ScaleProperties = False
ConvertDynamicPredictedQuantity = False
ConvertDynamicProperties = True
GenerateFutures = False
GenerateSequences = False
PredictionsfromInputs = False
RereadMay2020 = False
UseOLDCovariates = False
Dropearlydata = 0
NIHCovariates = False
UseFutures = True
Usedaystart = False
PopulationNorm = False
SymbolicWindows = False
Hydrology = False
Earthquake = False
EarthquakeImagePlots = False
AddSpecialstoSummedplots = False
UseRealDatesonplots = False
Dumpoutkeyplotsaspics = False
OutputNetworkPictures = False
NumpredbasicperTime = 2
NumpredFuturedperTime = 2
NumTimeSeriesCalculated = 0
Dailyunit = 1
TimeIntervalUnitName = 'Day'
InitialDate = datetime(2000,1,1)
NumberofTimeunits = 0
Num_Time =0
FinalDate = datetime(2000,1,1)
GlobalTrainingLoss = 0.0
GlobalValidationLoss = 0.0
# Type of Testing
LocationBasedValidation = False
LocationValidationFraction = 0.0
LocationTrainingfraction = 1.0
RestartLocationBasedValidation = False
global SeparateValandTrainingPlots
SeparateValandTrainingPlots = True
Plotsplitsize = -1 # if > 1 split time in plots
GarbageCollect = True
GarbageCollectionLimit = 0
current_time = timenow()
print(startbold + startred + current_time + ' ' + f'{RunName} {RunComment}' + resetfonts)
Earthquake = True
###Output
_____no_output_____
###Markdown
Define input structure

Read in data and set it up for TensorFlow with training and validation. Set train_examples, val_examples as the science training and validation sets.

The shuffling of Science Data needs some care. We have ***Tseq*** * size of {[Num_Seq][Nloc]} locations in each sample. In the simplest case the last is just a decomposition over location, not over time. Let Nloc-sel be the number of locations per sample. It will be helpful if Nloc-sel is divisible by 2. Perhaps Nloc-sel = 2, 6 or 10 is reasonable.

Then you shuffle locations every epoch and divide them into groups of size Nloc-sel with 50% overlap, so you get locations 0 1 2 3 4 5; 3 4 5 6 7 8; 6 7 8 9 10 11 etc. Every location appears twice in an epoch (for each time value). You need to randomly add locations at the end of the sequence so it is divisible by Nloc-sel, e.g. add 4 random positions to the end if Nloc=110 and Nloc-sel = 6. Note the last group of 6 has members 112 113 114 0 1 2. After the spatial structure is set up, randomly shuffle in Num_Seq, where there is an argument to do all locations for a particular time value together. For validation, it is probably best to select the validation locations before chopping them into groups of size Nloc-sel. (A small sketch of this grouping is given below.)

How one groups locations for inference is not clear. One idea is to take the trained network and use it to find, for each location, which other locations have the most attention with it. Use those locations in prediction.

More general input. NaN is an allowed value.
* Number of time values
* Number of locations
* Number of driving values
* Number of predicted values

For COVID, driving values are the same as predicted:
* a) Clean up >=0 daily
* b) Normalize
* c) Add Futures
* d) Add time/location encoding

Setup File Systems
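Before the file-system setup code below, here is a small sketch of the location grouping described above (illustrative only; it is not the code used later in this notebook). Locations are shuffled, padded with a few random locations so the total is divisible by Nloc_sel, and then cut into 50%-overlapping groups.
```python
# Sketch only: overlapping location groups of size Nloc_sel with 50% overlap.
import random

def make_location_groups(Nloc, Nloc_sel):
    locations = list(range(Nloc))
    random.shuffle(locations)                      # reshuffle every epoch
    pad = (-len(locations)) % Nloc_sel
    locations += random.sample(range(Nloc), pad)   # pad to a multiple of Nloc_sel
    step = Nloc_sel // 2                           # 50% overlap between groups
    return [locations[i:i + Nloc_sel]
            for i in range(0, len(locations) - step, step)]

groups = make_location_groups(Nloc=110, Nloc_sel=6)
print(len(groups), groups[:2])
```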
###Code
if in_colab:
# find the data
COLABROOTDIR="/content/gdrive/My Drive/Colab Datasets"
else:
base_path = f'{config["run.workdir"]}/_{config["meta.parent.uuid"]}/workspace-{config["meta.uuid"]}'
exp_path = f'{base_path}/{config["run.datadir"]}'
Shell.mkdir(f'{exp_path}/EarthquakeDec2020/Outputs')
COLABROOTDIR=exp_path
print (COLABROOTDIR)
if not os.path.exists(COLABROOTDIR):
Console.error(f"Missing data directory: {COLABROOTDIR}")
sys.exit(1)
os.environ["COLABROOTDIR"] = COLABROOTDIR
APPLDIR=f"{COLABROOTDIR}/EarthquakeDec2020"
CHECKPOINTDIR = f"{APPLDIR}/checkpoints/{RunName}dir/"
Shell.mkdir(CHECKPOINTDIR)
print(f'Checkpoint set up in directory {CHECKPOINTDIR}')
config.checkpointdir = CHECKPOINTDIR
config.appldir = APPLDIR
config.colabrootdir = COLABROOTDIR
print(Printer.attribute(config))
###Output
_____no_output_____
###Markdown
Space Filling Curves
###Code
def cal_gilbert2d(width: int, height: int) -> List[Tuple[int, int]]:
coordinates: List[Tuple[int, int]] = []
def sgn(x: int) -> int:
return (x > 0) - (x < 0)
def gilbert2d(x: int, y: int, ax: int, ay: int, bx: int, by: int):
"""
Generalized Hilbert ('gilbert') space-filling curve for arbitrary-sized
2D rectangular grids.
"""
w = abs(ax + ay)
h = abs(bx + by)
(dax, day) = (sgn(ax), sgn(ay)) # unit major direction
(dbx, dby) = (sgn(bx), sgn(by)) # unit orthogonal direction
if h == 1:
# trivial row fill
for i in range(0, w):
coordinates.append((x, y))
(x, y) = (x + dax, y + day)
return
if w == 1:
# trivial column fill
for i in range(0, h):
coordinates.append((x, y))
(x, y) = (x + dbx, y + dby)
return
(ax2, ay2) = (ax // 2, ay // 2)
(bx2, by2) = (bx // 2, by // 2)
w2 = abs(ax2 + ay2)
h2 = abs(bx2 + by2)
if 2 * w > 3 * h:
if (w2 % 2) and (w > 2):
# prefer even steps
(ax2, ay2) = (ax2 + dax, ay2 + day)
# long case: split in two parts only
gilbert2d(x, y, ax2, ay2, bx, by)
gilbert2d(x + ax2, y + ay2, ax - ax2, ay - ay2, bx, by)
else:
if (h2 % 2) and (h > 2):
# prefer even steps
(bx2, by2) = (bx2 + dbx, by2 + dby)
# standard case: one step up, one long horizontal, one step down
gilbert2d(x, y, bx2, by2, ax2, ay2)
gilbert2d(x + bx2, y + by2, ax, ay, bx - bx2, by - by2)
gilbert2d(x + (ax - dax) + (bx2 - dbx), y + (ay - day) + (by2 - dby), -bx2, -by2, -(ax - ax2), -(ay - ay2))
if width >= height:
gilbert2d(0, 0, width, 0, 0, height)
else:
gilbert2d(0, 0, 0, height, width, 0)
return coordinates
def lookup_color(unique_colors, color_value: float) -> int:
ids = np.where(unique_colors == color_value)
color_id = ids[0][0]
return color_id
def plot_gilbert2d_space_filling(
vertices: List[Tuple[int, int]],
width: int,
height: int,
filling_color: Optional[np.ndarray] = None,
color_map: str = "rainbow",
figsize: Tuple[int, int] = (12, 8),
linewidth: int = 1,
) -> None:
fig, ax = plt.subplots(figsize=figsize)
patch_list: List = []
if filling_color is None:
cmap = matplotlib.cm.get_cmap(color_map, len(vertices))
for i in range(len(vertices) - 1):
path = Path([vertices[i], vertices[i + 1]], [Path.MOVETO, Path.LINETO])
patch = patches.PathPatch(path, fill=False, edgecolor=cmap(i), lw=linewidth)
patch_list.append(patch)
ax.set_xlim(-1, width)
ax.set_ylim(-1, height)
else:
unique_colors = np.unique(filling_color)
# np.random.shuffle(unique_colors)
cmap = matplotlib.cm.get_cmap(color_map, len(unique_colors))
for i in range(len(vertices) - 1):
x, y = vertices[i]
fi, fj = x, height - 1 - y
color_value = filling_color[fj, fi]
color_id = lookup_color(unique_colors, color_value)
path = Path(
[rescale_xy(x, y), rescale_xy(vertices[i + 1][0], vertices[i + 1][1])], [Path.MOVETO, Path.LINETO]
)
# path = Path([vertices[i], vertices[i + 1]], [Path.MOVETO, Path.LINETO])
patch = patches.PathPatch(path, fill=False, edgecolor=cmap(color_id), lw=linewidth)
patch_list.append(patch)
ax.set_xlim(-120 - 0.1, width / 10 - 120)
ax.set_ylim(32 - 0.1, height / 10 + 32)
collection = matplotlib.collections.PatchCollection(patch_list, match_original=True)
# collection.set_array()
# plt.colorbar(collection)
ax.add_collection(collection)
ax.set_aspect("equal")
plt.show()
return
def rescale_xy(x: int, y: int) -> Tuple[float, float]:
return x / 10 - 120, y / 10 + 32
def remapfaults(InputFaultNumbers, Numxlocations, Numylocations, SpaceFillingCurve):
TotalLocations = Numxlocations*Numylocations
OutputFaultNumbers = np.full_like(InputFaultNumbers, -1, dtype=int)
MaxOldNumber = np.amax(InputFaultNumbers)
mapping = np.full(MaxOldNumber+1, -1,dtype=int)
newlabel=-1
for sfloc in range(0, TotalLocations):
[x,y] = SpaceFillingCurve[sfloc]
pixellocation = y*Numxlocations + x
pixellocation1 = y*Numxlocations + x
oldfaultnumber = InputFaultNumbers[pixellocation1]
if mapping[oldfaultnumber] < 0:
newlabel += 1
mapping[oldfaultnumber] = newlabel
OutputFaultNumbers[pixellocation] = mapping[oldfaultnumber]
MinNewNumber = np.amin(OutputFaultNumbers)
if MinNewNumber < 0:
printexit('Incorrect Fault Mapping')
print('new Fault Labels generated 0 through ' + str(newlabel))
plot_gilbert2d_space_filling(SpaceFillingCurve,Numxlocations, Numylocations, filling_color = np.reshape(OutputFaultNumbers,(40,60)), color_map="gist_ncar")
return OutputFaultNumbers
def annotate_faults_ndarray(pix_faults: np.ndarray, figsize=(10, 8), color_map="rainbow"):
matplotlib.rcParams.update(matplotlib.rcParamsDefault)
plt.rcParams.update({"font.size": 12})
unique_colors = np.unique(pix_faults)
np.random.shuffle(unique_colors)
cmap = matplotlib.cm.get_cmap(color_map, len(unique_colors))
fig, ax = plt.subplots(figsize=figsize)
height, width = pix_faults.shape
for j in range(height):
for i in range(width):
x, y = i / 10 - 120, (height - j - 1) / 10 + 32
ax.annotate(str(pix_faults[j, i]), (x + 0.05, y + 0.05), ha="center", va="center")
color_id = lookup_color(unique_colors, pix_faults[j, i])
ax.add_patch(patches.Rectangle((x, y), 0.1, 0.1, color=cmap(color_id), alpha=0.5))
ax.set_xlim(-120, width / 10 - 120)
ax.set_ylim(32, height / 10 + 32)
plt.show()
###Output
_____no_output_____
###Markdown
CELL READ DATA
###Code
def makeadateplot(plotfigure,plotpointer, Dateaxis=None, datemin=None, datemax=None, Yearly=True, majoraxis = 5):
if not Yearly:
sys.exit('Only yearly supported')
plt.rcParams.update({'font.size': 9})
years5 = mdates.YearLocator(majoraxis) # every 5 years
years_fmt = mdates.DateFormatter('%Y')
plotpointer.xaxis.set_major_locator(years5)
plotpointer.xaxis.set_major_formatter(years_fmt)
if datemin is None:
datemin = np.datetime64(Dateaxis[0], 'Y')
if datemax is None:
datemax = np.datetime64(Dateaxis[-1], 'Y') + np.timedelta64(1, 'Y')
plotpointer.set_xlim(datemin, datemax)
plotfigure.autofmt_xdate()
return datemin, datemax
def makeasmalldateplot(figure,ax, Dateaxis):
plt.rcParams.update({'font.size': 9})
months = mdates.MonthLocator(interval=2) # every month
datemin = np.datetime64(Dateaxis[0], 'M')
datemax = np.datetime64(Dateaxis[-1], 'M') + np.timedelta64(1, 'M')
ax.set_xlim(datemin, datemax)
months_fmt = mdates.DateFormatter('%y-%b')
locator = mdates.AutoDateLocator()
locator.intervald['MONTHLY'] = [2]
formatter = mdates.ConciseDateFormatter(locator)
# ax.xaxis.set_major_locator(locator)
# ax.xaxis.set_major_formatter(formatter)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(months_fmt)
figure.autofmt_xdate()
return datemin, datemax
def Addfixedearthquakes(plotpointer,graphmin, graphmax, ylogscale = False, quakecolor = None, Dateplot = True, vetoquake = None):
if vetoquake is None: # Vetoquake = True means do not plot this quake
vetoquake = np.full(numberspecialeqs, False, dtype = bool)
if quakecolor is None: # Color of plot
quakecolor = 'black'
Place =np.arange(numberspecialeqs, dtype =int)
Place[8] = 11
Place[10] = 3
Place[12] = 16
Place[7] = 4
Place[2] = 5
Place[4] = 14
Place[11] = 18
ymin, ymax = plotpointer.get_ylim() # Or work with transform=ax.transAxes
for iquake in range(0,numberspecialeqs):
if vetoquake[iquake]:
continue
# This is the x position for the vertical line
if Dateplot:
x_line_annotation = Specialdate[iquake] # numpy date format
else:
x_line_annotation = Numericaldate[iquake] # Float where each interval 1 and start is 0
if (x_line_annotation < graphmin) or (x_line_annotation > graphmax):
continue
# This is the x position for the label
if Dateplot:
x_text_annotation = x_line_annotation + np.timedelta64(5*Dailyunit,'D')
else:
x_text_annotation = x_line_annotation + 5.0
# Draw a line at the position
plotpointer.axvline(x=x_line_annotation, linestyle='dashed', alpha=1.0, linewidth = 0.5, color=quakecolor)
# Draw a text
if Specialuse[iquake]:
ascii = str(round(Specialmags[iquake],1)) + '\n' + Specialeqname[iquake]
if ylogscale:
yminl = max(0.01*ymax,ymin)
yminl = math.log(yminl,10)
ymaxl = math.log(ymax,10)
logyplot = yminl + (0.1 + 0.8*(float(Place[iquake])/float(numberspecialeqs-1)))*(ymaxl-yminl)
yplot = pow(10, logyplot)
else:
yplot = ymax - (0.1 + 0.8*(float(Place[iquake])/float(numberspecialeqs-1)))*(ymax-ymin)
if Dateplot:
if x_text_annotation > graphmax - np.timedelta64(2000, 'D'):
x_text_annotation = graphmax - np.timedelta64(2000, 'D')
else:
if x_text_annotation > graphmax - 100:
x_text_annotation = graphmax - 100
# print(str(yplot) + " " + str(ymin) + " " + str(ymax) + " " + str(x_text_annotation) + " " + str(x_line_annotation)) + " " + ascii
plotpointer.text(x=x_text_annotation, y=yplot, s=wraptotext(ascii,size=10), alpha=1.0, color='black', fontsize = 6)
def quakesearch(iquake, iloc):
# see if top earthquake iquake llies near location iloc
# result = 0 NO; =1 YES Primary: locations match exactly; = -1 Secondary: locations near
# iloc is location before mapping
xloc = iloc%60
yloc = (iloc - xloc)/60
if (xloc == Specialxpos[iquake]) and (yloc == Specialypos[iquake]):
return 1
if (abs(xloc - Specialxpos[iquake]) <= 1) and (abs(yloc - Specialypos[iquake]) <= 1):
return -1
return 0
# Read Earthquake Data
def log_sum_exp10(ns, sumaxis =0):
max_v = np.max(ns, axis=None)
ds = ns - max_v
sum_of_exp = np.power(10, ds).sum(axis=sumaxis)
return max_v + np.log10(sum_of_exp)
def log_energyweightedsum(nvalue, ns, sumaxis = 0):
max_v = np.max(ns, axis=None)
ds = ns - max_v
ds = np.power(10, 1.5*ds)
dvalue = (np.multiply(nvalue,ds)).sum(axis=sumaxis)
ds = ds.sum(axis=0)
return np.divide(dvalue,ds)
# Set summed magnitude as log summed energy = 10^(1.5 magnitude)
def log_energy(mag, sumaxis =0):
return log_sum_exp10(1.5 * mag, sumaxis = sumaxis) / 1.5
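# (Added illustrative note) log_energy above aggregates magnitudes by summing
# energies E = 10**(1.5*m) and converting back to a magnitude scale, so two
# events of magnitude 5.0 combine to 5.0 + log10(2)/1.5 ~= 5.2, not 10.0:
#   log_energy(np.array([5.0, 5.0]))  # ~= 5.2007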
def AggregateEarthquakes(itime, DaysDelay, DaysinInterval, Nloc, Eqdata, Approach, weighting = None):
if (itime + DaysinInterval + DaysDelay) > NumberofTimeunits:
return np.full([Nloc],NaN,dtype = np.float32)
if Approach == 0: # Magnitudes
if MagnitudeMethod == 0:
TotalMagnitude = log_energy(Eqdata[itime +DaysDelay:itime+DaysinInterval+DaysDelay])
else:
TotalMagnitude = Eqdata[itime +DaysDelay:itime+DaysinInterval+DaysDelay,:].sum(axis=0)
return TotalMagnitude
if Approach == 1: # Depth -- energy weighted
WeightedResult = log_energyweightedsum(Eqdata[itime +DaysDelay:itime+DaysinInterval+DaysDelay],
weighting[itime +DaysDelay:itime+DaysinInterval+DaysDelay])
return WeightedResult
if Approach == 2: # Multiplicity -- summed
SimpleSum = Eqdata[itime +DaysDelay:itime+DaysinInterval+DaysDelay,:].sum(axis=0)
return SimpleSum
def TransformMagnitude(mag):
if MagnitudeMethod == 0:
return mag
if MagnitudeMethod == 1:
return np.power(10, 0.375*(mag-3.29))
return np.power(10, 0.75*(mag-3.29))
# Change Daily Unit
# Accumulate data in Dailyunit chunks.
# This changes data so it looks like daily data bu really collections of chunked data.
# For earthquakes, the aggregations uses energy averaging for depth and magnitude. It just adds for multiplicity
def GatherUpData(OldInputTimeSeries):
Skipped = NumberofTimeunits%Dailyunit
NewInitialDate = InitialDate + timedelta(days=Skipped)
NewNum_Time = int(Num_Time/Dailyunit)
NewFinalDate = NewInitialDate + Dailyunit * timedelta(days=NewNum_Time-1)
print(' Daily Unit ' +str(Dailyunit) + ' number of ' + TimeIntervalUnitName + ' Units ' + str(NewNum_Time)+ ' ' +
NewInitialDate.strftime("%d/%m/%Y") + ' To ' + NewFinalDate.strftime("%d/%m/%Y"))
NewInputTimeSeries = np.empty([NewNum_Time,Nloc,NpropperTimeDynamicInput],dtype = np.float32)
for itime in range(0,NewNum_Time):
NewInputTimeSeries[itime,:,0] = AggregateEarthquakes(Skipped + itime*Dailyunit,0,Dailyunit, Nloc,
BasicInputTimeSeries[:,:,0], 0)
NewInputTimeSeries[itime,:,1] = AggregateEarthquakes(Skipped + itime*Dailyunit,0,Dailyunit, Nloc,
BasicInputTimeSeries[:,:,1], 1,
weighting = BasicInputTimeSeries[:,:,0])
NewInputTimeSeries[itime,:,2] = AggregateEarthquakes(Skipped + itime*Dailyunit,0,Dailyunit, Nloc,
BasicInputTimeSeries[:,:,2], 2)
NewInputTimeSeries[itime,:,3] = AggregateEarthquakes(Skipped + itime*Dailyunit,0,Dailyunit, Nloc,
BasicInputTimeSeries[:,:,3], 2)
return NewInputTimeSeries, NewNum_Time, NewNum_Time, NewInitialDate, NewFinalDate
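# Worked example of the aggregation above (numbers are illustrative): with 25567 daily records
# and Dailyunit = 14, Skipped = 25567 % 14 = 3 leading days are dropped and the series becomes
# NewNum_Time = 25567 // 14 = 1826 fortnightly records; magnitude and depth are energy-weighted
# within each chunk while the two multiplicity channels are simply summed.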
# Daily Read in Version
if Earthquake:
read1950 = True
Eigenvectors = 2
UseEarthquakeEigenSystems = False
Dailyunit = 14
addwobblingposition = False
r = Shell.ls(config.appldir)
print (r)
if read1950:
MagnitudeDataFile = APPLDIR + '/1950start/SC_1950-2019.freq-D-25567x2400-log_eng.multi.csv'
DepthDataFile = APPLDIR + '/1950start/SC_1950-2019.freq-D-25567x2400-w_depth.multi.csv'
MultiplicityDataFile = APPLDIR + '/1950start/SC_1950-2019.freq-D-25567x2400-n_shock.multi.csv'
RundleMultiplicityDataFile = APPLDIR + '/1950start/SC_1950-2019.freq-D-25567x2400-n_shock-mag-3.29.multi.csv'
NumberofTimeunits = 25567
InitialDate = datetime(1950,1,1)
else:
MagnitudeDataFile = APPLDIR + '/SC_1990-2019.freq-D-10759x2400.csv'
DepthDataFile = APPLDIR + '/SC_1990-2019.freq-D-w_depth-10759x2400.multi.csv'
MultiplicityDataFile = APPLDIR + '/SC_1990-2019.freq-D-num_evts-10759x2400.csv'
RundleMultiplicityDataFile = APPLDIR + '/SC_1990-2019.freq-D-10755x2400-n_shock-mag-3.29.multi.csv'
NumberofTimeunits = 10759
InitialDate = datetime(1990,1,1)
Topearthquakesfile = APPLDIR + '/topearthquakes_20.csv'
FaultLabelDataFile = APPLDIR + '/pix_faults_SmallJan21.csv'
MagnitudeMethod = 0
ReadFaultMethod = 2 # one set of x values for each input row
Numberxpixels = 60
Numberypixels = 40
Numberpixels = Numberxpixels*Numberypixels
Nloc = Numberpixels
Nlocdimension = 2
Nlocaxislengths = np.array((Numberxpixels,Numberypixels), ndmin = 1, dtype=int) # First row is top (north)
vertices = cal_gilbert2d(Numberxpixels,Numberypixels)
# print(vertices[0], vertices[1],vertices[2399], vertices[1198], vertices[1199],vertices[1200], vertices[1201])
sfcurvelist = vertices
plot_gilbert2d_space_filling(sfcurvelist, Numberxpixels, Numberypixels)
Dropearlydata = 0
FinalDate = InitialDate + timedelta(days=NumberofTimeunits-1)
print(startbold + startred + InitialDate.strftime("%d/%m/%Y") + ' To ' + FinalDate.strftime("%d/%m/%Y")
+ ' days ' + str(NumberofTimeunits) + resetfonts)
print( ' Pixels ' + str(Nloc) + ' x dimension ' + str(Nlocaxislengths[0]) + ' y dimension ' + str(Nlocaxislengths[1]) )
# Set up location information
Num_Time = NumberofTimeunits
NFIPS = Numberpixels
Locationname = [''] * NFIPS
Locationstate = [' '] * NFIPS
Locationpopulation = np.ones(NFIPS, dtype=int)
Locationfips = np.empty(NFIPS, dtype=int) # integer version of FIPs
Locationcolumns = [] # String version of FIPS
FIPSintegerlookup = {}
FIPSstringlookup = {}
for iloc in range (0, Numberpixels):
localfips = iloc
xvalue = localfips%Nlocaxislengths[0]
yvalue = np.floor(localfips/Nlocaxislengths[0])
Stringfips = str(xvalue) + ',' + str(yvalue)
Locationcolumns.append(Stringfips)
Locationname[iloc] = Stringfips
Locationfips[iloc] = localfips
FIPSintegerlookup[localfips] = localfips
FIPSstringlookup[Stringfips] = localfips
# TimeSeries 0 magnitude 1 depth 2 Multiplicity 3 Rundle Multiplicity
NpropperTimeDynamicInput = 4
BasicInputTimeSeries = np.empty([Num_Time,Nloc,NpropperTimeDynamicInput],dtype = np.float32)
# StaticProps 0...NumFaultLabels-1 Fault Labels
NumFaultLabels = 4
BasicInputStaticProps = np.empty([Nloc,NumFaultLabels],dtype = np.float32)
RawFaultData = np.empty(Nloc,dtype = int)
# Read in Magnitude Data into BasicInputTimeSeries
with open(MagnitudeDataFile, 'r') as read_obj:
csv_reader = reader(read_obj)
header = next(csv_reader)
Ftype = header[0]
if Ftype != '':
printexit('EXIT: Wrong header on line 1 ' + Ftype + ' of ' + MagnitudeDataFile)
itime = 0
for nextrow in csv_reader:
if len(nextrow)!=Numberpixels + 1:
printexit('EXIT: Incorrect row length Magnitude ' + str(itime) + ' ' +str(len(nextrow)))
localtime = nextrow[0]
if itime != int(localtime):
printexit('EXIT: Unexpected Time in Magnitude ' + localtime + ' ' +str(itime))
for iloc in range(0, Numberpixels):
BasicInputTimeSeries[itime,iloc,0] = TransformMagnitude(float(nextrow[iloc + 1]))
itime += 1
if itime != Num_Time:
printexit('EXIT Inconsistent time lengths in Magnitude Data ' +str(itime) + ' ' + str(Num_Time))
print('Read Magnitude data locations ' + str(Nloc) + ' Time Steps ' + str(Num_Time))
# End Reading in Magnitude data
# Read in Depth Data into BasicInputTimeSeries
with open(DepthDataFile, 'r') as read_obj:
csv_reader = reader(read_obj)
header = next(csv_reader)
Ftype = header[0]
if Ftype != '':
printexit('EXIT: Wrong header on line 1 ' + Ftype + ' of ' + DepthDataFile)
itime = 0
for nextrow in csv_reader:
if len(nextrow)!=Numberpixels + 1:
printexit('EXIT: Incorrect row length Depth ' + str(itime) + ' ' +str(len(nextrow)))
localtime = nextrow[0]
if itime != int(localtime):
printexit('EXIT: Unexpected Time in Depth ' + localtime + ' ' +str(itime))
for iloc in range(0, Numberpixels):
BasicInputTimeSeries[itime,iloc,1] = nextrow[iloc + 1]
itime += 1
if itime != Num_Time:
printexit('EXIT Inconsistent time lengths in Depth Data ' +str(itime) + ' ' + str(Num_Time))
print('Read Depth data locations ' + str(Nloc) + ' Time Steps ' + str(Num_Time))
# End Reading in Depth data
# Read in Multiplicity Data into BasicInputTimeSeries
with open(MultiplicityDataFile, 'r') as read_obj:
csv_reader = reader(read_obj)
header = next(csv_reader)
Ftype = header[0]
if Ftype != '':
printexit('EXIT: Wrong header on line 1 ' + Ftype + ' of ' + MultiplicityDataFile)
itime = 0
for nextrow in csv_reader:
if len(nextrow)!=Numberpixels + 1:
printexit('EXIT: Incorrect row length Multiplicity ' + str(itime) + ' ' +str(len(nextrow)))
localtime = nextrow[0]
if itime != int(localtime):
printexit('EXIT: Unexpected Time in Multiplicity ' + localtime + ' ' +str(itime))
for iloc in range(0, Numberpixels):
BasicInputTimeSeries[itime,iloc,2] = nextrow[iloc + 1]
itime += 1
if itime != Num_Time:
printexit('EXIT Inconsistent time lengths in Multiplicity Data ' +str(itime) + ' ' + str(Num_Time))
print('Read Multiplicity data locations ' + str(Nloc) + ' Time Steps ' + str(Num_Time))
# End Reading in Multiplicity data
# Read in Rundle Multiplicity Data into BasicInputTimeSeries
with open(RundleMultiplicityDataFile, 'r') as read_obj:
csv_reader = reader(read_obj)
header = next(csv_reader)
Ftype = header[0]
if Ftype != '':
printexit('EXIT: Wrong header on line 1 ' + Ftype + ' of ' + RundleMultiplicityDataFile)
itime = 0
for nextrow in csv_reader:
if len(nextrow)!=Numberpixels + 1:
printexit('EXIT: Incorrect row length Rundle Multiplicity ' + str(itime) + ' ' +str(len(nextrow)))
localtime = nextrow[0]
if itime != int(localtime):
printexit('EXIT: Unexpected Time in Rundle Multiplicity ' + localtime + ' ' +str(itime))
for iloc in range(0, Numberpixels):
BasicInputTimeSeries[itime,iloc,3] = nextrow[iloc + 1]
itime += 1
if itime != Num_Time:
printexit('EXIT Inconsistent time lengths in Rundle Multiplicity Data ' +str(itime) + ' ' + str(Num_Time))
print('Read Rundle Multiplicity data locations ' + str(Nloc) + ' Time Steps ' + str(Num_Time))
# End Reading in Rundle Multiplicity data
# Read in Top Earthquake Data
numberspecialeqs = 20
Specialuse = np.full(numberspecialeqs, True, dtype=bool)
Specialuse[14] = False
Specialuse[15] = False
Specialuse[18] = False
Specialuse[19] = False
Specialmags = np.empty(numberspecialeqs, dtype=np.float32)
Specialdepth = np.empty(numberspecialeqs, dtype=np.float32)
Speciallong = np.empty(numberspecialeqs, dtype=np.float32)
Speciallat = np.empty(numberspecialeqs, dtype=np.float32)
Specialdate = np.empty(numberspecialeqs, dtype = 'datetime64[D]')
Specialxpos = np.empty(numberspecialeqs, dtype=np.int32)
Specialypos = np.empty(numberspecialeqs, dtype=np.int32)
Specialeqname = []
with open(Topearthquakesfile, 'r') as read_obj:
csv_reader = reader(read_obj)
header = next(csv_reader)
Ftype = header[0]
if Ftype != 'date':
printexit('EXIT: Wrong header on line 1 ' + Ftype + ' of ' + Topearthquakesfile)
iquake = 0
for nextrow in csv_reader:
if len(nextrow)!=6:
printexit('EXIT: Incorrect row length Special Earthquakes ' + str(iquake) + ' ' +str(len(nextrow)))
Specialdate[iquake] = nextrow[0]
Speciallong[iquake] = nextrow[1]
Speciallat[iquake] = nextrow[2]
Specialmags[iquake] = nextrow[3]
Specialdepth[iquake] = nextrow[4]
Specialeqname.append(nextrow[5])
ixpos = math.floor((Speciallong[iquake]+120.0)*10.0)
ixpos = max(0,ixpos)
ixpos = min(59,ixpos)
iypos = math.floor((36.0-Speciallat[iquake])*10.0)
iypos = max(0,iypos)
iypos = min(39,iypos)
Specialxpos[iquake] = ixpos
Specialypos[iquake] = iypos
iquake += 1
for iquake in range(0,numberspecialeqs):
line = str(iquake) + ' mag ' + str(round(Specialmags[iquake],1)) + ' Lat/Long '
        line += str(round(Speciallat[iquake],2)) + ' ' + str(round(Speciallong[iquake],2)) + ' ' + np.datetime_as_string(Specialdate[iquake])
line += Specialeqname[iquake]
print(line)
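    # Worked example of the pixel mapping above (coordinates are illustrative): an event at
    # longitude -117.6 and latitude 35.77 maps to ixpos = floor((-117.6+120)*10) = 24 and
    # iypos = floor((36.0-35.77)*10) = 2, i.e. column 24, row 2 of the 60x40 pixel grid.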
# Possibly change Unit
current_time = timenow()
print(startbold + startred + current_time + ' Data read in ' + RunName + ' ' + RunComment + resetfonts)
if Dailyunit != 1:
if Dailyunit == 14:
TimeIntervalUnitName = 'Fortnight'
if Dailyunit == 28:
TimeIntervalUnitName = 'LunarMonth'
BasicInputTimeSeries, NumberofTimeunits, Num_Time, InitialDate, FinalDate = GatherUpData(BasicInputTimeSeries)
current_time = timenow()
print(startbold + startred + current_time + ' Data unit changed ' +RunName + ' ' + RunComment + resetfonts)
Dateaxis = np.empty(Num_Time, dtype = 'datetime64[D]')
Dateaxis[0] = np.datetime64(InitialDate).astype('datetime64[D]')
for idate in range(1,Num_Time):
Dateaxis[idate] = Dateaxis[idate-1] + np.timedelta64(Dailyunit,'D')
for idate in range(0,Num_Time):
Dateaxis[idate] = Dateaxis[idate] + np.timedelta64(int(Dailyunit/2),'D')
print('Mid unit start time ' + np.datetime_as_string(Dateaxis[0]))
Totalmag = np.zeros(Num_Time,dtype = np.float32)
Totalefourthroot = np.zeros(Num_Time,dtype = np.float32)
Totalesquareroot = np.zeros(Num_Time,dtype = np.float32)
Totaleavgedmag = np.zeros(Num_Time,dtype = np.float32)
Totalmult = np.zeros(Num_Time,dtype = np.float32)
Totalmag[:] = BasicInputTimeSeries[:,:,0].sum(axis=1)
Totaleavgedmag = log_energy(BasicInputTimeSeries[:,:,0], sumaxis=1)
Totalmult[:] = BasicInputTimeSeries[:,:,3].sum(axis=1)
MagnitudeMethod = 1
Tempseries = TransformMagnitude(BasicInputTimeSeries[:,:,0])
Totalefourthroot = Tempseries.sum(axis=1)
MagnitudeMethod = 2
Tempseries = TransformMagnitude(BasicInputTimeSeries[:,:,0])
Totalesquareroot = Tempseries.sum(axis=1)
MagnitudeMethod = 0
basenorm = Totalmult.max(axis=0)
magnorm = Totalmag.max(axis=0)
eavgedmagnorm = Totaleavgedmag.max(axis=0)
efourthrootnorm = Totalefourthroot.max(axis=0)
esquarerootnorm = Totalesquareroot.max(axis=0)
print('Maximum Mult ' + str(round(basenorm,2)) + ' Mag 0.15 ' + str(round(magnorm,2))
+ ' E-avg 0.5 ' + str(round(eavgedmagnorm,2)) + ' E^0.25 1.0 ' + str(round(efourthrootnorm,2))
+ ' E^0.5 1.0 ' + str(round(esquarerootnorm,2)) )
Totalmag = np.multiply(Totalmag, 0.15*basenorm/magnorm)
Totaleavgedmag = np.multiply(Totaleavgedmag, 0.5*basenorm/eavgedmagnorm)
Totalefourthroot= np.multiply(Totalefourthroot, basenorm/efourthrootnorm)
Totalesquareroot= np.multiply(Totalesquareroot, basenorm/esquarerootnorm)
plt.rcParams["figure.figsize"] = [16,8]
figure, ax = plt.subplots()
datemin, datemax = makeadateplot(figure, ax, Dateaxis)
ax.plot(Dateaxis, Totalmult, label='Multiplicity')
ax.plot(Dateaxis, Totalmag, label='Summed Magnitude')
ax.plot(Dateaxis, Totaleavgedmag, label='E-averaged Magnitude')
ax.plot(Dateaxis, Totalefourthroot, label='Summed E^0.25')
ax.plot(Dateaxis, Totalesquareroot, label='Summed E^0.5')
ax.set_title('Observables summed over space')
ax.set_xlabel("Years")
ax.set_ylabel("Mult/Mag/Energy")
ax.grid(True)
ax.legend(loc='upper right')
Addfixedearthquakes(ax, datemin, datemax)
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
figure.tight_layout()
plt.show()
else:
print(' Data unit is the day and input this way')
Dateaxis = np.empty(Num_Time, dtype = 'datetime64[D]')
Dateaxis[0] = np.datetime64(InitialDate).astype('datetime64[D]')
for idate in range(1,Num_Time):
Dateaxis[idate] = Dateaxis[idate-1] + np.timedelta64(Dailyunit,'D')
for idate in range(0,Num_Time):
Dateaxis[idate] = Dateaxis[idate] + np.timedelta64(int(Dailyunit/2),'D')
print('Mid unit start time ' + np.datetime_as_string(Dateaxis[0]))
# Read in Fault Label Data into BasicInputStaticProps
# No header for data
with open(FaultLabelDataFile, 'r') as read_obj:
csv_reader = reader(read_obj)
iloc = 0
if ReadFaultMethod ==1:
for nextrow in csv_reader:
if len(nextrow)!=1:
printexit('EXIT: Incorrect row length Fault Label Data ' + str(iloc) + ' ' + str(len(nextrow)))
RawFaultData[iloc] = nextrow[0]
iloc += 1
else:
for nextrow in csv_reader:
if len(nextrow)!=Numberxpixels:
printexit('EXIT: Incorrect row length Fault Label Data ' + str(iloc) + ' ' + str(len(nextrow)) + ' ' + str(Numberxpixels))
for jloc in range(0, len(nextrow)):
RawFaultData[iloc] = nextrow[jloc]
iloc += 1
if iloc != Nloc:
printexit('EXIT Inconsistent location lengths in Fault Label Data ' +str(iloc) + ' ' + str(Nloc))
print('Read Fault Label data locations ' + str(Nloc))
# End Reading in Fault Label data
if NumFaultLabels == 1:
BasicInputStaticProps[:,0] = RawFaultData.astype(np.float32)
else: # remap fault label more reasonably
unique, counts = np.unique(RawFaultData, return_counts=True)
num = len(unique)
print('Number Fault Collections ' + str(num))
# for i in range(0,num):
# print(str(unique[i]) + ' ' + str(counts[i]))
BasicInputStaticProps[:,0] = remapfaults(RawFaultData, Numberxpixels,Numberypixels, sfcurvelist).astype(np.float32)
pix_faults = np.reshape(BasicInputStaticProps[:,0],(40,60)).astype(int)
annotate_faults_ndarray(pix_faults,figsize=(24, 16))
sfcurvelist2 = []
for yloc in range(0, Numberypixels):
for xloc in range(0, Numberxpixels):
pixellocation = yloc*Numberxpixels + xloc
[x,y] = sfcurvelist[pixellocation]
sfcurvelist2.append([x,39-y])
BasicInputStaticProps[:,1] = remapfaults(RawFaultData, Numberxpixels,Numberypixels, sfcurvelist2).astype(np.float32)
sfcurvelist3 = []
for yloc in range(0, Numberypixels):
for xloc in range(0, Numberxpixels):
pixellocation = yloc*Numberxpixels + xloc
[x,y] = sfcurvelist[pixellocation]
sfcurvelist3.append([59-x,y])
BasicInputStaticProps[:,2] = remapfaults(RawFaultData, Numberxpixels,Numberypixels, sfcurvelist3).astype(np.float32)
sfcurvelist4 = []
for yloc in range(0, Numberypixels):
for xloc in range(0, Numberxpixels):
pixellocation = yloc*Numberxpixels + xloc
[x,y] = sfcurvelist[pixellocation]
sfcurvelist4.append([59-x,39-y])
BasicInputStaticProps[:,3] = remapfaults(RawFaultData, Numberxpixels,Numberypixels, sfcurvelist4).astype(np.float32)
NpropperTimeDynamicCalculated = 11
NpropperTimeDynamic = NpropperTimeDynamicInput + NpropperTimeDynamicCalculated
NpropperTimeStatic = NumFaultLabels
# NumpredbasicperTime = NpropperTimeDynamic
NumpredbasicperTime = 1 # Can be 1 upto NpropperTimeDynamic
NumpredFuturedperTime = NumpredbasicperTime
# Setup Transformed Data
MagnitudeMethodTransform = 1
TransformName = 'E^0.25'
NpropperTime = NpropperTimeStatic + NpropperTimeDynamic
InputPropertyNames = [' '] * NpropperTime
DynamicNames = ['Magnitude Now',
'Depth Now',
'Multiplicity Now',
'Mult >3.29 Now',
'Mag 2/3 Month Back',
'Mag 1.5 Month Back',
'Mag 3 Months Back',
'Mag 6 Months Back',
'Mag Year Back',
TransformName + ' Now',
TransformName+' 2/3 Month Back',
TransformName+' 1.5 Month Back',
TransformName+' 3 Months Back',
TransformName+' 6 Months Back',
TransformName+' Year Back']
if Dailyunit == 14:
DynamicNames = ['Magnitude 2 weeks Now',
'Depth 2 weeks Now',
'Multiplicity 2 weeks Now',
'Mult >3.29 2 weeks Now',
'Mag 4 Weeks Back',
'Mag 2 Months Back',
'Mag 3 Months Back',
'Mag 6 Months Back',
'Mag Year Back',
TransformName+ ' 2 weeks Back',
TransformName+' 4 weeks Back',
TransformName+' 2 Months Back',
TransformName+' 3 Months Back',
TransformName+' 6 Months Back',
TransformName+' Year Back']
Property_is_Intensive = np.full(NpropperTime, True, dtype = bool)
for iprop in range(0, NpropperTimeStatic):
InputPropertyNames[iprop] = 'Fault ' +str(iprop)
for iprop in range(0, NpropperTimeDynamic):
InputPropertyNames[iprop+NpropperTimeStatic] = DynamicNames[iprop]
Num_Extensive = 0
ScaleProperties = True
GenerateFutures = False
GenerateSequences = True
PredictionsfromInputs = True
ConvertDynamicPredictedQuantity = False
AddSpecialstoSummedplots = True
UseRealDatesonplots = True
EarthquakeImagePlots = False
UseFutures = False
PopulationNorm = False
OriginalNloc = Nloc
MapLocation = False
# Add summed magnitudes as properties to use in prediction and Calculated Properties for some
# Calculated Properties are sums starting at given time and are set to NaN if necessary
NumTimeSeriesCalculatedBasic = 9
NumTimeSeriesCalculated = 2*NumTimeSeriesCalculatedBasic + 1
NamespredCalculated = ['Mag 2/3 Month Ahead',
'Mag 1.5 Month Ahead',
'Mag 3 Months Ahead',
'Mag 6 Months Ahead',
'Mag Year Ahead',
'Mag 2 Years Ahead',
'Mag 4 years Ahead',
'Mag Skip 1, Year ahead',
'Mag 2 years 2 ahead',
TransformName+' Daily Now',
TransformName+' 2/3 Month Ahead',
TransformName+' 1.5 Month Ahead',
TransformName+' 3 Months Ahead',
TransformName+' 6 Months Ahead',
TransformName+' Year Ahead',
TransformName+' 2 Years Ahead',
TransformName+' 4 years Ahead',
TransformName+' Skip 1, Year ahead',
TransformName+' 2 years 2 ahead']
Unitjumps = [ 23, 46, 92, 183, 365, 730, 1460, 365, 730]
Unitdelays = [ 0, 0, 0, 0, 0, 0, 0, 365, 730]
Plottingdelay = 1460
if Dailyunit == 14:
NumTimeSeriesCalculatedBasic = 9
NumTimeSeriesCalculated = 2*NumTimeSeriesCalculatedBasic + 1
NamespredCalculated = ['Mag 4 Weeks Ahead',
'Mag 2 Month Ahead',
'Mag 3 Months Ahead',
'Mag 6 Months Ahead',
'Mag Year Ahead',
'Mag 2 Years Ahead',
'Mag 4 years Ahead',
'Mag Skip 1, Year ahead',
'Mag 2 years 2 ahead',
TransformName+' 2 Weeks Now',
TransformName+' 4 Weeks Ahead',
TransformName+' 2 Months Ahead',
TransformName+' 3 Months Ahead',
TransformName+' 6 Months Ahead',
TransformName+' Year Ahead',
TransformName+' 2 Years Ahead',
TransformName+' 4 years Ahead',
TransformName+' Skip 1, Year ahead',
TransformName+' 2 years 2 ahead']
Unitjumps = [ 2, 4, 7, 13, 26, 52, 104, 26, 52]
Unitdelays = [ 0, 0, 0, 0, 0, 0, 0, 26, 52]
Plottingdelay = 104
NumpredbasicperTime += NumTimeSeriesCalculated
CalculatedTimeSeries = np.empty([Num_Time,Nloc,NumTimeSeriesCalculated],dtype = np.float32)
for icalc in range (0, NumTimeSeriesCalculatedBasic):
newicalc = icalc+1+NumTimeSeriesCalculatedBasic
for itime in range(0,Num_Time):
MagnitudeMethod = 0
CalculatedTimeSeries[itime,:,icalc] = AggregateEarthquakes(itime,Unitdelays[icalc],Unitjumps[icalc], Nloc,
BasicInputTimeSeries[:,:,0], 0)
MagnitudeMethod = MagnitudeMethodTransform
CalculatedTimeSeries[itime,:,newicalc] = TransformMagnitude(CalculatedTimeSeries[itime,:,icalc])
MagnitudeMethod = 0
current_time = timenow()
print(startbold + startred + 'Earthquake ' + str(icalc) + ' ' + NamespredCalculated[icalc] + ' ' + current_time + ' ' +RunName + resetfonts)
print(startbold + startred + 'Earthquake ' + str(newicalc) + ' ' + NamespredCalculated[newicalc] + ' ' + current_time + ' ' +RunName + resetfonts)
MagnitudeMethod = MagnitudeMethodTransform
CalculatedTimeSeries[:,:,NumTimeSeriesCalculatedBasic] = TransformMagnitude(BasicInputTimeSeries[:,:,0])
MagnitudeMethod = 0
print(startbold + startred + 'Earthquake ' + str(NumTimeSeriesCalculatedBasic) + ' ' + NamespredCalculated[NumTimeSeriesCalculatedBasic] + ' ' + current_time + ' ' +RunName + resetfonts)
for iprop in range(0,NumTimeSeriesCalculated):
InputPropertyNames.append(NamespredCalculated[iprop])
###Output
_____no_output_____
###Markdown
Earthquake Eigensystems
###Code
if UseEarthquakeEigenSystems:
version = sc.version.version
print(f'SciPy version {version}')
#x = np.array([[1,2.0],[2.0,0]])
#w, v = solver.eigh(x, driver='evx')
#print(w)
#print(v)
###Output
_____no_output_____
###Markdown
Multiplicity Data
###Code
def histogrammultiplicity(Type, numbins, Data):
hitcounts = np.zeros(Nloc, dtype=int)
rawcounts = np.zeros(Nloc, dtype=int)
for iloc in range(0,Nloc):
rawcounts[iloc] = int(0.1+Data[:,iloc].sum(0))
hitcounts[iloc] = int(min(numbins, rawcounts[iloc]))
matplotlib.rcParams.update(matplotlib.rcParamsDefault)
plt.rcParams.update({'font.size': 9})
plt.rcParams["figure.figsize"] = [8,6]
plt.hist(hitcounts, numbins, facecolor='b', alpha=0.75, log=True)
plt.title('\n'.join(wrap(RunComment + ' ' + RunName + ' ' + Type + ' Earthquake Count per location ',70)))
plt.xlabel('Hit Counts')
plt.ylabel('Occurrences')
plt.grid(True)
plt.show()
return rawcounts
def threebythree(pixellocation,numxlocations,numylocations):
indices = np.empty([3,3], dtype=int)
y = int(0.1 + pixellocation/numxlocations)
x = pixellocation - y*numxlocations
bottomx = max(0,x-1)
bottomx = min(bottomx,numxlocations-3)
bottomy = max(0,y-1)
bottomy = min(bottomy,numylocations-3)
for ix in range(0,3):
for iy in range(0,3):
x= bottomx+ix
y= bottomy+iy
pixellocation = y*numxlocations + x
indices[ix,iy] = pixellocation
return indices
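# Illustrative check of the windowing above (not used by the pipeline): for a corner pixel the
# 3x3 window is clamped to stay inside the 60x40 grid, so pixel 0 aggregates pixels 0-2, 60-62
# and 120-122.
print('3x3 window around pixel 0:', sorted(threebythree(0, 60, 40).ravel().tolist()))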
if Earthquake:
MappedLocations = np.arange(0,Nloc, dtype=int)
LookupLocations = np.arange(0,Nloc, dtype=int)
MappedNloc = Nloc
histogrammultiplicity('Basic', 100, BasicInputTimeSeries[:,:,2])
nbins = 10
if read1950:
nbins= 50
rawcounts1 = histogrammultiplicity('Rundle > 3.29', nbins, BasicInputTimeSeries[:,:,3])
TempTimeSeries = np.zeros([Num_Time,Nloc],dtype = np.float32)
for iloc in range (0,Nloc):
indices = threebythree(iloc,60,40)
for itime in range(0,Num_Time):
sum3by3 = 0.0
for ix in range(0,3):
for iy in range(0,3):
pixellocation = indices[ix,iy]
sum3by3 += BasicInputTimeSeries[itime,pixellocation,3]
TempTimeSeries[itime,iloc] = sum3by3
nbins =40
if read1950:
nbins= 150
rawcounts2 = histogrammultiplicity('3x3 Rundle > 3.29', nbins, TempTimeSeries)
#
# Define "Interesting Locations"
if read1950:
singleloccut = 25
groupedloccut = 110
singleloccut = 7.1
groupedloccut = 34.1
# groupedloccut = 1000000000
else:
singleloccut = 5.1
groupedloccut = 24.9
MappedLocations.fill(-1)
MappedNloc = 0
ct1 = 0
ct2 = 0
for iloc in range (0,Nloc):
if rawcounts1[iloc] >= singleloccut:
ct1 += 1
if rawcounts2[iloc] >= groupedloccut:
ct2 += 1
if rawcounts1[iloc] < singleloccut and rawcounts2[iloc] < groupedloccut:
continue
MappedLocations[iloc] = MappedNloc
MappedNloc += 1
LookupLocations = None
LookupLocations = np.empty(MappedNloc, dtype=int)
for iloc in range (0,Nloc):
jloc = MappedLocations[iloc]
if jloc >= 0:
LookupLocations[jloc] = iloc
TempTimeSeries = None
print('Total ' + str(MappedNloc) +
' Single location multiplicity cut ' + str(singleloccut) +
' ' + str(ct1) + ' 3x3 ' + str(groupedloccut) + ' ' + str(ct2))
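    # Note (illustrative): MappedLocations maps an original pixel index to its compressed index
    # (or -1 if the pixel was dropped), and LookupLocations is the inverse map, so
    # LookupLocations[MappedLocations[iloc]] == iloc for every retained pixel iloc.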
if UseEarthquakeEigenSystems:
if Eigenvectors > 0:
UseTopEigenTotal = 16
UseTopEigenLocal = 0
if Eigenvectors > 1:
UseTopEigenLocal = 4
Num_EigenProperties = UseTopEigenTotal + UseTopEigenLocal
EigenTimeSeries = np.empty([Num_Time,MappedNloc],dtype = np.float32)
PsiTimeSeries = np.empty([Num_Time,MappedNloc],dtype = np.float32)
FiTimeSeries = np.empty([Num_Time,MappedNloc],dtype = np.float32)
EigenTimeSeries[:,:] = BasicInputTimeSeries[:,LookupLocations,3]
StoreEigenvectors = np.zeros([Num_Time,MappedNloc,MappedNloc],dtype = np.float32)
StoreEigencorrels = np.zeros([Num_Time,MappedNloc,MappedNloc],dtype = np.float32)
StoreNormingfactor = np.zeros([Num_Time],dtype = np.float32)
StoreNormingfactor1 = np.zeros([Num_Time],dtype = np.float32)
StoreNormingfactor2 = np.zeros([Num_Time],dtype = np.float32)
current_time = timenow()
print(startbold + startred + 'Start Eigen Earthquake '
+ current_time + ' ' +RunName + resetfonts)
for itime in range (0,Num_Time):
imax = itime
imin = max(0, imax-25)
Result = np.zeros(MappedNloc, dtype = np.float64)
Result = AggregateEarthquakes(imin,0,imax-imin+1, MappedNloc, EigenTimeSeries[:,:], 2)
PsiTimeSeries[itime,:] = Result
FiTimeSeries[itime,:] = EigenTimeSeries[itime,:]
current_time = timenow()
print(startbold + startred + 'End Eigen Earthquake 1 '
+ current_time + ' ' +RunName + resetfonts)
Eigenvals = np.zeros([Num_Time,MappedNloc], dtype = np.float32)
Chi1 = np.zeros(Num_Time, dtype = np.float32)
Chi2 = np.zeros(Num_Time, dtype = np.float32)
Sumai = np.zeros(Num_Time, dtype = np.float32)
Bestindex = np.zeros(Num_Time, dtype = int)
Numbereigs = np.zeros(Num_Time, dtype = int)
Besttrailingindex = np.zeros(Num_Time, dtype = int)
Eig0coeff = np.zeros(Num_Time, dtype = np.float32)
meanmethod = 0
if meanmethod == 1:
Meanovertime = np.empty(MappedNloc, dtype = np.float32)
sigmaovertime = np.empty(MappedNloc, dtype = np.float32)
Meanovertime = FiTimeSeries.mean(axis=0)
Meanovertime = Meanovertime.reshape(1,MappedNloc)
sigmaovertime = FiTimeSeries.std(axis=0)
sigmaovertime = sigmaovertime.reshape(1,MappedNloc)
countbad = 0
OldActualNumberofLocationsUsed = -1
for itime in range (25,Num_Time):
LocationCounts = FiTimeSeries[0:itime,:].sum(axis=0)
NumLocsToday = np.count_nonzero(LocationCounts)
Nonzeromapping = np.zeros(NumLocsToday, dtype = int)
#gregor
# Nonzeromapping = np.zeros(NumLocsToday, dtype = int)
ActualNumberofLocationsUsed = 0
for ipos in range (0,MappedNloc):
if LocationCounts[ipos] == 0:
continue
Nonzeromapping[ActualNumberofLocationsUsed] = ipos
ActualNumberofLocationsUsed +=1
if ActualNumberofLocationsUsed <= 1:
print(str(itime) + ' Abandoned ' + str(ActualNumberofLocationsUsed))
continue
FiHatTimeSeries = np.empty([itime+1,ActualNumberofLocationsUsed], dtype = np.float32)
if meanmethod == 1:
FiHatTimeSeries[:,:] = np.divide(np.subtract(FiTimeSeries[0:(itime+1),Nonzeromapping],Meanovertime[0,Nonzeromapping]),
sigmaovertime[0,Nonzeromapping])
else:
FiHatTimeSeries[:,:] = FiTimeSeries[0:(itime+1),Nonzeromapping]
# FiHatTimeSeries[:,:] = PsiTimeSeries[0:(itime+1),Nonzeromapping]
CorrelationMatrix = np.corrcoef(FiHatTimeSeries, rowvar =False)
bad = np.count_nonzero(np.isnan(CorrelationMatrix))
if bad > 0:
countbad += 1
continue
evalues, evectors = solver.eigh(CorrelationMatrix)
Newevector = evectors[:,ActualNumberofLocationsUsed-1]
Newevalue = evalues[ActualNumberofLocationsUsed-1]
debug = False
if debug:
if OldActualNumberofLocationsUsed == ActualNumberofLocationsUsed:
Mapdiff = np.where(np.not_equal(OldNonzeromapping,Nonzeromapping),1,0.).sum()
if Mapdiff > 0:
print(str(itime) + ' Change in mapping ' + str(ActualNumberofLocationsUsed) + ' Change ' + str(Mapdiff))
else:
Corrdiff = np.absolute(np.subtract(OldCorrelationMatrix,CorrelationMatrix)).sum()
Corrorg = np.absolute(CorrelationMatrix).sum()
yummy = CorrelationMatrix.dot(Oldevector)
vTMv = yummy.dot(Oldevector)
Doubleyummy = CorrelationMatrix.dot(Newevector)
newvTMv = Doubleyummy.dot(Newevector)
print(str(itime) + ' Change in correlation ' + str(ActualNumberofLocationsUsed) + ' Change '
+ str(Corrdiff) + ' original ' + str(Corrorg) + ' eval ' + str(Oldevalue) + ' new '
+ str(Newevalue) + ' vTMv ' + str(vTMv) + ' New ' + str(newvTMv))
else:
print(str(itime) + ' Change in size ' + str(OldActualNumberofLocationsUsed) + ' ' +
str(ActualNumberofLocationsUsed))
OldActualNumberofLocationsUsed = ActualNumberofLocationsUsed
OldNonzeromapping = Nonzeromapping
OldCorrelationMatrix = CorrelationMatrix
Oldevector = Newevector
Oldevalue = Newevalue
normcoeff = 100.0/evalues.sum()
evalues = np.multiply(evalues, normcoeff)
Numbereigs[itime] = ActualNumberofLocationsUsed
for ieig in range(0,ActualNumberofLocationsUsed):
Eigenvals[itime, ieig] = evalues[ActualNumberofLocationsUsed-ieig-1]
chival = 0.0
sumaieig = 0.0
Checkvector = np.zeros(ActualNumberofLocationsUsed,dtype = np.float32)
largesteigcoeff = -1.0
largestindex = -1
Keepaisquared = np.zeros(ActualNumberofLocationsUsed, dtype=np.float32)
for ieig in range(0,ActualNumberofLocationsUsed):
aieig = 0.0
backwards = ActualNumberofLocationsUsed-ieig-1
for vectorindex in range(0,ActualNumberofLocationsUsed):
StoreEigenvectors[itime,backwards,Nonzeromapping[vectorindex]] = evectors[vectorindex,ieig]
aieig += evectors[vectorindex,ieig]*PsiTimeSeries[itime,Nonzeromapping[vectorindex]]
for vectorindex in range(0,ActualNumberofLocationsUsed):
Checkvector[vectorindex] += aieig*evectors[vectorindex, ieig]
aieig *= aieig
chival += aieig*evalues[ieig]
sumaieig += aieig
Keepaisquared[backwards] = aieig
for ieig in range(0,ActualNumberofLocationsUsed):
backwards = ActualNumberofLocationsUsed-ieig-1
aieig = Keepaisquared[backwards]
aieig = aieig/sumaieig
if backwards == 0:
Eig0coeff[itime] = aieig
test = evalues[ieig]*aieig
if test > largesteigcoeff:
largesteigcoeff = test
largestindex = backwards
Bestindex[itime] = largestindex
discrep = 0.0
for vectorindex in range(0,ActualNumberofLocationsUsed):
discrep += pow(Checkvector[vectorindex] - PsiTimeSeries[itime,Nonzeromapping[vectorindex]], 2)
if discrep > 0.01:
print('Eigendecomposition Failure ' + str(itime) + ' ' + str(discrep))
Chi1[itime] = chival
Chi2[itime] = chival/sumaieig
Sumai[itime] = sumaieig
largesteigcoeff = -1.0
largestindex = -1
sumaieig = 0.0
Trailingtimeindex = itime-3
if itime > 40:
Trailinglimit = Numbereigs[Trailingtimeindex]
KeepTrailingaisquared = np.zeros(Trailinglimit, dtype=np.float32)
for ieig in range(0,Trailinglimit):
aieig = 0.0
for vectorindex in range(0,MappedNloc):
# aieig += StoreEigenvectors[Trailingtimeindex,ieig,vectorindex]*PsiTimeSeries[itime,vectorindex]
aieig += StoreEigenvectors[Trailingtimeindex,ieig,vectorindex]*StoreEigenvectors[itime,
Bestindex[itime],vectorindex]
aieig *= aieig
sumaieig += aieig
KeepTrailingaisquared[ieig] = aieig
for ieig in range(0,Trailinglimit):
aieig = KeepTrailingaisquared[ieig]
aieig = aieig/sumaieig
test = Eigenvals[Trailingtimeindex, ieig]*aieig
if test > largesteigcoeff:
largesteigcoeff = test
largestindex = ieig
Besttrailingindex[itime] = largestindex
if itime >40: # Calculate eigenvector tracking
Leader = StoreEigenvectors[itime,:,:]
Trailer = StoreEigenvectors[itime-3,:,:]
StoreEigencorrels[itime,:,:] = np.tensordot(Leader, Trailer, (1, (1)))
StrippedDown = StoreEigencorrels[itime,Bestindex[itime],:]
Normingfactor = np.multiply(StrippedDown,StrippedDown).sum()
Normingfactor1 = np.multiply(StrippedDown[0:8],StrippedDown[0:8]).sum()
Normingfactor2 = np.multiply(StrippedDown[0:30],StrippedDown[0:30]).sum()
StoreNormingfactor[itime] = Normingfactor
StoreNormingfactor1[itime] = Normingfactor1
StoreNormingfactor2[itime] = Normingfactor2
averagesumai = Sumai.mean()
Chi1 = np.divide(Chi1,averagesumai)
print('Bad Correlation Matrices ' + str(countbad))
print(startbold + startred + 'End Eigen Earthquake 2 '
+ current_time + ' ' +RunName + resetfonts)
def makeasmalldateplot(figure,ax, Dateaxis):
plt.rcParams.update({'font.size': 9})
months = mdates.MonthLocator(interval=2) # every month
datemin = np.datetime64(Dateaxis[0], 'M')
datemax = np.datetime64(Dateaxis[-1], 'M') + np.timedelta64(1, 'M')
ax.set_xlim(datemin, datemax)
months_fmt = mdates.DateFormatter('%y-%b')
locator = mdates.AutoDateLocator()
locator.intervald['MONTHLY'] = [2]
formatter = mdates.ConciseDateFormatter(locator)
# ax.xaxis.set_major_locator(locator)
# ax.xaxis.set_major_formatter(formatter)
ax.xaxis.set_major_locator(months)
ax.xaxis.set_major_formatter(months_fmt)
figure.autofmt_xdate()
return datemin, datemax
def plotquakeregions(HalfSize,xaxisdates, SetofPlots, Commontitle, ylabel, SetofColors, Startx, ncols):
numplotted = SetofPlots.shape[1]
totusedquakes = 0
for iquake in range(0,numberspecialeqs):
x_line_index = Specialindex[iquake]
if (x_line_index <= Startx) or (x_line_index >= Num_Time-1):
continue
if Specialuse[iquake]:
totusedquakes +=1
nrows = math.ceil(totusedquakes/ncols)
sortedquakes = np.argsort(Specialindex)
jplot = 0
kplot = -1
for jquake in range(0,numberspecialeqs):
iquake = sortedquakes[jquake]
if not Specialuse[iquake]:
continue
x_line_annotation = Specialdate[iquake]
x_line_index = Specialindex[iquake]
if (x_line_index <= Startx) or (x_line_index >= Num_Time-1):
continue
kplot +=1
if kplot == ncols:
SAVEFIG(plt, f'{APPLDIR}/Outputs/QRegions' + str(jplot) + f'{RunName}.png')
plt.show()
kplot = 0
jplot +=1
if kplot == 0:
plt.rcParams["figure.figsize"] = [16,6]
figure, axs = plt.subplots(nrows=1, ncols=ncols, squeeze=False)
beginplotindex = x_line_index - HalfSize
beginplotindex = max(beginplotindex, Startx)
endplotindex = x_line_index + HalfSize
endplotindex = min(endplotindex, Num_Time-1)
eachplt = axs[0,kplot]
ascii = ''
if Specialuse[iquake]:
ascii = np.datetime_as_string(Specialdate[iquake]) + ' ' + str(round(Specialmags[iquake],1)) + ' ' + Specialeqname[iquake]
eachplt.set_title(str(iquake) + ' ' + RunName + ' Best Eigenvalue (Black) Trailing (Red) \n' + ascii)
datemin, datemax = makeasmalldateplot(figure, eachplt, xaxisdates[beginplotindex:endplotindex+1])
for curves in range(0,numplotted):
eachplt.plot(xaxisdates[beginplotindex:endplotindex+1], SetofPlots[beginplotindex:endplotindex+1,curves],
'o', color=SetofColors[curves], markersize =1)
ymin, ymax = eachplt.get_ylim()
if ymax >= 79.9:
ymax = 82
eachplt.set_ylim(bottom=-1.0, top=max(ymax,20))
eachplt.set_ylabel(ylabel)
eachplt.set_xlabel('Time')
eachplt.grid(True)
eachplt.set_yscale("linear")
eachplt.axvline(x=x_line_annotation, linestyle='dashed', alpha=1.0, linewidth = 2.0, color='red')
for kquake in range(0,numberspecialeqs):
if not Specialuse[kquake]:
continue
if kquake == iquake:
continue
anotherx_line_index = Specialindex[kquake]
if (anotherx_line_index < beginplotindex) or (anotherx_line_index >= endplotindex):
continue
eachplt.axvline(x=Specialdate[kquake], linestyle='dashed', alpha=1.0, linewidth = 1.0, color='purple')
eachplt.tick_params('x', direction = 'in', length=15, width=2, which='major')
SAVEFIG(plt, f'{APPLDIR}/Outputs/QRegions' + str(jplot) + f'{RunName}.png')
plt.show()
EigenAnalysis = False
if Earthquake and EigenAnalysis:
UseTopEigenTotal = 40
FirstTopEigenTotal = 10
PLTlabels = []
for ieig in range(0,UseTopEigenTotal):
PLTlabels.append('Eig-' + str(ieig))
plt.rcParams["figure.figsize"] = [12,10]
figure, ax = plt.subplots()
datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
plt.rcParams["figure.figsize"] = [12,10]
for ieig in range(0,FirstTopEigenTotal):
ax.plot(Dateaxis[26:],np.maximum(Eigenvals[26:, ieig],0.1))
def gregor_plot(RunName, scale="log"): # linear
ax.set_title(RunName + ' Multiplicity Eigenvalues')
ax.set_ylabel('Eigenvalue')
ax.set_xlabel('Time')
ax.set_yscale(scale)
ax.grid(True)
ax.legend(PLTlabels[0:FirstTopEigenTotal], loc='upper right')
Addfixedearthquakes(ax, datemin, datemax,ylogscale=True )
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
plt.show()
# gregor_plot(RunName,scale="log")
ax.set_title(RunName + ' Multiplicity Eigenvalues')
ax.set_ylabel('Eigenvalue')
ax.set_xlabel('Time')
ax.set_yscale("log")
ax.grid(True)
ax.legend(PLTlabels[0:FirstTopEigenTotal], loc='upper right')
Addfixedearthquakes(ax, datemin, datemax,ylogscale=True )
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
plt.show()
# end gregor plot
plt.rcParams["figure.figsize"] = [12,10]
figure, ax = plt.subplots()
datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
plt.rcParams["figure.figsize"] = [12,10]
for ieig in range(FirstTopEigenTotal,UseTopEigenTotal):
ax.plot(Dateaxis[26:],np.maximum(Eigenvals[26:, ieig],0.1))
# gregor_plot(RunName,scale="linear")
ax.set_title(RunName + ' Multiplicity Eigenvalues')
ax.set_ylabel('Eigenvalue')
ax.set_xlabel('Time')
ax.set_yscale("linear")
ax.grid(True)
ax.legend(PLTlabels[FirstTopEigenTotal:], loc='upper right')
Addfixedearthquakes(ax, datemin, datemax,ylogscale=False )
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
plt.show()
# end gregor plot
ShowEigencorrels = False
if ShowEigencorrels:
for mastereig in range(0, UseTopEigenTotal):
figure, ax = plt.subplots()
plt.rcParams["figure.figsize"] = [12,8]
datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
for ieig in range(0,UseTopEigenTotal):
alpha = 1.0
width = 3
if ieig == mastereig:
alpha=0.5
width = 1
ax.plot(Dateaxis[26:],np.power(StoreEigencorrels[26:,mastereig,ieig],2), alpha=alpha, linewidth = width)
ax.set_title(RunName + ' Eigenvalue ' + str(mastereig) + ' Current versus Past Total Correlation')
ax.set_ylabel('Norm')
ax.set_xlabel('Time')
ax.grid(True)
ax.legend(PLTlabels, loc='upper right')
Addfixedearthquakes(ax, datemin, datemax,ylogscale=False )
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
plt.show()
    def gregor_plot_normfactor(title, normfactor=StoreNormingfactor):
        # Plot one norming-factor time series against time with the given title
        figure, ax = plt.subplots()
        plt.rcParams["figure.figsize"] = [12,8]
        datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
        alpha = 1.0
        width = 0.5
        ax.plot(Dateaxis[26:], normfactor[26:], alpha=alpha, linewidth = width)
        ax.set_title(title)
ax.set_ylabel('Norming Factor')
ax.set_xlabel('Time')
ax.grid(True)
Addfixedearthquakes(ax, datemin, datemax,ylogscale=False )
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
plt.show()
    # Gregor: create functions for plots
# gregor_plot_normfactor(f"{RunName} Eigenvalue Full Norming Factor with Past", StoreNormingfactor)
figure, ax = plt.subplots()
plt.rcParams["figure.figsize"] = [12,8]
datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
alpha = 1.0
width = 0.5
ax.plot(Dateaxis[26:],StoreNormingfactor[26:], alpha=alpha, linewidth = width)
ax.set_title(f'{RunName} Eigenvalue Full Norming Factor with Past')
ax.set_ylabel('Norming Factor')
ax.set_xlabel('Time')
ax.grid(True)
Addfixedearthquakes(ax, datemin, datemax,ylogscale=False )
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
plt.show()
    # Gregor: create functions for plots
# gregor_plot_normfactor(f"{RunName} Eigenvalue First 8 Norming Factor with Past", StoreNormingfactor1)
figure, ax = plt.subplots()
plt.rcParams["figure.figsize"] = [12,8]
datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
alpha = 1.0
width = 0.5
ax.plot(Dateaxis[26:],StoreNormingfactor1[26:], alpha=alpha, linewidth = width)
ax.set_title(f"{RunName} Eigenvalue First 8 Norming Factor with Past")
ax.set_ylabel('Norming Factor')
ax.set_xlabel('Time')
ax.grid(True)
Addfixedearthquakes(ax, datemin, datemax,ylogscale=False )
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
plt.show()
    # gregor_plot_normfactor(f"{RunName} Eigenvalue First 30 Norming Factor with Past", StoreNormingfactor2)
    # Gregor: create functions for plots
figure, ax = plt.subplots()
plt.rcParams["figure.figsize"] = [12,8]
datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
alpha = 1.0
width = 0.5
ax.plot(Dateaxis[26:],StoreNormingfactor2[26:], alpha=alpha, linewidth = width)
ax.set_title(RunName + ' Eigenvalue First 30 Norming Factor with Past')
ax.set_ylabel('Norming Factor')
ax.set_xlabel('Time')
ax.grid(True)
Addfixedearthquakes(ax, datemin, datemax,ylogscale=False )
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
plt.show()
    # Gregor: create functions for plots
figure, ax = plt.subplots()
plt.rcParams["figure.figsize"] = [12,8]
datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
plt.rcParams["figure.figsize"] = [12,8]
ax.plot(Dateaxis[26:],Chi1[26:])
ax.set_title(RunName + ' Correlations Normalized on average over time')
ax.set_ylabel('Chi1')
ax.set_xlabel('Time')
ax.grid(True)
Addfixedearthquakes(ax, datemin, datemax)
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
ax.set_yscale("linear")
plt.show()
figure, ax = plt.subplots()
datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
plt.rcParams["figure.figsize"] = [12,8]
ax.plot(Dateaxis[26:],Chi2[26:])
ax.set_title(RunName + ' Correlations Normalized at each time')
ax.set_ylabel('Chi2')
ax.set_xlabel('Time')
ax.grid(True)
Addfixedearthquakes(ax, datemin, datemax)
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
ax.set_yscale("linear")
plt.show()
figure, ax = plt.subplots()
datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
plt.rcParams["figure.figsize"] = [12,8]
norm = np.amax(Chi1[26:])
Maxeig = 80
# ax.plot(Dateaxis[26:],Chi1[26:]*Maxeig/norm)
ax.plot(Dateaxis[26:], 0.5 + np.minimum(Maxeig, Bestindex[26:]), 'o', color='black', markersize =1)
ax.plot(Dateaxis[26:], np.minimum(Maxeig, Besttrailingindex[26:]), 'o', color='red', markersize =1)
ax.set_title(RunName + ' Best Eigenvalue (Black) Trailing (Red)')
ax.set_ylabel('Eig#')
ax.set_xlabel('Time')
ax.grid(True)
Addfixedearthquakes(ax, datemin, datemax)
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
ax.set_yscale("linear")
plt.show()
SetofPlots = np.empty([len(Bestindex),2], dtype=np.float32)
SetofPlots[:,0] = 0.5 + np.minimum(Maxeig, Bestindex[:])
SetofPlots[:,1] = np.minimum(Maxeig, Besttrailingindex[:])
SetofColors = ['black', 'red']
plotquakeregions(25, Dateaxis, SetofPlots,
RunName + ' Best Eigenvalue (Black) Trailing (Red)', 'Eig#', SetofColors, 26,2)
plt.rcParams["figure.figsize"] = [12,8]
figure, ax = plt.subplots()
datemin, datemax = makeadateplot(figure, ax, Dateaxis[26:])
ax.plot(Dateaxis[26:], Eig0coeff[26:], 'o', color='black', markersize =2)
ymin, ymax = ax.get_ylim()
ax.plot(Dateaxis[26:], Chi1[26:]*ymax/norm)
ax.set_title(RunName + ' Fraction Largest Eigenvalue')
ax.set_ylabel('Eig 0')
ax.set_xlabel('Time')
ax.grid(True)
Addfixedearthquakes(ax, datemin, datemax)
ax.tick_params('x', direction = 'in', length=15, width=2, which='major')
ax.xaxis.set_minor_locator(mdates.YearLocator(1))
ax.tick_params('x', direction = 'in', length=10, width=1, which='minor')
ax.set_yscale("linear")
plt.show()
###Output
_____no_output_____
###Markdown
End of Earthquake. Reset Timing
###Code
# Reset the start date by a year so the first entry has a full 365-day sample ending on that day and can be used
# as an input, as can all shorter time intervals.
# Do NOT include the 2-year or 4-year aggregations in the input stream.
# So we reset the start date by one year, skipping the first 364 days except to calculate the first one-year (and shorter) observables.
# Time indices go from 0 to NumberofTimeunits-1
# Sequence indices go from Begin to Begin+Tseq-1 where Begin goes from 0 to NumberofTimeunits-1-Tseq
# So Num_Seq = NumberofTimeunits - Tseq and Begin has Num_Seq values
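# Worked example (values are illustrative): with Dailyunit = 14, SkipTimeUnits = 25 fortnights
# are dropped so each remaining entry has roughly a year of history (26 fortnights, 364 days)
# ending with that entry; if that leaves Num_Time = 1801 usable time units and Tseq = 26, then
# Num_Seq = 1801 - 26 = 1775 sequences, with Begin running from 0 to 1774.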
if Earthquake:
SkipTimeUnits = 364
if Dailyunit == 14:
SkipTimeUnits = 25
Num_Time_old = NumberofTimeunits
NumberofTimeunits = NumberofTimeunits - SkipTimeUnits
Num_Time = NumberofTimeunits
InitialDate = InitialDate + timedelta(days=SkipTimeUnits*Dailyunit)
FinalDate = InitialDate + timedelta(days=(NumberofTimeunits-1)*Dailyunit)
print('Skip ' +str(SkipTimeUnits) + ' New dates: ' + InitialDate.strftime("%d/%m/%Y") + ' To '
+ FinalDate.strftime("%d/%m/%Y")+ ' days ' + str(NumberofTimeunits*Dailyunit))
DynamicPropertyTimeSeries = np.empty([Num_Time,Nloc,NpropperTimeDynamic],dtype = np.float32)
CountNaN = np.zeros(NpropperTimeDynamic, dtype=int)
    # SkewTime makes certain a property ENDS at the given cell, and it is the cell itself if size = Dailyunit
SkewTime = [0] * NpropperTimeDynamicInput
if Dailyunit == 1:
SkewTime = SkewTime + [22,45,91,182,364,0,22,45,91,182,364]
if Dailyunit == 14:
SkewTime = SkewTime + [1, 3, 6, 12, 25,0,1, 3, 6, 12, 25]
i = 0
total = NumberofTimeunits * Nloc * NpropperTimeDynamic
for itime in range(0,NumberofTimeunits):
for iloc in range(0,Nloc):
for iprop in range(0,NpropperTimeDynamic):
i = i + 1
addtime = SkipTimeUnits - SkewTime[iprop]
if iprop < NpropperTimeDynamicInput:
# BUG HERE
if i % 1000 == 0:
print(itime+addtime,f"{i}/{total}", iloc,iprop)
localval = BasicInputTimeSeries[itime+addtime,iloc,iprop]
elif iprop < (NpropperTimeDynamic-5):
localval = CalculatedTimeSeries[itime+addtime,iloc,iprop-NpropperTimeDynamicInput]
else:
localval = CalculatedTimeSeries[itime+addtime,iloc,iprop-NpropperTimeDynamicInput+4]
if np.math.isnan(localval):
localval = NaN
CountNaN[iprop] +=1
DynamicPropertyTimeSeries[itime,iloc,iprop] = localval
print(startbold+startred+'Input NaN values ' + resetfonts)
# Add E^0.25 Input Quantities
MagnitudeMethod = MagnitudeMethodTransform
jprop = 9
for iprop in range(0,9):
line = ''
if iprop == 0 or iprop > 3:
DynamicPropertyTimeSeries[:,:,jprop] = TransformMagnitude(DynamicPropertyTimeSeries[:,:,iprop])
jprop += 1
line = ' New ' + str(jprop) + ' ' + InputPropertyNames[jprop+NpropperTimeStatic] + ' NaN ' + str(CountNaN[iprop])
print(str(iprop) + ' ' + InputPropertyNames[iprop+NpropperTimeStatic] + ' NaN ' + str(CountNaN[iprop]) + line)
NpropperTimeDynamic = jprop
MagnitudeMethod = 0
NewCalculatedTimeSeries = np.empty([Num_Time,Nloc,NumTimeSeriesCalculated],dtype = np.float32)
# NewCalculatedTimeSeries = CalculatedTimeSeries[SkipTimeUnits:Num_Time+SkipTimeUnits]
NewCalculatedTimeSeries = TransformMagnitude(CalculatedTimeSeries[SkipTimeUnits:Num_Time+SkipTimeUnits])
CalculatedTimeSeries = None
CalculatedTimeSeries = NewCalculatedTimeSeries
BasicInputTimeSeries = None
if GarbageCollect:
gc.collect()
MagnitudeMethod = 0
current_time = timenow()
print(startbold + startred + 'Earthquake Setup ' + current_time + ' ' +RunName + ' ' + RunComment + resetfonts)
###Output
_____no_output_____
###Markdown
Set Earthquake Execution Mode
###Code
if Earthquake:
SymbolicWindows = True
Tseq = 26
Tseq = config.Tseq #num_encoder_steps
if Dailyunit == 14:
GenerateFutures = True
UseFutures = True
###Output
_____no_output_____
###Markdown
Plot Earthquake Images
###Code
# Tom: added local min and max to graphs not based on absolute values.
# added localmin and localmax values to the plotimages function and modified last line of code to add those values in.
def plotimages(Array,Titles,nrows,ncols,localmin,localmax):
usedcolormap = "YlGnBu"
plt.rcParams["figure.figsize"] = [16,6*nrows]
figure, axs = plt.subplots(nrows=nrows, ncols=ncols, squeeze=False)
iplot=0
images = []
norm = colors.Normalize(vmin=localmin, vmax=localmax)
for jplot in range(0,nrows):
for kplot in range (0,ncols):
eachplt = axs[jplot,kplot]
if MapLocation:
Plotit = np.zeros(OriginalNloc, dtype = np.float32)
for jloc in range (0,Nloc):
Plotit[LookupLocations[jloc]] = Array[iplot][jloc]
TwoDArray = np.reshape(Plotit,(40,60))
else:
TwoDArray = np.reshape(Array[iplot],(40,60))
extent = (-120,-114, 36,32)
images.append(eachplt.imshow(TwoDArray, cmap=usedcolormap, norm=norm,extent=extent))
eachplt.label_outer()
eachplt.set_title(Titles[iplot])
iplot +=1
figure.colorbar(images[0], ax=axs, orientation='vertical', fraction=.05)
plt.show()
if Earthquake:
    # DynamicPropertyTimeSeries and CalculatedTimeSeries are dimensioned by time 0 ... Num_Time-1
    # DynamicPropertyTimeSeries holds values up to and including that time
    # CalculatedTimeSeries holds values STARTING at that time
fullmin = np.nanmin(CalculatedTimeSeries)
fullmax = np.nanmax(CalculatedTimeSeries)
fullmin = min(fullmin,np.nanmin(DynamicPropertyTimeSeries[:,:,0]))
fullmax = max(fullmax,np.nanmax(DynamicPropertyTimeSeries[:,:,0]))
print('Full Magnitude Ranges ' + str(fullmin) + ' ' + str(fullmax))
Num_Seq = NumberofTimeunits-Tseq
dayindexmax = Num_Seq-Plottingdelay
Numdates = 4
denom = 1.0/np.float64(Numdates-1)
for plotdays in range(0,Numdates):
dayindexvalue = math.floor(0.1 + (plotdays*dayindexmax)*denom)
if dayindexvalue < 0:
dayindexvalue = 0
if dayindexvalue > dayindexmax:
dayindexvalue = dayindexmax
dayindexvalue += Tseq
InputImages =[]
InputTitles =[]
InputImages.append(DynamicPropertyTimeSeries[dayindexvalue,:,0])
ActualDate = InitialDate + timedelta(days=dayindexvalue)
localmax1 = DynamicPropertyTimeSeries[dayindexvalue,:,0].max()
localmin1 = DynamicPropertyTimeSeries[dayindexvalue,:,0].min()
InputTitles.append('Day ' +str(dayindexvalue) + ' ' + ActualDate.strftime("%d/%m/%Y") + ' One day max/min '
+ str(round(localmax1,3)) + ' ' + str(round(localmin1,3)))
for localplot in range(0,NumTimeSeriesCalculated):
            localmax1 = CalculatedTimeSeries[dayindexvalue,:,localplot].max()
            localmin1 = CalculatedTimeSeries[dayindexvalue,:,localplot].min()
InputImages.append(CalculatedTimeSeries[dayindexvalue,:,localplot])
InputTitles.append('Day ' +str(dayindexvalue) + ' ' + ActualDate.strftime("%d/%m/%Y") + NamespredCalculated[localplot] + ' max/min '
+ str(round(localmax1,3)) + ' ' + str(round(localmin1,3)))
print(f'Local Magnitude Ranges {round(localmin1,3)} - {round(localmax1,3)}')
plotimages(InputImages,InputTitles,5,2, round(localmin1,3), round(localmax1,3))
###Output
_____no_output_____
###Markdown
Read and setup NIH Covariates August 2020 and January, April 2021 Data

A new collection of time dependent covariates (even if constant); cases, deaths and location properties come from the previous data.

Process Input Data in various ways

Set TFT Mode
###Code
UseTFTModel = True
###Output
_____no_output_____
###Markdown
Convert Cumulative to Daily
###Code
# Gregor: DELETE
# Convert cumulative to Daily.
# Replace negative daily values by zero
# remove daily to sqrt(daily) and Then normalize maximum to 1
if ConvertDynamicPredictedQuantity:
NewBasicInputTimeSeries = np.empty_like(BasicInputTimeSeries, dtype=np.float32)
Zeroversion = np.zeros_like(BasicInputTimeSeries, dtype=np.float32)
Rolleddata = np.roll(BasicInputTimeSeries, 1, axis=0)
Rolleddata[0,:,:] = Zeroversion[0,:,:]
NewBasicInputTimeSeries = np.maximum(np.subtract(BasicInputTimeSeries,Rolleddata),Zeroversion)
originalnumber = np.sum(BasicInputTimeSeries[NumberofTimeunits-1,:,:],axis=0)
newnumber = np.sum(NewBasicInputTimeSeries,axis=(0,1))
print('Original summed counts ' + str(originalnumber) + ' become ' + str(newnumber)+ ' Cases, Deaths')
BasicInputTimeSeries = NewBasicInputTimeSeries
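    # Illustrative example of the conversion above (hypothetical counts): a cumulative series
    # [0, 3, 5, 5, 9] becomes the daily series [0, 3, 2, 0, 4]; any negative daily value caused
    # by downward corrections in the cumulative data is clipped to zero.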
###Output
_____no_output_____
###Markdown
Normalize All Static and Dynamic Properties

For static properties, BasicInputStaticProps[Nloc,NpropperTimeStatic] converts to NormedInputStaticProps[Nloc,NpropperTimeStatic]
###Code
# Gregor: DELETE some portions of this to be reviewed
def SetTakeroot(x,n):
if np.isnan(x):
return NaN
if n == 3:
return np.cbrt(x)
elif n == 2:
if x <= 0.0:
return 0.0
return np.sqrt(x)
return x
def DynamicPropertyScaling(InputTimeSeries):
Results = np.full(7, 0.0,dtype=np.float32)
Results[1] = np.nanmax(InputTimeSeries, axis = (0,1))
Results[0] = np.nanmin(InputTimeSeries, axis = (0,1))
Results[3] = np.nanmean(InputTimeSeries, axis = (0,1))
Results[4] = np.nanstd(InputTimeSeries, axis = (0,1))
Results[2] = np.reciprocal(np.subtract(Results[1],Results[0]))
Results[5] = np.multiply(Results[2],np.subtract(Results[3],Results[0]))
Results[6] = np.multiply(Results[2],Results[4])
return Results
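# Note on the scaling above (illustrative): the seven statistics are returned in the order
# [min, max, 1/(max-min), mean, std, normed mean, normed std]; a raw value x is later min-max
# normalised as (x - Results[0]) * Results[2], mapping the observed range onto [0, 1].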
NpropperTimeMAX = NpropperTime + NumTimeSeriesCalculated
print(NpropperTimeStatic,NpropperTime,NumTimeSeriesCalculated, NpropperTimeMAX)
if ScaleProperties:
QuantityTakeroot = np.full(NpropperTimeMAX,1,dtype=int)
# Scale data by roots if requested
for iprop in range(0, NpropperTimeMAX):
if QuantityTakeroot[iprop] >= 2:
if iprop < NpropperTimeStatic:
for iloc in range(0,Nloc):
BasicInputStaticProps[iloc,iprop] = SetTakeroot(BasicInputStaticProps[iloc,iprop],QuantityTakeroot[iprop])
elif iprop < NpropperTime:
for itime in range(0,NumberofTimeunits):
for iloc in range(0,Nloc):
DynamicPropertyTimeSeries[itime,iloc,iprop-NpropperTimeStatic] = SetTakeroot(
DynamicPropertyTimeSeries[itime,iloc,iprop-NpropperTimeStatic],QuantityTakeroot[iprop])
else:
for itime in range(0,NumberofTimeunits):
for iloc in range(0,Nloc):
CalculatedTimeSeries[itime,iloc,iprop-NpropperTime] =SetTakeroot(
CalculatedTimeSeries[itime,iloc,iprop-NpropperTime],QuantityTakeroot[iprop])
QuantityStatisticsNames = ['Min','Max','Norm','Mean','Std','Normed Mean','Normed Std']
QuantityStatistics = np.zeros([NpropperTimeMAX,7], dtype=np.float32)
if NpropperTimeStatic > 0:
print(BasicInputStaticProps.shape)
max_value = np.amax(BasicInputStaticProps, axis = 0)
min_value = np.amin(BasicInputStaticProps, axis = 0)
mean_value = np.mean(BasicInputStaticProps, axis = 0)
std_value = np.std(BasicInputStaticProps, axis = 0)
normval = np.reciprocal(np.subtract(max_value,min_value))
normed_mean = np.multiply(normval,np.subtract(mean_value,min_value))
normed_std = np.multiply(normval,std_value)
QuantityStatistics[0:NpropperTimeStatic,0] = min_value
QuantityStatistics[0:NpropperTimeStatic,1] = max_value
QuantityStatistics[0:NpropperTimeStatic,2] = normval
QuantityStatistics[0:NpropperTimeStatic,3] = mean_value
QuantityStatistics[0:NpropperTimeStatic,4] = std_value
QuantityStatistics[0:NpropperTimeStatic,5] = normed_mean
QuantityStatistics[0:NpropperTimeStatic,6] = normed_std
NormedInputStaticProps =np.empty_like(BasicInputStaticProps)
for iloc in range(0,Nloc):
NormedInputStaticProps[iloc,:] = np.multiply((BasicInputStaticProps[iloc,:] - min_value[:]),normval[:])
if (NpropperTimeDynamic > 0) or (NumTimeSeriesCalculated>0):
for iprop in range(NpropperTimeStatic,NpropperTimeStatic+NpropperTimeDynamic):
QuantityStatistics[iprop,:] = DynamicPropertyScaling(DynamicPropertyTimeSeries[:,:,iprop-NpropperTimeStatic])
for iprop in range(0,NumTimeSeriesCalculated):
QuantityStatistics[iprop+NpropperTime,:] = DynamicPropertyScaling(CalculatedTimeSeries[:,:,iprop])
NormedDynamicPropertyTimeSeries = np.empty_like(DynamicPropertyTimeSeries)
for iprop in range(NpropperTimeStatic,NpropperTimeStatic+NpropperTimeDynamic):
NormedDynamicPropertyTimeSeries[:,:,iprop - NpropperTimeStatic] = np.multiply((DynamicPropertyTimeSeries[:,:,iprop - NpropperTimeStatic]
- QuantityStatistics[iprop,0]),QuantityStatistics[iprop,2])
if NumTimeSeriesCalculated > 0:
NormedCalculatedTimeSeries = np.empty_like(CalculatedTimeSeries)
for iprop in range(NpropperTime,NpropperTimeMAX):
NormedCalculatedTimeSeries[:,:,iprop - NpropperTime] = np.multiply((CalculatedTimeSeries[:,:,iprop - NpropperTime]
- QuantityStatistics[iprop,0]),QuantityStatistics[iprop,2])
CalculatedTimeSeries = None
BasicInputStaticProps = None
DynamicPropertyTimeSeries = None
print(startbold + "Properties scaled" +resetfonts)
line = 'Name '
for propval in range (0,7):
line += QuantityStatisticsNames[propval] + ' '
print('\n' + startbold +startpurple + line + resetfonts)
for iprop in range(0,NpropperTimeMAX):
if iprop == NpropperTimeStatic:
print('\n')
line = startbold + startpurple + str(iprop) + ' ' + InputPropertyNames[iprop] + resetfonts + ' Root ' + str(QuantityTakeroot[iprop])
for propval in range (0,7):
line += ' ' + str(round(QuantityStatistics[iprop,propval],3))
print(line)
###Output
_____no_output_____
###Markdown
Set up Futures-- currently at unit time level
###Code
class Future:
def __init__(self, name, daystart = 0, days =[], wgt=1.0, classweight = 1.0):
self.name = name
self.days = np.array(days)
self.daystart = daystart
self.wgts = np.full_like(self.days,wgt,dtype=float)
self.size = len(self.days)
self.classweight = classweight
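# Illustrative use of the Future class (hypothetical horizon): Future('2wk+2', days=[2])
# describes a prediction target two time units ahead with unit weight; the loop below builds
# one such future per look-ahead distance up to daylimit.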
LengthFutures = 0
Unit = "Day"
if Earthquake:
Unit = "2wk"
if GenerateFutures:
Futures =[]
daylimit = 14
if Earthquake:
daylimit = 25
for ifuture in range(0,daylimit):
xx = Future(Unit + '+' + str(ifuture+2), days=[ifuture+2])
Futures.append(xx)
LengthFutures = len(Futures)
Futuresmaxday = 0
Futuresmaxweek = 0
for i in range(0,LengthFutures):
j = len(Futures[i].days)
if j == 1:
Futuresmaxday = max(Futuresmaxday, Futures[i].days[0])
else:
Futuresmaxweek = max(Futuresmaxweek, Futures[i].days[j-1])
Futures[i].daystart -= Dropearlydata
if Futures[i].daystart < 0: Futures[i].daystart = 0
if Earthquake:
Futures[i].daystart = 0
###Output
_____no_output_____
###Markdown
Set up mappings of locations

In the next cell, we map locations as they are BEFORE location properties etc. are added. In the cell after that we do the same for sequences.
###Code
OriginalNloc = Nloc
if Earthquake:
MapLocation = True
MappedDynamicPropertyTimeSeries = np.empty([Num_Time,MappedNloc,NpropperTimeDynamic],dtype = np.float32)
MappedNormedInputStaticProps = np.empty([MappedNloc,NpropperTimeStatic],dtype = np.float32)
MappedCalculatedTimeSeries = np.empty([Num_Time,MappedNloc,NumTimeSeriesCalculated],dtype = np.float32)
print(LookupLocations)
MappedDynamicPropertyTimeSeries[:,:,:] = NormedDynamicPropertyTimeSeries[:,LookupLocations,:]
NormedDynamicPropertyTimeSeries = None
NormedDynamicPropertyTimeSeries = MappedDynamicPropertyTimeSeries
MappedCalculatedTimeSeries[:,:,:] = NormedCalculatedTimeSeries[:,LookupLocations,:]
NormedCalculatedTimeSeries = None
NormedCalculatedTimeSeries = MappedCalculatedTimeSeries
MappedNormedInputStaticProps[:,:] = NormedInputStaticProps[LookupLocations,:]
NormedInputStaticProps = None
NormedInputStaticProps = MappedNormedInputStaticProps
Nloc = MappedNloc
if GarbageCollect:
gc.collect()
print('Number of locations reduced to ' + str(Nloc))
else:
MappedLocations = np.arange(0,Nloc, dtype=int)
LookupLocations = np.arange(0,Nloc, dtype=int)
MappedNloc = Nloc
###Output
_____no_output_____
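###Markdown
The earthquake branch above reduces the location dimension by fancy indexing with LookupLocations. The toy sketch below (hypothetical shapes, not the real arrays) shows the same pattern: selecting a subset of location indices leaves the time and property axes untouched.
###Code
# Toy illustration of the location remapping pattern used above
import numpy as np
toy_series = np.arange(2 * 5 * 3, dtype=np.float32).reshape(2, 5, 3)  # [time, location, property]
toy_lookup = np.array([0, 2, 4])                                      # plays the role of LookupLocations
toy_mapped = toy_series[:, toy_lookup, :]                             # keep only the listed locations
print(toy_series.shape, '->', toy_mapped.shape)                       # (2, 5, 3) -> (2, 3, 3)
###Output
_____no_output_____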
###Markdown
Property and Prediction Data Structures
Two important related lists, Properties and Predictions:
* Data stored in a series is for properties, the calculated value occurring at or ending that day
* For predictions, the data is the calculated value from that date or later
* We store data labelled by time so that
  * for inputs we use time 0 up to last value - 1, i.e. position [length of array - 1]
  * for outputs (predictions) with sequence Tseq, we use array locations [Tseq] to [length of array - 1]
  * This implies Num_Seq = Num_Time - Tseq

**Properties**
Everything appears in the Property list -- both input and output (predicted). DynamicPropertyTimeSeries holds input property time series where the value is the value at that time, using data before this time for aggregations.
* NpropperTimeStatic is the number of static properties -- typically read in or calculated from input information
* NpropperTimeDynamicInput is the total number of input time series
* NpropperTimeDynamicCalculated is the total number of calculated dynamic quantities used in time series analysis as input properties and/or output predictions
* NpropperTimeDynamic = NpropperTimeDynamicInput + NpropperTimeDynamicCalculated ONLY includes input properties
* NpropperTime = NpropperTimeStatic + NpropperTimeDynamic will not include futures and NOT include calculated predictions
* InputPropertyNames is a list of size NpropperTime holding names
* NpropperTimeMAX = NpropperTime + NumTimeSeriesCalculated has calculated predictions following input properties, ignoring futures
* QuantityStatistics has 7 statistics used in normalizing the NpropperTimeMAX properties
* Normalization takes the NpropperTimeStatic static features in BasicInputStaticProps and stores them in NormedInputStaticProps
* Normalization takes the NpropperTimeDynamicInput dynamic features in BasicInputTimeSeries and stores them in NormedInputTimeSeries
* Normalization takes the NpropperTimeDynamicCalculated dynamic features in DynamicPropertyTimeSeries and stores them in NormedDynamicPropertyTimeSeries

**Predictions**
* NumpredbasicperTime can be 1 up to NpropperTimeDynamic and these are part of the dynamic input series. It includes input values that are to be predicted (these MUST be at the start) plus the NumTimeSeriesCalculated calculated series
* NumpredFuturedperTime is <= NumpredbasicperTime and is the number of input dynamic series that are futured
* NumTimeSeriesCalculated is the number of calculated (not futured) time series, stored in CalculatedTimeSeries with names in NamespredCalculated
* Typically NumpredbasicperTime = NumTimeSeriesCalculated + NumpredFuturedperTime (**Currently this is assumed**)
* Normalization takes the NumTimeSeriesCalculated calculated series in CalculatedTimeSeries and stores them in NormedCalculatedTimeSeries
* Predictions per time are NpredperTime = NumpredbasicperTime + NumpredFuturedperTime*LengthFutures
* Predictions per sequence are Npredperseq = NpredperTime

Set Requested Properties Predictions Encodings
###Code
# FuturePred = -1 Means NO FUTURE >= 0 FUTURED
# BASIC EARTHQUAKE SET JUST LOG ENERGY AND MULTIPLICITY
# PARAMETER IMPORTANT
if Earthquake:
InputSource = ['Static','Static','Static','Static','Dynamic','Dynamic','Dynamic','Dynamic'
,'Dynamic','Dynamic','Dynamic','Dynamic','Dynamic']
InputSourceNumber = [0,1,2,3,0,1,2,3,4,5,6,7,8]
PredSource = ['Dynamic','Calc','Calc','Calc','Calc','Calc','Calc','Calc','Calc','Calc']
PredSourceNumber = [0,0,1,2,3,4,5,6,7,8]
FuturedPred = [-1]*len(PredSource)
# Earthquake Space-Time
PropTypes = ['Spatial', 'TopDown', 'TopDown','TopDown','TopDown','TopDown','BottomUp','BottomUp','BottomUp','BottomUp']
PropValues = [0, 0, 1, 2, 3,4, 8,16,32,64]
PredTypes = ['Spatial', 'TopDown', 'TopDown','TopDown','TopDown','TopDown','BottomUp','BottomUp','BottomUp','BottomUp']
PredValues = [0, 0, 1, 2, 3,4, 8,16,32,64]
if UseTFTModel:
InputSource = ['Static','Static','Static','Static','Dynamic','Dynamic','Dynamic','Dynamic'
,'Dynamic','Dynamic','Dynamic','Dynamic','Dynamic']
InputSourceNumber = [0,1,2,3,0,1,2,3,4,5,6,7,8]
PredSource = ['Dynamic','Dynamic']
PredSourceNumber = [0,7]
PredTypes =[]
PredValues = []
FuturedPred = [1,1]
#TFT2 1 year
PredSource = ['Dynamic','Dynamic','Dynamic','Dynamic']
PredSourceNumber = [0,6,7,8]
FuturedPred = [1,1,1,1]
###Output
_____no_output_____
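###Markdown
A small arithmetic check of the bookkeeping described in the data-structure notes above: Num_Seq = Num_Time - Tseq and NpredperTime = NumpredbasicperTime + NumpredFuturedperTime*LengthFutures. The numbers below are toy values, not the configuration actually used in this run.
###Code
# Toy consistency check of the sequence / prediction counts (illustrative values only)
toy_Num_Time, toy_Tseq = 100, 13
toy_Num_Seq = toy_Num_Time - toy_Tseq  # sequences stop one window before the end so a label exists
toy_NumpredbasicperTime, toy_NumpredFuturedperTime, toy_LengthFutures = 4, 2, 14
toy_NpredperTime = toy_NumpredbasicperTime + toy_NumpredFuturedperTime * toy_LengthFutures
print('Num_Seq', toy_Num_Seq, 'NpredperTime', toy_NpredperTime)  # 87 and 32
###Output
_____no_output_____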
###Markdown
Choose Input and Predicted Quantities
###Code
# Gregor: DELETE some portions of this, review and identify
# PARAMETER. SUPER IMPORTANT. NEEDS TO BE STUDIED
if len(InputSource) != len(InputSourceNumber):
printexit(' Inconsistent Source Lengths ' + str(len(InputSource)) + ' ' +str(len(InputSourceNumber)) )
if len(PredSource) != len(PredSourceNumber):
printexit(' Inconsistent Prediction Lengths ' + str(len(PredSource)) + ' ' + str(len(PredSourceNumber)) )
# Executed by all even if GenerateFutures false except for direct Romeo data
if not UseFutures:
LengthFutures = 0
print(startbold + "Number of Futures -- separate for each regular prediction " +str(LengthFutures) + resetfonts)
Usedaystart = False
if len(PredSource) > 0: # set up Predictions
NumpredbasicperTime = len(PredSource)
FuturedPointer = np.full(NumpredbasicperTime,-1,dtype=int)
NumpredFuturedperTime = 0
NumpredfromInputsperTime = 0
for ipred in range(0,len(PredSource)):
if PredSource[ipred] == 'Dynamic':
NumpredfromInputsperTime += 1
countinputs = 0
countcalcs = 0
for ipred in range(0,len(PredSource)):
if not(PredSource[ipred] == 'Dynamic' or PredSource[ipred] == 'Calc'):
printexit('Illegal Prediction ' + str(ipred) + ' ' + PredSource[ipred])
if PredSource[ipred] == 'Dynamic':
countinputs += 1
else:
countcalcs += 1
if FuturedPred[ipred] >= 0:
if LengthFutures > 0:
FuturedPred[ipred] = NumpredFuturedperTime
FuturedPointer[ipred] = NumpredFuturedperTime
NumpredFuturedperTime += 1
else:
FuturedPred[ipred] = -1
else: # Set defaults
NumpredfromInputsperTime = NumpredFuturedperTime
FuturedPointer = np.full(NumpredbasicperTime,-1,dtype=int)
PredSource =[]
PredSourceNumber = []
FuturedPred =[]
futurepos = 0
for ipred in range(0,NumpredFuturedperTime):
PredSource.append('Dynamic')
PredSourceNumber.append(ipred)
futured = -1
if LengthFutures > 0:
futured = futurepos
FuturedPointer[ipred] = futurepos
futurepos += 1
FuturedPred.append(futured)
for ipred in range(0,NumTimeSeriesCalculated):
PredSource.append('Calc')
PredSourceNumber.append(ipred)
FuturedPred.append(-1)
print('Number of Predictions ' + str(len(PredSource)))
PropertyNameIndex = np.empty(NpropperTime, dtype = np.int32)
PropertyAverageValuesPointer = np.empty(NpropperTime, dtype = np.int32)
for iprop in range(0,NpropperTime):
PropertyNameIndex[iprop] = iprop # names
PropertyAverageValuesPointer[iprop] = iprop # normalizations
# Reset Source -- if OK as read don't set InputSource InputSourceNumber
# Reset NormedDynamicPropertyTimeSeries and NormedInputStaticProps
# Reset NpropperTime = NpropperTimeStatic + NpropperTimeDynamic
if len(InputSource) > 0: # Reset Input Source
NewNpropperTimeStatic = 0
NewNpropperTimeDynamic = 0
for isource in range(0,len(InputSource)):
if InputSource[isource] == 'Static':
NewNpropperTimeStatic += 1
if InputSource[isource] == 'Dynamic':
NewNpropperTimeDynamic += 1
NewNormedDynamicPropertyTimeSeries = np.empty([Num_Time,Nloc,NewNpropperTimeDynamic],dtype = np.float32)
NewNormedInputStaticProps = np.empty([Nloc,NewNpropperTimeStatic],dtype = np.float32)
NewNpropperTime = NewNpropperTimeStatic + NewNpropperTimeDynamic
NewPropertyNameIndex = np.empty(NewNpropperTime, dtype = np.int32)
NewPropertyAverageValuesPointer = np.empty(NewNpropperTime, dtype = np.int32)
countstatic = 0
countdynamic = 0
for isource in range(0,len(InputSource)):
if InputSource[isource] == 'Static':
OldstaticNumber = InputSourceNumber[isource]
NewNormedInputStaticProps[:,countstatic] = NormedInputStaticProps[:,OldstaticNumber]
NewPropertyNameIndex[countstatic] = PropertyNameIndex[OldstaticNumber]
NewPropertyAverageValuesPointer[countstatic] = PropertyAverageValuesPointer[OldstaticNumber]
countstatic += 1
elif InputSource[isource] == 'Dynamic':
OlddynamicNumber =InputSourceNumber[isource]
NewNormedDynamicPropertyTimeSeries[:,:,countdynamic] = NormedDynamicPropertyTimeSeries[:,:,OlddynamicNumber]
NewPropertyNameIndex[countdynamic+NewNpropperTimeStatic] = PropertyNameIndex[OlddynamicNumber+NpropperTimeStatic]
NewPropertyAverageValuesPointer[countdynamic+NewNpropperTimeStatic] = PropertyAverageValuesPointer[OlddynamicNumber+NpropperTimeStatic]
countdynamic += 1
else:
printexit('Illegal Property ' + str(isource) + ' ' + InputSource[isource])
else: # pretend data altered
NewPropertyNameIndex = PropertyNameIndex
NewPropertyAverageValuesPointer = PropertyAverageValuesPointer
NewNpropperTime = NpropperTime
NewNpropperTimeStatic = NpropperTimeStatic
NewNpropperTimeDynamic = NpropperTimeDynamic
NewNormedInputStaticProps = NormedInputStaticProps
NewNormedDynamicPropertyTimeSeries = NormedDynamicPropertyTimeSeries
###Output
_____no_output_____
###Markdown
Calculate Futures
Start Predictions
###Code
# Order of Predictions *****************************
# Basic "futured" Predictions from property dynamic arrays
# Additional predictions without futures and NOT in property arrays including Calculated time series
# LengthFutures predictions for first NumpredFuturedperTime predictions
# Special predictions (temporal, positional) added later
NpredperTime = NumpredbasicperTime + NumpredFuturedperTime*LengthFutures
Npredperseq = NpredperTime
Predictionbasicname = [' '] * NumpredbasicperTime
for ipred in range(0,NumpredbasicperTime):
if PredSource[ipred] == 'Dynamic':
Predictionbasicname[ipred] = InputPropertyNames[PredSourceNumber[ipred]+NpropperTimeStatic]
else:
Predictionbasicname[ipred]= NamespredCalculated[PredSourceNumber[ipred]]
TotalFutures = 0
if NumpredFuturedperTime <= 0:
GenerateFutures = False
if GenerateFutures:
TotalFutures = NumpredFuturedperTime * LengthFutures
print(startbold + 'Predictions Total ' + str(Npredperseq) + ' Basic ' + str(NumpredbasicperTime) + ' Of which futured are '
+ str(NumpredFuturedperTime) + ' Giving number explicit futures ' + str(TotalFutures) + resetfonts )
Predictionname = [' '] * Npredperseq
Predictionnametype = [' '] * Npredperseq
Predictionoldvalue = np.empty(Npredperseq, dtype=int)
Predictionnewvalue = np.empty(Npredperseq, dtype=int)
Predictionday = np.empty(Npredperseq, dtype=int)
PredictionAverageValuesPointer = np.empty(Npredperseq, dtype=int)
Predictionwgt = [1.0] * Npredperseq
for ipred in range(0,NumpredbasicperTime):
Predictionnametype[ipred] = PredSource[ipred]
Predictionoldvalue[ipred] = PredSourceNumber[ipred]
Predictionnewvalue[ipred] = ipred
if PredSource[ipred] == 'Dynamic':
PredictionAverageValuesPointer[ipred] = NpropperTimeStatic + Predictionoldvalue[ipred]
else:
PredictionAverageValuesPointer[ipred] = NpropperTime + PredSourceNumber[ipred]
Predictionwgt[ipred] = 1.0
Predictionday[ipred] = 1
extrastring =''
Predictionname[ipred] = 'Next ' + Predictionbasicname[ipred]
if FuturedPred[ipred] >= 0:
extrastring = ' Explicit Futures Added '
print(str(ipred)+ ' Internal Property # ' + str(PredictionAverageValuesPointer[ipred]) + ' ' + Predictionname[ipred]
+ ' Weight ' + str(round(Predictionwgt[ipred],3)) + ' Day ' + str(Predictionday[ipred]) + extrastring )
for ifuture in range(0,LengthFutures):
for ipred in range(0,NumpredbasicperTime):
if FuturedPred[ipred] >= 0:
FuturedPosition = NumpredbasicperTime + NumpredFuturedperTime*ifuture + FuturedPred[ipred]
Predictionname[FuturedPosition] = Predictionbasicname[ipred] + ' ' + Futures[ifuture].name
Predictionday[FuturedPosition] = Futures[ifuture].days[0]
Predictionwgt[FuturedPosition] = Futures[ifuture].classweight
Predictionnametype[FuturedPosition] = Predictionnametype[ipred]
Predictionoldvalue[FuturedPosition] = Predictionoldvalue[ipred]
Predictionnewvalue[FuturedPosition] = Predictionnewvalue[ipred]
PredictionAverageValuesPointer[FuturedPosition] = PredictionAverageValuesPointer[ipred]
            print(str(FuturedPosition)+ ' Internal Property # ' + str(PredictionAverageValuesPointer[FuturedPosition]) + ' ' +
Predictionname[FuturedPosition] + ' Weight ' + str(round(Predictionwgt[FuturedPosition],3))
+ ' Day ' + str(Predictionday[FuturedPosition]) + ' This is Explicit Future ')
Predictionnamelookup = {}
print(startbold + '\nBasic Predicted Quantities' + resetfonts)
for ipred in range(0,Npredperseq):
Predictionnamelookup[Predictionname[ipred]] = ipred
iprop = Predictionnewvalue[ipred]
line = startbold + startred + Predictionbasicname[iprop]
line += ' Weight ' + str(round(Predictionwgt[ipred],4))
if (iprop < NumpredFuturedperTime) or (iprop >= NumpredbasicperTime):
line += ' Day= ' + str(Predictionday[ipred])
line += ' Name ' + Predictionname[ipred]
line += resetfonts
jpred = PredictionAverageValuesPointer[ipred]
line += ' Processing Root ' + str(QuantityTakeroot[jpred])
for proppredval in range (0,7):
line += ' ' + QuantityStatisticsNames[proppredval] + ' ' + str(round(QuantityStatistics[jpred,proppredval],3))
print(wraptotext(line,size=150))
print(line)
# Note that only Predictionwgt and Predictionname defined for later addons
###Output
_____no_output_____
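###Markdown
The loop above stores each futured prediction at position NumpredbasicperTime + NumpredFuturedperTime*ifuture + FuturedPred[ipred]. The sketch below walks that layout with toy sizes to show how the basic block is followed by one block of futured slots per future.
###Code
# Worked example of the prediction-vector layout (toy sizes, not the real configuration)
toy_basic = 4          # NumpredbasicperTime
toy_futured = 2        # NumpredFuturedperTime: the first two basic predictions are futured
toy_lengthfutures = 3  # LengthFutures
for toy_ifuture in range(toy_lengthfutures):
    for toy_slot in range(toy_futured):
        toy_pos = toy_basic + toy_futured * toy_ifuture + toy_slot
        print('future', toy_ifuture, 'slot', toy_slot, '-> position', toy_pos)  # positions 4..9
###Output
_____no_output_____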
###Markdown
Set up Predictions first for time arrays; we will extend them to sequences next. Sequences include the predictions for the final time in the sequence. This is the prediction for the sequence ending one day before the labelling time index, so the sequence must end one unit before the last time value. Note this is a "pure forecast": these are quantities used in the driving data, allowing us to initialize the prediction to the input. NaN represents non-existent data.
###Code
if PredictionsfromInputs:
InputPredictionsbyTime = np.zeros([Num_Time, Nloc, Npredperseq], dtype = np.float32)
for ipred in range (0,NumpredbasicperTime):
if Predictionnametype[ipred] == 'Dynamic':
InputPredictionsbyTime[:,:,ipred] = NormedDynamicPropertyTimeSeries[:,:,Predictionoldvalue[ipred]]
else:
InputPredictionsbyTime[:,:,ipred] = NormedCalculatedTimeSeries[:,:,Predictionoldvalue[ipred]]
# Add Futures based on Futured properties
if LengthFutures > 0:
NaNall = np.full([Nloc],NaN,dtype = np.float32)
daystartveto = 0
atendveto = 0
allok = NumpredbasicperTime
for ifuture in range(0,LengthFutures):
for itime in range(0,Num_Time):
ActualTime = itime+Futures[ifuture].days[0]-1
if ActualTime >= Num_Time:
for ipred in range (0,NumpredbasicperTime):
Putithere = FuturedPred[ipred]
if Putithere >=0:
InputPredictionsbyTime[itime,:,NumpredbasicperTime + NumpredFuturedperTime*ifuture + Putithere] = NaNall
atendveto +=1
elif Usedaystart and (itime < Futures[ifuture].daystart):
for ipred in range (0,NumpredbasicperTime):
Putithere = FuturedPred[ipred]
if Putithere >=0:
InputPredictionsbyTime[itime,:,NumpredbasicperTime + NumpredFuturedperTime*ifuture + Putithere] = NaNall
daystartveto +=1
else:
for ipred in range (0,NumpredbasicperTime):
Putithere = FuturedPred[ipred]
if Putithere >=0:
if Predictionnametype[ipred] == 'Dynamic':
InputPredictionsbyTime[itime,:,NumpredbasicperTime + NumpredFuturedperTime*ifuture + Putithere] \
= NormedDynamicPropertyTimeSeries[ActualTime,:,Predictionoldvalue[ipred]]
else:
InputPredictionsbyTime[itime,:,NumpredbasicperTime + NumpredFuturedperTime*ifuture + Putithere] \
= NormedCalculatedTimeSeries[ActualTime,:,Predictionoldvalue[ipred]]
allok += NumpredFuturedperTime
print(startbold + 'Futures Added: Predictions set from inputs OK ' +str(allok) +
' Veto at end ' + str(atendveto) + ' Veto at start ' + str(daystartveto) + ' Times number of locations' + resetfonts)
###Output
_____no_output_____
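###Markdown
The futures loop above vetoes targets that would need data beyond the last observed time: a prediction looking h units ahead is set to NaN once itime + h - 1 >= Num_Time. A one-dimensional toy version of that shift-and-mask step is sketched below.
###Code
# Toy sketch of the horizon masking applied above (illustrative values only)
import numpy as np
toy_Num_Time, toy_h = 10, 3                    # toy series length and look-ahead in units
toy_targets = np.arange(toy_Num_Time, dtype=np.float32)
toy_shifted = np.full(toy_Num_Time, np.nan, dtype=np.float32)
toy_valid = toy_Num_Time - (toy_h - 1)         # times whose look-ahead target still exists
toy_shifted[:toy_valid] = toy_targets[toy_h - 1:]
print(toy_shifted)                             # last h-1 entries stay NaN
###Output
_____no_output_____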
###Markdown
Clean-up Input quantities
###Code
def checkNaN(y):
countNaN = 0
countnotNaN = 0
ctprt = 0
if y is None:
return
if len(y.shape) == 2:
for i in range(0,y.shape[0]):
for j in range(0,y.shape[1]):
                if math.isnan(y[i, j]):
countNaN += 1
else:
countnotNaN += 1
else:
for i in range(0,y.shape[0]):
for j in range(0,y.shape[1]):
for k in range(0,y.shape[2]):
                    if math.isnan(y[i, j, k]):
countNaN += 1
ctprt += 1
print(str(i) + ' ' + str(j) + ' ' + str(k))
if ctprt > 10:
sys.exit(0)
else:
countnotNaN += 1
percent = (100.0*countNaN)/(countNaN + countnotNaN)
print(' is NaN ',str(countNaN),' percent ',str(round(percent,2)),' not NaN ', str(countnotNaN))
# Clean-up Input Source
if len(InputSource) > 0:
PropertyNameIndex = NewPropertyNameIndex
NewPropertyNameIndex = None
PropertyAverageValuesPointer = NewPropertyAverageValuesPointer
NewPropertyAverageValuesPointer = None
NormedInputStaticProps = NewNormedInputStaticProps
NewNormedInputStaticProps = None
NormedDynamicPropertyTimeSeries = NewNormedDynamicPropertyTimeSeries
NewNormedDynamicPropertyTimeSeries = None
NpropperTime = NewNpropperTime
NpropperTimeStatic = NewNpropperTimeStatic
NpropperTimeDynamic = NewNpropperTimeDynamic
print('Static Properties')
if NpropperTimeStatic > 0 :
checkNaN(NormedInputStaticProps)
else:
print(' None Defined')
print('Dynamic Properties')
checkNaN(NormedDynamicPropertyTimeSeries)
###Output
_____no_output_____
###Markdown
Setup Sequences and UseTFTModel
###Code
Num_SeqExtraUsed = Tseq-1
Num_Seq = Num_Time - Tseq
Num_SeqPred = Num_Seq
TseqPred = Tseq
TFTExtraTimes = 0
Num_TimeTFT = Num_Time
if UseTFTModel:
TFTExtraTimes = 1 + LengthFutures
SymbolicWindows = True
Num_SeqExtraUsed = Tseq # as last position needed in input
Num_TimeTFT = Num_Time +TFTExtraTimes
Num_SeqPred = Num_Seq
TseqPred = Tseq
# If SymbolicWindows, sequences are not made but we use same array with that dimension (RawInputSeqDimension) set to 1
# reshape can get rid of this irrelevant dimension
# Predictions and Input Properties are associated with sequence number which is first time value used in sequence
# if SymbolicWindows false then sequences are labelled by sequence # and contain time values from sequence # to sequence# + Tseq-1
# if SymbolicWindows True then sequences are labelled by time # and contain one value. They are displaced by Tseq
# If TFT Inputs and Predictions do NOT differ by Tseq
# Num_SeqExtra extra positions in RawInputSequencesTOT for Symbolic windows True as need to store full window
# TFTExtraTimes are extra times
RawInputSeqDimension = Tseq
Num_SeqExtra = 0
if SymbolicWindows:
RawInputSeqDimension = 1
Num_SeqExtra = Num_SeqExtraUsed
###Output
_____no_output_____
###Markdown
Generate Sequences from Time labelled data given Tseq set above
###Code
if GenerateSequences:
UseProperties = np.full(NpropperTime, True, dtype=bool)
Npropperseq = 0
IndexintoPropertyArrays = np.empty(NpropperTime, dtype = int)
for iprop in range(0,NpropperTime):
if UseProperties[iprop]:
IndexintoPropertyArrays[Npropperseq] = iprop
Npropperseq +=1
RawInputSequences = np.zeros([Num_Seq + Num_SeqExtra, Nloc, RawInputSeqDimension, Npropperseq], dtype =np.float32)
RawInputPredictions = np.zeros([Num_SeqPred, Nloc, Npredperseq], dtype =np.float32)
locationarray = np.empty(Nloc, dtype=np.float32)
for iseq in range(0,Num_Seq + Num_SeqExtra):
for windowposition in range(0,RawInputSeqDimension):
itime = iseq + windowposition
for usedproperty in range (0,Npropperseq):
iprop = IndexintoPropertyArrays[usedproperty]
if iprop>=NpropperTimeStatic:
jprop =iprop-NpropperTimeStatic
locationarray = NormedDynamicPropertyTimeSeries[itime,:,jprop]
else:
locationarray = NormedInputStaticProps[:,iprop]
RawInputSequences[iseq,:,windowposition,usedproperty] = locationarray
if iseq < Num_SeqPred:
RawInputPredictions[iseq,:,:] = InputPredictionsbyTime[iseq+TseqPred,:,:]
print(startbold + 'Sequences set from Time values Num Seq ' + str(Num_SeqPred) + ' Time ' +str(Num_Time) + resetfonts)
NormedInputTimeSeries = None
NormedDynamicPropertyTimeSeries = None
if GarbageCollect:
gc.collect()
GlobalTimeMask = np.empty([1,1,1,Tseq,Tseq],dtype =np.float32)
###Output
_____no_output_____
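###Markdown
The explicit loop above is the usual sliding-window construction over time. For a single location and property it matches numpy's sliding_window_view, as the toy sketch below shows; the pipeline keeps Num_Seq = Num_Time - Tseq of these windows because each one needs a label one step beyond its end.
###Code
# Toy equivalence check for the window construction (illustrative data only)
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view  # available in numpy >= 1.20
toy_Tseq = 3
toy_series = np.arange(7, dtype=np.float32)               # one location, one property
toy_windows = sliding_window_view(toy_series, toy_Tseq)   # shape (5, 3) = (Num_Time - Tseq + 1, Tseq)
print(toy_windows)
###Output
_____no_output_____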
###Markdown
Define Possible Temporal and Spatial Positional Encodings
###Code
# PARAMETER. Possible functions as input MLCOMMONS RELEVANT
def LinearLocationEncoding(TotalLoc):
linear = np.empty(TotalLoc, dtype=float)
for i in range(0,TotalLoc):
linear[i] = float(i)/float(TotalLoc)
return linear
def LinearTimeEncoding(Dateslisted):
Firstdate = Dateslisted[0]
numtofind = len(Dateslisted)
dayrange = (Dateslisted[numtofind-1]-Firstdate).days + 1
linear = np.empty(numtofind, dtype=float)
for i in range(0,numtofind):
linear[i] = float((Dateslisted[i]-Firstdate).days)/float(dayrange)
return linear
def P2TimeEncoding(numtofind):
P2 = np.empty(numtofind, dtype=float)
for i in range(0,numtofind):
x = -1 + 2.0*i/(numtofind-1)
P2[i] = 0.5*(3*x*x-1)
return P2
def P3TimeEncoding(numtofind):
P3 = np.empty(numtofind, dtype=float)
for i in range(0,numtofind):
x = -1 + 2.0*i/(numtofind-1)
P3[i] = 0.5*(5*x*x-3)*x
return P3
def P4TimeEncoding(numtofind):
P4 = np.empty(numtofind, dtype=float)
for i in range(0,numtofind):
x = -1 + 2.0*i/(numtofind-1)
P4[i] = 0.125*(35*x*x*x*x - 30*x*x + 3)
return P4
def WeeklyTimeEncoding(Dateslisted):
numtofind = len(Dateslisted)
costheta = np.empty(numtofind, dtype=float)
sintheta = np.empty(numtofind, dtype=float)
for i in range(0,numtofind):
j = Dateslisted[i].date().weekday()
theta = float(j)*2.0*math.pi/7.0
costheta[i] = math.cos(theta)
sintheta[i] = math.sin(theta)
return costheta, sintheta
def AnnualTimeEncoding(Dateslisted):
numtofind = len(Dateslisted)
costheta = np.empty(numtofind, dtype=float)
sintheta = np.empty(numtofind, dtype=float)
for i in range(0,numtofind):
runningdate = Dateslisted[i]
year = runningdate.year
datebeginyear = datetime(year, 1, 1)
displacement = (runningdate-datebeginyear).days
daysinyear = (datetime(year,12,31)-datebeginyear).days+1
if displacement >= daysinyear:
printexit("EXIT Bad Date ", runningdate)
theta = float(displacement)*2.0*math.pi/float(daysinyear)
costheta[i] = math.cos(theta)
sintheta[i] = math.sin(theta)
return costheta, sintheta
def ReturnEncoding(numtofind,Typeindex, Typevalue):
Dummy = costheta = np.empty(0, dtype=float)
if Typeindex == 1:
return LinearoverLocationEncoding, Dummy, ('LinearSpace',0.,1.0,0.5,0.2887), ('Dummy',0.,0.,0.,0.)
if Typeindex == 2:
if Dailyunit == 1:
return CosWeeklytimeEncoding, SinWeeklytimeEncoding, ('CosWeekly',-1.0, 1.0, 0.,0.7071), ('SinWeekly',-1.0, 1.0, 0.,0.7071)
else:
return Dummy, Dummy, ('Dummy',0.,0.,0.,0.), ('Dummy',0.,0.,0.,0.)
if Typeindex == 3:
return CosAnnualtimeEncoding, SinAnnualtimeEncoding, ('CosAnnual',-1.0, 1.0, 0.,0.7071), ('SinAnnual',-1.0, 1.0, 0.,0.7071)
if Typeindex == 4:
if Typevalue == 0:
ConstArray = np.full(numtofind,0.5, dtype = float)
return ConstArray, Dummy, ('Constant',0.5,0.5,0.5,0.0), ('Dummy',0.,0.,0.,0.)
if Typevalue == 1:
return LinearovertimeEncoding, Dummy, ('LinearTime',0., 1.0, 0.5,0.2887), ('Dummy',0.,0.,0.,0.)
if Typevalue == 2:
return P2TimeEncoding(numtofind), Dummy, ('P2-Time',-1.0, 1.0, 0.,0.4472), ('Dummy',0.,0.,0.,0.)
if Typevalue == 3:
return P3TimeEncoding(numtofind), Dummy, ('P3-Time',-1.0, 1.0, 0.,0.3780), ('Dummy',0.,0.,0.,0.)
if Typevalue == 4:
return P4TimeEncoding(numtofind), Dummy, ('P4-Time',-1.0, 1.0, 0.,0.3333), ('Dummy',0.,0.,0.,0.)
if Typeindex == 5:
costheta = np.empty(numtofind, dtype=float)
sintheta = np.empty(numtofind, dtype=float)
j = 0
for i in range(0,numtofind):
theta = float(j)*2.0*math.pi/Typevalue
costheta[i] = math.cos(theta)
sintheta[i] = math.sin(theta)
j += 1
if j >= Typevalue:
j = 0
return costheta, sintheta,('Cos '+str(Typevalue)+ ' Len',-1.0, 1.0,0.,0.7071), ('Sin '+str(Typevalue)+ ' Len',-1.0, 1.0,0.,0.7071)
# Dates set up in Python datetime format as Python LISTS
# All encodings are Numpy arrays
print("Total number of Time Units " + str(NumberofTimeunits) + ' ' + TimeIntervalUnitName)
if NumberofTimeunits != (Num_Seq + Tseq):
printexit("EXIT Wrong Number of Time Units " + str(Num_Seq + Tseq))
Dateslist = []
for i in range(0,NumberofTimeunits + TFTExtraTimes):
Dateslist.append(InitialDate+timedelta(days=i*Dailyunit))
LinearoverLocationEncoding = LinearLocationEncoding(Nloc)
LinearovertimeEncoding = LinearTimeEncoding(Dateslist)
if Dailyunit == 1:
CosWeeklytimeEncoding, SinWeeklytimeEncoding = WeeklyTimeEncoding(Dateslist)
CosAnnualtimeEncoding, SinAnnualtimeEncoding = AnnualTimeEncoding(Dateslist)
# Encodings
# linearlocationposition
# Supported Time Dependent Probes that can be in properties and/or predictions
# Special
# Annual
# Weekly
#
# Top Down
# TD0 Constant at 0.5
# TD1 Linear from 0 to 1
# TD2 P2(x) where x goes from -1 to 1 as time goes from start to end
#
# Bottom Up
# n-way Cos and sin theta where n = 4 7 8 16 24 32
EncodingTypes = {'Spatial':1, 'Weekly':2,'Annual':3,'TopDown':4,'BottomUp':5}
PropIndex =[]
PropNameMeanStd = []
PropMeanStd = []
PropArray = []
PropPosition = []
PredIndex =[]
PredNameMeanStd = []
PredArray = []
PredPosition = []
Numberpropaddons = 0
propposition = Npropperseq
Numberpredaddons = 0
predposition = Npredperseq
numprop = len(PropTypes)
if numprop != len(PropValues):
printexit('Error in property addons ' + str(numprop) + ' ' + str(len(PropValues)))
for newpropinlist in range(0,numprop):
Typeindex = EncodingTypes[PropTypes[newpropinlist]]
a,b,c,d = ReturnEncoding(Num_Time + TFTExtraTimes,Typeindex, PropValues[newpropinlist])
if c[0] != 'Dummy':
PropIndex.append(Typeindex)
PropNameMeanStd.append(c)
InputPropertyNames.append(c[0])
PropArray.append(a)
PropPosition.append(propposition)
propposition += 1
Numberpropaddons += 1
line = ' '
for ipr in range(0,20):
line += str(round(a[ipr],4)) + ' '
# print('c'+line)
if d[0] != 'Dummy':
PropIndex.append(Typeindex)
PropNameMeanStd.append(d)
InputPropertyNames.append(d[0])
PropArray.append(b)
PropPosition.append(propposition)
propposition += 1
Numberpropaddons += 1
line = ' '
for ipr in range(0,20):
line += str(round(b[ipr],4)) + ' '
# print('d'+line)
numpred = len(PredTypes)
if numpred != len(PredValues):
printexit('Error in prediction addons ' + str(numpred) + ' ' + str(len(PredValues)))
for newpredinlist in range(0,numpred):
Typeindex = EncodingTypes[PredTypes[newpredinlist]]
a,b,c,d = ReturnEncoding(Num_Time + TFTExtraTimes,Typeindex, PredValues[newpredinlist])
if c[0] != 'Dummy':
PredIndex.append(Typeindex)
PredNameMeanStd.append(c)
PredArray.append(a)
Predictionname.append(c[0])
        Predictionnamelookup[c[0]] = predposition
PredPosition.append(predposition)
predposition += 1
Numberpredaddons += 1
Predictionwgt.append(0.25)
if d[0] != 'Dummy':
PredIndex.append(Typeindex)
PredNameMeanStd.append(d)
PredArray.append(b)
Predictionname.append(d[0])
Predictionnamelookup[d[0]] = predposition
PredPosition.append(predposition)
predposition += 1
Numberpredaddons += 1
Predictionwgt.append(0.25)
###Output
_____no_output_____
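###Markdown
As a quick sanity check of the weekly encoding defined above, the sketch below evaluates WeeklyTimeEncoding on a toy week of dates (the real encodings are built from Dateslist); Monday maps to angle 0 so cos = 1 and sin = 0, and the pair walks around the unit circle in sevenths.
###Code
# Toy demonstration of the weekly cos/sin encoding defined above
from datetime import datetime, timedelta
toy_dates = [datetime(2020, 3, 2) + timedelta(days=i) for i in range(7)]  # 2020-03-02 is a Monday
toy_cos, toy_sin = WeeklyTimeEncoding(toy_dates)
for d, c, s in zip(toy_dates, toy_cos, toy_sin):
    print(d.strftime('%a'), round(c, 3), round(s, 3))
###Output
_____no_output_____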
###Markdown
Add in Temporal and Spatial Encoding
###Code
def SetNewAverages(InputList): # name min max mean std
results = np.empty(7, dtype = np.float32)
results[0] = InputList[1]
results[1] = InputList[2]
results[2] = 1.0
results[3] = InputList[3]
results[4] = InputList[4]
results[5] = InputList[3]
results[6] = InputList[4]
return results
NpropperseqTOT = Npropperseq + Numberpropaddons
# These include both Property and Prediction Variables
NpropperTimeMAX =len(QuantityTakeroot)
NewNpropperTimeMAX = NpropperTimeMAX + Numberpropaddons + Numberpredaddons
NewQuantityStatistics = np.zeros([NewNpropperTimeMAX,7], dtype=np.float32)
NewQuantityTakeroot = np.full(NewNpropperTimeMAX,1,dtype=int) # All new ones are 1 and are set here
NewQuantityStatistics[0:NpropperTimeMAX,:] = QuantityStatistics[0:NpropperTimeMAX,:]
NewQuantityTakeroot[0:NpropperTimeMAX] = QuantityTakeroot[0:NpropperTimeMAX]
# Lookup for property names
NewPropertyNameIndex = np.empty(NpropperseqTOT, dtype = np.int32)
NumberofNames = len(InputPropertyNames)-Numberpropaddons
NewPropertyNameIndex[0:Npropperseq] = PropertyNameIndex[0:Npropperseq]
NewPropertyAverageValuesPointer = np.empty(NpropperseqTOT, dtype = np.int32)
NewPropertyAverageValuesPointer[0:Npropperseq] = PropertyAverageValuesPointer[0:Npropperseq]
for propaddons in range(0,Numberpropaddons):
NewPropertyNameIndex[Npropperseq+propaddons] = NumberofNames + propaddons
NewPropertyAverageValuesPointer[Npropperseq+propaddons] = NpropperTimeMAX + propaddons
NewQuantityStatistics[NpropperTimeMAX + propaddons,:] = SetNewAverages(PropNameMeanStd[propaddons])
# Set extra Predictions metadata for Sequences
NpredperseqTOT = Npredperseq + Numberpredaddons
NewPredictionAverageValuesPointer = np.empty(NpredperseqTOT, dtype = np.int32)
NewPredictionAverageValuesPointer[0:Npredperseq] = PredictionAverageValuesPointer[0:Npredperseq]
for predaddons in range(0,Numberpredaddons):
    NewPredictionAverageValuesPointer[Npredperseq + predaddons] = NpropperTimeMAX + Numberpropaddons + predaddons
NewQuantityStatistics[NpropperTimeMAX + Numberpropaddons + predaddons,:] = SetNewAverages(PredNameMeanStd[predaddons])
RawInputSequencesTOT = np.empty([Num_Seq + Num_SeqExtra + TFTExtraTimes, Nloc, RawInputSeqDimension, NpropperseqTOT], dtype =np.float32)
flsize = float(Num_Seq + Num_SeqExtra)*float(Nloc)*float(RawInputSeqDimension)*float(NpropperseqTOT)*4.0
print('Total storage ' +str(round(flsize,0)) + ' Bytes')
for i in range(0,Num_Seq + Num_SeqExtra):
for iprop in range(0,Npropperseq):
RawInputSequencesTOT[i,:,:,iprop] = RawInputSequences[i,:,:,iprop]
for i in range(Num_Seq + Num_SeqExtra,Num_Seq + Num_SeqExtra + TFTExtraTimes):
for iprop in range(0,Npropperseq):
RawInputSequencesTOT[i,:,:,iprop] = NaN
for i in range(0,Num_Seq + Num_SeqExtra + TFTExtraTimes):
for k in range(0,RawInputSeqDimension):
for iprop in range(0, Numberpropaddons):
if PropIndex[iprop] == 1:
continue
RawInputSequencesTOT[i,:,k,PropPosition[iprop]] = PropArray[iprop][i+k]
for iprop in range(0, Numberpropaddons):
if PropIndex[iprop] == 1:
for j in range(0,Nloc):
RawInputSequencesTOT[:,j,:,PropPosition[iprop]] = PropArray[iprop][j]
# Set extra Predictions for Sequences
RawInputPredictionsTOT = np.empty([Num_SeqPred + TFTExtraTimes, Nloc, NpredperseqTOT], dtype =np.float32)
for i in range(0,Num_SeqPred):
for ipred in range(0,Npredperseq):
RawInputPredictionsTOT[i,:,ipred] = RawInputPredictions[i,:,ipred]
for i in range(Num_SeqPred, Num_SeqPred + TFTExtraTimes):
for ipred in range(0,Npredperseq):
RawInputPredictionsTOT[i,:,ipred] = NaN
for i in range(0,Num_SeqPred + TFTExtraTimes):
for ipred in range(0, Numberpredaddons):
if PredIndex[ipred] == 1:
continue
actualarray = PredArray[ipred]
RawInputPredictionsTOT[i,:,PredPosition[ipred]] = actualarray[i+TseqPred]
for ipred in range(0, Numberpredaddons):
if PredIndex[ipred] == 1:
for j in range(0,Nloc):
RawInputPredictionsTOT[:,j,PredPosition[ipred]] = PredArray[ipred][j]
PropertyNameIndex = None
PropertyNameIndex = NewPropertyNameIndex
QuantityStatistics = None
QuantityStatistics = NewQuantityStatistics
QuantityTakeroot = None
QuantityTakeroot = NewQuantityTakeroot
PropertyAverageValuesPointer = None
PropertyAverageValuesPointer = NewPropertyAverageValuesPointer
PredictionAverageValuesPointer = None
PredictionAverageValuesPointer = NewPredictionAverageValuesPointer
print('Time and Space encoding added to input and predictions')
if SymbolicWindows:
SymbolicInputSequencesTOT = np.empty([Num_Seq, Nloc], dtype =np.int32) # This is sequences
for iseq in range(0,Num_Seq):
for iloc in range(0,Nloc):
SymbolicInputSequencesTOT[iseq,iloc] = np.left_shift(iseq,16) + iloc
ReshapedSequencesTOT = np.transpose(RawInputSequencesTOT,(1,0,3,2))
ReshapedSequencesTOT = np.reshape(ReshapedSequencesTOT,(Nloc,Num_Seq + Num_SeqExtra + TFTExtraTimes,NpropperseqTOT))
# To calculate masks (identical to Symbolic windows)
SpacetimeforMask = np.empty([Num_Seq, Nloc], dtype =np.int32)
for iseq in range(0,Num_Seq):
for iloc in range(0,Nloc):
SpacetimeforMask[iseq,iloc] = np.left_shift(iseq,16) + iloc
print(PropertyNameIndex)
print(InputPropertyNames)
for iprop in range(0,NpropperseqTOT):
line = 'Property ' + str(iprop) + ' ' + InputPropertyNames[PropertyNameIndex[iprop]]
jprop = PropertyAverageValuesPointer[iprop]
line += ' Processing Root ' + str(QuantityTakeroot[jprop])
for proppredval in range (0,7):
line += ' ' + QuantityStatisticsNames[proppredval] + ' ' + str(round(QuantityStatistics[jprop,proppredval],3))
print(wraptotext(line,size=150))
for ipred in range(0,NpredperseqTOT):
line = 'Prediction ' + str(ipred) + ' ' + Predictionname[ipred] + ' ' + str(round(Predictionwgt[ipred],3))
jpred = PredictionAverageValuesPointer[ipred]
line += ' Processing Root ' + str(QuantityTakeroot[jpred])
for proppredval in range (0,7):
line += ' ' + QuantityStatisticsNames[proppredval] + ' ' + str(round(QuantityStatistics[jpred,proppredval],3))
print(wraptotext(line,size=150))
RawInputPredictions = None
RawInputSequences = None
if SymbolicWindows:
RawInputSequencesTOT = None
if GarbageCollect:
gc.collect()
###Output
_____no_output_____
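###Markdown
When SymbolicWindows is used, each (sequence, location) pair is packed into a single int32 with the sequence index in the high 16 bits, as in the cell above. The sketch below packs and unpacks one toy pair; the scheme is valid as long as the location index stays below 2^16.
###Code
# Toy pack/unpack of the symbolic-window code used above
import numpy as np
toy_iseq, toy_iloc = 1234, 567
toy_code = np.int32(np.left_shift(toy_iseq, 16) + toy_iloc)
toy_seq_back = int(np.right_shift(toy_code, 16))
toy_loc_back = int(np.bitwise_and(toy_code, 0xFFFF))
print(toy_code, '->', toy_seq_back, toy_loc_back)  # recovers 1234 and 567
###Output
_____no_output_____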
###Markdown
Set up NNSE and Plots including Futures
###Code
#Set up NNSE Normalized Nash Sutcliffe Efficiency
CalculateNNSE = np.full(NpredperseqTOT, False, dtype = bool)
PlotPredictions = np.full(NpredperseqTOT, False, dtype = bool)
for ipred in range(0,NpredperseqTOT):
CalculateNNSE[ipred] = True
PlotPredictions[ipred] = True
###Output
_____no_output_____
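###Markdown
The flags above control which predictions get a Normalized Nash-Sutcliffe Efficiency score and which get plotted. For reference, the sketch below computes NNSE in its standard textbook form on toy data (NSE = 1 minus the sum of squared errors over the variance of the observations, NNSE = 1/(2 - NSE)); the notebook's own FindNNSE routine may differ in detail, for example in how it handles NaN entries.
###Code
# Standard NNSE on toy data (illustrative only; FindNNSE in this notebook may differ in detail)
import numpy as np
toy_obs = np.array([1.0, 2.0, 3.0, 4.0])
toy_pred = np.array([1.1, 1.9, 3.2, 3.8])
toy_nse = 1.0 - np.sum((toy_obs - toy_pred) ** 2) / np.sum((toy_obs - toy_obs.mean()) ** 2)
toy_nnse = 1.0 / (2.0 - toy_nse)  # maps NSE in (-inf, 1] onto (0, 1]
print(round(toy_nse, 4), round(toy_nnse, 4))
###Output
_____no_output_____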
###Markdown
Location Based Validation
###Code
LocationBasedValidation = False
LocationValidationFraction = 0.0
RestartLocationBasedValidation = False
RestartRunName = RunName
if Earthquake:
LocationBasedValidation = True
LocationValidationFraction = 0.2
RestartLocationBasedValidation = True
RestartRunName = 'EARTHQN-Transformer3'
FullSetValidation = False
global SeparateValandTrainingPlots
SeparateValandTrainingPlots = True
if not LocationBasedValidation:
SeparateValandTrainingPlots = False
LocationValidationFraction = 0.0
NlocValplusTraining = Nloc
ListofTrainingLocs = np.arange(Nloc, dtype = np.int32)
ListofValidationLocs = np.full(Nloc, -1, dtype = np.int32)
MappingtoTraining = np.arange(Nloc, dtype = np.int32)
MappingtoValidation = np.full(Nloc, -1, dtype = np.int32)
TrainingNloc = Nloc
ValidationNloc = 0
if LocationBasedValidation:
if RestartLocationBasedValidation:
InputFileName = APPLDIR + '/Validation' + RestartRunName
with open(InputFileName, 'r', newline='') as inputfile:
Myreader = reader(inputfile, delimiter=',')
header = next(Myreader)
LocationValidationFraction = np.float32(header[0])
TrainingNloc = np.int32(header[1])
ValidationNloc = np.int32(header[2])
ListofTrainingLocs = np.empty(TrainingNloc, dtype = np.int32)
ListofValidationLocs = np.empty(ValidationNloc, dtype = np.int32)
nextrow = next(Myreader)
for iloc in range(0, TrainingNloc):
ListofTrainingLocs[iloc] = np.int32(nextrow[iloc])
nextrow = next(Myreader)
for iloc in range(0, ValidationNloc):
ListofValidationLocs[iloc] = np.int32(nextrow[iloc])
LocationTrainingfraction = 1.0 - LocationValidationFraction
if TrainingNloc + ValidationNloc != Nloc:
printexit('EXIT: Inconsistent location counts for Location Validation ' +str(Nloc)
+ ' ' + str(TrainingNloc) + ' ' + str(ValidationNloc))
print(' Validation restarted Fraction ' +str(round(LocationValidationFraction,4)) + ' ' + RestartRunName)
else:
LocationTrainingfraction = 1.0 - LocationValidationFraction
TrainingNloc = math.ceil(LocationTrainingfraction*Nloc)
ValidationNloc = Nloc - TrainingNloc
np.random.shuffle(ListofTrainingLocs)
ListofValidationLocs = ListofTrainingLocs[TrainingNloc:Nloc]
ListofTrainingLocs = ListofTrainingLocs[0:TrainingNloc]
for iloc in range(0,TrainingNloc):
jloc = ListofTrainingLocs[iloc]
MappingtoTraining[jloc] = iloc
MappingtoValidation[jloc] = -1
for iloc in range(0,ValidationNloc):
jloc = ListofValidationLocs[iloc]
MappingtoValidation[jloc] = iloc
MappingtoTraining[jloc] = -1
if ValidationNloc <= 0:
SeparateValandTrainingPlots = False
if not RestartLocationBasedValidation:
OutputFileName = APPLDIR + '/Validation' + RunName
with open(OutputFileName, 'w', newline='') as outputfile:
Mywriter = writer(outputfile, delimiter=',')
Mywriter.writerow([LocationValidationFraction, TrainingNloc, ValidationNloc] )
Mywriter.writerow(ListofTrainingLocs)
Mywriter.writerow(ListofValidationLocs)
print('Training Locations ' + str(TrainingNloc) + ' Validation Locations ' + str(ValidationNloc))
if ValidationNloc <=0:
LocationBasedValidation = False
if Earthquake:
StartDate = np.datetime64(InitialDate).astype('datetime64[D]') + np.timedelta64(Tseq*Dailyunit + int(Dailyunit/2),'D')
dayrange = np.timedelta64(Dailyunit,'D')
Numericaldate = np.empty(numberspecialeqs, dtype=np.float32)
PrimaryTrainingList = []
SecondaryTrainingList = []
PrimaryValidationList = []
SecondaryValidationList = []
for iquake in range(0,numberspecialeqs):
Numericaldate[iquake] = max(0,math.floor((Specialdate[iquake] - StartDate)/dayrange))
Trainingsecondary = False
Validationsecondary = False
for jloc in range(0,Nloc):
iloc = LookupLocations[jloc] # original location
result = quakesearch(iquake, iloc)
if result == 0:
continue
kloc = MappingtoTraining[jloc]
if result == 1: # Primary
if kloc >= 0:
PrimaryTrainingList.append(iquake)
Trainingsecondary = True
else:
PrimaryValidationList.append(iquake)
Validationsecondary = True
else: # Secondary
if kloc >= 0:
if Trainingsecondary:
continue
Trainingsecondary = True
SecondaryTrainingList.append(iquake)
else:
if Validationsecondary:
continue
Validationsecondary = True
SecondaryValidationList.append(iquake)
iloc = Specialxpos[iquake] + 60*Specialypos[iquake]
jloc = MappedLocations[iloc]
kloc = -2
if jloc >= 0:
kloc = LookupLocations[jloc]
line = str(iquake) + " " + str(Trainingsecondary) + " " + str(Validationsecondary) + " "
line += str(iloc) + " " + str(jloc) + " " + str(kloc) + " " + str(round(Specialmags[iquake],1)) + ' ' + Specialeqname[iquake]
print(line)
PrimaryTrainingvetoquake = np.full(numberspecialeqs,True, dtype = bool)
SecondaryTrainingvetoquake = np.full(numberspecialeqs,True, dtype = bool)
PrimaryValidationvetoquake = np.full(numberspecialeqs,True, dtype = bool)
SecondaryValidationvetoquake = np.full(numberspecialeqs,True, dtype = bool)
for jquake in PrimaryTrainingList:
PrimaryTrainingvetoquake[jquake] = False
for jquake in PrimaryValidationList:
PrimaryValidationvetoquake[jquake] = False
for jquake in SecondaryTrainingList:
if not PrimaryTrainingvetoquake[jquake]:
continue
SecondaryTrainingvetoquake[jquake] = False
for jquake in SecondaryValidationList:
if not PrimaryValidationvetoquake[jquake]:
continue
SecondaryValidationvetoquake[jquake] = False
for iquake in range(0,numberspecialeqs):
iloc = Specialxpos[iquake] + 60*Specialypos[iquake]
line = str(iquake) + " Loc " + str(iloc) + " " + str(MappedLocations[iloc]) + " Date " + str(Specialdate[iquake]) + " " + str(Numericaldate[iquake])
line += " " + str(PrimaryTrainingvetoquake[iquake]) + " " + str(SecondaryTrainingvetoquake[iquake])
line += " Val " + str(PrimaryValidationvetoquake[iquake]) + " " + str(SecondaryValidationvetoquake[iquake])
print(line)
###Output
_____no_output_____
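###Markdown
The split above shuffles the location list in place without a fixed seed, so a fresh run produces a different training/validation partition unless the Validation file is restarted. The sketch below shows the same split pattern with an explicit (hypothetical) seed, which is one simple way to make the partition reproducible.
###Code
# Toy location split with a fixed seed (the seed value is hypothetical, not from the notebook)
import math
import numpy as np
toy_Nloc, toy_valfrac = 10, 0.2
toy_rng = np.random.default_rng(1234)
toy_locs = np.arange(toy_Nloc, dtype=np.int32)
toy_rng.shuffle(toy_locs)
toy_trainN = math.ceil((1.0 - toy_valfrac) * toy_Nloc)
toy_train, toy_val = toy_locs[:toy_trainN], toy_locs[toy_trainN:]
print(toy_train, toy_val)
###Output
_____no_output_____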
###Markdown
LSTM Control Parameters EDIT
###Code
CustomLoss = 1
UseClassweights = True
PredictionTraining = False
# Gregor: MODIFY
if (not Hydrology) and (not Earthquake) and (NpredperseqTOT <=2):
    UseFutures = False
CustomLoss = 0
UseClassweights = False
number_of_LSTMworkers = 1
TFTTransformerepochs = 10
LSTMbatch_size = TrainingNloc
LSTMbatch_size = min(LSTMbatch_size, TrainingNloc)
LSTMactivationvalue = "selu"
LSTMrecurrent_activation = "sigmoid"
LSTMoptimizer = 'adam'
LSTMdropout1=0.2
LSTMrecurrent_dropout1 = 0.2
LSTMdropout2=0.2
LSTMrecurrent_dropout2 = 0.2
number_LSTMnodes= 16
LSTMFinalMLP = 64
LSTMInitialMLP = 32
LSTMThirdLayer = False
LSTMSkipInitial = False
LSTMverbose = 0
AnyOldValidation = 0.0
if LocationBasedValidation:
AnyOldValidation = LocationBasedValidation
LSTMvalidationfrac = AnyOldValidation
###Output
_____no_output_____
###Markdown
Important Parameters defining Transformer project EDIT
###Code
ActivateAttention = False
DoubleQKV = False
TimeShufflingOnly = False
Transformerbatch_size = 1
Transformervalidationfrac = 0.0
UsedTransformervalidationfrac = 0.0
Transformerepochs = 200
Transformeroptimizer ='adam'
Transformerverbose = 0
TransformerOnlyFullAttention = True
d_model =64
d_Attention = 2 * d_model
if TransformerOnlyFullAttention:
d_Attention = d_model
d_qk = d_model
d_intermediateqk = 2 * d_model
num_heads = 2
num_Encoderlayers = 2
EncoderDropout= 0.1
EncoderActivation = 'selu'
d_EncoderLayer = d_Attention
d_merge = d_model
d_ffn = 4*d_model
MaskingOption = 0
PeriodicInputTemporalEncoding = 7 # natural for COVID
LinearInputTemporalEncoding = -1 # natural for COVID
TransformerInputTemporalEncoding = 10000
UseTransformerInputTemporalEncoding = False
###Output
_____no_output_____
###Markdown
General Control Parameters
###Code
OuterBatchDimension = Num_Seq * TrainingNloc
IndividualPlots = False
Plotrealnumbers = False
PlotsOnlyinTestFIPS = True
ListofTestFIPS = ['36061','53033','17031','6037']
if Earthquake:
ListofTestFIPS = ['','']
Plotrealnumbers = True
StartDate = np.datetime64(InitialDate).astype('datetime64[D]') + np.timedelta64(Tseq*Dailyunit + int(Dailyunit/2),'D')
dayrange = np.timedelta64(Dailyunit,'D')
CutoffDate = np.datetime64('1989-01-01')
NumericalCutoff = math.floor((CutoffDate - StartDate)/dayrange)
print('Start ' + str(StartDate) + ' Cutoff ' + str(CutoffDate) + " sequence index " + str(NumericalCutoff))
TimeCutLabel = [' All Time ',' Start ',' End ']
print("Size of sequence window Tseq ", str(Tseq))
print("Number of Sequences in time Num_Seq ", str(Num_Seq))
print("Number of locations Nloc ", str(Nloc))
print("Number of Training Sequences in Location and Time ", str(OuterBatchDimension))
print("Number of internal properties per sequence including static or dynamic Npropperseq ", str(Npropperseq))
print("Number of internal properties per sequence adding in explicit space-time encoding ", str(NpropperseqTOT))
print("Total number of predictions per sequence NpredperseqTOT ", str(NpredperseqTOT))
###Output
_____no_output_____
###Markdown
Useful Time series utilities: DLprediction
Prediction and Visualization for LSTM+Transformer
###Code
def DLprediction(Xin, yin, DLmodel, modelflag, LabelFit =''):
# modelflag = 0 LSTM = 1 Transformer
# Input is the windows [Num_Seq] [Nloc] [Tseq] [NpropperseqTOT] (SymbolicWindows False)
# Input is the sequences [Nloc] [Num_Time-1] [NpropperseqTOT] (SymbolicWindows True)
# Input Predictions are always [Num_Seq] [NLoc] [NpredperseqTOT]
current_time = timenow()
print(startbold + startred + current_time + ' ' + RunName + " DLPrediction " +RunComment + resetfonts)
FitPredictions = np.zeros([Num_Seq, Nloc, NpredperseqTOT], dtype =np.float32)
# Compare to RawInputPredictionsTOT
RMSEbyclass = np.zeros([NpredperseqTOT,3], dtype=np.float64)
RMSETRAINbyclass = np.zeros([NpredperseqTOT,3], dtype=np.float64)
RMSEVALbyclass = np.zeros([NpredperseqTOT,3], dtype=np.float64)
RMSVbyclass = np.zeros([NpredperseqTOT], dtype=np.float64)
AbsEbyclass = np.zeros([NpredperseqTOT], dtype=np.float64)
AbsVbyclass = np.zeros([NpredperseqTOT], dtype=np.float64)
ObsVbytimeandclass = np.zeros([Num_Seq, NpredperseqTOT,3], dtype=np.float64)
Predbytimeandclass = np.zeros([Num_Seq, NpredperseqTOT,3], dtype=np.float64)
countbyclass = np.zeros([NpredperseqTOT,3], dtype=np.float64)
countVALbyclass = np.zeros([NpredperseqTOT,3], dtype=np.float64)
countTRAINbyclass = np.zeros([NpredperseqTOT,3], dtype=np.float64)
totalcount = 0
overcount = 0
weightedcount = 0.0
weightedovercount = 0.0
weightedrmse1 = 0.0
weightedrmse1TRAIN = 0.0
weightedrmse1VAL = 0.0
closs = 0.0
dloss = 0.0
eloss = 0.0
floss = 0.0
sw = np.empty([Nloc,NpredperseqTOT],dtype = np.float32)
for iloc in range(0,Nloc):
for k in range(0,NpredperseqTOT):
sw[iloc,k] = Predictionwgt[k]
global tensorsw
tensorsw = tf.convert_to_tensor(sw, np.float32)
Ctime1 = 0.0
Ctime2 = 0.0
Ctime3 = 0.0
samplebar = notebook.trange(Num_Seq, desc='Predict loop', unit = 'sequences')
countingcalls = 0
for iseq in range(0, Num_Seq):
StopWatch.start('label1')
if SymbolicWindows:
if modelflag == 2:
InputVector = np.empty((Nloc,2), dtype = int)
for iloc in range (0,Nloc):
InputVector[iloc,0] = iloc
InputVector[iloc,1] = iseq
else:
InputVector = Xin[:,iseq:iseq+Tseq,:]
else:
InputVector = Xin[iseq]
Time = None
if modelflag == 0:
InputVector = np.reshape(InputVector,(-1,Tseq,NpropperseqTOT))
elif modelflag == 1:
InputVector = np.reshape(InputVector,(1,Tseq*Nloc,NpropperseqTOT))
BasicTimes = np.full(Nloc,iseq, dtype=np.int32)
Time = SetSpacetime(np.reshape(BasicTimes,(1,-1)))
StopWatch.stop('label1')
Ctime1 += StopWatch.get('label1', digits=4)
StopWatch.start('label2')
PredictedVector = DLmodel(InputVector, training = PredictionTraining, Time=Time)
StopWatch.stop('label2')
Ctime2 += StopWatch.get('label2', digits=4)
StopWatch.start('label3')
PredictedVector = np.reshape(PredictedVector,(Nloc,NpredperseqTOT))
TrueVector = yin[iseq]
functionval = numpycustom_lossGCF1(TrueVector,PredictedVector,sw)
closs += functionval
PredictedVector_t = tf.convert_to_tensor(PredictedVector)
yin_t = tf.convert_to_tensor(TrueVector)
dloss += weightedcustom_lossGCF1(yin_t,PredictedVector_t,tensorsw)
eloss += custom_lossGCF1spec(yin_t,PredictedVector_t)
OutputLoss = 0.0
FitPredictions[iseq] = PredictedVector
for iloc in range(0,Nloc):
yy = yin[iseq,iloc]
yyhat = PredictedVector[iloc]
sum1 = 0.0
for i in range(0,NpredperseqTOT):
overcount += 1
weightedovercount += Predictionwgt[i]
if math.isnan(yy[i]):
continue
weightedcount += Predictionwgt[i]
totalcount += 1
mse1 = ((yy[i]-yyhat[i])**2)
mse = mse1*sw[iloc,i]
if i < Npredperseq:
floss += mse
sum1 += mse
AbsEbyclass[i] += abs(yy[i] - yyhat[i])
RMSVbyclass[i] += yy[i]**2
AbsVbyclass[i] += abs(yy[i])
RMSEbyclass[i,0] += mse
countbyclass[i,0] += 1.0
if iseq < NumericalCutoff:
countbyclass[i,1] += 1.0
RMSEbyclass[i,1] += mse
else:
countbyclass[i,2] += 1.0
RMSEbyclass[i,2] += mse
if LocationBasedValidation:
if MappingtoTraining[iloc] >= 0:
ObsVbytimeandclass [iseq,i,1] += abs(yy[i])
Predbytimeandclass [iseq,i,1] += abs(yyhat[i])
RMSETRAINbyclass[i,0] += mse
countTRAINbyclass[i,0] += 1.0
if iseq < NumericalCutoff:
RMSETRAINbyclass[i,1] += mse
countTRAINbyclass[i,1] += 1.0
else:
RMSETRAINbyclass[i,2] += mse
countTRAINbyclass[i,2] += 1.0
if MappingtoValidation[iloc] >= 0:
ObsVbytimeandclass [iseq,i,2] += abs(yy[i])
Predbytimeandclass [iseq,i,2] += abs(yyhat[i])
RMSEVALbyclass[i,0] += mse
countVALbyclass[i,0] += 1.0
if iseq < NumericalCutoff:
RMSEVALbyclass[i,1] += mse
countVALbyclass[i,1] += 1.0
else:
RMSEVALbyclass[i,2] += mse
countVALbyclass[i,2] += 1.0
ObsVbytimeandclass [iseq,i,0] += abs(yy[i])
Predbytimeandclass [iseq,i,0] += abs(yyhat[i])
weightedrmse1 += sum1
if LocationBasedValidation:
if MappingtoTraining[iloc] >= 0:
weightedrmse1TRAIN += sum1
if MappingtoValidation[iloc] >= 0:
weightedrmse1VAL += sum1
OutputLoss += sum1
StopWatch.stop('label3')
Ctime3 += StopWatch.get('label3', digits=4)
OutputLoss /= Nloc
countingcalls += 1
samplebar.update(1)
samplebar.set_postfix( Call = countingcalls, TotalLoss = OutputLoss)
print('Times ' + str(round(Ctime1,5)) + ' ' + str(round(Ctime3,5)) + ' TF ' + str(round(Ctime2,5)))
weightedrmse1 /= (Num_Seq * Nloc)
floss /= (Num_Seq * Nloc)
if LocationBasedValidation:
weightedrmse1TRAIN /= (Num_Seq * TrainingNloc)
if ValidationNloc>0:
weightedrmse1VAL /= (Num_Seq * ValidationNloc)
dloss = dloss.numpy()
eloss = eloss.numpy()
closs /= Num_Seq
dloss /= Num_Seq
eloss /= Num_Seq
current_time = timenow()
line1 = ''
global GlobalTrainingLoss, GlobalValidationLoss, GlobalLoss
GlobalLoss = weightedrmse1
if LocationBasedValidation:
line1 = ' Training ' + str(round(weightedrmse1TRAIN,6)) + ' Validation ' + str(round(weightedrmse1VAL,6))
GlobalTrainingLoss = weightedrmse1TRAIN
GlobalValidationLoss = weightedrmse1VAL
print( startbold + startred + current_time + ' DLPrediction Averages' + ' ' + RunName + ' ' + RunComment + resetfonts)
line = LabelFit + ' ' + RunName + ' Weighted sum over predicted values ' + str(round(weightedrmse1,6))
line += ' No Encoding Preds ' + str(round(floss,6)) + line1
line += ' from loss function ' + str(round(closs,6)) + ' TF version ' + str(round(dloss,6)) + ' TFspec version ' + str(round(eloss,6))
print(wraptotext(line))
    print('Count ignoring NaN ' +str(round(weightedcount,4))+ ' Counting NaN ' + str(round(weightedovercount,4)))
print(' Unwgt Count no NaN ',totalcount, ' Unwgt Count with NaN ',overcount, ' Number Sequences ', Nloc*Num_Seq)
ObsvPred = np.sum( np.abs(ObsVbytimeandclass-Predbytimeandclass) , axis=0)
TotalObs = np.sum( ObsVbytimeandclass , axis=0)
SummedEbyclass = np.divide(ObsvPred,TotalObs)
RMSEbyclass1 = np.divide(RMSEbyclass,countbyclass) # NO SQRT
RMSEbyclass2 = np.sqrt(np.divide(RMSEbyclass[:,0],RMSVbyclass))
RelEbyclass = np.divide(AbsEbyclass, AbsVbyclass)
extracomments = []
line1 = '\nErrors by Prediction Components -- class weights not included except in final Loss components\n Name Count without NaN, '
line2 = 'sqrt(sum errors**2/sum target**2), sum(abs(error)/sum(abs(value), abs(sum(abs(value)-abs(pred)))/sum(abs(pred)'
print(wraptotext(startbold + startred + line1 + line2 + resetfonts))
countbasic = 0
for i in range(0,NpredperseqTOT):
line = startbold + startred + ' AVG MSE '
for timecut in range(0,3):
line += TimeCutLabel[timecut] + 'Full ' + str(round(RMSEbyclass1[i,timecut],6)) + resetfonts
if LocationBasedValidation:
RTRAIN = np.divide(RMSETRAINbyclass[i],countTRAINbyclass[i])
RVAL = np.full(3,0.0, dtype =np.float32)
if countVALbyclass[i,0] > 0:
RVAL = np.divide(RMSEVALbyclass[i],countVALbyclass[i])
for timecut in range(0,3):
line += startbold + startpurple + TimeCutLabel[timecut] + 'TRAIN ' + resetfonts + str(round(RTRAIN[timecut],6))
line += startbold + ' VAL ' + resetfonts + str(round(RVAL[timecut],6))
else:
RTRAIN = RMSEbyclass1[i]
RVAL = np.full(3,0.0, dtype =np.float32)
print(wraptotext(str(i) + ' ' + startbold + Predictionname[i] + resetfonts + ' All Counts ' + str(round(countbyclass[i,0],0)) + ' IndE^2/IndObs^2 '
+ str(round(100.0*RMSEbyclass2[i],2)) + '% IndE/IndObs ' + str(round(100.0*RelEbyclass[i],2)) + '% summedErr/SummedObs ' + str(round(100.0*SummedEbyclass[i,0],2)) + '%' +line ) )
Trainline = 'AVG MSE F=' + str(round(RTRAIN[0],6)) + ' S=' + str(round(RTRAIN[1],6)) + ' E=' + str(round(RTRAIN[2],6)) + ' TOTAL summedErr/SummedObs ' + str(round(100.0*SummedEbyclass[i,1],2)) + '%'
Valline = 'AVG MSE F=' + str(round(RVAL[0],6)) + ' S=' + str(round(RVAL[1],6)) + ' E=' + str(round(RVAL[2],6)) + ' TOTAL summedErr/SummedObs ' + str(round(100.0*SummedEbyclass[i,2],2)) + '%'
extracomments.append([Trainline, Valline] )
countbasic += 1
if countbasic == NumpredbasicperTime:
countbasic = 0
print(' ')
# Don't use DLPrediction for Transformer Plots. Wait for DL2B,D,E
if modelflag == 1:
return FitPredictions
FindNNSE(yin, FitPredictions)
print('\n Next plots come from DLPrediction')
PredictedQuantity = -NumpredbasicperTime
for ifuture in range (0,1+LengthFutures):
increment = NumpredbasicperTime
if ifuture > 1:
increment = NumpredFuturedperTime
PredictedQuantity += increment
if not PlotPredictions[PredictedQuantity]:
continue
Dumpplot = False
if PredictedQuantity ==0:
Dumpplot = True
Location_summed_plot(ifuture, yin, FitPredictions, extracomments = extracomments, Dumpplot = Dumpplot)
if IndividualPlots:
ProduceIndividualPlots(yin, FitPredictions)
if Earthquake and EarthquakeImagePlots:
ProduceSpatialQuakePlot(yin, FitPredictions)
# Call DLprediction2F here if modelflag=0
DLprediction2F(Xin, yin, DLmodel, modelflag)
return FitPredictions
###Output
_____no_output_____
###Markdown
Spatial Earthquake Plots
###Code
def ProduceSpatialQuakePlot(Observations, FitPredictions):
current_time = timenow()
print(startbold + startred + current_time + ' Produce Spatial Earthquake Plots ' + RunName + ' ' + RunComment + resetfonts)
dayindexmax = Num_Seq-Plottingdelay
Numdates = 4
denom = 1.0/np.float64(Numdates-1)
for plotdays in range(0,Numdates):
dayindexvalue = math.floor(0.1 + (plotdays*dayindexmax)*denom)
if dayindexvalue < 0:
dayindexvalue = 0
if dayindexvalue > dayindexmax:
dayindexvalue = dayindexmax
FixedTimeSpatialQuakePlot(dayindexvalue,Observations, FitPredictions)
def EQrenorm(casesdeath,value):
if Plotrealnumbers:
predaveragevaluespointer = PredictionAverageValuesPointer[casesdeath]
newvalue = value/QuantityStatistics[predaveragevaluespointer,2] + QuantityStatistics[predaveragevaluespointer,0]
rootflag = QuantityTakeroot[predaveragevaluespointer]
if rootflag == 2:
newvalue = newvalue**2
if rootflag == 3:
newvalue = newvalue**3
else:
newvalue=value
return newvalue
def FixedTimeSpatialQuakePlot(PlotTime,Observations, FitPredictions):
Actualday = InitialDate + timedelta(days=(PlotTime+Tseq))
print(startbold + startred + ' Spatial Earthquake Plots ' + Actualday.strftime("%d/%m/%Y") + ' ' + RunName + ' ' + RunComment + resetfonts)
NlocationsPlotted = Nloc
real = np.zeros([NumpredbasicperTime,NlocationsPlotted])
predict = np.zeros([NumpredbasicperTime,NlocationsPlotted])
print('Ranges for Prediction numbers/names/property pointer')
for PredictedQuantity in range(0,NumpredbasicperTime):
for iloc in range(0,NlocationsPlotted):
real[PredictedQuantity,iloc] = EQrenorm(PredictedQuantity,Observations[PlotTime, iloc, PredictedQuantity])
predict[PredictedQuantity,iloc] = EQrenorm(PredictedQuantity,FitPredictions[PlotTime, iloc, PredictedQuantity])
localmax1 = real[PredictedQuantity].max()
localmin1 = real[PredictedQuantity].min()
localmax2 = predict[PredictedQuantity].max()
localmin2 = predict[PredictedQuantity].min()
predaveragevaluespointer = PredictionAverageValuesPointer[PredictedQuantity]
expectedmax = QuantityStatistics[predaveragevaluespointer,1]
expectedmin = QuantityStatistics[predaveragevaluespointer,0]
print(' Real max/min ' + str(round(localmax1,3)) + ' ' + str(round(localmin1,3))
+ ' Predicted max/min ' + str(round(localmax2,3)) + ' ' + str(round(localmin2,3))
+ ' Overall max/min ' + str(round(expectedmax,3)) + ' ' + str(round(expectedmin,3))
+ str(PredictedQuantity) + ' ' + Predictionbasicname[PredictedQuantity] + str(predaveragevaluespointer))
InputImages =[]
InputTitles =[]
for PredictedQuantity in range(0,NumpredbasicperTime):
InputImages.append(real[PredictedQuantity])
InputTitles.append(Actualday.strftime("%d/%m/%Y") + ' Observed ' + Predictionbasicname[PredictedQuantity])
InputImages.append(predict[PredictedQuantity])
InputTitles.append(Actualday.strftime("%d/%m/%Y") + ' Predicted ' + Predictionbasicname[PredictedQuantity])
plotimages(InputImages,InputTitles,NumpredbasicperTime,2)
###Output
_____no_output_____
###Markdown
Organize Location v Time Plots
###Code
def ProduceIndividualPlots(Observations, FitPredictions):
current_time = timenow()
print(startbold + startred + current_time + ' Produce Individual Plots ' + RunName + ' ' + RunComment + resetfonts)
# Find Best and Worst Locations
fips_b, fips_w = bestandworst(Observations, FitPredictions)
if Hydrology or Earthquake:
plot_by_fips(fips_b, Observations, FitPredictions)
plot_by_fips(fips_w, Observations, FitPredictions)
else:
plot_by_fips(6037, Observations, FitPredictions)
plot_by_fips(36061, Observations, FitPredictions)
plot_by_fips(17031, Observations, FitPredictions)
plot_by_fips(53033, Observations, FitPredictions)
if (fips_b!=6037) and (fips_b!=36061) and (fips_b!=17031) and (fips_b!=53033):
plot_by_fips(fips_b, Observations, FitPredictions)
if (fips_w!=6037) and (fips_w!=36061) and (fips_w!=17031) and (fips_w!=53033):
plot_by_fips(fips_w, Observations, FitPredictions)
# Plot top 10 largest cities
sortedcities = np.flip(np.argsort(Locationpopulation))
for pickout in range (0,10):
Locationindex = sortedcities[pickout]
fips = Locationfips[Locationindex]
if not(Hydrology or Earthquake):
if fips == 6037 or fips == 36061 or fips == 17031 or fips == 53033:
continue
if fips == fips_b or fips == fips_w:
continue
plot_by_fips(fips, Observations, FitPredictions)
if LengthFutures > 1:
plot_by_futureindex(2, Observations, FitPredictions)
if LengthFutures > 6:
plot_by_futureindex(7, Observations, FitPredictions)
if LengthFutures > 11:
plot_by_futureindex(12, Observations, FitPredictions)
return
def bestandworst(Observations, FitPredictions):
current_time = timenow()
print(startbold + startred + current_time + ' ' + RunName + " Best and Worst " +RunComment + resetfonts)
keepabserrorvalues = np.zeros([Nloc,NumpredbasicperTime], dtype=np.float64)
keepRMSEvalues = np.zeros([Nloc,NumpredbasicperTime], dtype=np.float64)
testabserrorvalues = np.zeros(Nloc, dtype=np.float64)
testRMSEvalues = np.zeros(Nloc, dtype=np.float64)
real = np.zeros([NumpredbasicperTime,Num_Seq], dtype=np.float64)
predictsmall = np.zeros([NumpredbasicperTime,Num_Seq], dtype=np.float64)
c_error_props = np.zeros([NumpredbasicperTime], dtype=np.float64)
for icity in range(0,Nloc):
validcounts = np.zeros([NumpredbasicperTime], dtype=np.float64)
RMSE = np.zeros([NumpredbasicperTime], dtype=np.float64)
for PredictedQuantity in range(0,NumpredbasicperTime):
for itime in range (0,Num_Seq):
if not math.isnan(Observations[itime, icity, PredictedQuantity]):
real[PredictedQuantity,itime] = Observations[itime, icity, PredictedQuantity]
predictsmall[PredictedQuantity,itime] = FitPredictions[itime, icity, PredictedQuantity]
validcounts[PredictedQuantity] += 1.0
RMSE[PredictedQuantity] += (Observations[itime, icity, PredictedQuantity]-FitPredictions[itime, icity, PredictedQuantity])**2
c_error_props[PredictedQuantity] = cumulative_error(predictsmall[PredictedQuantity], real[PredictedQuantity]) # abs(error) as percentage
keepabserrorvalues[icity,PredictedQuantity] = c_error_props[PredictedQuantity]
keepRMSEvalues[icity,PredictedQuantity] = RMSE[PredictedQuantity] *100. / validcounts[PredictedQuantity]
testabserror = 0.0
testRMSE = 0.0
for PredictedQuantity in range(0,NumpredbasicperTime):
testabserror += c_error_props[PredictedQuantity]
testRMSE += keepRMSEvalues[icity,PredictedQuantity]
testabserrorvalues[icity] = testabserror
testRMSEvalues[icity] = testRMSE
sortingindex = np.argsort(testabserrorvalues)
bestindex = sortingindex[0]
worstindex = sortingindex[Nloc-1]
fips_b = Locationfips[bestindex]
fips_w = Locationfips[worstindex]
current_time = timenow()
print( startbold + "\n" + current_time + " Best " + str(fips_b) + " " + Locationname[bestindex] + " " + Locationstate[bestindex] + ' ABS(error) ' +
str(round(testabserrorvalues[bestindex],2)) + ' RMSE ' + str(round(testRMSEvalues[bestindex],2)) + resetfonts)
for topcities in range(0,10):
localindex = sortingindex[topcities]
printstring = str(topcities) + ") " + str(Locationfips[localindex]) + " " + Locationname[localindex] + " ABS(error) Total " + str(round(testabserrorvalues[localindex],4)) + " Components "
for PredictedQuantity in range(0,NumpredbasicperTime):
printstring += ' ' + str(round(keepabserrorvalues[localindex,PredictedQuantity],2))
print(printstring)
print("\nlist RMSE")
for topcities in range(0,9):
localindex = sortingindex[topcities]
printstring = str(topcities) + ") " + str(Locationfips[localindex]) + " " + Locationname[localindex] + " RMSE Total " + str(round(testRMSEvalues[localindex],4)) + " Components "
for PredictedQuantity in range(0,NumpredbasicperTime):
printstring += ' ' + str(round(keepRMSEvalues[localindex,PredictedQuantity],2))
print(printstring)
print( startbold + "\n" + current_time + " Worst " + str(fips_w) + " " + Locationname[worstindex] + " " + Locationstate[worstindex] + ' ABS(error) ' +
str(round(testabserrorvalues[worstindex],2)) + ' RMSE ' + str(round(testRMSEvalues[worstindex],2)) + resetfonts)
for badcities in range(Nloc-1,Nloc-11,-1):
localindex = sortingindex[badcities]
printstring = str(badcities) + ") " + str(Locationfips[localindex]) + " " + Locationname[localindex] + " ABS(error) Total " + str(round(testabserrorvalues[localindex],4)) + " Components "
for PredictedQuantity in range(0,NumpredbasicperTime):
printstring += ' ' + str(round(keepabserrorvalues[localindex,PredictedQuantity],2))
print(printstring)
print("\nlist RMSE")
    for badcities in range(Nloc-1,Nloc-11,-1):
localindex = sortingindex[badcities]
printstring = str(badcities) + ") " + str(Locationfips[localindex]) + " " + Locationname[localindex] + " RMSE Total " + str(round(testRMSEvalues[localindex],4)) + " Components "
for PredictedQuantity in range(0,NumpredbasicperTime):
printstring += ' ' + str(round(keepRMSEvalues[localindex,PredictedQuantity],2))
print(printstring)
return fips_b,fips_w
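
# Tiny illustration (added, not part of the pipeline): np.argsort over the per-location
# error totals, as used in bestandworst above, puts the best location first and the
# worst location last. The error values here are hypothetical.
_toyerrors = np.array([3.2, 0.7, 5.9, 1.4])
_toyorder = np.argsort(_toyerrors)
print('toy best index', _toyorder[0], 'toy worst index', _toyorder[-1])  # -> 1 and 2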
###Output
_____no_output_____
###Markdown
Summed & By Location Plots
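Several of the plot titles below report a cumulative absolute relative error over the plotted time steps, $c_{\mathrm{error}} = 100 \cdot \sum_t |\mathrm{real}_t - \mathrm{predicted}_t| \,/\, \sum_t |\mathrm{real}_t|$ (the `c_error` variable in the code).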
###Code
def setValTrainlabel(iValTrain):
if SeparateValandTrainingPlots:
if iValTrain == 0:
Overalllabel = 'Training '
if GlobalTrainingLoss > 0.0001:
Overalllabel += str(round(GlobalTrainingLoss,5)) + ' '
if iValTrain == 1:
Overalllabel = 'Validation '
if GlobalValidationLoss > 0.0001:
Overalllabel += str(round(GlobalValidationLoss,5)) + ' '
else:
Overalllabel = 'Full ' + str(round(GlobalLoss,5)) + ' '
Overalllabel += RunName + ' '
return Overalllabel
def Location_summed_plot(selectedfuture, Observations, FitPredictions, fill=True, otherlabs= [], otherfits=[], extracomments = None, Dumpplot = False):
# plot sum over locations
current_time = timenow()
print(wraptotext(startbold + startred + current_time + ' Location_summed_plot ' + RunName + ' ' + RunComment + resetfonts))
otherlen = len(otherlabs)
basiclength = Num_Seq
predictlength = LengthFutures
if (not UseFutures) or (selectedfuture > 0):
predictlength = 0
totallength = basiclength + predictlength
if extracomments is None:
extracomments = []
for PredictedQuantity in range(0,NpredperseqTOT):
extracomments.append([' ',''])
NumberValTrainLoops = 1
if SeparateValandTrainingPlots:
NumberValTrainLoops = 2
selectedfield = NumpredbasicperTime + NumpredFuturedperTime*(selectedfuture-1)
selectednumplots = NumpredFuturedperTime
if selectedfuture == 0:
selectedfield = 0
selectednumplots = NumpredbasicperTime
ActualQuantity = np.arange(selectednumplots,dtype=np.int32)
if selectedfuture > 0:
for ipred in range(0,NumpredbasicperTime):
ifuture = FuturedPointer[ipred]
if ifuture >= 0:
ActualQuantity[ifuture] = ipred
real = np.zeros([selectednumplots,NumberValTrainLoops,basiclength])
predictsmall = np.zeros([selectednumplots,NumberValTrainLoops,basiclength])
predict = np.zeros([selectednumplots,NumberValTrainLoops,totallength])
if otherlen!=0:
otherpredict = np.zeros([otherlen,selectednumplots,NumberValTrainLoops, totallength])
for PlottedIndex in range(0,selectednumplots):
PredictedPos = PlottedIndex+selectedfield
ActualObservable = ActualQuantity[PlottedIndex]
for iValTrain in range(0,NumberValTrainLoops):
for iloc in range(0,Nloc):
if SeparateValandTrainingPlots:
if iValTrain == 0:
if MappingtoTraining[iloc] < 0:
continue
if iValTrain == 1:
if MappingtoTraining[iloc] >= 0:
continue
for itime in range (0,Num_Seq):
if np.math.isnan(Observations[itime, iloc, PredictedPos]):
real[PlottedIndex,iValTrain,itime] += FitPredictions[itime, iloc, PredictedPos]
else:
real[PlottedIndex,iValTrain,itime] += Observations[itime, iloc, PredictedPos]
predict[PlottedIndex,iValTrain,itime] += FitPredictions[itime, iloc, PredictedPos]
for others in range (0,otherlen):
otherpredict[others,PlottedIndex,iValTrain,itime] += FitPredictions[itime, iloc, PredictedPos] + otherfits[others,itime, iloc, PredictedPos]
if selectedfuture == 0:
if FuturedPointer[PlottedIndex] >= 0:
for ifuture in range(selectedfuture,LengthFutures):
jfuture = NumpredbasicperTime + NumpredFuturedperTime*ifuture
predict[PlottedIndex,iValTrain,Num_Seq+ifuture] += FitPredictions[itime, iloc,
FuturedPointer[PlottedIndex] + jfuture]
for others in range (0,otherlen):
otherpredict[others,PlottedIndex,iValTrain,Num_Seq+ifuture] += FitPredictions[itime, iloc, PlottedIndex + jfuture] + otherfits[others, itime, iloc, PlottedIndex + jfuture]
for itime in range(0,basiclength):
predictsmall[PlottedIndex,iValTrain,itime] = predict[PlottedIndex,iValTrain,itime]
error = np.absolute(real - predictsmall)
xsmall = np.arange(0,Num_Seq)
neededrows = math.floor((selectednumplots*NumberValTrainLoops +1.1)/2)
iValTrain = -1
PlottedIndex = -1
for rowloop in range(0,neededrows):
plt.rcParams["figure.figsize"] = [16,6]
figure, (ax1,ax2) = plt.subplots(nrows=1, ncols=2)
for kplot in range (0,2):
if NumberValTrainLoops == 2:
iValTrain = kplot
else:
iValTrain = 0
if iValTrain == 0:
PlottedIndex +=1
if PlottedIndex > (selectednumplots-1):
PlottedIndex = selectednumplots-1
Overalllabel = setValTrainlabel(iValTrain)
PredictedPos = PlottedIndex+selectedfield
ActualObservable = ActualQuantity[PlottedIndex]
eachplt = ax1
if kplot == 1:
eachplt = ax2
Overalllabel = 'Full '
if SeparateValandTrainingPlots:
if iValTrain == 0:
Overalllabel = 'Training '
if GlobalTrainingLoss > 0.0001:
Overalllabel += str(round(GlobalTrainingLoss,5)) + ' '
if iValTrain == 1:
Overalllabel = 'Validation '
if GlobalValidationLoss > 0.0001:
Overalllabel += str(round(GlobalValidationLoss,5)) + ' '
else:
Overalllabel += RunName + ' ' + str(round(GlobalLoss,5)) + ' '
maxplot = np.float32(totallength)
if UseRealDatesonplots:
StartDate = np.datetime64(InitialDate).astype('datetime64[D]') + np.timedelta64(Tseq*Dailyunit + math.floor(Dailyunit/2),'D')
EndDate = StartDate + np.timedelta64(totallength*Dailyunit)
datemin, datemax = makeadateplot(figure,eachplt, datemin=StartDate, datemax=EndDate)
Dateplot = True
Dateaxis = np.empty(totallength, dtype = 'datetime64[D]')
Dateaxis[0] = StartDate
for idate in range(1,totallength):
Dateaxis[idate] = Dateaxis[idate-1] + np.timedelta64(Dailyunit,'D')
else:
Dateplot = False
datemin = 0.0
datemax = maxplot
sumreal = 0.0
sumerror = 0.0
for itime in range(0,Num_Seq):
sumreal += abs(real[PlottedIndex,iValTrain,itime])
sumerror += error[PlottedIndex,iValTrain,itime]
c_error = round(100.0*sumerror/sumreal,2)
if UseRealDatesonplots:
eachplt.plot(Dateaxis[0:real.shape[-1]],real[PlottedIndex,iValTrain,:], label=f'real')
eachplt.plot(Dateaxis,predict[PlottedIndex,iValTrain,:], label='prediction')
eachplt.plot(Dateaxis[0:error.shape[-1]],error[PlottedIndex,iValTrain,:], label=f'error', color="red")
for others in range (0,otherlen):
eachplt.plot(Dateaxis[0:otherpredict.shape[-1]],otherpredict[others,PlottedIndex,iValTrain,:], label=otherlabs[others])
if fill:
eachplt.fill_between(Dateaxis[0:predictsmall.shape[-1]], predictsmall[PlottedIndex,iValTrain,:],
real[PlottedIndex,iValTrain,:], alpha=0.1, color="grey")
eachplt.fill_between(Dateaxis[0:error.shape[-1]], error[PlottedIndex,iValTrain,:], alpha=0.05, color="red")
else:
eachplt.plot(real[PlottedIndex,iValTrain,:], label=f'real')
eachplt.plot(predict[PlottedIndex,iValTrain,:], label='prediction')
eachplt.plot(error[PlottedIndex,iValTrain,:], label=f'error', color="red")
for others in range (0,otherlen):
eachplt.plot(otherpredict[others,PlottedIndex,iValTrain,:], label=otherlabs[others])
if fill:
eachplt.fill_between(xsmall, predictsmall[PlottedIndex,iValTrain,:], real[PlottedIndex,iValTrain,:],
alpha=0.1, color="grey")
eachplt.fill_between(xsmall, error[PlottedIndex,iValTrain,:], alpha=0.05, color="red")
if Earthquake and AddSpecialstoSummedplots:
if NumberValTrainLoops == 2:
if iValTrain == 0:
Addfixedearthquakes(eachplt, datemin, datemax, quakecolor = 'black', Dateplot = Dateplot,
vetoquake = PrimaryTrainingvetoquake)
Addfixedearthquakes(eachplt, datemin, datemax, quakecolor = 'purple', Dateplot = Dateplot,
vetoquake = SecondaryTrainingvetoquake)
else:
Addfixedearthquakes(eachplt, datemin, datemax, quakecolor = 'black', Dateplot = Dateplot,
vetoquake = PrimaryValidationvetoquake)
Addfixedearthquakes(eachplt, datemin, datemax, quakecolor = 'purple', Dateplot = Dateplot,
vetoquake = SecondaryValidationvetoquake)
else:
vetoquake = np.full(numberspecialeqs,False, dtype = bool)
Addfixedearthquakes(eachplt, datemin, datemax, quakecolor = 'black', Dateplot = Dateplot,
vetoquake = vetoquake)
extrastring = Overalllabel + current_time + ' ' + RunName + " "
extrastring += f"Length={Num_Seq}, Location Summed Results {Predictionbasicname[ActualObservable]}, "
yaxislabel = Predictionbasicname[ActualObservable]
if selectedfuture > 0:
yaxislabel = Predictionname[PredictedPos]
extrastring += " FUTURE " + yaxislabel
newyaxislabel = yaxislabel.replace("Months","Months\n")
newyaxislabel = newyaxislabel.replace("weeks","weeks\n")
newyaxislabel = newyaxislabel.replace("year","year\n")
eachplt.text(0.05,0.75,"FUTURE \n" + newyaxislabel,transform=eachplt.transAxes, color="red",fontsize=14, fontweight='bold')
extrastring += extracomments[PredictedPos][iValTrain]
eachplt.set_title('\n'.join(wrap(extrastring,70)))
if Dateplot:
eachplt.set_xlabel('Years')
else:
eachplt.set_xlabel(TimeIntervalUnitName+'s')
eachplt.set_ylabel(yaxislabel, color="red",fontweight='bold')
eachplt.grid(False)
eachplt.legend()
figure.tight_layout()
if Dumpplot and Dumpoutkeyplotsaspics:
VT = 'Both'
if NumberValTrainLoops == 1:
VT='Full'
SAVEFIG(plt, APPLDIR +'/Outputs/DLResults' + VT + str(PredictedPos) +RunName + '.png')
plt.show()
# Produce more detailed plots in time
# ONLY done for first quantity
splitsize = Plotsplitsize
if splitsize <= 1:
return
Numpoints = math.floor((Num_Seq+0.001)/splitsize)
extraone = Num_Seq%Numpoints
neededrows = math.floor((splitsize*NumberValTrainLoops +1.1)/2)
iValTrain = -1
PlottedIndex = 0
iseqnew = 0
counttimes = 0
for rowloop in range(0,neededrows):
plt.rcParams["figure.figsize"] = [16,6]
figure, (ax1,ax2) = plt.subplots(nrows=1, ncols=2)
for kplot in range (0,2):
if NumberValTrainLoops == 2:
iValTrain = kplot
else:
iValTrain = 0
Overalllabel = setValTrainlabel(iValTrain)
eachplt = ax1
if kplot == 1:
eachplt = ax2
sumreal = 0.0
sumerror = 0.0
if iValTrain == 0:
iseqold = iseqnew
iseqnew = iseqold + Numpoints
if counttimes < extraone:
iseqnew +=1
counttimes += 1
for itime in range(iseqold,iseqnew):
sumreal += abs(real[PlottedIndex,iValTrain,itime])
sumerror += error[PlottedIndex,iValTrain,itime]
c_error = round(100.0*sumerror/sumreal,2)
eachplt.plot(xsmall[iseqold:iseqnew],predict[PlottedIndex,iValTrain,iseqold:iseqnew], label='prediction')
eachplt.plot(xsmall[iseqold:iseqnew],real[PlottedIndex,iValTrain,iseqold:iseqnew], label=f'real')
eachplt.plot(xsmall[iseqold:iseqnew],error[PlottedIndex,iValTrain,iseqold:iseqnew], label=f'error', color="red")
if fill:
                eachplt.fill_between(xsmall[iseqold:iseqnew], predictsmall[PlottedIndex,iValTrain,iseqold:iseqnew], real[PlottedIndex,iValTrain,iseqold:iseqnew], alpha=0.1, color="grey")
eachplt.fill_between(xsmall[iseqold:iseqnew], error[PlottedIndex,iValTrain,iseqold:iseqnew], alpha=0.05, color="red")
extrastring = Overalllabel + current_time + ' ' + RunName + " " + f"Range={iseqold}, {iseqnew} Rel Error {c_error} Location Summed Results {Predictionbasicname[PredictedPos]}, "
eachplt.set_title('\n'.join(wrap(extrastring,70)))
eachplt.set_xlabel(TimeIntervalUnitName+'s')
eachplt.set_ylabel(Predictionbasicname[PredictedPos])
eachplt.grid(True)
eachplt.legend()
figure.tight_layout()
plt.show()
def normalizeforplot(casesdeath,Locationindex,value):
if np.math.isnan(value):
return value
if Plotrealnumbers:
predaveragevaluespointer = PredictionAverageValuesPointer[casesdeath]
newvalue = value/QuantityStatistics[predaveragevaluespointer,2] + QuantityStatistics[predaveragevaluespointer,0]
rootflag = QuantityTakeroot[predaveragevaluespointer]
if rootflag == 2:
newvalue = newvalue**2
if rootflag == 3:
newvalue = newvalue**3
else:
newvalue = value
if PopulationNorm:
newvalue *= Locationpopulation[Locationindex]
return newvalue
# PLOT individual city data
def plot_by_fips(fips, Observations, FitPredictions, dots=True, fill=True):
Locationindex = FIPSintegerlookup[fips]
current_time = timenow()
print(startbold + startred + current_time + ' plot by location ' + str(Locationindex) + ' ' + str(fips) + ' ' + Locationname[Locationindex] + ' ' +RunName + ' ' + RunComment + resetfonts)
basiclength = Num_Seq
predictlength = LengthFutures
if not UseFutures:
predictlength = 0
totallength = basiclength + predictlength
real = np.zeros([NumpredbasicperTime,basiclength])
predictsmall = np.zeros([NumpredbasicperTime,basiclength])
predict = np.zeros([NumpredbasicperTime,totallength])
for PredictedQuantity in range(0,NumpredbasicperTime):
for itime in range (0,Num_Seq):
if np.math.isnan(Observations[itime, Locationindex, PredictedQuantity]):
Observations[itime, Locationindex, PredictedQuantity] = FitPredictions[itime, Locationindex, PredictedQuantity]
else:
real[PredictedQuantity,itime] = normalizeforplot(PredictedQuantity, Locationindex, Observations[itime, Locationindex, PredictedQuantity])
predict[PredictedQuantity,itime] = normalizeforplot(PredictedQuantity, Locationindex, FitPredictions[itime, Locationindex, PredictedQuantity])
if FuturedPointer[PredictedQuantity] >= 0:
for ifuture in range(0,LengthFutures):
jfuture = NumpredbasicperTime + NumpredFuturedperTime*ifuture
predict[PredictedQuantity,Num_Seq+ifuture] += normalizeforplot(PredictedQuantity,Locationindex,
FitPredictions[itime, Locationindex, FuturedPointer[PredictedQuantity] + jfuture])
for itime in range(0,basiclength):
predictsmall[PredictedQuantity,itime] = predict[PredictedQuantity,itime]
error = np.absolute(real - predictsmall)
xsmall = np.arange(0,Num_Seq)
neededrows = math.floor((NumpredbasicperTime +1.1)/2)
iplot = -1
for rowloop in range(0,neededrows):
plt.rcParams["figure.figsize"] = [16,6]
figure, (ax1,ax2) = plt.subplots(nrows=1, ncols=2)
for kplot in range (0,2):
iplot +=1
if iplot > (NumpredbasicperTime-1):
iplot = NumpredbasicperTime-1
eachplt = ax1
if kplot == 1:
eachplt = ax2
sumreal = 0.0
sumerror = 0.0
for itime in range(0,Num_Seq):
sumreal += abs(real[iplot,itime])
sumerror += error[iplot,itime]
c_error = round(100.0*sumerror/sumreal,2)
RMSEstring = ''
if not Plotrealnumbers:
sumRMSE = 0.0
count = 0.0
for itime in range(0,Num_Seq):
sumRMSE += (real[iplot,itime] - predict[iplot,itime])**2
count += 1.0
RMSE_error = round(100.0*sumRMSE/count,4)
RMSEstring = ' RMSE ' + str(RMSE_error)
x = list(range(0, totallength))
if dots:
eachplt.scatter(x, predict[iplot])
eachplt.scatter(xsmall, real[iplot])
eachplt.plot(predict[iplot], label=f'{fips} prediction')
eachplt.plot(real[iplot], label=f'{fips} real')
eachplt.plot(error[iplot], label=f'{fips} error', color="red")
if fill:
eachplt.fill_between(xsmall, predictsmall[iplot], real[iplot], alpha=0.1, color="grey")
eachplt.fill_between(xsmall, error[iplot], alpha=0.05, color="red")
name = Locationname[Locationindex]
if Plotrealnumbers:
name = "Actual Numbers " + name
stringpopulation = " "
if not Hydrology:
stringpopulation = " Population " +str(Locationpopulation[Locationindex])
titlestring = current_time + ' ' + RunName + f" {name}, Label={fips}" + stringpopulation + f" Length={Num_Seq}, Abs Rel Error={c_error}%" + RMSEstring + ' ' + RunName
eachplt.set_title('\n'.join(wrap(titlestring,70)))
eachplt.set_xlabel(TimeIntervalUnitName+'s')
eachplt.set_ylabel(Predictionbasicname[iplot])
eachplt.grid(True)
eachplt.legend()
figure.tight_layout()
plt.show();
def cumulative_error(real,predicted):
error = np.absolute(real-predicted).sum()
basevalue = np.absolute(real).sum()
return 100.0*error/basevalue
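
# Quick worked example (added): cumulative_error reports sum|real - predicted| as a
# percentage of sum|real|. The arrays below are hypothetical.
print('cumulative_error toy value:',
      cumulative_error(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 4.0])))  # ~16.67 (%)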
# Plot summed results by Prediction Type
# selectedfuture one more than usual future index
def plot_by_futureindex(selectedfuture, Observations, FitPredictions, fill=True, extrastring=''):
current_time = timenow()
print(startbold + startred + current_time + ' plot by Future Index ' + str(selectedfuture) + ' ' + RunName + ' ' + RunComment + resetfonts)
selectedfield = NumpredbasicperTime + NumpredFuturedperTime*(selectedfuture-1)
if selectedfuture == 0:
selectedfield = 0
real = np.zeros([NumpredFuturedperTime,Num_Seq])
predictsmall = np.zeros([NumpredFuturedperTime,Num_Seq])
validdata = 0
for PredictedQuantity in range(0,NumpredFuturedperTime):
for iloc in range(0,Nloc):
for itime in range (0,Num_Seq):
real[PredictedQuantity,itime] += Observations[itime, iloc, selectedfield+PredictedQuantity]
predictsmall[PredictedQuantity,itime] += FitPredictions[itime, iloc, selectedfield+PredictedQuantity]
for itime in range (0,Num_Seq):
if np.math.isnan(real[PredictedQuantity,itime]):
real[PredictedQuantity,itime] = predictsmall[PredictedQuantity,itime]
else:
if PredictedQuantity == 0:
validdata += 1
error = np.absolute(real - predictsmall)
xsmall = np.arange(0,Num_Seq)
neededrows = math.floor((NumpredFuturedperTime +1.1)/2)
iplot = -1
for rowloop in range(0,neededrows):
plt.rcParams["figure.figsize"] = [16,6]
figure, (ax1,ax2) = plt.subplots(nrows=1, ncols=2)
for kplot in range (0,2):
iplot +=1
            if iplot > (NumpredFuturedperTime-1):
                iplot = NumpredFuturedperTime-1
eachplt = ax1
if kplot == 1:
eachplt = ax2
sumreal = 0.0
sumerror = 0.0
for itime in range(0,Num_Seq):
sumreal += abs(real[iplot,itime])
sumerror += error[iplot,itime]
c_error = round(100.0*sumerror/sumreal,2)
eachplt.plot(predictsmall[iplot,:], label='prediction')
eachplt.plot(real[iplot,:], label=f'real')
eachplt.plot(error[iplot,:], label=f'error', color="red")
if fill:
eachplt.fill_between(xsmall, predictsmall[iplot,:], real[iplot,:], alpha=0.1, color="grey")
eachplt.fill_between(xsmall, error[iplot,:], alpha=0.05, color="red")
errorstring= " Error % " + str(c_error)
printstring = current_time + " Future Index " + str(selectedfuture) + " " + RunName
printstring += " " + f"Length={Num_Seq}, Location Summed Results {Predictionbasicname[iplot]}, " + errorstring + " " + extrastring
eachplt.set_title('\n'.join(wrap(printstring,70)))
eachplt.set_xlabel(TimeIntervalUnitName+'s')
eachplt.set_ylabel(Predictionbasicname[iplot])
eachplt.grid(True)
eachplt.legend()
figure.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Calculate NNSE
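NNSE here is the normalized Nash–Sutcliffe efficiency. With observations $O_t$, predictions $P_t$ and observation mean $\bar O$, the code below evaluates, per location and for the location-summed series,

$$\mathrm{NNSE} = \frac{\sum_t (O_t-\bar O)^2}{\sum_t (O_t-\bar O)^2 + \sum_t (O_t-P_t)^2} = \frac{1}{2-\mathrm{NSE}},$$

so a perfect fit gives 1 and a mean-value predictor gives 0.5.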
###Code
# Calculate NNSE
# NNSE = Sum (Observations - Mean)^2 / [Sum (Observations - Mean)^2 + Sum (Observations - Predictions)^2]
def FindNNSE(Observations, FitPredictions, Label=''):
NNSEList = np.empty(NpredperseqTOT, dtype = int)
NumberNNSEcalc = 0
for ipred in range(0,NpredperseqTOT):
if CalculateNNSE[ipred]:
NNSEList[NumberNNSEcalc] = ipred
NumberNNSEcalc +=1
if NumberNNSEcalc == 0:
return
StoreNNSE = np.zeros([Nloc,NumberNNSEcalc], dtype = np.float64)
basiclength = Num_Seq
current_time = timenow()
print(wraptotext(startbold + startred + current_time + ' Calculate NNSE ' + Label + ' ' +RunName + ' ' + RunComment + resetfonts))
for NNSEpredindex in range(0,NumberNNSEcalc):
PredictedQuantity = NNSEList[NNSEpredindex]
averageNNSE = 0.0
averageNNSETraining = 0.0
averageNNSEValidation = 0.0
line = ''
for Locationindex in range(0, Nloc):
QTObssq = 0.0
QTDiffsq = 0.0
QTObssum = 0.0
for itime in range (0,Num_Seq):
Observed = Observations[itime, Locationindex, PredictedQuantity]
if np.math.isnan(Observed):
Observed = FitPredictions[itime, Locationindex, PredictedQuantity]
real = normalizeforplot(PredictedQuantity, Locationindex, Observed)
predict = normalizeforplot(PredictedQuantity, Locationindex, FitPredictions[itime,
Locationindex, PredictedQuantity])
QTObssq += real**2
QTDiffsq += (real-predict)**2
QTObssum += real
Obsmeasure = QTObssq - (QTObssum**2 / Num_Seq )
StoreNNSE[Locationindex,NNSEpredindex] = Obsmeasure / (Obsmeasure +QTDiffsq )
if MappingtoTraining[Locationindex] >= 0:
averageNNSETraining += StoreNNSE[Locationindex,NNSEpredindex]
if MappingtoValidation[Locationindex] >= 0:
averageNNSEValidation += StoreNNSE[Locationindex,NNSEpredindex]
averageNNSE += StoreNNSE[Locationindex,NNSEpredindex]
line += str(round(StoreNNSE[Locationindex,NNSEpredindex],3)) + ' '
if ValidationNloc > 0:
averageNNSEValidation = averageNNSEValidation / ValidationNloc
averageNNSETraining = averageNNSETraining / TrainingNloc
averageNNSE = averageNNSE / Nloc
# Location Summed
QTObssq = 0.0
QTDiffsq = 0.0
QTObssum = 0.0
QTObssqT = 0.0
QTDiffsqT = 0.0
QTObssumT = 0.0
QTObssqV = 0.0
QTDiffsqV = 0.0
QTObssumV = 0.0
for itime in range (0,Num_Seq):
real = 0.0
predict = 0.0
realT = 0.0
predictT = 0.0
realV = 0.0
predictV = 0.0
for Locationindex in range(0, Nloc):
Observed = Observations[itime, Locationindex, PredictedQuantity]
if np.math.isnan(Observed):
Observed = FitPredictions[itime, Locationindex, PredictedQuantity]
localreal = normalizeforplot(PredictedQuantity, Locationindex, Observed)
localpredict = normalizeforplot(PredictedQuantity, Locationindex, FitPredictions[itime,
Locationindex, PredictedQuantity])
real += localreal
predict += localpredict
if MappingtoTraining[Locationindex] >= 0:
realT += localreal
predictT += localpredict
if MappingtoValidation[Locationindex] >= 0:
realV += localreal
predictV += localpredict
QTObssq += real**2
QTDiffsq += (real-predict)**2
QTObssum += real
QTObssqT += realT**2
QTDiffsqT += (realT-predictT)**2
QTObssumT += realT
QTObssqV += realV**2
QTDiffsqV += (realV-predictV)**2
QTObssumV += realV
Obsmeasure = QTObssq - (QTObssum**2 / Num_Seq )
SummedNNSE = Obsmeasure / (Obsmeasure +QTDiffsq )
ObsmeasureT = QTObssqT - (QTObssumT**2 / Num_Seq )
SummedNNSET = ObsmeasureT / (ObsmeasureT +QTDiffsqT )
ObsmeasureV = QTObssqV - (QTObssumV**2 / Num_Seq )
if ValidationNloc > 0:
SummedNNSEV = ObsmeasureV / (ObsmeasureV +QTDiffsqV )
else:
SummedNNSEV = 0.0
line = ''
if PredictedQuantity >= NumpredbasicperTime:
line = startred + 'Future ' + resetfonts
print(wraptotext(line + 'NNSE ' + startbold + Label + ' ' + str(PredictedQuantity) + ' ' + Predictionname[PredictedQuantity] + startred + ' Averaged ' +
str(round(averageNNSE,3)) + resetfonts + ' Training ' + str(round(averageNNSETraining,3)) +
' Validation ' + str(round(averageNNSEValidation,3)) + startred + startbold + ' Summed ' +
str(round(SummedNNSE,3)) + resetfonts + ' Training ' + str(round(SummedNNSET,3)) +
' Validation ' + str(round(SummedNNSEV,3)), size=200))
def weightedcustom_lossGCF1(y_actual, y_pred, sample_weight):
tupl = np.shape(y_actual)
flagGCF = tf.math.is_nan(y_actual)
y_actual = y_actual[tf.math.logical_not(flagGCF)]
y_pred = y_pred[tf.math.logical_not(flagGCF)]
sw = sample_weight[tf.math.logical_not(flagGCF)]
tensordiff = tf.math.reduce_sum(tf.multiply(tf.math.square(y_actual-y_pred),sw))
if len(tupl) >= 2:
tensordiff /= tupl[0]
if len(tupl) >= 3:
tensordiff /= tupl[1]
if len(tupl) >= 4:
tensordiff /= tupl[2]
return tensordiff
def numpycustom_lossGCF1(y_actual, y_pred, sample_weight):
tupl = np.shape(y_actual)
flagGCF = np.isnan(y_actual)
y_actual = y_actual[np.logical_not(flagGCF)]
y_pred = y_pred[np.logical_not(flagGCF)]
sw = sample_weight[np.logical_not(flagGCF)]
tensordiff = np.sum(np.multiply(np.square(y_actual-y_pred),sw))
if len(tupl) >= 2:
tensordiff /= tupl[0]
if len(tupl) >= 3:
tensordiff /= tupl[1]
if len(tupl) >= 4:
tensordiff /= tupl[2]
return tensordiff
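
# Toy check (added for illustration; arrays are hypothetical): NaN targets are masked
# out of the weighted squared error before summing, and the sum is divided by the
# leading dimension (2 rows here), giving (2*(1-1.5)^2 + 1*(3-2)^2)/2 = 0.75.
_yt = np.array([[1.0, np.nan], [2.0, 3.0]])
_yh = np.array([[1.5, 0.0], [2.0, 2.0]])
_wt = np.array([[2.0, 1.0], [1.0, 1.0]])
print('numpycustom_lossGCF1 toy value:', numpycustom_lossGCF1(_yt, _yh, _wt))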
###Output
_____no_output_____
###Markdown
Custom Loss Functions
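These are masked mean-squared-error variants: NaN targets are dropped (or, in `custom_lossGCF4`, their differences zeroed) before the optionally weighted squared error is summed, and the sum is divided by the leading tensor dimensions rather than by the count of valid entries, roughly

$$L \approx \frac{1}{\prod_k d_k}\sum_{\text{valid } i} w_i\,\bigl(y_i-\hat y_i\bigr)^2 .$$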
###Code
def custom_lossGCF1(y_actual,y_pred):
tupl = np.shape(y_actual)
flagGCF = tf.math.is_nan(y_actual)
y_actual = y_actual[tf.math.logical_not(flagGCF)]
y_pred = y_pred[tf.math.logical_not(flagGCF)]
tensordiff = tf.math.reduce_sum(tf.math.square(y_actual-y_pred))
if len(tupl) >= 2:
tensordiff /= tupl[0]
if len(tupl) >= 3:
tensordiff /= tupl[1]
if len(tupl) >= 4:
tensordiff /= tupl[2]
return tensordiff
@tf.autograph.experimental.do_not_convert
def custom_lossGCF1spec(y_actual,y_pred):
global tensorsw
tupl = np.shape(y_actual)
flagGCF = tf.math.is_nan(y_actual)
y_actual = y_actual[tf.math.logical_not(flagGCF)]
y_pred = y_pred[tf.math.logical_not(flagGCF)]
sw = tensorsw[tf.math.logical_not(flagGCF)]
tensordiff = tf.math.reduce_sum(tf.multiply(tf.math.square(y_actual-y_pred),sw))
if len(tupl) >= 2:
tensordiff /= tupl[0]
if len(tupl) >= 3:
tensordiff /= tupl[1]
if len(tupl) >= 4:
tensordiff /= tupl[2]
return tensordiff
def custom_lossGCF1A(y_actual,y_pred):
print(np.shape(y_actual), np.shape(y_pred))
flagGCF = tf.math.is_nan(y_actual)
y_actual = y_actual[tf.math.logical_not(flagGCF)]
y_pred = y_pred[tf.math.logical_not(flagGCF)]
tensordiff = tf.math.square(y_actual-y_pred)
return tf.math.reduce_mean(tensordiff)
# Basic TF does NOT supply sample_weight
def custom_lossGCF1B(y_actual,y_pred,sample_weight=None):
tupl = np.shape(y_actual)
flagGCF = tf.math.is_nan(y_actual)
y_actual = y_actual[tf.math.logical_not(flagGCF)]
y_pred = y_pred[tf.math.logical_not(flagGCF)]
sw = sample_weight[tf.math.logical_not(flagGCF)]
tensordiff = tf.math.reduce_sum(tf.multiply(tf.math.square(y_actual-y_pred),sw))
if len(tupl) >= 2:
tensordiff /= tupl[0]
if len(tupl) >= 3:
tensordiff /= tupl[1]
if len(tupl) >= 4:
tensordiff /= tupl[2]
return tensordiff
def custom_lossGCF4(y_actual,y_pred):
tensordiff = y_actual-y_pred
newtensordiff = tf.where(tf.math.is_nan(tensordiff), tf.zeros_like(tensordiff), tensordiff)
return tf.math.reduce_mean(tf.math.square(newtensordiff))
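
# Illustrative check (added; assumes the TensorFlow eager mode already used in this
# notebook, and toy tensors that are purely hypothetical): custom_lossGCF4 zeroes NaN
# differences but still counts those positions in the mean, unlike custom_lossGCF1,
# which drops them before summing.
_yt4 = tf.constant([[1.0, float('nan')], [2.0, 3.0]])
_yh4 = tf.constant([[1.5, 0.0], [2.0, 2.0]])
print('custom_lossGCF4 toy value:', float(custom_lossGCF4(_yt4, _yh4)))  # (0.25+0+0+1)/4 = 0.3125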
###Output
_____no_output_____
###Markdown
Utility: Shuffle, Finalize
###Code
def SetSpacetime(BasicTimes):
global GlobalTimeMask
Time = None
if (MaskingOption == 0) or (not GlobalSpacetime):
return Time
NumTOTAL = BasicTimes.shape[1]
BasicTimes = BasicTimes.astype(np.int16)
BasicTimes = np.reshape(BasicTimes,(BasicTimes.shape[0],NumTOTAL,1))
addons = np.arange(0,Tseq,dtype =np.int16)
addons = np.reshape(addons,(1,1,Tseq))
Time = BasicTimes+addons
Time = np.reshape(Time,(BasicTimes.shape[0], NumTOTAL*Tseq))
BasicPureTime = np.arange(0,Tseq,dtype =np.int16)
BasicPureTime = np.reshape(BasicPureTime,(Tseq,1))
GlobalTimeMask = tf.where( (BasicPureTime-np.transpose(BasicPureTime))>0, 0.0,1.0)
GlobalTimeMask = np.reshape(GlobalTimeMask,(1,1,1,Tseq,Tseq))
return Time
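
# Small numpy illustration (added) of the mask built above, for Tseq = 3:
# entry (i, j) is 0.0 when i > j and 1.0 otherwise.
_t3 = np.arange(3).reshape(3, 1)
print(np.where((_t3 - _t3.T) > 0, 0.0, 1.0))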
def shuffleDLinput(Xin,yin,AuxiliaryArray=None, Spacetime=None):
# Auxiliary array could be weight or location/time tracker
# These are per batch so sorted axis is first
np.random.seed(int.from_bytes(os.urandom(4), byteorder='little'))
trainingorder = list(range(0, len(Xin)))
random.shuffle(trainingorder)
Xinternal = list()
yinternal = list()
if AuxiliaryArray is not None:
AuxiliaryArrayinternal = list()
if Spacetime is not None:
Spacetimeinternal = list()
for i in trainingorder:
Xinternal.append(Xin[i])
yinternal.append(yin[i])
if AuxiliaryArray is not None:
AuxiliaryArrayinternal.append(AuxiliaryArray[i])
if Spacetime is not None:
Spacetimeinternal.append(Spacetime[i])
X = np.array(Xinternal)
y = np.array(yinternal)
if (AuxiliaryArray is None) and (Spacetime is None):
return X, y
if (AuxiliaryArray is not None) and (Spacetime is None):
AA = np.array(AuxiliaryArrayinternal)
return X,y,AA
if (AuxiliaryArray is None) and (Spacetime is not None):
St = np.array(Spacetimeinternal)
return X,y,St
AA = np.array(AuxiliaryArrayinternal)
St = np.array(Spacetimeinternal)
return X,y,AA,St
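
# Minimal sanity sketch (added): shuffleDLinput permutes the leading axis while keeping
# X and y rows paired. Toy arrays only; numpy/random/os come from earlier cells, as
# required by shuffleDLinput itself.
_Xtoy = np.arange(6).reshape(6, 1)
_ytoy = _Xtoy * 10
_Xs, _ys = shuffleDLinput(_Xtoy, _ytoy)
assert np.array_equal(_Xs * 10, _ys)  # pairing survives the shuffle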
# Simple Plot of Loss from history
def finalizeDL(ActualModel, recordtrainloss, recordvalloss, validationfrac, X_in, y_in, modelflag, LabelFit =''):
    # Output Loss vs Epoch
histlen = len(recordtrainloss)
trainloss = recordtrainloss[histlen-1]
plt.rcParams["figure.figsize"] = [8,6]
plt.plot(recordtrainloss)
if (validationfrac > 0.001) and len(recordvalloss) > 0:
valloss = recordvalloss[histlen-1]
plt.plot(recordvalloss)
else:
valloss = 0.0
current_time = timenow()
print(startbold + startred + current_time + ' ' + RunName + ' finalizeDL ' + RunComment +resetfonts)
plt.title(LabelFit + ' ' + RunName+' model loss ' + str(round(trainloss,7)) + ' Val ' + str(round(valloss,7)))
plt.ylabel('loss')
plt.xlabel('epoch')
plt.yscale("log")
plt.grid(True)
plt.legend(['train', 'val'], loc='upper left')
plt.show()
    # modelflag == 2: predict and visualize with the TFT model
if modelflag == 2:
global SkipDL2F, IncreaseNloc_sample, DecreaseNloc_sample
SkipDL2F = True
IncreaseNloc_sample = 1
DecreaseNloc_sample = 1
TFToutput_map = TFTpredict(TFTmodel,TFTtest_datacollection)
VisualizeTFT(TFTmodel, TFToutput_map)
else:
FitPredictions = DLprediction(X_in, y_in,ActualModel,modelflag, LabelFit = LabelFit)
for debugfips in ListofTestFIPS:
if debugfips != '':
debugfipsoutput(debugfips, FitPredictions, X_in, y_in)
return
def debugfipsoutput(debugfips, FitPredictions, Xin, Observations):
    print(startbold + startred + 'debugfipsoutput for ' + str(debugfips) + ' ' + RunName + ' ' + RunComment +resetfonts)
# Set Location Number in Arrays
LocationNumber = FIPSstringlookup[debugfips]
# Sequences to look at
Seqcount = 5
Seqnumber = np.empty(Seqcount, dtype = int)
Seqnumber[0] = 0
Seqnumber[1] = int(Num_Seq/4)-1
Seqnumber[2] = int(Num_Seq/2)-1
Seqnumber[3] = int((3*Num_Seq)/4) -1
Seqnumber[4] = Num_Seq-1
# Window Positions to look at
Wincount = 5
Winnumber = np.empty(Wincount, dtype = int)
Winnumber[0] = 0
Winnumber[1] = int(Tseq/4)-1
Winnumber[2] = int(Tseq/2)-1
Winnumber[3] = int((3*Tseq)/4) -1
Winnumber[4] = Tseq-1
if SymbolicWindows:
InputSequences = np.empty([Seqcount,Wincount, NpropperseqTOT], dtype=np.float32)
for jseq in range(0,Seqcount):
iseq = Seqnumber[jseq]
for jwindow in range(0,Wincount):
window = Winnumber[jwindow]
InputSequences[jseq,jwindow] = Xin[LocationNumber,iseq+jseq]
else:
InputSequences = Xin
# Location Info
print('\n' + startbold + startred + debugfips + ' # ' + str(LocationNumber) + ' ' +
Locationname[LocationNumber] + ' ' + Locationstate[LocationNumber] + ' Pop '
+ str(Locationpopulation[LocationNumber]) + resetfonts)
plot_by_fips(int(debugfips), Observations, FitPredictions)
if PlotsOnlyinTestFIPS:
return
# Print Input Data to Test
# Static Properties
print(startbold + startred + 'Static Properties ' + debugfips + ' ' +
Locationname[LocationNumber] + resetfonts)
line = ''
for iprop in range(0,NpropperTimeStatic):
if SymbolicWindows:
val = InputSequences[0,0,iprop]
else:
val = InputSequences[0,LocationNumber,0,iprop]
line += startbold + InputPropertyNames[PropertyNameIndex[iprop]] + resetfonts + ' ' + str(round(val,3)) + ' '
print('\n'.join(wrap(line,200)))
# Dynamic Properties
for iprop in range(NpropperTimeStatic, NpropperTime):
print('\n')
for jwindow in range(0,Wincount):
window = Winnumber[jwindow]
line = startbold + InputPropertyNames[PropertyNameIndex[iprop]] + ' W= '+str(window) +resetfonts
for jseq in range(0,Seqcount):
iseq = Seqnumber[jseq]
line += startbold + startred + ' ' + str(iseq) + ')' +resetfonts
if SymbolicWindows:
val = InputSequences[jseq,jwindow,iprop]
else:
val = InputSequences[iseq,LocationNumber,window,iprop]
line += ' ' + str(round(val,3))
print('\n'.join(wrap(line,200)))
# Total Input
print('\n')
line = startbold + 'Props: ' + resetfonts
for iprop in range(0,NpropperseqTOT):
if iprop%5 == 0:
line += startbold + startred + ' ' + str(iprop) + ')' + resetfonts
line += ' ' + InputPropertyNames[PropertyNameIndex[iprop]]
print('\n'.join(wrap(line,200)))
for jseq in range(0,Seqcount):
iseq = Seqnumber[jseq]
for jwindow in range(0,Wincount):
window = Winnumber[jwindow]
line = startbold + 'Input: All in Seq ' + str(iseq) + ' W= ' + str(window) + resetfonts
for iprop in range(0,NpropperseqTOT):
if iprop%5 == 0:
line += startbold + startred + ' ' + str(iprop) + ')' +resetfonts
if SymbolicWindows:
val = InputSequences[jseq,jwindow,iprop]
else:
val = InputSequences[iseq,LocationNumber,window,iprop]
result = str(round(val,3))
line += ' ' + result
print('\n'.join(wrap(line,200)))
# Total Prediction
print('\n')
line = startbold + 'Preds: ' + resetfonts
for ipred in range(0,NpredperseqTOT):
if ipred%5 == 0:
line += startbold + startred + ' ' + str(ipred) + ')' + resetfonts
line += ' ' + Predictionname[ipred]
for jseq in range(0,Seqcount):
iseq = Seqnumber[jseq]
line = startbold + 'Preds: All in Seq ' + str(iseq) + resetfonts
for ipred in range(0,NpredperseqTOT):
fred = Observations[iseq,LocationNumber,ipred]
if np.math.isnan(fred):
result = 'NaN'
else:
result = str(round(fred,3))
if ipred%5 == 0:
line += startbold + startred + ' ' + str(ipred) + ')' + resetfonts
line += ' ' + result
print('\n'.join(wrap(line,200)))
###Output
_____no_output_____
###Markdown
DLPrediction2E printloss ?DEL
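`printloss` converts an accumulated sum and sum of squares into a mean and standard deviation over `SampleSize` samples, $\sigma=\sqrt{\overline{x^2}-\bar{x}^{\,2}}$. `DLprediction2E` then repeats prediction with attention restricted separately to the training and the validation location sets and reports the resulting losses and NNSE.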
###Code
def printloss(name,mean,var,SampleSize, lineend =''):
mean /= SampleSize
var /= SampleSize
    std = math.sqrt(max(var - mean**2, 0.0))  # clamp to avoid a domain error from rounding
print(name + ' Mean ' + str(round(mean,5)) + ' Std Deviation ' + str(round(std,7)) + ' ' + lineend)
def DLprediction2E(Xin, yin, DLmodel, modelflag):
# Form restricted Attention separately over Training and Validation
if not LocationBasedValidation:
return
if UsedTransformervalidationfrac < 0.001 or ValidationNloc <= 0:
return
if SkipDL2E:
return
if GarbageCollect:
gc.collect()
SampleSize = 1
FitRanges_PartialAtt = np.zeros([Num_Seq, Nloc, NpredperseqTOT,5], dtype =np.float32)
FRanges = np.full(NpredperseqTOT, 1.0, dtype = np.float32)
    # last index of FitRanges_PartialAtt: 0=count, 1=sum (later mean), 2=sum of squares (later std), 3=max, 4=min
    current_time = timenow()
    print(wraptotext(startbold+startred+ 'DLPrediction2E Partial Attention ' +current_time + ' ' + RunName + RunComment + resetfonts))
global OuterBatchDimension, Nloc_sample, d_sample, max_d_sample
global FullSetValidation
saveFullSetValidation = FullSetValidation
FullSetValidation = False
X_predict, y_predict, Spacetime_predict, X_val, y_val, Spacetime_val = setSeparateDLinput(1, Spacetime = True)
FullSetValidation = saveFullSetValidation
Nloc_sample = TrainingNloc
OuterBatchDimension = Num_Seq
d_sample = Tseq * TrainingNloc
max_d_sample = d_sample
UsedValidationNloc = ValidationNloc
if SymbolicWindows:
X_Transformertraining = np.reshape(X_predict, (OuterBatchDimension, Nloc_sample))
else:
X_Transformertraining = np.reshape(X_predict, (OuterBatchDimension, d_sample, NpropperseqTOT))
y_Transformertraining = np.reshape(y_predict, (OuterBatchDimension, Nloc_sample, NpredperseqTOT))
Spacetime_Transformertraining = np.reshape(Spacetime_predict, (OuterBatchDimension, Nloc_sample))
if SymbolicWindows:
X_Transformerval = np.reshape(X_val, (OuterBatchDimension, UsedValidationNloc))
else:
X_Transformerval = np.reshape(X_val, (OuterBatchDimension, UsedValidationNloc*Tseq, NpropperseqTOT))
y_Transformerval = np.reshape(y_val, (OuterBatchDimension, UsedValidationNloc, NpredperseqTOT))
Spacetime_Transformerval = np.reshape(Spacetime_val, (OuterBatchDimension, UsedValidationNloc))
if UseClassweights:
sw_Transformertraining = np.empty_like(y_predict, dtype=np.float32)
for i in range(0,sw_Transformertraining.shape[0]):
for j in range(0,sw_Transformertraining.shape[1]):
for k in range(0,NpredperseqTOT):
sw_Transformertraining[i,j,k] = Predictionwgt[k]
sw_Transformerval = np.empty_like(y_val, dtype=np.float32)
for i in range(0,sw_Transformerval.shape[0]):
for jloc in range(0,sw_Transformerval.shape[1]):
for k in range(0,NpredperseqTOT):
sw_Transformerval[i,jloc,k] = Predictionwgt[k]
else:
sw_Transformertraining = []
sw_Transformerval = []
if SymbolicWindows:
X_Transformertrainingflat2 = np.reshape(X_Transformertraining, (-1, TrainingNloc))
X_Transformertrainingflat1 = np.reshape(X_Transformertrainingflat2, (-1))
else:
X_Transformertrainingflat2 = np.reshape(X_Transformertraining, (-1, TrainingNloc,Tseq, NpropperseqTOT))
X_Transformertrainingflat1 = np.reshape(X_Transformertrainingflat2, (-1, Tseq, NpropperseqTOT))
y_Transformertrainingflat1 = np.reshape(y_Transformertraining, (-1,NpredperseqTOT) )
Spacetime_Transformertrainingflat1 = np.reshape(Spacetime_Transformertraining,(-1))
if UseClassweights:
sw_Transformertrainingflat1 = np.reshape(sw_Transformertraining, (-1,NpredperseqTOT) )
if SymbolicWindows:
X_Transformervalflat2 = np.reshape(X_Transformerval, (-1, UsedValidationNloc))
X_Transformervalflat1 = np.reshape(X_Transformervalflat2, (-1))
else:
X_Transformervalflat2 = np.reshape(X_Transformerval, (-1, UsedValidationNloc,Tseq, NpropperseqTOT))
X_Transformervalflat1 = np.reshape(X_Transformervalflat2, (-1, Tseq, NpropperseqTOT))
y_Transformervalflat1 = np.reshape(y_Transformerval, (-1,NpredperseqTOT) )
Spacetime_Transformervalflat1 = np.reshape(Spacetime_Transformerval,(-1))
if UseClassweights:
sw_Transformervalflat1 = np.reshape(sw_Transformerval, (-1,NpredperseqTOT) )
meanvalue2 = 0.0
meanvalue3 = 0.0
meanvalue4 = 0.0
variance2= 0.0
variance3= 0.0
variance4= 0.0
# START LOOP OVER SAMPLES
samplebar = notebook.trange(SampleSize, desc='Full Samples', unit = 'sample')
epochsize = 2*OuterBatchDimension
if IncreaseNloc_sample > 1:
epochsize = int(epochsize/IncreaseNloc_sample)
elif DecreaseNloc_sample > 1:
epochsize = int(epochsize*DecreaseNloc_sample)
bbar = notebook.trange(epochsize, desc='Batch loop', unit = 'sample')
for shuffling in range (0,SampleSize):
if GarbageCollect:
gc.collect()
# TRAINING SET
if TimeShufflingOnly:
X_train, y_train, sw_train, Spacetime_train = shuffleDLinput(X_Transformertraining,
y_Transformertraining, sw_Transformertraining, Spacetime = Spacetime_Transformertraining)
else:
X_train, y_train, sw_train, Spacetime_train = shuffleDLinput(X_Transformertrainingflat1,
y_Transformertrainingflat1, sw_Transformertrainingflat1, Spacetime = Spacetime_Transformertrainingflat1)
Nloc_sample = TrainingNloc
OuterBatchDimension = Num_Seq
Totaltodo = Nloc_sample*OuterBatchDimension
if IncreaseNloc_sample > 1:
Nloc_sample = int(Nloc_sample*IncreaseNloc_sample)
elif DecreaseNloc_sample > 1:
Nloc_sample = int(Nloc_sample/DecreaseNloc_sample)
OuterBatchDimension = int(Totaltodo/Nloc_sample)
if OuterBatchDimension * Nloc_sample != Totaltodo:
printexit('Inconsistent Nloc_sample ' + str(Nloc_sample))
d_sample = Tseq * Nloc_sample
max_d_sample = d_sample
if SymbolicWindows:
X_train = np.reshape(X_train, (OuterBatchDimension, Nloc_sample))
else:
X_train = np.reshape(X_train, (OuterBatchDimension, d_sample, NpropperseqTOT))
y_train = np.reshape(y_train, (OuterBatchDimension, Nloc_sample, NpredperseqTOT))
sw_train = np.reshape(sw_train, (OuterBatchDimension, Nloc_sample, NpredperseqTOT))
Spacetime_train = np.reshape(Spacetime_train, (OuterBatchDimension, Nloc_sample))
quan3 = 0.0
quan4 = 0.0
losspercallVl = 0.0
losspercallTr = 0.0
TotalTr = 0.0
TotalVl = 0.0
for Trainingindex in range(0, OuterBatchDimension):
if GarbageCollect:
gc.collect()
X_trainlocal = X_train[Trainingindex]
if SymbolicWindows:
X_trainlocal = np.reshape(X_trainlocal,[1,X_trainlocal.shape[0]])
else:
X_trainlocal = np.reshape(X_trainlocal,[1,X_trainlocal.shape[0],X_trainlocal.shape[1]])
Numinbatch = X_trainlocal.shape[0]
NuminAttention = X_trainlocal.shape[1]
NumTOTAL = Numinbatch*NuminAttention
            # SymbolicWindows: X_train rows hold packed (sequence, location) codes indexed by [batch, attention position]; the window (Tseq) and property axes are rebuilt below from ReshapedSequencesTOT
if SymbolicWindows:
X_trainlocal = np.reshape(X_trainlocal,NumTOTAL)
iseqarray = np.right_shift(X_trainlocal,16)
ilocarray = np.bitwise_and(X_trainlocal, 0b1111111111111111)
X_train_withSeq = list()
for iloc in range(0,NumTOTAL):
X_train_withSeq.append(ReshapedSequencesTOT[ilocarray[iloc],iseqarray[iloc]:iseqarray[iloc]+Tseq])
X_train_withSeq = np.array(X_train_withSeq)
X_train_withSeq = np.reshape(X_train_withSeq,(Numinbatch, d_sample, NpropperseqTOT))
Time = None
if modelflag==1:
Time = SetSpacetime(np.reshape(iseqarray,[Numinbatch,-1]))
PredictedVector = DLmodel(X_train_withSeq, training = PredictionTraining, Time=Time )
else:
Spacetime_trainlocal = Spacetime_train[Trainingindex]
iseqarray = np.right_shift(Spacetime_trainlocal,16)
ilocarray = np.bitwise_and(Spacetime_trainlocal, 0b1111111111111111)
Time = SetSpacetime(np.reshape(iseqarray,[Numinbatch,-1]))
PredictedVector = DLmodel(X_trainlocal, training = PredictionTraining, Time=Time )
PredictedVector = np.reshape(PredictedVector,(1,Nloc_sample,NpredperseqTOT))
TrueVector = y_train[Trainingindex]
TrueVector = np.reshape(TrueVector,(1,Nloc_sample,NpredperseqTOT))
sw_trainlocal = sw_train[Trainingindex]
sw_trainlocal = np.reshape(sw_trainlocal,[1,sw_trainlocal.shape[0],sw_trainlocal.shape[1]])
losspercallTr = numpycustom_lossGCF1(TrueVector,PredictedVector,sw_trainlocal)
quan3 += losspercallTr
for iloc_sample in range(0,Nloc_sample):
LocLocal = ilocarray[iloc_sample]
SeqLocal = iseqarray[iloc_sample]
yyhat = PredictedVector[0,iloc_sample]
if FitRanges_PartialAtt [SeqLocal, LocLocal, 0, 0] < 0.1:
FitRanges_PartialAtt [SeqLocal,LocLocal,:,3] = yyhat
FitRanges_PartialAtt [SeqLocal,LocLocal,:,4] = yyhat
else:
FitRanges_PartialAtt [SeqLocal,LocLocal,:,3] = np.maximum(FitRanges_PartialAtt[SeqLocal,LocLocal,:,3],yyhat)
FitRanges_PartialAtt [SeqLocal,LocLocal,:,4] = np.minimum(FitRanges_PartialAtt[SeqLocal,LocLocal,:,4],yyhat)
FitRanges_PartialAtt [SeqLocal,LocLocal,:,0] += FRanges
FitRanges_PartialAtt[SeqLocal,LocLocal,:,1] += yyhat
FitRanges_PartialAtt[SeqLocal,LocLocal,:,2] += np.square(yyhat)
fudge = 1.0/(1+Trainingindex)
TotalTr = quan3 *fudge
bbar.set_postfix(TotalTr = TotalTr, Tr = losspercallTr)
bbar.update(Transformerbatch_size)
# END Training Batch Loop
TotalTr= quan3/OuterBatchDimension
# VALIDATION SET
Nloc_sample = UsedValidationNloc
OuterBatchDimension = Num_Seq
Totaltodo = Nloc_sample*OuterBatchDimension
if IncreaseNloc_sample > 1:
Nloc_sample = int(Nloc_sample*IncreaseNloc_sample)
elif DecreaseNloc_sample > 1:
Nloc_sample = int(Nloc_sample/DecreaseNloc_sample)
OuterBatchDimension = int(Totaltodo/Nloc_sample)
if OuterBatchDimension * Nloc_sample != Totaltodo:
printexit('Inconsistent Nloc_sample ' + str(Nloc_sample))
d_sample = Tseq * Nloc_sample
max_d_sample = d_sample
if TimeShufflingOnly:
X_val, y_val, sw_val, Spacetime_val = shuffleDLinput(
X_Transformerval, y_Transformerval, sw_Transformerval, Spacetime_Transformerval)
else:
X_val, y_val, sw_val, Spacetime_val = shuffleDLinput(
X_Transformervalflat1, y_Transformervalflat1, sw_Transformervalflat1, Spacetime_Transformervalflat1)
if SymbolicWindows:
X_val = np.reshape(X_val, (OuterBatchDimension, Nloc_sample))
else:
X_val = np.reshape(X_val, (OuterBatchDimension, d_sample, NpropperseqTOT))
y_val = np.reshape(y_val, (OuterBatchDimension, Nloc_sample, NpredperseqTOT))
sw_val = np.reshape(sw_val, (OuterBatchDimension, Nloc_sample, NpredperseqTOT))
Spacetime_val = np.reshape(Spacetime_val, (OuterBatchDimension, Nloc_sample))
# START VALIDATION Batch Loop
for Validationindex in range(0,OuterBatchDimension):
X_valbatch = X_val[Validationindex]
y_valbatch = y_val[Validationindex]
sw_valbatch = sw_val[Validationindex]
Spacetime_valbatch = Spacetime_val[Validationindex]
if SymbolicWindows:
X_valbatch = np.reshape(X_valbatch,[1,X_valbatch.shape[0]])
else:
X_valbatch = np.reshape(X_valbatch,[1,X_valbatch.shape[0],X_valbatch.shape[1]])
y_valbatch = np.reshape(y_valbatch,[1,y_valbatch.shape[0],y_valbatch.shape[1]])
sw_valbatch = np.reshape(sw_valbatch,[1,sw_valbatch.shape[0],sw_valbatch.shape[1]])
Numinbatch = X_valbatch.shape[0]
NuminAttention = X_valbatch.shape[1]
NumTOTAL = Numinbatch*NuminAttention
if SymbolicWindows:
X_valbatch = np.reshape(X_valbatch,NumTOTAL)
iseqarray = np.right_shift(X_valbatch,16)
ilocarray = np.bitwise_and(X_valbatch, 0b1111111111111111)
X_valbatch_withSeq = list()
for iloc in range(0,NumTOTAL):
X_valbatch_withSeq.append(ReshapedSequencesTOT[ilocarray[iloc],iseqarray[iloc]:iseqarray[iloc]+Tseq])
X_valbatch_withSeq = np.array(X_valbatch_withSeq)
X_valbatch_withSeq = np.reshape(X_valbatch_withSeq,(Numinbatch, d_sample, NpropperseqTOT))
Time = SetSpacetime(np.reshape(iseqarray,[Numinbatch,-1]))
PredictedVector = DLmodel(X_valbatch_withSeq, training = PredictionTraining, Time=Time )
else:
Spacetime_valbatch = np.reshape(Spacetime_valbatch,-1)
iseqarray = np.right_shift(Spacetime_valbatch,16)
ilocarray = np.bitwise_and(Spacetime_valbatch, 0b1111111111111111)
Time = SetSpacetime(np.reshape(iseqarray,[Numinbatch,-1]))
PredictedVector = DLmodel(X_valbatch, training = PredictionTraining, Time=Time )
PredictedVector = np.reshape(PredictedVector,(1,Nloc_sample,NpredperseqTOT))
TrueVector = np.reshape(y_valbatch,(1,Nloc_sample,NpredperseqTOT))
sw_valbatch = np.reshape(sw_valbatch,(1,Nloc_sample,NpredperseqTOT))
losspercallVl = numpycustom_lossGCF1(TrueVector,PredictedVector,sw_valbatch)
quan4 += losspercallVl
for iloc_sample in range(0,Nloc_sample):
LocLocal = ilocarray[iloc_sample]
SeqLocal = iseqarray[iloc_sample]
yyhat = PredictedVector[0,iloc_sample]
if FitRanges_PartialAtt [SeqLocal, LocLocal, 0, 0] < 0.1:
FitRanges_PartialAtt [SeqLocal,LocLocal,:,3] = yyhat
FitRanges_PartialAtt [SeqLocal,LocLocal,:,4] = yyhat
else:
FitRanges_PartialAtt [SeqLocal,LocLocal,:,3] = np.maximum(FitRanges_PartialAtt[SeqLocal,LocLocal,:,3],yyhat)
FitRanges_PartialAtt [SeqLocal,LocLocal,:,4] = np.minimum(FitRanges_PartialAtt[SeqLocal,LocLocal,:,4],yyhat)
FitRanges_PartialAtt [SeqLocal,LocLocal,:,0] += FRanges
FitRanges_PartialAtt[SeqLocal,LocLocal,:,1] += yyhat
FitRanges_PartialAtt[SeqLocal,LocLocal,:,2] += np.square(yyhat)
TotalVl = quan4/(1+Validationindex)
losspercall = (TotalTr*TrainingNloc+TotalVl*ValidationNloc)/Nloc
bbar.update(Transformerbatch_size)
bbar.set_postfix(Loss = losspercall, TotalTr = TotalTr, TotalVl= TotalVl, Vl = losspercallVl)
# END VALIDATION BATCH LOOP
# Processing at the end of Sampling Loop
fudge = 1.0/OuterBatchDimension
quan2 = (quan3*TrainingNloc + quan4*ValidationNloc)/Nloc
quan2 *= fudge
meanvalue2 += quan2
variance2 += quan2**2
if LocationBasedValidation:
quan3 *= fudge
quan4 *= fudge
meanvalue3 += quan3
meanvalue4 += quan4
variance3 += quan3**2
variance4 += quan4**2
samplebar.update(1)
if LocationBasedValidation:
samplebar.set_postfix(Shuffle=shuffling, Loss = quan2, Tr = quan3, Val = quan4)
else:
samplebar.set_postfix(Shuffle=shuffling, Loss = quan2)
bbar.reset()
# End Shuffling loop
printloss(' Full Loss ',meanvalue2,variance2,SampleSize)
printloss(' Training Loss ',meanvalue3,variance3,SampleSize)
printloss(' Validation Loss ',meanvalue4,variance4,SampleSize)
global GlobalTrainingLoss, GlobalValidationLoss, GlobalLoss
GlobalLoss = meanvalue2
GlobalTrainingLoss = meanvalue3
GlobalValidationLoss = meanvalue4
FitRanges_PartialAtt[:,:,:,1] = np.divide(FitRanges_PartialAtt[:,:,:,1],FitRanges_PartialAtt[:,:,:,0])
FitRanges_PartialAtt[:,:,:,2] = np.sqrt(np.maximum(np.divide(FitRanges_PartialAtt[:,:,:,2],FitRanges_PartialAtt[:,:,:,0]) -
np.square(FitRanges_PartialAtt[:,:,:,1]), 0.0))
FitPredictions = np.zeros([Num_Seq, Nloc, NpredperseqTOT], dtype =np.float32)
for iseq in range(0,Num_Seq):
for iloc in range(0,Nloc):
FitPredictions[iseq,iloc,:] = FitRanges_PartialAtt[iseq,iloc,:,1]
DLprediction3(yin, FitPredictions, ' Separate Attention mean values')
FindNNSE(yin, FitPredictions, Label='Separate Attention' )
print(startbold+startred+ 'END DLPrediction2E ' +current_time + ' ' + RunName + RunComment +resetfonts)
return
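
# Illustrative sketch (added): the Spacetime / symbolic-window codes decoded above pack
# (sequence index, location index) into one integer, apparently as (iseq << 16) | iloc;
# np.right_shift and np.bitwise_and then recover the two parts. Toy values only.
_iseq, _iloc = 123, 456
_packed = np.int64((_iseq << 16) + _iloc)
assert np.right_shift(_packed, 16) == _iseq
assert np.bitwise_and(_packed, 0b1111111111111111) == _iloc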
###Output
_____no_output_____
###Markdown
DLPrediction2F Sensitivity
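A one-at-a-time sensitivity probe: each selected input property is scaled by `ScaleProperty` (0.99), the predictions are regenerated, and the total absolute change against the unmodified "gold standard" run,

$$S_p=\sum_{t,\ell,k}\bigl|\hat y^{(p)}_{t\ell k}-\hat y^{(0)}_{t\ell k}\bigr|,$$

is reported per property (and optionally per prediction component).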
###Code
def DLprediction2F(Xin, yin, DLmodel, modelflag):
# Input is the windows [Num_Seq] [Nloc] [Tseq] [NpropperseqTOT] (SymbolicWindows False)
# Input is the sequences [Nloc] [Num_Time-1] [NpropperseqTOT] (SymbolicWindows True)
# Input Predictions are always [Num_Seq] [NLoc] [NpredperseqTOT]
# Label Array is always [Num_Seq][Nloc] [0=Window(first sequence)#, 1=Location]
if SkipDL2F:
return
if GarbageCollect:
gc.collect()
global OuterBatchDimension, Nloc_sample, d_sample, max_d_sample
SensitivityAnalyze = np.full((NpropperseqTOT), False, dtype = bool)
SensitivityChange = np.zeros ((NpropperseqTOT), dtype = np.float32)
SensitvitybyPrediction = False
something = 0
SensitivityList = []
for iprop in range(0,NpropperseqTOT):
if SensitivityAnalyze[iprop]:
something +=1
SensitivityList.append(iprop)
if something == 0:
return
ScaleProperty = 0.99
SampleSize = 1
SensitivityFitPredictions = np.zeros([Num_Seq, Nloc, NpredperseqTOT, 1 + something], dtype =np.float32)
FRanges = np.full((NpredperseqTOT), 1.0, dtype = np.float32)
current_time = timenow()
print(wraptotext(startbold+startred+ 'DLPrediction2F ' +current_time + ' ' + RunName + RunComment + resetfonts))
sw = np.empty_like(yin, dtype=np.float32)
for i in range(0,sw.shape[0]):
for j in range(0,sw.shape[1]):
for k in range(0,NpredperseqTOT):
sw[i,j,k] = Predictionwgt[k]
labelarray =np.empty([Num_Seq, Nloc, 2], dtype = np.int32)
for iseq in range(0, Num_Seq):
for iloc in range(0,Nloc):
labelarray[iseq,iloc,0] = iseq
labelarray[iseq,iloc,1] = iloc
Totaltodo = Num_Seq*Nloc
Nloc_sample = Nloc # default
if IncreaseNloc_sample > 1:
Nloc_sample = int(Nloc_sample*IncreaseNloc_sample)
elif DecreaseNloc_sample > 1:
Nloc_sample = int(Nloc_sample/DecreaseNloc_sample)
if Totaltodo%Nloc_sample != 0:
printexit('Invalid Nloc_sample ' + str(Nloc_sample) + " " + str(Totaltodo))
d_sample = Tseq * Nloc_sample
max_d_sample = d_sample
OuterBatchDimension = int(Totaltodo/Nloc_sample)
print(' Predict with ' +str(Nloc_sample) + ' sequences per sample and batch size ' + str(OuterBatchDimension))
print(startbold+startred+ 'Sensitivity using Property ScaleFactor ' + str(round(ScaleProperty,3)) + resetfonts)
for Sensitivities in range(0,1+something):
if Sensitivities == 0: # BASIC unmodified run
iprop = -1
print(startbold+startred+ 'Basic Predictions' + resetfonts)
if SymbolicWindows:
ReshapedSequencesTOTmodified = ReshapedSequencesTOT # NOT used if modelflag == 2
if modelflag == 2:
DLmodel.MakeMapping()
else:
Xinmodified = Xin
else:
iprop = SensitivityList[Sensitivities-1]
maxminplace = PropertyNameIndex[iprop]
lastline = ''
if iprop < Npropperseq:
lastline = ' Normed Mean ' +str(round(QuantityStatistics[maxminplace,5],4))
print(startbold+startred+ 'Property ' + str(iprop) + ' ' + InputPropertyNames[maxminplace] + resetfonts + lastline)
if SymbolicWindows:
if modelflag == 2:
DLmodel.SetupProperty(iprop)
DLmodel.ScaleProperty(ScaleProperty)
DLmodel.MakeMapping()
else:
ReshapedSequencesTOTmodified = np.copy(ReshapedSequencesTOT)
ReshapedSequencesTOTmodified[:,:,iprop] = ScaleProperty * ReshapedSequencesTOTmodified[:,:,iprop]
else:
Xinmodified = np.copy(Xin)
Xinmodified[:,:,:,iprop] = ScaleProperty*Xinmodified[:,:,:,iprop]
CountFitPredictions = np.zeros([Num_Seq, Nloc, NpredperseqTOT], dtype =np.float32)
meanvalue2 = 0.0
meanvalue3 = 0.0
meanvalue4 = 0.0
variance2= 0.0
variance3= 0.0
variance4= 0.0
samplebar = notebook.trange(SampleSize, desc='Full Samples', unit = 'sample')
bbar = notebook.trange(OuterBatchDimension, desc='Batch loop', unit = 'sample')
for shuffling in range (0,SampleSize):
if GarbageCollect:
gc.collect()
yuse = yin
labeluse = labelarray
y2= np.reshape(yuse, (-1, NpredperseqTOT)).copy()
labelarray2 = np.reshape(labeluse, (-1,2))
if SymbolicWindows:
# Xin X2 X3 not used rather ReshapedSequencesTOT
labelarray2, y2 = shuffleDLinput(labelarray2, y2)
ReshapedSequencesTOTuse = ReshapedSequencesTOTmodified
else:
Xuse = Xinmodified
X2 = np.reshape(Xuse, (-1, Tseq, NpropperseqTOT)).copy()
X2, y2, labelarray2 = shuffleDLinput(X2, y2,labelarray2)
X3 = np.reshape(X2, (-1, d_sample, NpropperseqTOT))
y3 = np.reshape(y2, (-1, Nloc_sample, NpredperseqTOT))
sw = np.reshape(sw, (-1, Nloc_sample, NpredperseqTOT))
labelarray3 = np.reshape(labelarray2, (-1, Nloc_sample, 2))
quan2 = 0.0
quan3 = 0.0
quan4 = 0.0
for Batchindex in range(0, OuterBatchDimension):
if GarbageCollect:
gc.collect()
if SymbolicWindows:
if modelflag == 2: # Note first index of InputVector Location, Second is sequence number; labelarray3 is opposite
InputVector = np.empty((Nloc_sample,2), dtype = np.int32)
for iloc_sample in range(0,Nloc_sample):
InputVector[iloc_sample,0] = labelarray3[Batchindex, iloc_sample,1]
InputVector[iloc_sample,1] = labelarray3[Batchindex, iloc_sample,0]
else:
X3local = list()
for iloc_sample in range(0,Nloc_sample):
LocLocal = labelarray3[Batchindex, iloc_sample,1]
SeqLocal = labelarray3[Batchindex, iloc_sample,0]
X3local.append(ReshapedSequencesTOTuse[LocLocal,SeqLocal:SeqLocal+Tseq])
InputVector = np.array(X3local)
else:
InputVector = X3[Batchindex]
Labelsused = labelarray3[Batchindex]
Time = None
if modelflag == 0:
InputVector = np.reshape(InputVector,(-1,Tseq,NpropperseqTOT))
elif modelflag == 1:
Time = SetSpacetime(np.reshape(Labelsused[:,0],(1,-1)))
InputVector = np.reshape(InputVector,(1,Tseq*Nloc_sample,NpropperseqTOT))
PredictedVector = DLmodel(InputVector, training = PredictionTraining, Time=Time )
PredictedVector = np.reshape(PredictedVector,(1,Nloc_sample,NpredperseqTOT))
swbatched = sw[Batchindex,:,:]
if LocationBasedValidation:
swT = np.zeros([1,Nloc_sample,NpredperseqTOT],dtype = np.float32)
swV = np.zeros([1,Nloc_sample,NpredperseqTOT],dtype = np.float32)
for iloc_sample in range(0,Nloc_sample):
fudgeT = Nloc/TrainingNloc
fudgeV = Nloc/ValidationNloc
iloc = Labelsused[iloc_sample,1]
if MappingtoTraining[iloc] >= 0:
swT[0,iloc_sample,:] = swbatched[iloc_sample,:]*fudgeT
else:
swV[0,iloc_sample,:] = swbatched[iloc_sample,:]*fudgeV
TrueVector = y3[Batchindex]
TrueVector = np.reshape(TrueVector,(1,Nloc_sample,NpredperseqTOT))
swbatched = np.reshape(swbatched,(1,Nloc_sample,NpredperseqTOT))
losspercall = numpycustom_lossGCF1(TrueVector,PredictedVector,swbatched)
quan2 += losspercall
bbar.update(1)
if LocationBasedValidation:
losspercallTr = numpycustom_lossGCF1(TrueVector,PredictedVector,swT)
quan3 += losspercallTr
losspercallVl = numpycustom_lossGCF1(TrueVector,PredictedVector,swV)
quan4 += losspercallVl
for iloc_sample in range(0,Nloc_sample):
LocLocal = Labelsused[iloc_sample,1]
SeqLocal = Labelsused[iloc_sample,0]
yyhat = PredictedVector[0,iloc_sample]
CountFitPredictions [SeqLocal,LocLocal,:] += FRanges
SensitivityFitPredictions [SeqLocal,LocLocal,:,Sensitivities] += yyhat
fudge = 1.0/(1.0 + Batchindex)
mean2 = quan2 * fudge
if LocationBasedValidation:
mean3 = quan3 * fudge
mean4 = quan4 * fudge
bbar.set_postfix(AvLoss = mean2, AvTr = mean3, AvVl = mean4, Loss = losspercall, Tr = losspercallTr, Vl = losspercallVl)
else:
bbar.set_postfix(Loss = losspercall, AvLoss = mean2 )
# Processing at the end of Sampling Loop
fudge = 1.0/OuterBatchDimension
quan2 *= fudge
quan3 *= fudge
quan4 *= fudge
meanvalue2 += quan2
variance2 += quan2**2
variance3 += quan3**2
variance4 += quan4**2
if LocationBasedValidation:
meanvalue3 += quan3
meanvalue4 += quan4
samplebar.update(1)
if LocationBasedValidation:
samplebar.set_postfix(Shuffle=shuffling, Loss = quan2, Tr = quan3, Val = quan4)
else:
samplebar.set_postfix(Shuffle=shuffling, Loss = quan2)
bbar.reset()
# End Shuffling loop
if Sensitivities == 0:
iprop = -1
lineend = startbold+startred+ 'Basic Predictions' + resetfonts
else:
iprop = SensitivityList[Sensitivities-1]
nameplace = PropertyNameIndex[iprop]
maxminplace = PropertyAverageValuesPointer[iprop]
lastline = ' Normed Mean ' +str(round(QuantityStatistics[maxminplace,5],4))
lineend= startbold+startred + 'Property ' + str(iprop) + ' ' + InputPropertyNames[nameplace] + resetfonts + lastline
if modelflag == 2:
DLmodel.ResetProperty()
meanvalue2 /= SampleSize
global GlobalTrainingLoss, GlobalValidationLoss, GlobalLoss
printloss(' Full Loss ',meanvalue2,variance2,SampleSize, lineend = lineend)
meanvalue2 /= SampleSize
GlobalLoss = meanvalue2
GlobalTrainingLoss = 0.0
GlobalValidationLoss = 0.0
if LocationBasedValidation:
printloss(' Training Loss ',meanvalue3,variance3,SampleSize, lineend = lineend)
printloss(' Validation Loss ',meanvalue4,variance4,SampleSize, lineend = lineend)
meanvalue3 /= SampleSize
meanvalue4 /= SampleSize
GlobalTrainingLoss = meanvalue3
GlobalValidationLoss = meanvalue4
label = 'Sensitivity ' +str(Sensitivities)
Location_summed_plot(0, yin, SensitivityFitPredictions[:,:,:,Sensitivities] , extracomments = [label,label], Dumpplot = False)
# Sequence Location Predictions
SensitivityFitPredictions[:,:,:,Sensitivities] = np.divide(SensitivityFitPredictions[:,:,:,Sensitivities],CountFitPredictions[:,:,:])
if Sensitivities == 0:
Goldstandard = np.sum(np.abs(SensitivityFitPredictions[:,:,:,Sensitivities]), axis =(0,1))
TotalGS = np.sum(Goldstandard)
continue
Change = np.sum(np.abs(np.subtract(SensitivityFitPredictions[:,:,:,Sensitivities],SensitivityFitPredictions[:,:,:,0])), axis =(0,1))
TotalChange = np.sum(Change)
SensitivityChange[iprop] = TotalChange
print(str(round(TotalChange,5)) + ' GS ' + str(round(TotalGS,5)) + ' ' +lineend)
if SensitvitybyPrediction:
for ipred in range(0,NpredperseqTOT):
print(str(round(Change[ipred],5)) + ' GS ' + str(round(Goldstandard[ipred],5))
+ ' ' + str(ipred) + ' ' + Predictionname[ipred] + ' wgt ' + str(round(Predictionwgt[ipred],3)))
print(startbold+startred+ '\nSummarize Changes Total ' + str(round(TotalGS,5))+ ' Property ScaleFactor ' + str(round(ScaleProperty,3)) + resetfonts )
    for Sensitivities in range(1,1+len(SensitivityList)):
iprop = SensitivityList[Sensitivities-1]
nameplace = PropertyNameIndex[iprop]
maxminplace = PropertyAverageValuesPointer[iprop]
lastline = ' Normed Mean ' +str(round(QuantityStatistics[maxminplace,5],4))
lastline += ' Normed Std ' +str(round(QuantityStatistics[maxminplace,6],4))
TotalChange = SensitivityChange[iprop]
NormedChange = TotalChange/((1-ScaleProperty)*TotalGS)
stdmeanratio = 0.0
stdchangeratio = 0.0
if np.abs(QuantityStatistics[maxminplace,5]) > 0.0001:
stdmeanratio = QuantityStatistics[maxminplace,6]/QuantityStatistics[maxminplace,5]
if np.abs(QuantityStatistics[maxminplace,6]) > 0.0001:
stdchangeratio = NormedChange/QuantityStatistics[maxminplace,6]
lratios = ' Normed Change '+ str(round(NormedChange,5)) + ' /std ' + str(round(stdchangeratio,5))
lratios += ' Std/Mean ' + str(round(stdmeanratio,5))
print(str(iprop) + ' Change '+ str(round(TotalChange,2)) + startbold + lratios
+ ' ' + InputPropertyNames[nameplace] + resetfonts + lastline)
current_time = timenow()
print(startbold+startred+ '\nEND DLPrediction2F ' + current_time + ' ' + RunName + RunComment +resetfonts)
return
###Output
_____no_output_____
###Markdown
TFT Model Setup: TFT Data and Input Types
###Code
# Type defintions
class DataTypes(enum.IntEnum):
"""Defines numerical types of each column."""
REAL_VALUED = 0
CATEGORICAL = 1
DATE = 2
NULL = -1
STRING = 3
BOOL = 4
class InputTypes(enum.IntEnum):
"""Defines input types of each column."""
TARGET = 0 # Known before and after t for training
OBSERVED_INPUT = 1 # Known upto time t
KNOWN_INPUT = 2 # Known at all times
STATIC_INPUT = 3 # By definition known at all times
ID = 4 # Single column used as an entity identifier
TIME = 5 # Single column exclusively used as a time index
NULL = -1
def checkdfNaN(label, AttributeSpec, y):
countNaN = 0
countnotNaN = 0
if y is None:
return
names = y.columns.tolist()
count = np.zeros(y.shape[1])
for j in range(0,y.shape[1]):
colname = names[j]
if AttributeSpec.loc[colname,'DataTypes'] != DataTypes.REAL_VALUED:
continue
for i in range(0,y.shape[0]):
if np.math.isnan(y.iloc[i, j]):
countNaN += 1
count[j] += 1
else:
countnotNaN += 1
percent = (100.0*countNaN)/(countNaN + countnotNaN)
print(label + ' is NaN ',str(countNaN),' percent ',str(round(percent,2)),' not NaN ', str(countnotNaN))
for j in range(0,y.shape[1]):
if count[j] == 0:
continue
print(names[j] + ' has NaN ' + str(count[j]))
###Output
_____no_output_____
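###Markdown
The two enums above are combined into per-column definition tuples of the form (name, DataTypes value, InputTypes value), which is the shape of the TFTcolumn_definition list built further below. The next cell is an illustrative sketch only: the 'Cases', 'Temperature' and 'DayOfWeek' columns are invented, while 'Location' and 'Time from Start' match names actually used later.
###Code
# Illustrative sketch only: shows how DataTypes/InputTypes tag columns in a definition list
example_column_definition = [
    ('Location', DataTypes.STRING, InputTypes.ID),
    ('Time from Start', DataTypes.REAL_VALUED, InputTypes.TIME),
    ('Cases', DataTypes.REAL_VALUED, InputTypes.TARGET),              # invented column
    ('Temperature', DataTypes.REAL_VALUED, InputTypes.OBSERVED_INPUT), # invented column
    ('DayOfWeek', DataTypes.CATEGORICAL, InputTypes.KNOWN_INPUT),      # invented column
]
# Select target columns the same way the formatter code below filters on InputTypes
example_targets = [name for name, dtype, itype in example_column_definition if itype == InputTypes.TARGET]
print(example_targets)
###Output
_____no_output_____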
###Markdown
Convert FFFFWNPF to TFT
###Code
if UseTFTModel:
# Pick Values setting InputType
# Currently ONLY pick from properties BUT
# If PropPick = 0 (target) then these should be selected as predictions in FFFFWNPF and futured of length LengthFutures
# Set Prediction Property mappings and calculations
# PredictionTFTAction -2 a Future -1 Ignore 0 Futured Basic Prediction, 1 Nonfutured Simple Sum, 2 Nonfutured Energy Averaged Earthquake
# CalculatedPredmaptoRaw is Raw Prediction on which Calculated Prediction based
    # PredictionCalcLength is >1 when Action=1,2 and gives the number of consecutive predictions the action is based on
    # PredictionTFTnamemapping: if a non-trivial string, it is the name returned by TFT in the output map; if ' ', it is a special extra prediction
PredictionTFTnamemapping =np.full(NpredperseqTOT,' ',dtype=object)
PredictionTFTAction = np.full(NpredperseqTOT, -1, dtype = np.int32)
for ipred in range(0,NpredperseqTOT):
if ipred >= NumpredbasicperTime:
PredictionTFTAction[ipred] = -2
elif FuturedPointer[ipred] >= 0:
PredictionTFTAction[ipred] = 0
# Default is -1
CalculatedPredmaptoRaw = np.full(NpredperseqTOT, -1, dtype = np.int32)
PredictionCalcLength = np.full(NpredperseqTOT, 1, dtype = np.int32)
# TFT Pick flags
# 0 Target and observed input
# 1 Observed Input NOT predicted
# 2 Known Input
# 3 Static Observed Input
#
# Data Types 0 Float or Integer converted to Float
# Assuming Special non futured 6 months forward prediction defined but NOT directly predicted by TFT
PropPick = [3,3,3,3,0,1,1,1,1,1,0,0,0,2,2,2,2,2,2,2,2,2,2,2,2,2,2]
PropDataType = [0] * NpropperseqTOT
# Dataframe is overall label (real starting at 0), Location Name, Time Input Properties, Predicted Properties Nloc times Num_Time values
# Row major order in Location-Time Space
Totalsize = (Num_Time + TFTExtraTimes) * Nloc
RawLabel = np.arange(0, Totalsize, dtype =np.float32)
LocationLabel = []
FFFFWNPFUniqueLabel = []
RawTime = np.empty([Nloc,Num_Time + TFTExtraTimes], dtype = np.float32)
RawTrain = np.full([Nloc,Num_Time + TFTExtraTimes], True, dtype = bool)
RawVal = np.full([Nloc,Num_Time + TFTExtraTimes], True, dtype = bool)
# print('Times ' + str(Num_Time) + ' ' + str(TFTExtraTimes))
ierror = 0
for ilocation in range(0,Nloc):
# locname = Locationstate[LookupLocations[ilocation]] + ' ' + Locationname[LookupLocations[ilocation]]
locname = Locationname[LookupLocations[ilocation]] + ' ' + Locationstate[LookupLocations[ilocation]]
if locname == "":
printexit('Illegal null location name ' + str(ilocation))
for idupe in range(0,len(FFFFWNPFUniqueLabel)):
if locname == FFFFWNPFUniqueLabel[idupe]:
print(' Duplicate location name ' + str(ilocation) + ' ' + str(idupe) + ' ' + locname)
ierror += 1
FFFFWNPFUniqueLabel.append(locname)
# print(str(ilocation) + ' ' +locname)
for jtime in range(0,Num_Time + TFTExtraTimes):
RawTime[ilocation,jtime] = np.float32(jtime)
LocationLabel.append(locname)
if LocationBasedValidation:
if MappingtoTraining[ilocation] >= 0:
RawTrain[ilocation,jtime] = True
else:
RawTrain[ilocation,jtime] = False
if MappingtoValidation[ilocation] >= 0:
RawVal[ilocation,jtime] = True
else:
RawVal[ilocation,jtime] = False
if ierror > 0:
printexit(" Duplicate Names " + str(ierror))
RawTime = np.reshape(RawTime,-1)
RawTrain = np.reshape(RawTrain,-1)
RawVal = np.reshape(RawVal,-1)
TFTdf1 = pd.DataFrame(RawLabel, columns=['RawLabel'])
if LocationBasedValidation:
TFTdf2 = pd.DataFrame(RawTrain, columns=['TrainingSet'])
TFTdf3 = pd.DataFrame(RawVal, columns=['ValidationSet'])
TFTdf4 = pd.DataFrame(LocationLabel, columns=['Location'])
TFTdf5 = pd.DataFrame(RawTime, columns=['Time from Start'])
TFTdfTotal = pd.concat([TFTdf1,TFTdf2,TFTdf3,TFTdf4,TFTdf5], axis=1)
else:
TFTdf2 = pd.DataFrame(LocationLabel, columns=['Location'])
TFTdf3 = pd.DataFrame(RawTime, columns=['Time from Start'])
TFTdfTotal = pd.concat([TFTdf1,TFTdf2,TFTdf3], axis=1)
TFTdfTotalSpec = pd.DataFrame([['RawLabel', DataTypes.REAL_VALUED, InputTypes.NULL]], columns=['AttributeName', 'DataTypes', 'InputTypes'])
if LocationBasedValidation:
TFTdfTotalSpec.loc[len(TFTdfTotalSpec.index)] = ['TrainingSet', DataTypes.BOOL, InputTypes.NULL]
TFTdfTotalSpec.loc[len(TFTdfTotalSpec.index)] = ['ValidationSet', DataTypes.BOOL, InputTypes.NULL]
TFTdfTotalSpec.loc[len(TFTdfTotalSpec.index)] = ['Location', DataTypes.STRING, InputTypes.ID]
TFTdfTotalSpec.loc[len(TFTdfTotalSpec.index)] = ['Time from Start', DataTypes.REAL_VALUED, InputTypes.TIME]
ColumnsProp=[]
for iprop in range(0,NpropperseqTOT):
line = str(iprop) + ' ' + InputPropertyNames[PropertyNameIndex[iprop]]
jprop = PropertyAverageValuesPointer[iprop]
if QuantityTakeroot[jprop] > 1:
line += ' Root ' + str(QuantityTakeroot[jprop])
ColumnsProp.append(line)
QuantityStatisticsNames = ['Min','Max','Norm','Mean','Std','Normed Mean','Normed Std']
TFTInputSequences = np.reshape(ReshapedSequencesTOT,(-1,NpropperseqTOT))
TFTPropertyChoice = np.full(NpropperseqTOT, -1, dtype = np.int32)
TFTNumberTargets = 0
for iprop in range(0,NpropperseqTOT):
if PropPick[iprop] >= 0:
if PropPick[iprop] == 0:
TFTNumberTargets += 1
nextcol = TFTInputSequences[:,iprop]
dftemp = pd.DataFrame(nextcol, columns=[ColumnsProp[iprop]])
TFTdfTotal = pd.concat([TFTdfTotal,dftemp], axis=1)
jprop = TFTdfTotal.columns.get_loc(ColumnsProp[iprop])
print('Property column ' + str(jprop) + ' ' + ColumnsProp[iprop])
TFTPropertyChoice[iprop] = jprop
TFTdfTotalSpec.loc[len(TFTdfTotalSpec.index)] = [ColumnsProp[iprop], PropDataType[iprop], PropPick[iprop]]
FFFFWNPFNumberTargets = TFTNumberTargets
ReshapedPredictionsTOT = np.transpose(RawInputPredictionsTOT,(1,0,2))
TFTdfTotalSpec = TFTdfTotalSpec.set_index('AttributeName', drop= False)
TFTdfTotalshape = TFTdfTotal.shape
TFTdfTotalcols = TFTdfTotal.columns
print(TFTdfTotalshape)
print(TFTdfTotalcols)
pd.set_option('display.max_rows', 100)
display(TFTdfTotalSpec)
print('Prediction mapping')
ifuture = 0
itarget = 0
for ipred in range(0,NpredperseqTOT):
predstatus = PredictionTFTAction[ipred]
if (predstatus == -1) or (predstatus > 0):
PredictionTFTnamemapping[ipred] = ' '
text = 'NOT PREDICTED DIRECTLY'
elif (predstatus == -2) or (predstatus == 0):
text = f't+{ifuture}-Obs{itarget}'
PredictionTFTnamemapping[ipred] = text
itarget += 1
if itarget >= TFTNumberTargets:
itarget = 0
ifuture += 1
fp = -2
if ipred < NumpredbasicperTime:
fp = FuturedPointer[ipred]
line = startbold + startpurple + str(ipred) + ' ' + Predictionname[ipred] + ' ' + text + resetfonts + ' Futured ' +str(fp) + ' '
line += 'Action ' + str(predstatus) + ' Property ' + str(CalculatedPredmaptoRaw[ipred]) + ' Length ' + str(PredictionCalcLength[ipred])
jpred = PredictionAverageValuesPointer[ipred]
line += ' Processing Root ' + str(QuantityTakeroot[jpred])
for proppredval in range (0,7):
line += ' ' + QuantityStatisticsNames[proppredval] + ' ' + str(round(QuantityStatistics[jpred,proppredval],3))
print(wraptotext(line,size=150))
# Rescaling done by that appropriate for properties and predictions
TFTdfTotalSpecshape = TFTdfTotalSpec.shape
TFTcolumn_definition = []
for i in range(0,TFTdfTotalSpecshape[0]):
TFTcolumn_definition.append((TFTdfTotalSpec.iloc[i,0],TFTdfTotalSpec.iloc[i,1],TFTdfTotalSpec.iloc[i,2]))
print(TFTcolumn_definition)
print(TFTdfTotalSpec.columns)
print(TFTdfTotalSpec.index)
# Set Futures to be calculated
PlotFutures = np.full(1+LengthFutures,False, dtype=bool)
PlotFutures[0] = True
PlotFutures[6] = True
PlotFutures[12] = True
PlotFutures[25] = True
PredictedQuantity = -NumpredbasicperTime
for ifuture in range (0,1+LengthFutures):
increment = NumpredbasicperTime
if ifuture > 1:
increment = NumpredFuturedperTime
PredictedQuantity += increment
for j in range(0,increment):
PlotPredictions[PredictedQuantity+j] = PlotFutures[ifuture]
CalculateNNSE[PredictedQuantity+j] = PlotFutures[ifuture]
###Output
_____no_output_____
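###Markdown
As the comments in the previous cell note, TFTdfTotal is laid out in row-major Location-Time order: all (Num_Time + TFTExtraTimes) rows for the first location, then all rows for the second, and so on, so the row index is ilocation * (Num_Time + TFTExtraTimes) + jtime. The next cell is a toy sketch of that layout with two made-up locations, three time steps and one invented property column, just to make the indexing easy to check.
###Code
# Toy sketch of the Location-Time row ordering used for TFTdfTotal (names and values are made up)
import numpy as np
import pandas as pd
toy_locations = ['Alpha County', 'Beta County']
toy_num_time = 3
toy_rows = []
for toy_loc in toy_locations:
    for toy_t in range(toy_num_time):
        toy_rows.append({'RawLabel': float(len(toy_rows)), 'Location': toy_loc,
                         'Time from Start': float(toy_t), 'SomeProperty': 0.0})
toy_df = pd.DataFrame(toy_rows)
# Row index for (ilocation, jtime) is ilocation*toy_num_time + jtime, mirroring the real dataframe
print(toy_df)
###Output
_____no_output_____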
###Markdown
TFT Setup
###Code
if gregor:
content = readfile("config.yaml")
program_config = dotdict(yaml.safe_load(content))
config.update(program_config)
print(config)
DLAnalysisOnly = config.DLAnalysisOnly
DLRestorefromcheckpoint = config.DLRestorefromcheckpoint
DLinputRunName = RunName
DLinputCheckpointpostfix = config.DLinputCheckpointpostfix
TFTTransformerepochs = config.TFTTransformerepochs #num_epochs
#TFTTransformerepochs = 10 # just temporarily
#TFTTransformerepochs = 2 # just temporarily
# set transformer epochs to lower values for faster runs
    # normally set to about 60; set to a smaller value (e.g. 20) here so the run finishes faster
if False:
DLAnalysisOnly = True
DLRestorefromcheckpoint = True
DLinputRunName = RunName
DLinputRunName = 'EARTHQ-newTFTv28'
DLinputCheckpointpostfix = '-67'
TFTTransformerepochs = 40
TFTdropout_rate = 0.1
TFTdropout_rate = config.TFTdropout_rate #dropout_rate
TFTTransformerbatch_size = 64
TFTTransformerbatch_size = config.TFTTransformerbatch_size #minibatch_size
TFTd_model = 160
TFTd_model = config.TFTd_model #hidden_layer_size
TFTTransformertestvalbatch_size = max(128,TFTTransformerbatch_size) #maxibatch_size
TFThidden_layer_size = TFTd_model
number_LSTMnodes = TFTd_model
LSTMactivationvalue = 'tanh'
LSTMrecurrent_activation = 'sigmoid'
LSTMdropout1 = 0.0
LSTMrecurrent_dropout1 = 0.0
TFTLSTMEncoderInitialMLP = 0
TFTLSTMDecoderInitialMLP = 0
TFTLSTMEncoderrecurrent_dropout1 = LSTMrecurrent_dropout1
TFTLSTMDecoderrecurrent_dropout1 = LSTMrecurrent_dropout1
TFTLSTMEncoderdropout1 = LSTMdropout1
TFTLSTMDecoderdropout1 = LSTMdropout1
TFTLSTMEncoderrecurrent_activation = LSTMrecurrent_activation
TFTLSTMDecoderrecurrent_activation = LSTMrecurrent_activation
TFTLSTMEncoderactivationvalue = LSTMactivationvalue
TFTLSTMDecoderactivationvalue = LSTMactivationvalue
TFTLSTMEncoderSecondLayer = True
TFTLSTMDecoderSecondLayer = True
TFTLSTMEncoderThirdLayer = False
TFTLSTMDecoderThirdLayer = False
TFTLSTMEncoderFinalMLP = 0
TFTLSTMDecoderFinalMLP = 0
TFTnum_heads = 4
TFTnum_heads = config.TFTnum_heads #num_heads
TFTnum_AttentionLayers = 2
TFTnum_AttentionLayers = config.TFTnum_AttentionLayers #num_stacks | stack_size
# For default TFT
TFTuseCUDALSTM = True
TFTdefaultLSTM = False
if TFTdefaultLSTM:
TFTuseCUDALSTM = True
TFTLSTMEncoderFinalMLP = 0
TFTLSTMDecoderFinalMLP = 0
TFTLSTMEncoderrecurrent_dropout1 = 0.0
TFTLSTMDecoderrecurrent_dropout1 = 0.0
TFTLSTMEncoderdropout1 = 0.0
TFTLSTMDecoderdropout1 = 0.0
TFTLSTMEncoderSecondLayer = False
TFTLSTMDecoderSecondLayer = False
TFTFutures = 0
TFTFutures = 1 + LengthFutures
if TFTFutures == 0:
printexit('No TFT Futures defined')
TFTSingleQuantity = True
TFTLossFlag = 11
HuberLosscut = 0.01
if TFTSingleQuantity:
TFTQuantiles =[1.0]
TFTQuantilenames = ['MSE']
TFTPrimaryQuantileIndex = 0
else:
TFTQuantiles = [0.1,0.5,0.9]
TFTQuantilenames = ['p10','p50','p90']
TFTPrimaryQuantileIndex = 1
if TFTLossFlag == 11:
TFTQuantilenames = ['MAE']
if TFTLossFlag == 12:
TFTQuantilenames = ['Huber']
TFTfixed_params = {
'total_time_steps': Tseq + TFTFutures,
'num_encoder_steps': Tseq,
'num_epochs': TFTTransformerepochs,
#'early_stopping_patience': 60,
'early_stopping_patience': config.early_stopping_patience, #early_stopping_patience
'multiprocessing_workers': 12,
'optimizer': 'adam',
'lossflag': TFTLossFlag,
'HuberLosscut': HuberLosscut,
'AnalysisOnly': DLAnalysisOnly,
'inputRunName': DLinputRunName,
'Restorefromcheckpoint': DLRestorefromcheckpoint,
'inputCheckpointpostfix': DLinputCheckpointpostfix,
'maxibatch_size': TFTTransformertestvalbatch_size,
'TFTuseCUDALSTM':TFTuseCUDALSTM,
'TFTdefaultLSTM':TFTdefaultLSTM,
}
TFTmodel_params = {
'dropout_rate': TFTdropout_rate,
'hidden_layer_size': TFTd_model,
#'learning_rate': 0.0000005,
'learning_rate': config.learning_rate, #learning_rate
'minibatch_size': TFTTransformerbatch_size,
#'max_gradient_norm': 0.01,
'max_gradient_norm': config.max_gradient_norm, #max_gradient_norm
'num_heads': TFTnum_heads,
'stack_size': TFTnum_AttentionLayers,
}
TFTSymbolicWindows = False
TFTFinalGatingOption = 1
TFTMultivariate = True
TFTuse_testing_mode = False
###Output
{'DLAnalysisOnly': False, 'DLRestorefromcheckpoint': False, 'DLinputCheckpointpostfix': '', 'TFTTransformerepochs': 10}
###Markdown
Base Formatter
###Code
class GenericDataFormatter(abc.ABC):
"""Abstract base class for all data formatters.
User can implement the abstract methods below to perform dataset-specific
manipulations.
"""
@abc.abstractmethod
def set_scalers(self, df):
"""Calibrates scalers using the data supplied."""
raise NotImplementedError()
@abc.abstractmethod
def transform_inputs(self, df):
"""Performs feature transformation."""
raise NotImplementedError()
@abc.abstractmethod
def format_predictions(self, df):
"""Reverts any normalisation to give predictions in original scale."""
raise NotImplementedError()
@abc.abstractmethod
def split_data(self, df):
"""Performs the default train, validation and test splits."""
raise NotImplementedError()
@property
@abc.abstractmethod
def _column_definition(self):
"""Defines order, input type and data type of each column."""
raise NotImplementedError()
@abc.abstractmethod
def get_fixed_params(self):
"""Defines the fixed parameters used by the model for training.
Requires the following keys:
'total_time_steps': Defines the total number of time steps used by TFT
'num_encoder_steps': Determines length of LSTM encoder (i.e. history)
'num_epochs': Maximum number of epochs for training
'early_stopping_patience': Early stopping param for keras
'multiprocessing_workers': # of cpus for data processing
Returns:
A dictionary of fixed parameters, e.g.:
fixed_params = {
'total_time_steps': 252 + 5,
'num_encoder_steps': 252,
'num_epochs': 100,
'early_stopping_patience': 5,
'multiprocessing_workers': 5,
}
"""
raise NotImplementedError
# Shared functions across data-formatters
@property
def num_classes_per_cat_input(self):
"""Returns number of categories per relevant input.
    This is subsequently required for keras embedding layers.
"""
return self._num_classes_per_cat_input
def get_num_samples_for_calibration(self):
"""Gets the default number of training and validation samples.
Use to sub-sample the data for network calibration and a value of -1 uses
all available samples.
Returns:
Tuple of (training samples, validation samples)
"""
return -1, -1
def get_column_definition(self):
""""Returns formatted column definition in order expected by the TFT."""
column_definition = self._column_definition
# Sanity checks first.
# Ensure only one ID and time column exist
def _check_single_column(input_type):
length = len([tup for tup in column_definition if tup[2] == input_type])
if length != 1:
raise ValueError(f'Illegal number of inputs ({length}) of type {input_type}')
_check_single_column(InputTypes.ID)
_check_single_column(InputTypes.TIME)
identifier = [tup for tup in column_definition if tup[2] == InputTypes.ID]
time = [tup for tup in column_definition if tup[2] == InputTypes.TIME]
real_inputs = [
tup for tup in column_definition if tup[1] == DataTypes.REAL_VALUED and
tup[2] not in {InputTypes.ID, InputTypes.TIME}
]
categorical_inputs = [
tup for tup in column_definition if tup[1] == DataTypes.CATEGORICAL and
tup[2] not in {InputTypes.ID, InputTypes.TIME}
]
return identifier + time + real_inputs + categorical_inputs
# XXX Looks important in reordering
def _get_input_columns(self):
"""Returns names of all input columns."""
return [
tup[0]
for tup in self.get_column_definition()
if tup[2] not in {InputTypes.ID, InputTypes.TIME}
]
def _get_tft_input_indices(self):
"""Returns the relevant indexes and input sizes required by TFT."""
# Functions
def _extract_tuples_from_data_type(data_type, defn):
return [
tup for tup in defn if tup[1] == data_type and
tup[2] not in {InputTypes.ID, InputTypes.TIME}
]
def _get_locations(input_types, defn):
return [i for i, tup in enumerate(defn) if tup[2] in input_types]
# Start extraction
column_definition = [
tup for tup in self.get_column_definition()
if tup[2] not in {InputTypes.ID, InputTypes.TIME}
]
categorical_inputs = _extract_tuples_from_data_type(DataTypes.CATEGORICAL,
column_definition)
real_inputs = _extract_tuples_from_data_type(DataTypes.REAL_VALUED,
column_definition)
locations = {
'input_size':
len(self._get_input_columns()),
'output_size':
len(_get_locations({InputTypes.TARGET}, column_definition)),
'category_counts':
self.num_classes_per_cat_input,
'input_obs_loc':
_get_locations({InputTypes.TARGET}, column_definition),
'static_input_loc':
_get_locations({InputTypes.STATIC_INPUT}, column_definition),
'known_regular_inputs':
_get_locations({InputTypes.STATIC_INPUT, InputTypes.KNOWN_INPUT},
real_inputs),
'known_categorical_inputs':
_get_locations({InputTypes.STATIC_INPUT, InputTypes.KNOWN_INPUT},
categorical_inputs),
}
return locations
def get_experiment_params(self):
"""Returns fixed model parameters for experiments."""
required_keys = [
'total_time_steps', 'num_encoder_steps', 'num_epochs',
'early_stopping_patience', 'multiprocessing_workers'
]
fixed_params = self.get_fixed_params()
for k in required_keys:
if k not in fixed_params:
raise ValueError(f'Field {k} missing from fixed parameter definitions!')
fixed_params['column_definition'] = self.get_column_definition()
fixed_params.update(self._get_tft_input_indices())
return fixed_params
###Output
_____no_output_____
###Markdown
TFT FFFFWNPF Formatter
###Code
# Custom formatting functions for FFFFWNPF datasets.
#GenericDataFormatter = data_formatters.base.GenericDataFormatter
#DataTypes = data_formatters.base.DataTypes
#InputTypes = data_formatters.base.InputTypes
class FFFFWNPFFormatter(GenericDataFormatter):
"""
Defines and formats data for the Covid April 21 dataset.
Attributes:
column_definition: Defines input and data type of column used in the
experiment.
identifiers: Entity identifiers used in experiments.
"""
_column_definition = TFTcolumn_definition
def __init__(self):
"""Initialises formatter."""
self.identifiers = None
self._real_scalers = None
self._cat_scalers = None
self._target_scaler = None
self._num_classes_per_cat_input = []
self._time_steps = self.get_fixed_params()['total_time_steps']
def split_data(self, df, valid_boundary=-1, test_boundary=-1):
"""Splits data frame into training-validation-test data frames.
This also calibrates scaling object, and transforms data for each split.
Args:
df: Source data frame to split.
valid_boundary: Starting time for validation data
test_boundary: Starting time for test data
Returns:
Tuple of transformed (train, valid, test) data.
"""
print('Formatting train-valid-test splits.')
if LocationBasedValidation:
index = df['TrainingSet']
train = df[index == True]
index = df['ValidationSet']
valid = df[index == True]
index = train['Time from Start']
train = train[index<(Num_Time-0.5)]
index = valid['Time from Start']
valid = valid[index<(Num_Time-0.5)]
if test_boundary == -1:
test = df
# train.drop('TrainingSet', axis=1, inplace=True)
# train.drop('ValidationSet', axis=1, inplace=True)
# valid.drop('TrainingSet', axis=1, inplace=True)
# valid.drop('ValidationSet', axis=1, inplace=True)
else:
index = df['Time from Start']
train = df[index<(Num_Time-0.5)]
valid = df[index<(Num_Time-0.5)]
if test_boundary == -1:
test = df
if valid_boundary > 0:
train = df.loc[index < valid_boundary]
if test_boundary > 0:
valid = df.loc[(index >= valid_boundary - 7) & (index < test_boundary)]
else:
valid = df.loc[(index >= valid_boundary - 7)]
if test_boundary > 0:
test = df.loc[index >= test_boundary - 7]
self.set_scalers(train)
Trainshape = train.shape
Traincols = train.columns
print(' Train Shape ' + str(Trainshape))
print(Traincols)
Validshape = valid.shape
Validcols = valid.columns
print(' Validation Shape ' + str(Validshape))
print(Validcols)
if test_boundary >= -1:
return (self.transform_inputs(data) for data in [train, valid, test])
else:
return [train, valid]
def set_scalers(self, df):
"""Calibrates scalers using the data supplied.
Args:
df: Data to use to calibrate scalers.
"""
print('Setting scalers with training data...')
column_definitions = self.get_column_definition()
# print(column_definitions)
# print(InputTypes.TARGET)
id_column = myTFTTools.utilsget_single_col_by_input_type(InputTypes.ID,
column_definitions, TFTMultivariate)
target_column = myTFTTools.utilsget_single_col_by_input_type(InputTypes.TARGET,
column_definitions, TFTMultivariate)
# Format real scalers
real_inputs = myTFTTools.extract_cols_from_data_type(
DataTypes.REAL_VALUED, column_definitions,
{InputTypes.ID, InputTypes.TIME})
# Initialise scaler caches
self._real_scalers = {}
self._target_scaler = {}
identifiers = []
for identifier, sliced in df.groupby(id_column):
data = sliced[real_inputs].values
if TFTMultivariate == True:
targets = sliced[target_column].values
else:
targets = sliced[target_column].values
# self._real_scalers[identifier] = sklearn.preprocessing.StandardScaler().fit(data)
# self._target_scaler[identifier] = sklearn.preprocessing.StandardScaler().fit(targets)
identifiers.append(identifier)
# Format categorical scalers
categorical_inputs = myTFTTools.extract_cols_from_data_type(
DataTypes.CATEGORICAL, column_definitions,
{InputTypes.ID, InputTypes.TIME})
categorical_scalers = {}
num_classes = []
# Set categorical scaler outputs
self._cat_scalers = categorical_scalers
self._num_classes_per_cat_input = num_classes
# Extract identifiers in case required
self.identifiers = identifiers
def transform_inputs(self, df):
"""Performs feature transformations.
This includes both feature engineering, preprocessing and normalisation.
Args:
df: Data frame to transform.
Returns:
Transformed data frame.
"""
return df
def format_predictions(self, predictions):
"""Reverts any normalisation to give predictions in original scale.
Args:
predictions: Dataframe of model predictions.
Returns:
Data frame of unnormalised predictions.
"""
return predictions
# Default params
def get_fixed_params(self):
"""Returns fixed model parameters for experiments."""
fixed_params = TFTfixed_params
return fixed_params
def get_default_model_params(self):
"""Returns default optimised model parameters."""
model_params = TFTmodel_params
return model_params
def get_num_samples_for_calibration(self):
"""Gets the default number of training and validation samples.
Use to sub-sample the data for network calibration and a value of -1 uses
all available samples.
Returns:
Tuple of (training samples, validation samples)
"""
numtrain = TFTdfTotalshape[0]
numvalid = TFTdfTotalshape[0]
return numtrain, numvalid
###Output
_____no_output_____
###Markdown
Set TFT Parameter Dictionary
###Code
def setTFTparameters(data_formatter):
# Sets up default params
fixed_params = data_formatter.get_experiment_params()
params = data_formatter.get_default_model_params()
params["model_folder"] = TFTmodel_folder
params['optimizer'] = Transformeroptimizer
fixed_params["quantiles"] = TFTQuantiles
fixed_params["quantilenames"] = TFTQuantilenames
fixed_params["quantileindex"] = TFTPrimaryQuantileIndex
fixed_params["TFTLSTMEncoderFinalMLP"] = TFTLSTMEncoderFinalMLP
fixed_params["TFTLSTMDecoderFinalMLP"] = TFTLSTMDecoderFinalMLP
fixed_params["TFTLSTMEncoderrecurrent_dropout1"] = TFTLSTMEncoderrecurrent_dropout1
fixed_params["TFTLSTMDecoderrecurrent_dropout1"] = TFTLSTMDecoderrecurrent_dropout1
fixed_params["TFTLSTMEncoderdropout1"] = TFTLSTMEncoderdropout1
fixed_params["TFTLSTMDecoderdropout1"] = TFTLSTMDecoderdropout1
fixed_params["TFTLSTMEncoderSecondLayer"] = TFTLSTMEncoderSecondLayer
fixed_params["TFTLSTMDecoderSecondLayer"] = TFTLSTMDecoderSecondLayer
fixed_params["TFTLSTMEncoderThirdLayer"] = TFTLSTMEncoderThirdLayer
fixed_params["TFTLSTMDecoderThirdLayer"] = TFTLSTMDecoderThirdLayer
fixed_params["TFTLSTMEncoderrecurrent_activation"] = TFTLSTMEncoderrecurrent_activation
fixed_params["TFTLSTMDecoderrecurrent_activation"] = TFTLSTMDecoderrecurrent_activation
fixed_params["TFTLSTMEncoderactivationvalue"] = TFTLSTMEncoderactivationvalue
fixed_params["TFTLSTMDecoderactivationvalue"] = TFTLSTMDecoderactivationvalue
fixed_params["TFTLSTMEncoderInitialMLP"] = TFTLSTMEncoderInitialMLP
fixed_params["TFTLSTMDecoderInitialMLP"] = TFTLSTMDecoderInitialMLP
fixed_params['number_LSTMnodes'] = number_LSTMnodes
fixed_params["TFTOption1"] = 1
fixed_params["TFTOption2"] = 0
fixed_params['TFTMultivariate'] = TFTMultivariate
fixed_params['TFTFinalGatingOption'] = TFTFinalGatingOption
fixed_params['TFTSymbolicWindows'] = TFTSymbolicWindows
fixed_params['name'] = 'TemporalFusionTransformer'
fixed_params['nameFFF'] = TFTexperimentname
fixed_params['runname'] = TFTRunName
fixed_params['runcomment'] = TFTRunComment
fixed_params['data_formatter'] = data_formatter
fixed_params['Validation'] = LocationBasedValidation
# Parameter overrides for testing only! Small sizes used to speed up script.
if TFTuse_testing_mode:
fixed_params["num_epochs"] = 1
params["hidden_layer_size"] = 5
# train_samples, valid_samples = 100, 10 is applied later
# Load all parameters -- fixed and model
for k in fixed_params:
params[k] = fixed_params[k]
return params
###Output
_____no_output_____
###Markdown
TFTTools
###Code
class TFTTools(object):
def __init__(self, params, **kwargs):
# Args: params: Parameters to define TFT
self.name = params['name']
self.experimentname = params['nameFFF']
self.runname = params['runname']
self.runcomment = params['runcomment']
self.data_formatter = params['data_formatter']
self.lossflag = params['lossflag']
self.HuberLosscut = params['HuberLosscut']
self.optimizer = params['optimizer']
self.validation = params['Validation']
self.AnalysisOnly = params['AnalysisOnly']
self.Restorefromcheckpoint = params['Restorefromcheckpoint']
self.inputRunName = params['inputRunName']
self.inputCheckpointpostfix = params['inputCheckpointpostfix']
# Data parameters
self.time_steps = int(params['total_time_steps'])
self.input_size = int(params['input_size'])
self.output_size = int(params['output_size'])
self.category_counts = json.loads(str(params['category_counts']))
self.n_multiprocessing_workers = int(params['multiprocessing_workers'])
# Relevant indices for TFT
self._input_obs_loc = json.loads(str(params['input_obs_loc']))
self._static_input_loc = json.loads(str(params['static_input_loc']))
self._known_regular_input_idx = json.loads(
str(params['known_regular_inputs']))
self._known_categorical_input_idx = json.loads(
str(params['known_categorical_inputs']))
self.column_definition = params['column_definition']
# Network params
# self.quantiles = [0.1, 0.5, 0.9]
self.quantiles = params['quantiles']
self.NumberQuantiles = len(self.quantiles)
self.Quantilenames = params['quantilenames']
self.PrimaryQuantileIndex = int(params['quantileindex'])
self.useMSE = False
if self.NumberQuantiles == 1 and self.Quantilenames[0] == 'MSE':
self.useMSE = True
self.TFTOption1 = params['TFTOption1']
self.TFTOption2 = params['TFTOption2']
self.TFTMultivariate = params['TFTMultivariate']
self.TFTuseCUDALSTM = params['TFTuseCUDALSTM']
self.TFTdefaultLSTM = params['TFTdefaultLSTM']
self.number_LSTMnodes = params['number_LSTMnodes']
self.TFTLSTMEncoderInitialMLP = params["TFTLSTMEncoderInitialMLP"]
self.TFTLSTMDecoderInitialMLP = params["TFTLSTMDecoderInitialMLP"]
self.TFTLSTMEncoderFinalMLP = params['TFTLSTMEncoderFinalMLP']
self.TFTLSTMDecoderFinalMLP = params['TFTLSTMDecoderFinalMLP']
self.TFTLSTMEncoderrecurrent_dropout1 = params["TFTLSTMEncoderrecurrent_dropout1"]
self.TFTLSTMDecoderrecurrent_dropout1 = params["TFTLSTMDecoderrecurrent_dropout1"]
self.TFTLSTMEncoderdropout1 = params["TFTLSTMEncoderdropout1"]
self.TFTLSTMDecoderdropout1 = params["TFTLSTMDecoderdropout1"]
self.TFTLSTMEncoderrecurrent_activation = params["TFTLSTMEncoderrecurrent_activation"]
self.TFTLSTMDecoderrecurrent_activation = params["TFTLSTMDecoderrecurrent_activation"]
self.TFTLSTMEncoderactivationvalue = params["TFTLSTMEncoderactivationvalue"]
self.TFTLSTMDecoderactivationvalue = params["TFTLSTMDecoderactivationvalue"]
self.TFTLSTMEncoderSecondLayer = params["TFTLSTMEncoderSecondLayer"]
self.TFTLSTMDecoderSecondLayer = params["TFTLSTMDecoderSecondLayer"]
self.TFTLSTMEncoderThirdLayer = params["TFTLSTMEncoderThirdLayer"]
self.TFTLSTMDecoderThirdLayer = params["TFTLSTMDecoderThirdLayer"]
self.TFTFinalGatingOption = params['TFTFinalGatingOption']
self.TFTSymbolicWindows = params['TFTSymbolicWindows']
self.FinalLoopSize = 1
if (self.output_size == 1) and (self.NumberQuantiles == 1):
self.TFTFinalGatingOption = 0
if self.TFTFinalGatingOption > 0:
self.TFTLSTMFinalMLP = 0
self.FinalLoopSize = self.output_size * self.NumberQuantiles
# HYPER PARAMETERS
self.hidden_layer_size = int(params['hidden_layer_size']) # PARAMETER TFTd_model search for them in code
self.dropout_rate = float(params['dropout_rate']) # PARAMETER TFTdropout_rate
self.max_gradient_norm = float(params['max_gradient_norm']) # PARAMETER max_gradient_norm
self.learning_rate = float(params['learning_rate']) # PARAMETER learning_rate
self.minibatch_size = int(params['minibatch_size']) # PARAMETER TFTTransformerbatch_size
self.maxibatch_size = int(params['maxibatch_size']) # PARAMETER TFTTransformertestvalbatch_size = max(128, TFTTransformerbatch_size)
self.num_epochs = int(params['num_epochs']) # PARAMETER TFTTransformerepochs
self.early_stopping_patience = int(params['early_stopping_patience']) # PARAMETER early_stopping_patience????
self.num_encoder_steps = int(params['num_encoder_steps']) # PARAMETER Tseq (fixed by the problem, may not be useful)
self.num_stacks = int(params['stack_size']) # PARAMETER TFTnum_AttentionLayers ???
self.num_heads = int(params['num_heads']) # PARAMETER TFTnum_heads +++++
# Serialisation options
# XXX
# self._temp_folder = os.path.join(params['model_folder'], 'tmp')
# self.reset_temp_folder()
# Extra components to store Tensorflow nodes for attention computations
# XXX
# self._input_placeholder = None
# self._attention_components = None
# self._prediction_parts = None
self.TFTSeq = 0
self.TFTNloc = 0
self.UniqueLocations = []
def utilsget_single_col_by_input_type(self, input_type, column_definition, TFTMultivariate):
"""Returns name of single or multiple column.
Args:
input_type: Input type of column to extract
column_definition: Column definition list for experiment
"""
columnname = [tup[0] for tup in column_definition if tup[2] == input_type]
# allow multiple targets
if TFTMultivariate and (input_type == 0):
return columnname
else:
if len(columnname) != 1:
printexit(f'Invalid number of columns for Type {input_type}')
return columnname[0]
def _get_single_col_by_type(self, input_type):
return self.utilsget_single_col_by_input_type(input_type, self.column_definition, self.TFTMultivariate)
def extract_cols_from_data_type(self, data_type, column_definition,
excluded_input_types):
"""Extracts the names of columns that correspond to a define data_type.
Args:
data_type: DataType of columns to extract.
column_definition: Column definition to use.
excluded_input_types: Set of input types to exclude
Returns:
List of names for columns with data type specified.
"""
return [
tup[0]
for tup in column_definition
if tup[1] == data_type and tup[2] not in excluded_input_types
]
# Quantile Loss functions.
def tensorflow_quantile_loss(self, y, y_pred, quantile):
"""Computes quantile loss for tensorflow.
Standard quantile loss as defined in the "Training Procedure" section of
the main TFT paper
Args:
y: Targets
y_pred: Predictions
quantile: Quantile to use for loss calculations (between 0 & 1)
Returns:
Tensor for quantile loss.
"""
# Checks quantile
if quantile < 0 or quantile > 1:
printexit(f'Illegal quantile value={quantile}! Values should be between 0 and 1.')
prediction_underflow = y - y_pred
q_loss = quantile * tf.maximum(prediction_underflow, 0.) + (
1. - quantile) * tf.maximum(-prediction_underflow, 0.)
return tf.reduce_sum(q_loss, axis=-1)
def PrintTitle(self, extrawords):
current_time = timenow()
line = self.name + ' ' + self.experimentname + ' ' + self.runname + ' ' + self.runcomment
beginwords = ''
if extrawords != '':
beginwords = extrawords + ' '
print(wraptotext(startbold + startred + beginwords + current_time + ' ' + line + resetfonts))
ram_gb = StopWatch.get_sysinfo()["mem.available"]
print(f'Your runtime has {ram_gb} gigabytes of available RAM\n')
###Output
_____no_output_____
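###Markdown
For reference, the quantile loss implemented in tensorflow_quantile_loss above is, for target $y$, prediction $\hat{y}$ and quantile $q \in (0,1)$, $QL(y,\hat{y},q) = q\,\max(y-\hat{y},0) + (1-q)\,\max(\hat{y}-y,0)$, summed over the last axis; with $q=0.5$ it is proportional to the absolute error (MAE up to a factor of 2).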
###Markdown
Setup Classic TFT
###Code
'''
%cd "/content/gdrive/MyDrive/Colab Datasets/TFToriginal/"
%ls
%cd TFTCode/
TFTexperimentname= "FFFFWNPF"
output_folder = "../TFTData" # Please don't change this path
Rootmodel_folder = os.path.join(output_folder, 'saved_models', TFTexperimentname)
TFTmodel_folder = os.path.join(Rootmodel_folder, "fixed" + RunName)
'''
TFTexperimentname= "FFFFWNPF"
TFTmodel_folder="Notused"
TFTRunName = RunName
TFTRunComment = RunComment
if TFTexperimentname == 'FFFFWNPF':
formatter = FFFFWNPFFormatter()
# Save data frames
# TFTdfTotalSpec.to_csv('TFTdfTotalSpec.csv')
# TFTdfTotal.to_csv('TFTdfTotal.csv')
else:
import expt_settings.configs
ExperimentConfig = expt_settings.configs.ExperimentConfig
config = ExperimentConfig(name, output_folder)
formatter = config.make_data_formatter()
TFTparams = setTFTparameters(formatter)
myTFTTools = TFTTools(TFTparams)
myTFTTools.PrintTitle('Start TFT')
for k in TFTparams:
print('# {} = {}'.format(k, TFTparams[k]))
###Output
_____no_output_____
###Markdown
Read TFT Data
###Code
class TFTDataCache(object):
"""Caches data for the TFT.
This is a class and has no instances so uses cls not self
It just sets and uses a dictionary to record batched data locations"""
_data_cache = {}
@classmethod
def update(cls, data, key):
"""Updates cached data.
Args:
data: Source to update
key: Key to dictionary location
"""
cls._data_cache[key] = data
@classmethod
def get(cls, key):
"""Returns data stored at key location."""
return cls._data_cache[key]
@classmethod
def contains(cls, key):
"""Retuns boolean indicating whether key is present in cache."""
return key in cls._data_cache
class TFTdatasetup(object):
def __init__(self, **kwargs):
super(TFTdatasetup, self).__init__(**kwargs)
self.TFTNloc = 0
# XXX TFTNloc bad
if myTFTTools.TFTSymbolicWindows:
# Set up Symbolic maps allowing location order to differ (due to possible sorting in TFT)
id_col = myTFTTools._get_single_col_by_type(InputTypes.ID)
time_col = myTFTTools._get_single_col_by_type(InputTypes.TIME)
target_col = myTFTTools._get_single_col_by_type(InputTypes.TARGET)
input_cols = [
tup[0]
for tup in myTFTTools.column_definition
if tup[2] not in {InputTypes.ID, InputTypes.TIME}
]
self.UniqueLocations = TFTdfTotal[id_col].unique()
self.TFTNloc = len(self.UniqueLocations)
self.LocationLookup ={}
for i,locationname in enumerate(self.UniqueLocations):
self.LocationLookup[locationname] = i # maps name to TFT master location number
self.TFTnum_entries = 0 # Number of time values per location
for identifier, df in TFTdfTotal.groupby(id_col):
localnum_entries = len(df)
if self.TFTnum_entries == 0:
self.TFTnum_entries = localnum_entries
else:
if self.TFTnum_entries != localnum_entries:
printexit('Incorrect length in time for ' + identifier + ' ' + str(localnum_entries))
self.Lookupinputs = np.zeros((self.TFTNloc, self.TFTnum_entries, myTFTTools.input_size))
for identifier, df in TFTdfTotal.groupby(id_col):
location = self.LocationLookup[identifier]
self.Lookupinputs[location,:,:] = df[input_cols].to_numpy(dtype=np.float32,copy=True)
def __call__(self, data, Dataset_key, num_samples=-1):
"""Batches Dataset for training, Validation.
Testing not Batched
Args:
data: Data to batch
Dataset_key: Key used for cache
num_samples: Maximum number of samples to extract (-1 to use all data)
"""
max_samples = num_samples
if max_samples < 0:
max_samples = data.shape[0]
sampleddata = self._sampled_data(data, Dataset_key, max_samples=max_samples)
TFTDataCache.update(sampleddata, Dataset_key)
print(f'Cached data "{Dataset_key}" updated')
return sampleddata
def _sampled_data(self, data, Dataset_key, max_samples):
"""Samples segments into a compatible format.
Args:
data: Sources data to sample and batch
max_samples: Maximum number of samples in batch
Returns:
Dictionary of batched data with the maximum samples specified.
"""
if (max_samples < 1) and (max_samples != -1):
raise ValueError(f'Illegal number of samples specified! samples={max_samples}')
id_col = myTFTTools._get_single_col_by_type(InputTypes.ID)
time_col = myTFTTools._get_single_col_by_type(InputTypes.TIME)
#data.sort_values(by=[id_col, time_col], inplace=True) # gives warning message
print('Getting legal sampling locations.')
StopWatch.start("legal sampling location")
valid_sampling_locations = []
split_data_map = {}
self.TFTSeq = 0
for identifier, df in data.groupby(id_col):
self.TFTnum_entries = len(df)
self.TFTSeq = max(self.TFTSeq, self.TFTnum_entries-myTFTTools.time_steps+1)
if self.TFTnum_entries >= myTFTTools.time_steps:
valid_sampling_locations += [
(identifier, myTFTTools.time_steps + i)
for i in range(self.TFTnum_entries - myTFTTools.time_steps + 1)
]
split_data_map[identifier] = df
print(Dataset_key + ' max samples ' + str(max_samples) + ' actual ' + str(len(valid_sampling_locations)))
actual_samples = min(max_samples, len(valid_sampling_locations))
if 0 < max_samples < len(valid_sampling_locations):
print(f'Extracting {max_samples} samples...')
ranges = [
valid_sampling_locations[i] for i in np.random.choice(
len(valid_sampling_locations), max_samples, replace=False)
]
else:
print('Max samples={} exceeds # available segments={}'.format(
max_samples, len(valid_sampling_locations)))
ranges = valid_sampling_locations
id_col = myTFTTools._get_single_col_by_type(InputTypes.ID)
time_col = myTFTTools._get_single_col_by_type(InputTypes.TIME)
target_col = myTFTTools._get_single_col_by_type(InputTypes.TARGET)
input_cols = [
tup[0]
for tup in myTFTTools.column_definition
if tup[2] not in {InputTypes.ID, InputTypes.TIME}
]
if myTFTTools.TFTSymbolicWindows:
inputs = np.zeros((actual_samples), dtype = np.int32)
outputs = np.zeros((actual_samples, myTFTTools.time_steps, myTFTTools.output_size))
time = np.empty((actual_samples, myTFTTools.time_steps, 1), dtype=object)
identifiers = np.empty((actual_samples, myTFTTools.time_steps, 1), dtype=object)
oldlocationnumber = -1
storedlocation = np.zeros(self.TFTNloc, dtype = np.int32)
for i, tup in enumerate(ranges):
identifier, start_idx = tup
newlocationnumber = self.LocationLookup[identifier]
if newlocationnumber != oldlocationnumber:
oldlocationnumber = newlocationnumber
if storedlocation[newlocationnumber] == 0:
storedlocation[newlocationnumber] = 1
sliced = split_data_map[identifier].iloc[start_idx -
myTFTTools.time_steps:start_idx]
# inputs[i, :, :] = sliced[input_cols]
inputs[i] = np.left_shift(start_idx,16) + newlocationnumber
# Sequence runs from start_idx - myTFTTools.time_steps to start_idx i.e. start_idx is label of FINAL time step in position start_idx - 1
if myTFTTools.TFTMultivariate:
outputs[i, :, :] = sliced[target_col]
else:
outputs[i, :, :] = sliced[[target_col]]
time[i, :, 0] = sliced[time_col]
identifiers[i, :, 0] = sliced[id_col]
inputs = inputs.reshape(-1,1,1)
sampled_data = {
'inputs': inputs,
'outputs': outputs[:, myTFTTools.num_encoder_steps:, :],
            'active_entries': np.ones_like(outputs[:, myTFTTools.num_encoder_steps:, :]),
'time': time,
'identifier': identifiers
}
else:
inputs = np.zeros((actual_samples, myTFTTools.time_steps, myTFTTools.input_size), dtype=np.float32)
outputs = np.zeros((actual_samples, myTFTTools.time_steps, myTFTTools.output_size), dtype=np.float32)
time = np.empty((actual_samples, myTFTTools.time_steps, 1), dtype=object)
identifiers = np.empty((actual_samples, myTFTTools.time_steps, 1), dtype=object)
for i, tup in enumerate(ranges):
identifier, start_idx = tup
sliced = split_data_map[identifier].iloc[start_idx -
myTFTTools.time_steps:start_idx]
inputs[i, :, :] = sliced[input_cols]
if myTFTTools.TFTMultivariate:
outputs[i, :, :] = sliced[target_col]
else:
outputs[i, :, :] = sliced[[target_col]]
time[i, :, 0] = sliced[time_col]
identifiers[i, :, 0] = sliced[id_col]
sampled_data = {
'inputs': inputs,
'outputs': outputs[:, myTFTTools.num_encoder_steps:, :],
'active_entries': np.ones_like(outputs[:, myTFTTools.num_encoder_steps:, :], dtype=np.float32),
'time': time,
'identifier': identifiers
}
StopWatch.stop("legal sampling location")
return sampled_data
def dothedatasetup():
myTFTTools.PrintTitle("Loading & splitting data...")
if myTFTTools.experimentname == 'FFFFWNPF':
raw_data = TFTdfTotal
else:
printexit('Currently only FFFWNPF supported')
# raw_data = pd.read_csv(TFTdfTotal, index_col=0)
# XXX don't use test Could simplify
train, valid, test = myTFTTools.data_formatter.split_data(raw_data, test_boundary = -1)
train_samples, valid_samples = myTFTTools.data_formatter.get_num_samples_for_calibration()
test_samples = -1
if TFTuse_testing_mode:
train_samples, valid_samples,test_samples = 100, 10, 100
myTFTReader = TFTdatasetup()
train_data = myTFTReader(train, "train", num_samples=train_samples)
val_data = None
if valid_samples > 0:
val_data = myTFTReader(valid, "valid", num_samples=valid_samples)
test_data = myTFTReader(test, "test", num_samples=test_samples)
return train_data, val_data, test_data
StopWatch.start("data head setup")
TFTtrain_datacollection, TFTval_datacollection, TFTtest_datacollection = dothedatasetup()
StopWatch.stop("data head setup")
TFToutput_map = None # holder for final output
###Output
_____no_output_____
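###Markdown
A quick sanity check on the batching above is to pull the cached training sample back out of TFTDataCache and print the array shapes; this sketch only assumes the previous cell has already run so that the "train" key exists in the cache.
###Code
# Inspect the cached training batch produced by TFTdatasetup (assumes the previous cell has run)
if TFTDataCache.contains("train"):
    cached_train = TFTDataCache.get("train")
    for cachekey in ['inputs', 'outputs', 'active_entries', 'time', 'identifier']:
        print(cachekey, cached_train[cachekey].shape)
###Output
_____no_output_____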
###Markdown
Predict TFT
###Code
class TFTSaveandInterpret:
def __init__(self, currentTFTmodel, currentoutput_map, ReshapedPredictionsTOT):
# output_map is a dictionary pointing to dataframes
# output_map["targets"]) targets are called outputs on input
# output_map["p10"] is p10 quantile forecast
        # output_map["p50"] is p50 quantile forecast
        # output_map["p90"] is p90 quantile forecast
# Labelled by last real time in sequence (t-1) which starts at time Tseq-1 going up to Num_Time-1
# order of Dataframe columns is 'forecast_time', 'identifier',
#'t+0-Obs0', 't+0-Obs1', 't+1-Obs0', 't+1-Obs1', 't+2-Obs0', 't+2-Obs1', 't+3-Obs0', 't+3-Obs1',
#'t+4-Obs0', 't+4-Obs1', 't+5-Obs0', 't+5-Obs1', 't+6-Obs0', 't+6-Obs1', 't+7-Obs0', 't+7-Obs1',
#'t+8-Obs0', 't+8-Obs1', 't+9-Obs0', 't+9-Obs1', 't+10-Obs0', 't+10-Obs1', 't+11-Obs0', 't+11-Obs1',
#'t+12-Obs0', 't+12-Obs1', 't+13-Obs0', 't+13-Obs1', 't+14-Obs0', 't+14-Obs1''
# First time is FFFFWNPF Sequence # + Tseq-1
# Rows of data frame are ilocation*(Num_Seq+1) + FFFFWNPF Sequence #
# ilocation runs from 0 ... Nloc-1 in same order in both TFT and FFFFWNPF
self.ScaledProperty = -1
self.Scaled = False
self.savedcolumn = []
self.currentoutput_map = currentoutput_map
self.currentTFTmodel = currentTFTmodel
Sizes = self.currentoutput_map[TFTQuantilenames[TFTPrimaryQuantileIndex]].shape
self.Numx = Sizes[0]
self.Numy = Sizes[1]
self.Num_Seq1 = 1 + Num_Seq
self.MaxTFTSeq = self.Num_Seq1-1
expectednumx = self.Num_Seq1*Nloc
if expectednumx != self.Numx:
printexit(' Wrong sizes of TFT compared to FFFFWNPF ' + str(expectednumx) + ' ' + str(self.Numx))
self.ReshapedPredictionsTOT = ReshapedPredictionsTOT
return
def setFFFFmapping(self):
self.FFFFWNPFresults = np.zeros((self.Numx, NpredperseqTOT,3), dtype=np.float32)
mapFFFFtoTFT = np.empty(Nloc, dtype = np.int32)
TFTLoc = self.currentoutput_map[TFTQuantilenames[TFTPrimaryQuantileIndex]]['identifier'].unique()
FFFFWNPFLocLookup = {}
for i,locname in enumerate(FFFFWNPFUniqueLabel):
FFFFWNPFLocLookup[locname] = i
TFTLocLookup = {}
for i,locname in enumerate(TFTLoc):
TFTLocLookup[locname] = i
if FFFFWNPFLocLookup[locname] is None:
printexit('Missing TFT Location '+locname)
for i,locname in enumerate(FFFFWNPFUniqueLabel):
j = TFTLocLookup[locname]
if j is None:
printexit('Missing FFFFWNPF Location '+ locname)
mapFFFFtoTFT[i] = j
indexposition = np.empty(NpredperseqTOT, dtype=int)
output_mapcolumns = self.currentoutput_map[TFTQuantilenames[TFTPrimaryQuantileIndex]].columns
numcols = len(output_mapcolumns)
for ipred in range(0, NpredperseqTOT):
predstatus = PredictionTFTAction[ipred]
if predstatus > 0:
indexposition[ipred]= -1
continue
label = PredictionTFTnamemapping[ipred]
if label == ' ':
indexposition[ipred]=ipred
else:
findpos = -1
for i in range(0,numcols):
if label == output_mapcolumns[i]:
findpos = i
if findpos < 0:
printexit('Missing Output ' +str(ipred) + ' ' +label)
indexposition[ipred] = findpos
for iquantile in range(0,myTFTTools.NumberQuantiles):
for ilocation in range(0,Nloc):
for seqnumber in range(0,self.Num_Seq1):
for ipred in range(0,NpredperseqTOT):
predstatus = PredictionTFTAction[ipred]
if predstatus > 0:
continue
label = PredictionTFTnamemapping[ipred]
if label == ' ': # NOT calculated by TFT
if seqnumber >= Num_Seq:
value = 0.0
else:
value = self.ReshapedPredictionsTOT[ilocation, seqnumber, ipred]
else:
ActualTFTSeq = seqnumber
if ActualTFTSeq <= self.MaxTFTSeq:
ipos = indexposition[ipred]
dfindex = self.Num_Seq1*mapFFFFtoTFT[ilocation] + ActualTFTSeq
value = self.currentoutput_map[TFTQuantilenames[iquantile]].iloc[dfindex,ipos]
else:
dfindex = self.Num_Seq1*mapFFFFtoTFT[ilocation] + self.MaxTFTSeq
ifuture = int(ipred/FFFFWNPFNumberTargets)
jfuture = ActualTFTSeq - self.MaxTFTSeq + ifuture
if jfuture <= LengthFutures:
jpred = ipred + (jfuture-ifuture)*FFFFWNPFNumberTargets
value = self.currentoutput_map[TFTQuantilenames[iquantile]].iloc[dfindex,indexposition[jpred]]
else:
value = 0.0
FFFFdfindex = self.Num_Seq1*ilocation + seqnumber
self.FFFFWNPFresults[FFFFdfindex,ipred,iquantile] = value
# Set Calculated Quantities as previous ipred loop has set base values
for ipred in range(0,NpredperseqTOT):
predstatus = PredictionTFTAction[ipred]
if predstatus <= 0:
continue
Basedonprediction = CalculatedPredmaptoRaw[ipred]
predaveragevaluespointer = PredictionAverageValuesPointer[Basedonprediction]
rootflag = QuantityTakeroot[predaveragevaluespointer]
rawdata = np.empty(PredictionCalcLength[ipred],dtype =np.float32)
ActualTFTSeq = seqnumber
if ActualTFTSeq <= self.MaxTFTSeq:
for ifuture in range(0,PredictionCalcLength[ipred]):
if ifuture == 0:
kpred = Basedonprediction
else:
jfuture = NumpredbasicperTime + NumpredFuturedperTime*(ifuture-1)
kpred = jfuture + FuturedPointer[Basedonprediction]
if predstatus == 3:
newvalue = self.ReshapedPredictionsTOT[ilocation, ActualTFTSeq, kpred]/ QuantityStatistics[predaveragevaluespointer,2] + QuantityStatistics[predaveragevaluespointer,0]
else:
kpos = indexposition[kpred]
dfindex = self.Num_Seq1*mapFFFFtoTFT[ilocation] + ActualTFTSeq
newvalue = self.currentoutput_map[TFTQuantilenames[iquantile]].iloc[dfindex,kpos] / QuantityStatistics[predaveragevaluespointer,2] + QuantityStatistics[predaveragevaluespointer,0]
if rootflag == 2:
newvalue = newvalue**2
if rootflag == 3:
newvalue = newvalue**3
rawdata[ifuture] = newvalue
# Form collective quantity
if predstatus == 1:
value = rawdata.sum()
elif predstatus >= 2:
value = log_energy(rawdata, sumaxis=0)
else:
value = 0.0
value = SetTakeroot(value,QuantityTakeroot[ipred])
actualpredaveragevaluespointer = PredictionAverageValuesPointer[ipred]
value = (value-QuantityStatistics[actualpredaveragevaluespointer,0])*QuantityStatistics[actualpredaveragevaluespointer,2]
else: # Sequence out of range
value = 0.0
FFFFdfindex = self.Num_Seq1*ilocation + seqnumber
self.FFFFWNPFresults[FFFFdfindex,ipred,iquantile] = value
return
# Default returns the median (50% quantile)
def __call__(self, InputVector, Time= None, training = False, Quantile = None):
lenvector = InputVector.shape[0]
result = np.empty((lenvector,NpredperseqTOT), dtype=np.float32)
if Quantile is None:
Quantile = TFTPrimaryQuantileIndex
for ivector in range(0,lenvector):
dfindex = self.Num_Seq1*InputVector[ivector,0] + InputVector[ivector,1]
result[ivector,:] = self.FFFFWNPFresults[dfindex, :, Quantile]
return result
def CheckProperty(self, iprop):
# Return true if property defined for TFT
# set ScaledProperty to be column to be changed
if (iprop < 0) or (iprop >= NpropperseqTOT):
return False
jprop = TFTPropertyChoice[iprop]
if jprop >= 0:
return True
return False
def SetupProperty(self, iprop):
if self.Scaled:
self.ResetProperty()
if (iprop < 0) or (iprop >= NpropperseqTOT):
return False
jprop = TFTPropertyChoice[iprop]
if jprop >= 0:
self.ScaledProperty = jprop
self.savedcolumn = TFTdfTotal.iloc[:,jprop].copy()
return True
return False
def ScaleProperty(self, ScalingFactor):
jprop = self.ScaledProperty
TFTdfTotal.iloc[:,jprop] = ScalingFactor*self.savedcolumn
self.Scaled = True
return
def ResetProperty(self):
jprop = self.ScaledProperty
if jprop >= 0:
TFTdfTotal.iloc[:,jprop] = self.savedcolumn
self.Scaled = False
self.ScaledProperty = -1
return
# XXX Check MakeMapping
def MakeMapping(self):
best_params = TFTopt_manager.get_best_params()
TFTmodelnew = ModelClass(best_params, TFTdfTotal = TFTdfTotal, use_cudnn=use_tensorflow_with_gpu)
TFTmodelnew.load(TFTopt_manager.hyperparam_folder)
self.currentoutput_map = TFTmodelnew.predict(TFTdfTotal, return_targets=False)
self.setFFFFmapping()
return
###Output
_____no_output_____
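###Markdown
The flat indexing used throughout TFTSaveandInterpret is worth restating: results are stored at dfindex = ilocation*(Num_Seq+1) + seqnumber, so FFFFWNPFresults[dfindex, ipred, iquantile] holds prediction ipred for that location, sequence and quantile, and __call__ recomputes the same index from the (location, sequence) pairs passed in InputVector.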
###Markdown
Visualize TFT (called from finalizeDL)
###Code
def VisualizeTFT(TFTmodel, output_map):
MyFFFFWNPFLink = TFTSaveandInterpret(TFTmodel, output_map, ReshapedPredictionsTOT)
MyFFFFWNPFLink.setFFFFmapping()
modelflag = 2
FitPredictions = DLprediction(ReshapedSequencesTOT, RawInputPredictionsTOT, MyFFFFWNPFLink, modelflag, LabelFit ='TFT')
# Input Predictions RawInputPredictionsTOT for DLPrediction are ordered Sequence #, Location but
# Input Predictions ReshapedPredictionsTOT for TFTSaveandInterpret are ordered Location, Sequence#
# Note TFT maximum Sequence # is one larger than FFFFWNPF
###Output
_____no_output_____
###Markdown
TFT Routines: GLUplusskip, a Gated Linear Unit followed by add-and-norm with a skip connection
###Code
# GLU with time distribution optional
# Dropout on input dropout_rate
# Linear layer with hidden_layer_size and activation
# Linear layer with hidden_layer_size and sigmoid
# Follow with an add and norm
class GLUplusskip(tf.keras.layers.Layer):
def __init__(self, hidden_layer_size,
dropout_rate=None,
use_time_distributed=True,
activation=None,
GLUname = 'Default',
**kwargs):
"""Applies a Gated Linear Unit (GLU) to an input.
Follow with an add and norm
Args:
hidden_layer_size: Dimension of GLU
dropout_rate: Dropout rate to apply if any
use_time_distributed: Whether to apply across time (index 1)
activation: Activation function to apply to the linear feature transform if necessary
Returns:
Tuple of tensors for: (GLU output, gate)
"""
super(GLUplusskip, self).__init__(**kwargs)
self.Gatehidden_layer_size = hidden_layer_size
self.Gatedropout_rate = dropout_rate
self.Gateuse_time_distributed = use_time_distributed
self.Gateactivation = activation
if self.Gatedropout_rate is not None:
n1 = 'GLUSkip' + 'dropout' + GLUname
self.FirstDropout = tf.keras.layers.Dropout(self.Gatedropout_rate, name = n1)
n3 = 'GLUSkip' + 'DenseAct1' + GLUname
n5 = 'GLUSkip' + 'DenseAct2' + GLUname
if self.Gateuse_time_distributed:
n2 = 'GLUSkip' + 'TD1' + GLUname
self.Gateactivation_layer = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(self.Gatehidden_layer_size, activation=self.Gateactivation, name=n3), name=n2)
n4 = 'GLUSkip' + 'TD2' + GLUname
self.Gategated_layer = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(self.Gatehidden_layer_size, activation='sigmoid', name=n5), name=n4)
else:
self.Gateactivation_layer = tf.keras.layers.Dense(self.Gatehidden_layer_size, activation=self.Gateactivation, name=n3)
self.Gategated_layer = tf.keras.layers.Dense(self.Gatehidden_layer_size, activation='sigmoid', name=n5)
n6 = 'GLUSkip' + 'Mul' + GLUname
self.GateMultiply = tf.keras.layers.Multiply(name = n6)
n7 = 'GLUSkip'+ 'Add' + GLUname
n8 = 'GLUSkip' + 'Norm' + GLUname
self.GateAdd = tf.keras.layers.Add(name = n7)
self.GateNormalization = tf.keras.layers.LayerNormalization(name = n8)
#@tf.function
def call(self, Gateinput, Skipinput, training=None):
# Args:
# Gateinput: Input to gating layer
# Skipinput: Input to add and norm
if self.Gatedropout_rate is not None:
x = self.FirstDropout(Gateinput)
else:
x = Gateinput
activation_layer = self.Gateactivation_layer(x)
gated_layer = self.Gategated_layer(x)
# Formal end of GLU
GLUoutput = self.GateMultiply([activation_layer, gated_layer])
# Applies skip connection followed by layer normalisation to get GluSkip.
GLUSkipoutput = self.GateAdd([Skipinput,GLUoutput])
GLUSkipoutput = self.GateNormalization(GLUSkipoutput)
return GLUSkipoutput,gated_layer
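
# Minimal usage sketch (illustrative only, not from the original TFT code): exercise
# GLUplusskip on dummy tensors to show that the add-and-norm output and the GLU gate
# both keep the [batch, time, hidden_layer_size] shape. All demo_* names and shapes
# below are assumptions made for this sketch.
import tensorflow as tf
demo_glu = GLUplusskip(hidden_layer_size=8, dropout_rate=0.1, use_time_distributed=True, GLUname='DemoGLU')
demo_gatein = tf.random.normal([4, 5, 8]) # input to the gating branch
demo_skipin = tf.random.normal([4, 5, 8]) # residual input to the add-and-norm
demo_out, demo_gate = demo_glu(demo_gatein, demo_skipin)
print(demo_out.shape, demo_gate.shape) # (4, 5, 8) (4, 5, 8)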
###Output
_____no_output_____
###Markdown
Linear Layer (Dense)
###Code
# Layer utility functions.
# Single layer size activation with bias and time distribution optional
def TFTlinear_layer(size,
activation=None,
use_time_distributed=False,
use_bias=True,
LLname = 'Default'):
"""Returns simple Keras linear layer.
Args:
size: Output size
activation: Activation function to apply if required
use_time_distributed: Whether to apply layer across time
use_bias: Whether bias should be included in layer
"""
n1 = 'LL'+'Dense'+LLname
linear = tf.keras.layers.Dense(size, activation=activation, use_bias=use_bias,name=n1)
if use_time_distributed:
n2 = 'LL'+'TD'+LLname
linear = tf.keras.layers.TimeDistributed(linear,name=n2)
return linear
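
# Minimal usage sketch (illustrative only): TFTlinear_layer returns either a plain Dense
# layer or a TimeDistributed(Dense) wrapper; both map the last dimension to `size`.
# The demo_* names and shapes are assumptions made for this sketch.
import tensorflow as tf
demo_plain = TFTlinear_layer(4, activation='elu', use_time_distributed=False, LLname='DemoPlain')
demo_td = TFTlinear_layer(4, use_time_distributed=True, LLname='DemoTD')
demo_in = tf.random.normal([2, 6, 8])
print(demo_plain(demo_in).shape, demo_td(demo_in).shape) # (2, 6, 4) (2, 6, 4)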
###Output
_____no_output_____
###Markdown
Apply MLP
###Code
class apply_mlp(tf.keras.layers.Layer):
def __init__(self, hidden_layer_size, output_size, output_activation=None, hidden_activation='tanh', use_time_distributed=False, MLPname='Default', **kwargs):
"""Applies simple feed-forward network to an input.
Args:
hidden_layer_size: Hidden state size
output_size: Output size of MLP
output_activation: Activation function to apply on output
hidden_activation: Activation function to apply on input
use_time_distributed: Whether to apply across time
Returns:
Tensor for MLP outputs.
"""
super(apply_mlp, self).__init__(**kwargs)
self.MLPhidden_layer_size = hidden_layer_size
self.MLPoutput_size = output_size
self.MLPoutput_activation = output_activation
self.MLPhidden_activation = hidden_activation
self.MLPuse_time_distributed = use_time_distributed
n1 = 'MLPDense1' + MLPname
n2 = 'MLPDense2' + MLPname
if self.MLPuse_time_distributed:
n3 = 'MLPTD1' + MLPname
n4 = 'MLPTD2' + MLPname
self.MLPFirstLayer = tf.keras.layers.TimeDistributed(
tf.keras.layers.Dense(self.MLPhidden_layer_size, activation=self.MLPhidden_activation, name = n1), name = n3)
self.MLPSecondLayer = tf.keras.layers.TimeDistributed(
tf.keras.layers.Dense(self.MLPoutput_size, activation=self.MLPoutput_activation, name = n2),name = n4)
else:
self.MLPFirstLayer = tf.keras.layers.Dense(self.MLPhidden_layer_size, activation=self.MLPhidden_activation, name = n1)
self.MLPSecondLayer = tf.keras.layers.Dense(self.MLPoutput_size, activation=self.MLPoutput_activation, name = n2)
#@tf.function
def call(self, inputs):
# inputs: MLP inputs
hidden = self.MLPFirstLayer(inputs)
return self.MLPSecondLayer(hidden)
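
# Minimal usage sketch (illustrative only): a two-layer MLP applied across time maps
# [batch, time, features] to [batch, time, output_size]. demo_* names are assumptions.
import tensorflow as tf
demo_mlp = apply_mlp(hidden_layer_size=16, output_size=3, hidden_activation='selu', use_time_distributed=True, MLPname='DemoMLP')
print(demo_mlp(tf.random.normal([2, 6, 8])).shape) # (2, 6, 3)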
###Output
_____no_output_____
###Markdown
GRN Gated Residual Network
###Code
# GRN Gated Residual Network
class GRN(tf.keras.layers.Layer):
def __init__(self, hidden_layer_size, output_size=None, dropout_rate=None,
use_additionalcontext = False, use_time_distributed=True, GRNname='Default', **kwargs):
"""Applies the gated residual network (GRN) as defined in paper.
Args:
hidden_layer_size: Internal state size
output_size: Size of output layer
dropout_rate: Dropout rate if dropout is applied
use_time_distributed: Whether to apply network across time dimension
Returns:
Tuple of tensors for: (GRN output, GLU gate)
"""
super(GRN, self).__init__(**kwargs)
self.GRNhidden_layer_size = hidden_layer_size
self.GRNoutput_size = output_size
if self.GRNoutput_size is None:
self.GRNusedoutput_size = self.GRNhidden_layer_size
else:
self.GRNusedoutput_size = self.GRNoutput_size
self.GRNdropout_rate = dropout_rate
self.GRNuse_time_distributed = use_time_distributed
self.use_additionalcontext = use_additionalcontext
if self.GRNoutput_size is not None:
n1 = 'GRN'+'Dense4' + GRNname
if self.GRNuse_time_distributed:
n2 = 'GRN'+'TD4' + GRNname
self.GRNDense4 = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(self.GRNusedoutput_size,name=n1),name=n2)
else:
self.GRNDense4 = tf.keras.layers.Dense(self.GRNusedoutput_size,name=n1)
n3 = 'GRNDense1' + GRNname
self.GRNDense1 = TFTlinear_layer(
self.GRNhidden_layer_size,
activation=None,
use_time_distributed=self.GRNuse_time_distributed,
LLname=n3)
if self.use_additionalcontext:
n4 = 'GRNDense2' + GRNname
self.GRNDense2= TFTlinear_layer(
self.GRNhidden_layer_size,
activation=None,
use_time_distributed=self.GRNuse_time_distributed,
use_bias=False,
LLname=n4)
n5 = 'GRNAct' + GRNname
self.GRNActivation = tf.keras.layers.Activation('elu',name=n5)
n6 = 'GRNDense3' + GRNname
self.GRNDense3 = TFTlinear_layer(
self.GRNhidden_layer_size,
activation=None,
use_time_distributed=self.GRNuse_time_distributed,
LLname =n6)
n7 = 'GRNGLU' + GRNname
self.GRNGLUplusskip = GLUplusskip(hidden_layer_size = self.GRNusedoutput_size, dropout_rate=self.GRNdropout_rate,
use_time_distributed= self.GRNuse_time_distributed, GLUname=n7)
#@tf.function
def call(self, x, additional_context=None, return_gate=False, training=None):
"""Args:
x: Network inputs
additional_context: Additional context vector to use if relevant
return_gate: Whether to return GLU gate for diagnostic purposes
"""
# Setup skip connection of given size
if self.GRNoutput_size is None:
skip = x
else:
skip = self.GRNDense4(x)
# Apply feedforward network
hidden = self.GRNDense1(x)
if additional_context is not None:
if not self.use_additionalcontext:
printexit('Inconsistent context in GRN')
hidden = hidden + self.GRNDense2(additional_context)
else:
if self.use_additionalcontext:
printexit('Inconsistent context in GRN')
hidden = self.GRNActivation(hidden)
hidden = self.GRNDense3(hidden)
gating_layer, gate = self.GRNGLUplusskip(hidden,skip)
if return_gate:
return gating_layer, gate
else:
return gating_layer
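
# Minimal usage sketch (illustrative only): a GRN that maps the feature width from 8 to 4
# and optionally returns its GLU gate. demo_* names and shapes are assumptions.
import tensorflow as tf
demo_grn = GRN(hidden_layer_size=8, output_size=4, dropout_rate=0.1, use_time_distributed=True, GRNname='DemoGRN')
demo_grnout, demo_grngate = demo_grn(tf.random.normal([2, 6, 8]), return_gate=True)
print(demo_grnout.shape, demo_grngate.shape) # (2, 6, 4) (2, 6, 4)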
###Output
_____no_output_____
###Markdown
Process Static Variables
###Code
# Process Static inputs in TFT Style
# TFTScaledStaticInputs[Location,0...NumTrueStaticVariables]
class ProcessStaticInput(tf.keras.layers.Layer):
def __init__(self, hidden_layer_size, dropout_rate, num_staticproperties, **kwargs):
super(ProcessStaticInput, self).__init__(**kwargs)
self.hidden_layer_size = hidden_layer_size
self.num_staticproperties = num_staticproperties
self.dropout_rate = dropout_rate
n4 = 'ProcStaticFlat'
self.Flatten = tf.keras.layers.Flatten(name=n4)
n5 = 'ProcStaticG1'
n7 = 'ProcStaticSoftmax'
n8 = 'ProcStaticMul'
self.StaticInputGRN1 = GRN(self.hidden_layer_size, dropout_rate=self.dropout_rate,
output_size=self.num_staticproperties, use_time_distributed=False, GRNname=n5)
self.StaticInputGRN2 = []
for i in range(0,self.num_staticproperties):
n6 = 'ProcStaticG2-'+str(i)
self.StaticInputGRN2.append(GRN(self.hidden_layer_size, dropout_rate=self.dropout_rate,
use_time_distributed=False, GRNname = n6))
self.StaticInputsoftmax = tf.keras.layers.Activation('softmax', name= n7)
self.StaticMultiply = tf.keras.layers.Multiply(name = n8)
#@tf.function
def call(self, static_inputs, training=None):
# Embed Static Inputs
num_static = static_inputs.shape[1]
if num_static != self.num_staticproperties:
printexit('Incorrect number of static variables')
if num_static == 0:
return None, None
# static_inputs is [Batch, Static variable, TFTd_model] converted to
# flatten is [Batch, Static variable*TFTd_model]
flatten = self.Flatten(static_inputs)
# Nonlinear transformation with gated residual network.
mlp_outputs = self.StaticInputGRN1(flatten)
sparse_weights = self.StaticInputsoftmax(mlp_outputs)
sparse_weights = tf.expand_dims(sparse_weights, axis=-1)
trans_emb_list = []
for i in range(num_static):
e = self.StaticInputGRN2[i](static_inputs[:,i:i+1,:])
trans_emb_list.append(e)
transformed_embedding = tf.concat(trans_emb_list, axis=1)
combined = self.StaticMultiply([sparse_weights, transformed_embedding])
static_encoder = tf.math.reduce_sum(combined, axis=1)
return static_encoder, sparse_weights
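
# Minimal usage sketch (illustrative only): static variable selection on dummy embedded
# statics of shape [batch, static variable, hidden_layer_size]; it returns the weighted
# static encoder and the per-variable selection weights. demo_* values are assumptions.
import tensorflow as tf
demo_static = ProcessStaticInput(hidden_layer_size=8, dropout_rate=0.1, num_staticproperties=3)
demo_encoder, demo_weights = demo_static(tf.random.normal([4, 3, 8]))
print(demo_encoder.shape, demo_weights.shape) # (4, 8) (4, 3, 1)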
###Output
_____no_output_____
###Markdown
Process Dynamic Variables
###Code
# Process Initial Dynamic inputs in TFT Style
# ScaledDynamicInputs[Location, time_steps,0...NumDynamicVariables]
class ProcessDynamicInput(tf.keras.layers.Layer):
def __init__(self, hidden_layer_size, dropout_rate, NumDynamicVariables, PDIname='Default', **kwargs):
super(ProcessDynamicInput, self).__init__(**kwargs)
self.hidden_layer_size = hidden_layer_size
self.NumDynamicVariables = NumDynamicVariables
self.dropout_rate = dropout_rate
n6 = PDIname + 'ProcDynG1'
n8 = PDIname + 'ProcDynSoftmax'
n9 = PDIname + 'ProcDynMul'
self.DynamicVariablesGRN1 = GRN(self.hidden_layer_size, dropout_rate=self.dropout_rate,
output_size=self.NumDynamicVariables, use_additionalcontext = True, use_time_distributed=True, GRNname = n6)
self.DynamicVariablesGRN2 = []
for i in range(0,self.NumDynamicVariables):
n7 = PDIname + 'ProcDynG2-'+str(i)
self.DynamicVariablesGRN2.append(GRN(self.hidden_layer_size, dropout_rate=self.dropout_rate,
use_additionalcontext = False, use_time_distributed=True, GRNname = n7))
self.DynamicVariablessoftmax = tf.keras.layers.Activation('softmax', name = n8)
self.DynamicVariablesMultiply = tf.keras.layers.Multiply(name = n9)
#@tf.function
def call(self, dynamic_variables, static_context_variable_selection=None, training=None):
# Add time window index to static context
if static_context_variable_selection is None:
self.expanded_static_context = None
else:
self.expanded_static_context = tf.expand_dims(static_context_variable_selection, axis=1)
# Test Dynamic Variables
num_dynamic = dynamic_variables.shape[-1]
if num_dynamic != self.NumDynamicVariables:
printexit('Incorrect number of Dynamic Inputs ' + str(num_dynamic) + ' ' + str(self.NumDynamicVariables))
if num_dynamic == 0:
return None, None, None
# dynamic_variables is [Batch, Time window index, TFTd_model, Dynamic variable] converted to
# flatten is [Batch, Time window index, TFTd_model*Dynamic variable]
_,time_steps,embedding_dimension,num_inputs = dynamic_variables.get_shape().as_list()
flatten = tf.reshape(dynamic_variables, [-1,time_steps,embedding_dimension * num_inputs])
# Nonlinear transformation with gated residual network.
mlp_outputs, static_gate = self.DynamicVariablesGRN1(flatten, additional_context=self.expanded_static_context, return_gate=True)
sparse_weights = self.DynamicVariablessoftmax(mlp_outputs)
sparse_weights = tf.expand_dims(sparse_weights, axis=2)
trans_emb_list = []
for i in range(num_dynamic):
e = self.DynamicVariablesGRN2[i](dynamic_variables[Ellipsis,i], additional_context=None)
trans_emb_list.append(e)
transformed_embedding = tf.stack(trans_emb_list, axis=-1)
combined = self.DynamicVariablesMultiply([sparse_weights, transformed_embedding])
temporal_ctx = tf.math.reduce_sum(combined, axis=-1)
return temporal_ctx, sparse_weights, static_gate
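
# Minimal usage sketch (illustrative only): dynamic variable selection on dummy embedded
# inputs of shape [batch, time, hidden_layer_size, dynamic variable] with a dummy static
# context of shape [batch, hidden_layer_size]. demo_* values are assumptions.
import tensorflow as tf
demo_dyn = ProcessDynamicInput(hidden_layer_size=8, dropout_rate=0.1, NumDynamicVariables=4, PDIname='Demo')
demo_ctx, demo_dynweights, demo_staticgate = demo_dyn(tf.random.normal([2, 5, 8, 4]), static_context_variable_selection=tf.random.normal([2, 8]))
print(demo_ctx.shape, demo_dynweights.shape) # (2, 5, 8) (2, 5, 1, 4)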
###Output
_____no_output_____
###Markdown
TFT LSTM
###Code
class TFTLSTMLayer(tf.keras.Model):
# Class for TFT Encoder multiple layer LSTM with possible FCN at start and end
# All parameters defined externally
def __init__(self, TFTLSTMSecondLayer, TFTLSTMThirdLayer,
TFTLSTMInitialMLP, TFTLSTMFinalMLP,
TFTnumber_LSTMnodes, TFTLSTMd_model,
TFTLSTMactivationvalue, TFTLSTMrecurrent_activation,
TFTLSTMdropout1, TFTLSTMrecurrent_dropout1,
TFTreturn_state, LSTMname='Default', **kwargs):
super(TFTLSTMLayer, self).__init__(**kwargs)
self.TFTLSTMSecondLayer = TFTLSTMSecondLayer
self.TFTLSTMThirdLayer = TFTLSTMThirdLayer
self.TFTLSTMInitialMLP = TFTLSTMInitialMLP
self.TFTLSTMFinalMLP = TFTLSTMFinalMLP
self.TFTLSTMd_model = TFTLSTMd_model
self.TFTnumber_LSTMnodes = TFTnumber_LSTMnodes
self.TFTLSTMactivationvalue = TFTLSTMactivationvalue
self.TFTLSTMdropout1 = TFTLSTMdropout1
self.TFTLSTMrecurrent_dropout1 = TFTLSTMrecurrent_dropout1
self.TFTLSTMrecurrent_activation = TFTLSTMrecurrent_activation
self.TFTLSTMreturn_state = TFTreturn_state
self.first_return_state = self.TFTLSTMreturn_state
if self.TFTLSTMSecondLayer:
self.first_return_state = True
self.second_return_state = self.TFTLSTMreturn_state
if self.TFTLSTMThirdLayer:
self.second_return_state = True
self.third_return_state = self.TFTLSTMreturn_state
if self.TFTLSTMInitialMLP > 0:
n1= LSTMname +'LSTMDense1'
self.dense_1 = tf.keras.layers.Dense(self.TFTLSTMInitialMLP, activation=self.TFTLSTMactivationvalue, name =n1)
n2= LSTMname +'LSTMLayer1'
if myTFTTools.TFTuseCUDALSTM:
self.LSTM_1 = tf.compat.v1.keras.layers.CuDNNLSTM(
self.TFTnumber_LSTMnodes,
return_sequences=True,
return_state=self.first_return_state,
stateful=False, name=n2)
else:
self.LSTM_1 =tf.keras.layers.LSTM(self.TFTnumber_LSTMnodes, recurrent_dropout= self.TFTLSTMrecurrent_dropout1, dropout = self.TFTLSTMdropout1,
return_state = self.first_return_state, activation= self.TFTLSTMactivationvalue , return_sequences=True,
recurrent_activation= self.TFTLSTMrecurrent_activation, name=n2)
if self.TFTLSTMSecondLayer:
n3= LSTMname +'LSTMLayer2'
if myTFTTools.TFTuseCUDALSTM:
self.LSTM_2 = tf.compat.v1.keras.layers.CuDNNLSTM(
self.TFTnumber_LSTMnodes,
return_sequences=True,
return_state=self.second_return_state,
stateful=False, name=n3)
else:
self.LSTM_2 =tf.keras.layers.LSTM(self.TFTnumber_LSTMnodes, recurrent_dropout= self.TFTLSTMrecurrent_dropout1, dropout = self.TFTLSTMdropout1,
return_state = self.second_return_state, activation= self.TFTLSTMactivationvalue , return_sequences=True,
recurrent_activation= self.TFTLSTMrecurrent_activation, name=n3)
if self.TFTLSTMThirdLayer:
n4= LSTMname +'LSTMLayer3'
if myTFTTools.TFTuseCUDALSTM:
self.LSTM_3 = tf.compat.v1.keras.layers.CuDNNLSTM(
self.TFTnumber_LSTMnodes,
return_sequences=True,
return_state=self.third_return_state,
stateful=False, name=n4)
else:
self.LSTM_3 =tf.keras.layers.LSTM(self.TFTnumber_LSTMnodes, recurrent_dropout= self.TFTLSTMrecurrent_dropout1, dropout = self.TFTLSTMdropout1,
return_state = self.third_return_state, activation= self.TFTLSTMactivationvalue ,
return_sequences=True, recurrent_activation= self.TFTLSTMrecurrent_activation, name=n4)
if self.TFTLSTMFinalMLP > 0:
n5= LSTMname +'LSTMDense2'
n6= LSTMname +'LSTMDense3'
self.dense_2 = tf.keras.layers.Dense(self.TFTLSTMFinalMLP, activation=self.TFTLSTMactivationvalue, name=n5)
self.dense_f = tf.keras.layers.Dense(self.TFTLSTMd_model, name= n6)
#@tf.function
def call(self, inputs, initial_state = None, training=None):
if initial_state is None:
printexit(' Missing context in LSTM ALL')
if initial_state[0] is None:
printexit(' Missing context in LSTM h')
if initial_state[1] is None:
printexit(' Missing context in LSTM c')
returnstate_h = None
returnstate_c = None
if self.TFTLSTMInitialMLP > 0:
Runningdata = self.dense_1(inputs)
else:
Runningdata = inputs
if self.first_return_state:
Runningdata, returnstate_h, returnstate_c = self.LSTM_1(Runningdata, training=training, initial_state=initial_state)
if returnstate_h is None:
printexit('Missing context in LSTM returnstate_h')
if returnstate_c is None:
printexit('Missing context in LSTM returnstate_c')
else:
Runningdata = self.LSTM_1(Runningdata, training=training, initial_state=initial_state)
if self.TFTLSTMSecondLayer:
initial_statehc2 = None
if self.first_return_state:
initial_statehc2 = [returnstate_h, returnstate_c]
if self.second_return_state:
Runningdata, returnstate_h, returnstate_c = self.LSTM_2(Runningdata, training=training, initial_state=initial_statehc2)
if returnstate_h is None:
printexit('Missing context in LSTM returnstate_h2')
if returnstate_c is None:
printexit('Missing context in LSTM returnstate_c2')
else:
Runningdata = self.LSTM_2(Runningdata, training=training, initial_state=initial_statehc2)
if self.TFTLSTMThirdLayer:
initial_statehc3 = None
if self.first_return_state:
initial_statehc3 = [returnstate_h, returnstate_c]
if self.third_return_state:
Runningdata, returnstate_h, returnstate_c = self.LSTM_3(Runningdata, training=training, initial_state=initial_statehc3)
else:
Runningdata = self.LSTM_3(Runningdata, training=training, initial_state=initial_statehc3)
if self.TFTLSTMFinalMLP > 0:
Runningdata = self.dense_2(Runningdata)
Outputdata = self.dense_f(Runningdata)
else:
Outputdata = Runningdata
if self.TFTLSTMreturn_state:
return Outputdata, returnstate_h, returnstate_c
else:
return Outputdata
def build_graph(self, shapes):
input = tf.keras.layers.Input(shape=shapes, name="Input")
return tf.keras.models.Model(inputs=[input], outputs=[self.call(input)])
###Output
_____no_output_____
###Markdown
TFT Multihead Temporal Attention
###Code
# Attention Components.
#@tf.function
def TFTget_decoder_mask(self_attn_inputs):
"""Returns causal mask to apply for self-attention layer.
Args:
self_attn_inputs: Inputs to self attention layer to determine mask shape
"""
len_s = tf.shape(self_attn_inputs)[1]
bs = tf.shape(self_attn_inputs)[:1]
mask = tf.math.cumsum(tf.eye(len_s, batch_shape=bs), 1)
return mask
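
# Minimal sketch (illustrative only): the decoder mask is lower-triangular over the time
# axis, so window position t can only attend to positions <= t.
import tensorflow as tf
print(TFTget_decoder_mask(tf.zeros([1, 4, 8]))[0].numpy())
# [[1. 0. 0. 0.]
#  [1. 1. 0. 0.]
#  [1. 1. 1. 0.]
#  [1. 1. 1. 1.]]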
class TFTScaledDotProductAttention(tf.keras.Model):
"""Defines scaled dot product attention layer for TFT
Attributes:
dropout: Dropout rate to use
activation: Normalisation function for scaled dot product attention (e.g.
softmax by default)
"""
def __init__(self, attn_dropout=0.0, SPDAname='Default', **kwargs):
super(TFTScaledDotProductAttention, self).__init__(**kwargs)
n1 = SPDAname + 'SPDADropout'
n2 = SPDAname + 'SPDASoftmax'
n3 = SPDAname + 'SPDAAdd'
self.dropoutlayer = tf.keras.layers.Dropout(attn_dropout, name= n1)
self.activationlayer = tf.keras.layers.Activation('softmax', name= n2)
self.addlayer = tf.keras.layers.Add(name=n3)
#@tf.function
def call(self, q, k, v, mask):
"""Applies scaled dot product attention.
Args:
q: Queries
k: Keys
v: Values
mask: Optional mask -- masked positions are pushed to a large negative value before the softmax
Returns:
Tuple of (layer outputs, attention weights)
"""
temper = tf.sqrt(tf.cast(tf.shape(k)[-1], dtype='float32'))
attn = tf.keras.layers.Lambda(lambda x: tf.keras.backend.batch_dot(x[0], x[1], axes=[2, 2]) / temper)(
[q, k]) # shape=(batch, q, k)
if mask is not None:
mmask = tf.keras.layers.Lambda(lambda x: (-1e+9) * (1. - tf.cast(x, 'float32')))( mask) # push masked logits towards minus infinity
attn = self.addlayer([attn, mmask])
attn = self.activationlayer(attn)
attn = self.dropoutlayer(attn)
output = tf.keras.layers.Lambda(lambda x: tf.keras.backend.batch_dot(x[0], x[1]))([attn, v])
return output, attn
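
# Minimal usage sketch (illustrative only): masked scaled dot-product self-attention over
# a 5-step window with d_model = 8; the attention matrix is [batch, query, key].
# demo_* names are assumptions made for this sketch.
import tensorflow as tf
demo_sdpa = TFTScaledDotProductAttention(attn_dropout=0.0, SPDAname='Demo')
demo_q = tf.random.normal([2, 5, 8])
demo_sdpaout, demo_sdpaattn = demo_sdpa(demo_q, demo_q, demo_q, TFTget_decoder_mask(demo_q))
print(demo_sdpaout.shape, demo_sdpaattn.shape) # (2, 5, 8) (2, 5, 5)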
class TFTInterpretableMultiHeadAttention(tf.keras.Model):
"""Defines interpretable multi-head attention layer for time only.
Attributes:
n_head: Number of heads
d_k: Key/query dimensionality per head
d_v: Value dimensionality
dropout: Dropout rate to apply
qs_layers: List of queries across heads
ks_layers: List of keys across heads
vs_layers: List of values across heads
attention: Scaled dot product attention layer
w_o: Output weight matrix to project internal state to the original TFT
state size
"""
#@tf.function
def __init__(self, n_head, d_model, dropout, MHAname ='Default', **kwargs):
super(TFTInterpretableMultiHeadAttention, self).__init__(**kwargs)
"""Initialises layer.
Args:
n_head: Number of heads
d_model: TFT state dimensionality
dropout: Dropout discard rate
"""
self.n_head = n_head
self.d_k = self.d_v = d_model // n_head
self.d_model = d_model
self.dropout = dropout
self.qs_layers = []
self.ks_layers = []
self.vs_layers = []
# Use same value layer to facilitate interp
n3= MHAname + 'MHAV'
vs_layer = tf.keras.layers.Dense(self.d_v, use_bias=False,name= n3)
self.Dropoutlayer1 =[]
for i_head in range(n_head):
n1= MHAname + 'MHAQ' + str(i_head)
n2= MHAname + 'MHAK' + str(i_head)
self.qs_layers.append(tf.keras.layers.Dense(self.d_k, use_bias=False, name = n1))
self.ks_layers.append(tf.keras.layers.Dense(self.d_k, use_bias=False, name = n2))
self.vs_layers.append(vs_layer) # use same vs_layer
n4= MHAname + 'Dropout1-' + str(i_head)
self.Dropoutlayer1.append(tf.keras.layers.Dropout(self.dropout, name = n4))
self.attention = TFTScaledDotProductAttention(SPDAname = MHAname)
n5= MHAname + 'Dropout2'
n6= MHAname + 'w_olayer'
self.Dropoutlayer2 = tf.keras.layers.Dropout(self.dropout, name = n5)
self.w_olayer = tf.keras.layers.Dense(d_model, use_bias=False, name = n6)
#@tf.function
def call(self, q, k, v, mask=None):
"""Applies interpretable multihead attention.
Using T to denote the number of past + future time steps fed into the transformer.
Args:
q: Query tensor of shape=(?, T, d_model)
k: Key of shape=(?, T, d_model)
v: Values of shape=(?, T, d_model)
mask: Masking if required with shape=(?, T, T)
Returns:
Tuple of (layer outputs, attention weights)
"""
heads = []
attns = []
for i in range(self.n_head):
qs = self.qs_layers[i](q)
ks = self.ks_layers[i](k)
vs = self.vs_layers[i](v)
head, attn = self.attention(qs, ks, vs, mask)
head_dropout = self.Dropoutlayer1[i](head)
heads.append(head_dropout)
attns.append(attn)
head = tf.stack(heads) if self.n_head > 1 else heads[0]
attn = tf.stack(attns)
outputs = tf.math.reduce_mean(head, axis=0) if self.n_head > 1 else head
outputs = self.w_olayer(outputs)
outputs = self.Dropoutlayer2(outputs) # output dropout
return outputs, attn
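
# Minimal usage sketch (illustrative only): interpretable multi-head attention with 4 heads
# sharing one value projection over d_model = 8; the per-head attention weights come back
# stacked as [head, batch, query, key]. demo_* names are assumptions.
import tensorflow as tf
demo_mha = TFTInterpretableMultiHeadAttention(n_head=4, d_model=8, dropout=0.0, MHAname='Demo')
demo_mhain = tf.random.normal([2, 5, 8])
demo_mhaout, demo_mhaattn = demo_mha(demo_mhain, demo_mhain, demo_mhain, mask=TFTget_decoder_mask(demo_mhain))
print(demo_mhaout.shape, demo_mhaattn.shape) # (2, 5, 8) (4, 2, 5, 5)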
###Output
_____no_output_____
###Markdown
TFTFullNetwork
###Code
class TFTFullNetwork(tf.keras.Model):
def __init__(self, **kwargs):
super(TFTFullNetwork, self).__init__(**kwargs)
# XXX check TFTSeq TFTNloc UniqueLocations
self.TFTSeq = 0
self.TFTNloc = 0
self.UniqueLocations = []
self.hidden_layer_size = myTFTTools.hidden_layer_size
self.dropout_rate = myTFTTools.dropout_rate
self.num_heads = myTFTTools.num_heads
# New parameters in this TFT version
self.num_static = len(myTFTTools._static_input_loc)
self.num_categorical_variables = len(myTFTTools.category_counts)
self.NumDynamicHistoryVariables = myTFTTools.input_size - self.num_static # Note Future (targets) are also in history
self.num_regular_variables = myTFTTools.input_size - self.num_categorical_variables
self.NumDynamicFutureVariables = 0
for i in myTFTTools._known_regular_input_idx:
if i not in myTFTTools._static_input_loc:
self.NumDynamicFutureVariables += 1
for i in myTFTTools._known_categorical_input_idx:
if i + self.num_regular_variables not in myTFTTools._static_input_loc:
self.NumDynamicFutureVariables += 1
# Embed Categorical Variables
self.CatVariablesembeddings = []
for i in range(0,self.num_categorical_variables):
numcat = myTFTTools.category_counts[i]
n1 = 'CatEmbed-'+str(i)
n2 = n1 + 'Input ' + str(numcat)
n3 = n1 + 'Map'
n1 = n1 +'Seq'
embedding = tf.keras.Sequential([
tf.keras.layers.InputLayer([myTFTTools.time_steps],name=n2),
tf.keras.layers.Embedding(
numcat,
self.hidden_layer_size,
input_length=myTFTTools.time_steps,
dtype=tf.float32,name=n3)
],name=n1)
self.CatVariablesembeddings.append(embedding)
# Embed Static Variables
numstatic = 0
self.StaticInitialembeddings = []
for i in range(self.num_regular_variables):
if i in myTFTTools._static_input_loc:
n1 = 'StaticRegEmbed-'+str(numstatic)
embedding = tf.keras.layers.Dense(self.hidden_layer_size, name=n1)
self.StaticInitialembeddings.append(embedding)
numstatic += 1
# Embed Targets _input_obs_loc - also included as part of Observed inputs
self.convert_obs_inputs = []
num_obs_inputs = 0
for i in myTFTTools._input_obs_loc:
n1 = 'OBSINPEmbed-Dense-'+str(num_obs_inputs)
n2 = 'OBSINPEmbed-Time-'+str(num_obs_inputs)
embedding = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(self.hidden_layer_size,name=n1), name=n2)
num_obs_inputs += 1
self.convert_obs_inputs.append(embedding)
# Embed unknown_inputs which are elsewhere called observed inputs
self.convert_unknown_inputs = []
num_unknown_inputs = 0
for i in range(self.num_regular_variables):
if i not in myTFTTools._known_regular_input_idx and i not in myTFTTools._input_obs_loc:
n1 = 'UNKINPEmbed-Dense-'+str(num_unknown_inputs)
n2 = 'UNKINPEmbed-Time-'+str(num_unknown_inputs)
embedding = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(self.hidden_layer_size,name=n1), name=n2)
num_unknown_inputs += 1
self.convert_unknown_inputs.append(embedding)
# Embed Known Inputs
self.convert_known_regular_inputs = []
num_known_regular_inputs = 0
for i in myTFTTools._known_regular_input_idx:
if i not in myTFTTools._static_input_loc:
n1 = 'KnownINPEmbed-Dense-'+str(num_known_regular_inputs)
n2 = 'KnownINPEmbed-Time-'+str(num_known_regular_inputs)
embedding = tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(self.hidden_layer_size,name=n1), name=n2)
num_known_regular_inputs += 1
self.convert_known_regular_inputs.append(embedding)
# Select Input Static Variables
self.ControlProcessStaticInput = ProcessStaticInput(self.hidden_layer_size,self.dropout_rate, self.num_static)
self.StaticGRN1 = GRN(self.hidden_layer_size, dropout_rate=self.dropout_rate, use_time_distributed=False, GRNname = 'Control1')
self.StaticGRN2 = GRN(self.hidden_layer_size, dropout_rate=self.dropout_rate, use_time_distributed=False, GRNname = 'Control2')
self.StaticGRN3 = GRN(self.hidden_layer_size, dropout_rate=self.dropout_rate, use_time_distributed=False, GRNname = 'Control3')
self.StaticGRN4 = GRN(self.hidden_layer_size, dropout_rate=self.dropout_rate, use_time_distributed=False, GRNname = 'Control4')
# Select Input Dynamic Variables
self.ControlProcessDynamicInput1 = ProcessDynamicInput(self.hidden_layer_size, self.dropout_rate,
self.NumDynamicHistoryVariables, PDIname='Control1')
if myTFTTools.TFTdefaultLSTM:
self.TFTLSTMEncoder = tf.compat.v1.keras.layers.CuDNNLSTM(
self.hidden_layer_size,
return_sequences=True,
return_state=True,
stateful=False,
)
self.TFTLSTMDecoder = tf.compat.v1.keras.layers.CuDNNLSTM(
self.hidden_layer_size,
return_sequences=True,
return_state=False,
stateful=False,
)
else:
self.TFTLSTMEncoder = TFTLSTMLayer( myTFTTools.TFTLSTMEncoderSecondLayer, myTFTTools.TFTLSTMEncoderThirdLayer,
myTFTTools.TFTLSTMEncoderInitialMLP, myTFTTools.TFTLSTMEncoderFinalMLP,
myTFTTools.number_LSTMnodes, self.hidden_layer_size,
myTFTTools.TFTLSTMEncoderactivationvalue, myTFTTools.TFTLSTMEncoderrecurrent_activation,
myTFTTools.TFTLSTMEncoderdropout1, myTFTTools.TFTLSTMEncoderrecurrent_dropout1, TFTreturn_state = True, LSTMname='ControlEncoder')
self.TFTLSTMDecoder = TFTLSTMLayer(myTFTTools.TFTLSTMDecoderSecondLayer, myTFTTools.TFTLSTMDecoderThirdLayer,
myTFTTools.TFTLSTMDecoderInitialMLP, myTFTTools.TFTLSTMDecoderFinalMLP,
myTFTTools.number_LSTMnodes, self.hidden_layer_size,
myTFTTools.TFTLSTMDecoderactivationvalue, myTFTTools.TFTLSTMDecoderrecurrent_activation,
myTFTTools.TFTLSTMDecoderdropout1, myTFTTools.TFTLSTMDecoderrecurrent_dropout1, TFTreturn_state = False, LSTMname='ControlDecoder')
self.TFTFullLSTMGLUplusskip = GLUplusskip(self.hidden_layer_size, self.dropout_rate, activation=None,
use_time_distributed=True, GLUname='ControlLSTM')
self.TemporalGRN5 = GRN(self.hidden_layer_size, dropout_rate=self.dropout_rate, use_additionalcontext = True,
use_time_distributed=True, GRNname = 'Control5')
self.ControlProcessDynamicInput2 = ProcessDynamicInput(self.hidden_layer_size, self.dropout_rate,
self.NumDynamicFutureVariables, PDIname='Control2')
# Decoder self attention
self.TFTself_attn_layer = TFTInterpretableMultiHeadAttention(
self.num_heads, self.hidden_layer_size, self.dropout_rate)
# Set up for final prediction
self.FinalGLUplusskip2 = []
self.FinalGLUplusskip3 = []
self.FinalGRN6 = []
for FinalGatingLoop in range(0, myTFTTools.FinalLoopSize):
self.FinalGLUplusskip2.append(GLUplusskip(self.hidden_layer_size, self.dropout_rate, activation=None,
use_time_distributed=True, GLUname='ControlFinal2-'+str(FinalGatingLoop)))
self.FinalGLUplusskip3.append(GLUplusskip(self.hidden_layer_size, self.dropout_rate, activation=None,
use_time_distributed=True, GLUname='ControlFinal3-'+str(FinalGatingLoop)))
self.FinalGRN6.append(GRN(self.hidden_layer_size, dropout_rate=self.dropout_rate, use_time_distributed=True, GRNname = 'Control6-'+str(FinalGatingLoop)))
# Final Processing
if myTFTTools.TFTLSTMFinalMLP > 0:
self.FinalApplyMLP = apply_mlp(myTFTTools.TFTLSTMFinalMLP, output_size = myTFTTools.output_size * myTFTTools.NumberQuantiles,
output_activation = None, hidden_activation = 'selu',
use_time_distributed = True, MLPname='Predict')
else:
if myTFTTools.FinalLoopSize == 1:
n1 = 'FinalTD'
n2 = 'FinalDense'
self.FinalLayer = tf.keras.layers.TimeDistributed(
tf.keras.layers.Dense(myTFTTools.output_size * myTFTTools.NumberQuantiles, name = n2), name =n1)
else:
self.FinalStack =[]
localloopsize = myTFTTools.output_size * myTFTTools.NumberQuantiles
for localloop in range(0,localloopsize):
self.FinalStack.append(tf.keras.layers.Dense(1))
# Called with each batch as input
#@tf.function
def call(self, all_inputs, ignoredtime, ignoredidentifiers, training=None):
# ignoredtime, ignoredidentifiers not used
time_steps = myTFTTools.time_steps
combined_input_size = myTFTTools.input_size
encoder_steps = myTFTTools.num_encoder_steps
# Sanity checks on inputs
for InputIndex in myTFTTools._known_regular_input_idx:
if InputIndex in myTFTTools._input_obs_loc:
printexit('Observation cannot be known a priori!' + str(InputIndex))
for InputIndex in myTFTTools._input_obs_loc:
if InputIndex in myTFTTools._static_input_loc:
printexit('Observation cannot be static!' + str(InputIndex))
Sizefrominputs = all_inputs.get_shape().as_list()[-1]
if Sizefrominputs != myTFTTools.input_size:
printexit(f'Illegal number of inputs! Inputs observed={Sizefrominputs}, expected={myTFTTools.input_size}')
regular_inputs, categorical_inputs = all_inputs[:, :, :self.num_regular_variables], all_inputs[:, :, self.num_regular_variables:]
# Embed categories of all categorical variables -- static and Dynamic
# categorical variables MUST be at end and reordering done in preprocessing (definition of train valid test)
# XXX add reordering
categoricalembedded_inputs = []
for i in range(0,self.num_categorical_variables):
categoricalembedded_inputs.append( self.CatVariablesembeddings[i](categorical_inputs[Ellipsis, i]) )
# Complete Static Variables -- whether categorical or regular -- they are essentially thought of as known inputs
if myTFTTools._static_input_loc:
static_inputs = []
numstatic = 0
for i in range(self.num_regular_variables):
if i in myTFTTools._static_input_loc:
static_inputs.append(self.StaticInitialembeddings[numstatic](regular_inputs[:, 0, i:i + 1]) )
numstatic += 1
static_inputs = static_inputs + [categoricalembedded_inputs[i][:, 0, :]
for i in range(self.num_categorical_variables)
if i + self.num_regular_variables in myTFTTools._static_input_loc]
static_inputs = tf.stack(static_inputs, axis=1)
else:
static_inputs = None
# Targets misleadingly labelled obs_inputs. They are used as targets to predict and as observed inputs
obs_inputs = []
num_obs_inputs = 0
for i in myTFTTools._input_obs_loc:
e = self.convert_obs_inputs[num_obs_inputs](regular_inputs[Ellipsis, i:i + 1])
num_obs_inputs += 1
obs_inputs.append(e)
obs_inputs = tf.stack(obs_inputs, axis=-1)
# Categorical Unknown inputs. Unknown + Target is complete Observed InputCategory
categorical_unknown_inputs = []
for i in range(self.num_categorical_variables):
if i not in myTFTTools._known_categorical_input_idx and i + self.num_regular_variables not in myTFTTools._input_obs_loc:
e = categoricalembedded_inputs[i]
categorical_unknown_inputs.append(e)
# Regular Unknown inputs
unknown_inputs = []
num_unknown_inputs = 0
for i in range(self.num_regular_variables):
if i not in myTFTTools._known_regular_input_idx and i not in myTFTTools._input_obs_loc:
e = self.convert_unknown_inputs[num_unknown_inputs](regular_inputs[Ellipsis, i:i + 1])
num_unknown_inputs += 1
unknown_inputs.append(e)
# Add in categorical_unknown_inputs into unknown_inputs
if unknown_inputs + categorical_unknown_inputs:
unknown_inputs = tf.stack(unknown_inputs + categorical_unknown_inputs, axis=-1)
else:
unknown_inputs = None
# A priori known inputs
known_regular_inputs = []
num_known_regular_inputs = 0
for i in myTFTTools._known_regular_input_idx:
if i not in myTFTTools._static_input_loc:
e = self.convert_known_regular_inputs[num_known_regular_inputs](regular_inputs[Ellipsis, i:i + 1])
num_known_regular_inputs += 1
known_regular_inputs.append(e)
known_categorical_inputs = []
for i in myTFTTools._known_categorical_input_idx:
if i + self.num_regular_variables not in myTFTTools._static_input_loc:
e = categoricalembedded_inputs[i]
known_categorical_inputs.append(e)
known_combined_layer = tf.stack(known_regular_inputs + known_categorical_inputs, axis=-1)
# Now we know unknown_inputs, known_combined_layer, obs_inputs, static_inputs
# Identify known and observed historical_inputs.
if unknown_inputs is not None:
historical_inputs = tf.concat([
unknown_inputs[:, :encoder_steps, :],
known_combined_layer[:, :encoder_steps, :],
obs_inputs[:, :encoder_steps, :]
], axis=-1)
else:
historical_inputs = tf.concat([
known_combined_layer[:, :encoder_steps, :],
obs_inputs[:, :encoder_steps, :]
], axis=-1)
# Identify known future inputs.
future_inputs = known_combined_layer[:, encoder_steps:, :]
# Process Static Variables
static_encoder, static_weights = self.ControlProcessStaticInput(static_inputs)
static_context_variable_selection = self.StaticGRN1(static_encoder)
static_context_enrichment = self.StaticGRN2(static_encoder)
static_context_state_h = self.StaticGRN3(static_encoder)
static_context_state_c = self.StaticGRN4(static_encoder)
# End set up of static variables
historical_features, historical_flags, _ = self.ControlProcessDynamicInput1(historical_inputs,
static_context_variable_selection = static_context_variable_selection)
history_lstm, state_h, state_c = self.TFTLSTMEncoder(historical_features, initial_state = [static_context_state_h, static_context_state_c])
input_embeddings = historical_features
lstm_layer = history_lstm
future_features, future_flags, _ = self.ControlProcessDynamicInput2(future_inputs, static_context_variable_selection = static_context_variable_selection)
future_lstm = self.TFTLSTMDecoder(future_features, initial_state= [state_h, state_c])
input_embeddings = tf.concat([historical_features, future_features], axis=1)
lstm_layer = tf.concat([history_lstm, future_lstm], axis=1)
temporal_feature_layer, _ = self.TFTFullLSTMGLUplusskip(lstm_layer, input_embeddings)
expanded_static_context = tf.expand_dims(static_context_enrichment, axis=1) # Add fake time axis
enriched = self.TemporalGRN5(temporal_feature_layer, additional_context=expanded_static_context, return_gate=False)
# Calculate attention
# the mask does not use "time" explicitly since time order is implicit in the window entries
mask = TFTget_decoder_mask(enriched)
x, self_att = self.TFTself_attn_layer(enriched, enriched, enriched, mask=mask)
if myTFTTools.FinalLoopSize > 1:
StackLayers = []
for FinalGatingLoop in range(0, myTFTTools.FinalLoopSize):
x, _ = self.FinalGLUplusskip2[FinalGatingLoop](x,enriched)
# Nonlinear processing on outputs
decoder = self.FinalGRN6[FinalGatingLoop](x)
# Final skip connection
transformer_layer, _ = self.FinalGLUplusskip3[FinalGatingLoop](decoder, temporal_feature_layer)
if myTFTTools.FinalLoopSize > 1:
StackLayers.append(transformer_layer)
# End Loop over FinalGatingLoop
if myTFTTools.FinalLoopSize > 1:
transformer_layer = tf.stack(StackLayers, axis=-1)
# Attention components for explainability IGNORED
attention_components = {
# Temporal attention weights
'decoder_self_attn': self_att,
# Static variable selection weights
'static_flags': static_weights[Ellipsis, 0],
# Variable selection weights of past inputs
'historical_flags': historical_flags[Ellipsis, 0, :],
# Variable selection weights of future inputs
'future_flags': future_flags[Ellipsis, 0, :]
}
self._attention_components = attention_components
# Original split processing here and did
# return transformer_layer, all_inputs, attention_components
if myTFTTools.TFTLSTMFinalMLP > 0:
outputs = self.FinalApplyMLP(transformer_layer[Ellipsis, encoder_steps:, :])
else:
if myTFTTools.FinalLoopSize == 1:
outputs = self.FinalLayer(transformer_layer[Ellipsis, encoder_steps:, :])
else:
outputstack =[]
localloopsize = myTFTTools.output_size * myTFTTools.NumberQuantiles
for localloop in range(0,localloopsize):
localoutput = self.FinalStack[localloop](transformer_layer[Ellipsis, encoder_steps:, :, localloop])
outputstack.append(localoutput)
outputs = tf.stack(outputstack, axis=-2)
outputs = tf.squeeze(outputs, axis=-1)
return outputs
###Output
_____no_output_____
###Markdown
TFT Run & Output General Utilities
###Code
def get_model_summary(model):
stream = io.StringIO()
model.summary(print_fn=lambda x: stream.write(x + '\n'))
summary_string = stream.getvalue()
stream.close()
return summary_string
def setDLinput(Spacetime = True):
# Initial data is Flatten([Num_Seq][Nloc]) [Tseq] with values [Nprop-Sel + Nforcing + Add(ExPosEnc-Selin)] starting with RawInputSequencesTOT
# Predictions are Flatten([Num_Seq] [Nloc]) [Predvals=Npred+ExPosEnc-Selout] [Predtimes = Forecast-time range] starting with RawInputPredictionsTOT
# No assumptions as to type of variables here
if SymbolicWindows:
X_predict = SymbolicInputSequencesTOT.reshape(OuterBatchDimension,1,1)
else:
X_predict = RawInputSequencesTOT.reshape(OuterBatchDimension,Tseq,NpropperseqTOT)
y_predict = RawInputPredictionsTOT.reshape(OuterBatchDimension,NpredperseqTOT)
if Spacetime:
SpacetimeforMask_predict = SpacetimeforMask.reshape(OuterBatchDimension,1,1).copy()
return X_predict, y_predict, SpacetimeforMask_predict
return X_predict, y_predict
def setSeparateDLinput(model, Spacetime = False):
# Initial data is Flatten([Num_Seq][Nloc]) [Tseq] with values [Nprop-Sel + Nforcing + Add(ExPosEnc-Selin)] starting with RawInputSequencesTOT
# Predictions are Flatten([Num_Seq] [Nloc]) [Predvals=Npred+ExPosEnc-Selout] [Predtimes = Forecast-time range] starting with RawInputPredictionsTOT
# No assumptions as to type of variables here
# model = 0 LSTM =1 transformer
if model == 0:
Spacetime = False
X_val = None
y_val = None
Spacetime_val = None
Spacetime_train = None
if SymbolicWindows:
InputSequences = np.empty([Num_Seq, TrainingNloc], dtype = np.int32)
for iloc in range(0,TrainingNloc):
InputSequences[:,iloc] = SymbolicInputSequencesTOT[:,ListofTrainingLocs[iloc]]
if model == 0:
X_train = InputSequences.reshape(Num_Seq*TrainingNloc,1,1)
else:
X_train = InputSequences
if Spacetime:
Spacetime_train = X_train.copy()
if LocationValidationFraction > 0.001:
UsedValidationNloc = ValidationNloc
if FullSetValidation:
UsedValidationNloc = Nloc
ValInputSequences = np.empty([Num_Seq, UsedValidationNloc], dtype = np.int32)
if FullSetValidation:
for iloc in range(0,Nloc):
ValInputSequences[:,iloc] = SymbolicInputSequencesTOT[:,iloc]
else:
for iloc in range(0,ValidationNloc):
ValInputSequences[:,iloc] = SymbolicInputSequencesTOT[:,ListofValidationLocs[iloc]]
if model == 0:
X_val = ValInputSequences.reshape(Num_Seq * UsedValidationNloc,1,1)
else:
X_val = ValInputSequences
if Spacetime:
Spacetime_val = X_val.copy()
else: # Symbolic Windows false Calculate Training
InputSequences = np.empty([Num_Seq, TrainingNloc,Tseq,NpropperseqTOT], dtype = np.float32)
for iloc in range(0,TrainingNloc):
InputSequences[:,iloc,:,:] = RawInputSequencesTOT[:,ListofTrainingLocs[iloc],:,:]
if model == 0:
X_train = InputSequences.reshape(Num_Seq*TrainingNloc,Tseq,NpropperseqTOT)
else:
X_train = InputSequences
if Spacetime:
Spacetime_train = np.empty([Num_Seq, TrainingNloc], dtype = np.int32)
for iloc in range(0,TrainingNloc):
Spacetime_train[:,iloc] = SpacetimeforMask[:,ListofTrainingLocs[iloc]]
if LocationValidationFraction > 0.001: # Symbolic Windows false Calculate Validation
UsedValidationNloc = ValidationNloc
if FullSetValidation:
UsedValidationNloc = Nloc
ValInputSequences = np.empty([Num_Seq, UsedValidationNloc,Tseq,NpropperseqTOT], dtype = np.float32)
if FullSetValidation:
for iloc in range(0,Nloc):
ValInputSequences[:,iloc,:,:] = RawInputSequencesTOT[:,iloc,:,:]
else:
for iloc in range(0,ValidationNloc):
ValInputSequences[:,iloc,:,:] = RawInputSequencesTOT[:,ListofValidationLocs[iloc],:,:]
if model == 0:
X_val = ValInputSequences.reshape(Num_Seq * UsedValidationNloc,Tseq,NpropperseqTOT)
else:
X_val = ValInputSequences
if Spacetime:
Spacetime_val = np.empty([Num_Seq, UsedValidationNloc], dtype = np.int32)
if FullSetValidation:
for iloc in range(0,Nloc):
Spacetime_val[:,iloc] = SpacetimeforMask[:,iloc]
else:
for iloc in range(0,ValidationNloc):
Spacetime_val[:,iloc] = SpacetimeforMask[:,ListofValidationLocs[iloc]]
# Calculate training predictions
InputPredictions = np.empty([Num_Seq, TrainingNloc,NpredperseqTOT], dtype = np.float32)
for iloc in range(0,TrainingNloc):
InputPredictions[:,iloc,:] = RawInputPredictionsTOT[:,ListofTrainingLocs[iloc],:]
if model == 0:
y_train = InputPredictions.reshape(OuterBatchDimension,NpredperseqTOT)
else:
y_train = InputPredictions
# Calculate validation predictions
if LocationValidationFraction > 0.001:
ValInputPredictions = np.empty([Num_Seq, UsedValidationNloc,NpredperseqTOT], dtype = np.float32)
if FullSetValidation:
for iloc in range(0,Nloc):
ValInputPredictions[:,iloc,:] = RawInputPredictionsTOT[:,iloc,:]
else:
for iloc in range(0,ValidationNloc):
ValInputPredictions[:,iloc,:] = RawInputPredictionsTOT[:,ListofValidationLocs[iloc],:]
if model == 0:
y_val = ValInputPredictions.reshape(Num_Seq * ValidationNloc,NpredperseqTOT)
else:
y_val = ValInputPredictions
if Spacetime:
return X_train, y_train, Spacetime_train, X_val, y_val, Spacetime_val
else:
return X_train, y_train,X_val,y_val
def InitializeDLforTimeSeries(message,processindex,y_predict):
if processindex == 0:
current_time = timenow()
line = (startbold + current_time + ' ' + message + resetfonts + " Window Size " + str(Tseq) +
" Number of samples over time that sequence starts at and location:" +str(OuterBatchDimension) +
" Number input features per sequence:" + str(NpropperseqTOT) +
" Number of predicted outputs per sequence:" + str(NpredperseqTOT) +
" Batch_size:" + str(LSTMbatch_size) +
" n_nodes:" + str(number_LSTMnodes) +
" epochs:" + str(TFTTransformerepochs))
print(wraptotext(line))
checkNaN(y_predict)
###Output
_____no_output_____
###Markdown
Tensorflow Monitor
###Code
class TensorFlowTrainingMonitor:
def __init__(self):
# These OPERATIONAL variables control saving of best fits
self.lastsavedepoch = -1 # Epoch number where last saved fit done
self.BestLossValueSaved = NaN # Training Loss value of last saved fit
self.BestValLossValueSaved = NaN # Validation Loss value of last saved fit
self.Numsuccess = 0 # count little successes up to SuccessLimit
self.Numfailed = 0
self.LastLossValue = NaN # Loss on previous epoch
self.MinLossValue = NaN # Saved minimum loss value
self.LastValLossValue = NaN # Validation Loss on previous epoch
self.MinValLossValue = NaN # validation loss value at last save
self.BestLossSaved = False # Boolean to indicate that best Loss value saved
self.saveMinLosspath = '' # Checkpoint path for saved network
self.epochcount = 0
self.NumberTimesSaved = 0 # Number of Checkpointing steps for Best Loss
self.NumberTimesRestored = 0 # Number of Checkpointing Restores
self.LittleJumpdifference = NaN
self.LittleValJumpdifference = NaN
self.AccumulateSuccesses = 0
self.AccumulateFailures = np.zeros(5, dtype=int)
self.RestoreReasons = np.zeros(8, dtype = int)
self.NameofFailures = ['Success','Train Only Failed','Val Only Failed','Both Failed', 'NaN']
self.NameofRestoreReasons = ['Both Big Jump', 'Both Little Jump','Train Big Jump', 'Train Little Jump','Val Big Jump','Val Little Jump',' Failure Limit', ' NaN']
# End OPERATIONAL Control set up for best fit checkpointing
# These are parameters user can set
self.UseBestAvailableLoss = True
self.LittleJump = 2.0 # Multiplier for checking jump compared to recent changes
self.ValLittleJump = 2.0 # Multiplier for checking jump compared to recent changes
self.startepochs = -1 # Ignore this number of epochs to let system get started
self.SuccessLimit = 20 # Don't keep saving. Wait for this number of (little) successes
self.FailureLimit = 10 # Number of failures before restore
self.BadJumpfraction = 0.2 # This fractional jump will trigger attempt to go back to saved value
self.ValBadJumpfraction = 0.2 # This fractional jump will trigger attempt to go back to saved value
self.ValidationFraction = 0.0 # Must be set to the validation fraction actually used (see SetCheckpointParms)
self.DownplayValidationIncrease = True
# End parameters user can set
self.checkpoint = None
self.CHECKPOINTDIR = ''
self.RunName = ''
self.train_epoch = 0.0
self.val_epoch = 0.0
tfepochstep = None
recordtrainloss =[]
recordvalloss = []
def SetControlParms(self, UseBestAvailableLoss = None, LittleJump = None, startepochs = None, ValLittleJump = None,
ValBadJumpfraction = None, SuccessLimit = None, FailureLimit = None, BadJumpfraction = None, DownplayValidationIncrease=True):
if UseBestAvailableLoss is not None:
self.UseBestAvailableLoss = UseBestAvailableLoss
if LittleJump is not None:
self.LittleJump = LittleJump
if ValLittleJump is not None:
self.ValLittleJump = ValLittleJump
if startepochs is not None:
self.startepochs = startepochs
if SuccessLimit is not None:
self.SuccessLimit = SuccessLimit
if FailureLimit is not None:
self.FailureLimit = FailureLimit
if BadJumpfraction is not None:
self.BadJumpfraction = BadJumpfraction
if ValBadJumpfraction is not None:
self.ValBadJumpfraction = ValBadJumpfraction
if DownplayValidationIncrease:
self.ValBadJumpfraction = 200.0
self.ValLittleJump = 2000.0
elif ValLittleJump is None:
self.ValLittleJump = 2.0
elif ValBadJumpfraction is None:
self.ValBadJumpfraction = 0.2
def SetCheckpointParms(self,checkpointObject,CHECKPOINTDIR,RunName = '',Restoredcheckpoint= False, Restored_path = '',
ValidationFraction = 0.0, SavedTrainLoss = NaN, SavedValLoss = NaN):
self.ValidationFraction = ValidationFraction
self.checkpoint = checkpointObject
self.CHECKPOINTDIR = CHECKPOINTDIR
self.RunName = RunName
if Restoredcheckpoint:
self.BestLossSaved = True
self.saveMinLosspath = Restored_path # Checkpoint path for saved network
self.LastLossValue = SavedTrainLoss
self.LastValLossValue = SavedValLoss
self.BestLossValueSaved = SavedTrainLoss
self.BestValLossValueSaved = SavedValLoss
self.lastsavedepoch = self.epochcount
self.MinLossValue = SavedTrainLoss
self.MinValLossValue = SavedValLoss
def EpochEvaluate(self, epochcount,train_epoch, val_epoch, tfepochstep, recordtrainloss, recordvalloss):
FalseReturn = 0
TrueReturn = 1
self.epochcount = epochcount
self.train_epoch = train_epoch
self.val_epoch = val_epoch
self.tfepochstep = tfepochstep
self.recordtrainloss = recordtrainloss
self.recordvalloss = recordvalloss
Needtorestore = False
Failreason = 5 # nonsense
LossChange = 0.0
ValLossChange = 0.0
if np.math.isnan(self.train_epoch) or np.math.isnan(self.val_epoch):
Restoreflag = 7
self.RestoreReasons[Restoreflag] += 1
Needtorestore = True
Failreason = 4
self.AccumulateFailures[Failreason] += 1
print(str(self.epochcount) + ' NAN Seen Reason ' + str(Failreason) + ' #succ ' + str(self.Numsuccess) + ' #fail ' + str(self.Numfailed) + ' ' + str(round(self.train_epoch,6)) + ' ' + str(round(self.val_epoch,6)), flush=True)
return TrueReturn, self.train_epoch, self.val_epoch
if self.epochcount <= self.startepochs:
return FalseReturn, self.train_epoch, self.val_epoch
if not np.math.isnan(self.LastLossValue):
LossChange = self.train_epoch - self.LastLossValue
if self.ValidationFraction > 0.001:
ValLossChange = self.val_epoch - self.LastValLossValue
if LossChange <= 0:
if self.ValidationFraction > 0.001:
# Quick Fix
self.Numsuccess +=1
self.AccumulateSuccesses += 1
if ValLossChange <= 0:
Failreason = 0
else:
Failreason = 2
else:
self.Numsuccess +=1
self.AccumulateSuccesses += 1
Failreason = 0
else:
Failreason = 1
if self.ValidationFraction > 0.001:
if ValLossChange > 0:
Failreason = 3
if Failreason > 0:
self.Numfailed += 1
self.AccumulateFailures[Failreason] += 1
if (not np.math.isnan(self.LastLossValue)) and (Failreason > 0):
print(str(self.epochcount) + ' Reason ' + str(Failreason) + ' #succ ' + str(self.Numsuccess) + ' #fail ' + str(self.Numfailed) + ' ' + str(round(self.train_epoch,6))
+ ' ' + str(round(self.LastLossValue,6)) + ' '+ str(round(self.val_epoch,6))+ ' ' + str(round(self.LastValLossValue,6)), flush=True)
self.LastLossValue = self.train_epoch
self.LastValLossValue = self.val_epoch
StoreMinLoss = False
if not np.math.isnan(self.MinLossValue):
# if (self.train_epoch < self.MinLossValue) and (self.val_epoch <= self.MinValLossValue):
if self.train_epoch < self.MinLossValue:
if self.Numsuccess >= self.SuccessLimit:
StoreMinLoss = True
else:
StoreMinLoss = True
if StoreMinLoss:
self.Numsuccess = 0
extrastuff = ''
extrastuff_val = ' '
if not np.math.isnan(self.MinLossValue):
extrastuff = ' Previous ' + str(round(self.MinLossValue,7))
self.LittleJumpdifference = self.MinLossValue - self.train_epoch
if self.ValidationFraction > 0.001:
if not np.math.isnan(self.MinValLossValue):
extrastuff_val = ' Previous ' + str(round(self.MinValLossValue,7))
self.LittleValJumpdifference = max(self.MinValLossValue - self.val_epoch, self.LittleJumpdifference)
self.saveMinLosspath = self.checkpoint.save(file_prefix=self.CHECKPOINTDIR + self.RunName +'MinLoss')
if not self.BestLossSaved:
print('\nInitial Checkpoint at ' + self.saveMinLosspath + ' from ' + self.CHECKPOINTDIR)
self.MinLossValue = self.train_epoch
self.MinValLossValue = self.val_epoch
if self.ValidationFraction > 0.001:
extrastuff_val = ' Val Loss ' + str(round(self.val_epoch,7)) + extrastuff_val
print(' Epoch ' + str(self.epochcount) + ' Loss ' + str(round(self.train_epoch,7)) + extrastuff + extrastuff_val+ ' Failed ' + str(self.Numfailed), flush = True)
self.Numfailed = 0
self.BestLossSaved = True
self.BestLossValueSaved = self.train_epoch
self.BestValLossValueSaved = self.val_epoch
self.lastsavedepoch = self.epochcount
self.NumberTimesSaved += 1
return FalseReturn, self.train_epoch, self.val_epoch
RestoreTrainflag = -1
Trainrestore = False
if LossChange > 0.0:
if LossChange > self.BadJumpfraction * self.train_epoch:
Trainrestore = True
RestoreTrainflag = 0
if not np.math.isnan(self.LittleJumpdifference):
if LossChange > self.LittleJumpdifference * self.LittleJump:
Trainrestore = True
if RestoreTrainflag < 0:
RestoreTrainflag = 1
if self.BestLossSaved:
if self.train_epoch < self.MinLossValue:
Trainrestore = False
RestoreTrainflag = -1
RestoreValflag = -1
Valrestore = False
if ValLossChange > 0.0:
if ValLossChange > self.ValBadJumpfraction * self.val_epoch:
Valrestore = True
RestoreValflag = 0
if not np.math.isnan(self.LittleValJumpdifference):
if ValLossChange > self.LittleValJumpdifference * self.ValLittleJump:
Valrestore = True
if RestoreValflag < 0:
RestoreValflag = 1
if self.BestLossSaved:
if self.val_epoch < self.MinValLossValue:
Valrestore = False
RestoreValflag = -1
Restoreflag = -1
if Trainrestore and Valrestore:
Needtorestore = True
if RestoreTrainflag == 0:
Restoreflag = 0
else:
Restoreflag = 1
elif Trainrestore:
Needtorestore = True
Restoreflag = RestoreTrainflag + 2
elif Valrestore:
Needtorestore = True
Restoreflag = RestoreValflag + 4
if (self.Numfailed >= self.FailureLimit) and (Restoreflag == -1):
Restoreflag = 6
Needtorestore = True
if Restoreflag >= 0:
self.RestoreReasons[Restoreflag] += 1
if Needtorestore and (not self.BestLossSaved):
print('bad Jump ' + str(round(LossChange,7)) + ' Epoch ' + str(self.epochcount) + ' But nothing saved')
return FalseReturn, self.train_epoch, self.val_epoch
if Needtorestore:
return TrueReturn, self.train_epoch, self.val_epoch
else:
return FalseReturn, self.train_epoch, self.val_epoch
def RestoreBestFit(self):
if self.BestLossSaved:
self.checkpoint.tfrecordvalloss = tf.Variable([], shape =tf.TensorShape(None), trainable = False)
self.checkpoint.tfrecordtrainloss = tf.Variable([], shape =tf.TensorShape(None), trainable = False)
self.checkpoint.restore(save_path=self.saveMinLosspath).expect_partial()
self.tfepochstep = self.checkpoint.tfepochstep
self.recordvalloss = self.checkpoint.tfrecordvalloss.numpy().tolist()
self.recordtrainloss = self.checkpoint.tfrecordtrainloss.numpy().tolist()
trainlen = len(self.recordtrainloss)
self.Numsuccess = 0
extrastuff = ''
if self.ValidationFraction > 0.001:
vallen =len(self.recordvalloss)
if vallen > 0:
extrastuff = ' Replaced Val Loss ' + str(round(self.recordvalloss[vallen-1],7))+ ' bad val ' + str(round(self.val_epoch,7))
else:
extrastuff = ' No previous Validation Loss'
print(str(self.epochcount) + ' Failed ' + str(self.Numfailed) + ' Restored Epoch ' + str(trainlen-1) + ' Replaced Loss ' + str(round(self.recordtrainloss[trainlen-1],7))
+ ' bad ' + str(round(self.train_epoch,7)) + extrastuff + ' Checkpoint at ' + self.saveMinLosspath)
self.train_epoch = self.recordtrainloss[trainlen-1]
self.Numfailed = 0
self.LastLossValue = self.train_epoch
self.NumberTimesRestored += 1
if self.ValidationFraction > 0.001:
vallen = len(self.recordvalloss)
if vallen > 0:
self.val_epoch = self.recordvalloss[vallen-1]
else:
self.val_epoch = 0.0
return self.tfepochstep, self.recordtrainloss, self.recordvalloss, self.train_epoch, self.val_epoch
def PrintEndofFit(self, Numberofepochs):
print(startbold + 'Number of Saves ' + str(self.NumberTimesSaved) + ' Number of Restores ' + str(self.NumberTimesRestored))
print('Epochs Requested ' + str(Numberofepochs) + ' Actually Stored ' + str(len(self.recordtrainloss)) + ' ' + str(self.tfepochstep.numpy())
+ ' Successes ' +str(self.AccumulateSuccesses) + resetfonts)
trainlen = len(self.recordtrainloss)
train_epoch1 = self.recordtrainloss[trainlen-1]
lineforval = ''
if self.ValidationFraction > 0.001:
lineforval = ' Last val '+ str(round(self.val_epoch,7))
print(startbold + 'Last loss '+ str(round(self.train_epoch,7)) + ' Last loss in History ' + str(round(train_epoch1,7))+ ' Best Saved Loss '
+ str(round(self.BestLossValueSaved,7)) + lineforval + resetfonts)
print(startbold + startred +"\nFailure Reasons" + resetfonts)
for ireason in range(0,len(self.AccumulateFailures)):
print('Optimization Failure ' + str(ireason) + ' ' + self.NameofFailures[ireason] + ' ' + str(self.AccumulateFailures[ireason]))
print(startbold + startred +"\nRestore Reasons" + resetfonts)
for ireason in range(0,len(self.RestoreReasons)):
print('Backup to earlier fit ' + str(ireason) + ' ' + self.NameofRestoreReasons[ireason] + ' ' + str(self.RestoreReasons[ireason]))
def BestPossibleFit(self): # Use Best Saved if appropriate
if self.UseBestAvailableLoss:
if self.BestLossSaved:
if self.BestLossValueSaved < self.train_epoch:
self.checkpoint.tfrecordvalloss = tf.Variable([], shape =tf.TensorShape(None), trainable = False)
self.checkpoint.tfrecordtrainloss = tf.Variable([], shape =tf.TensorShape(None), trainable = False)
self.checkpoint.restore(save_path=self.saveMinLosspath).expect_partial()
self.tfepochstep = self.checkpoint.tfepochstep
self.recordvalloss = self.checkpoint.tfrecordvalloss.numpy().tolist()
self.recordtrainloss = self.checkpoint.tfrecordtrainloss.numpy().tolist()
trainlen = len(self.recordtrainloss)
Oldtraining = self.train_epoch
self.train_epoch = self.recordtrainloss[trainlen-1]
extrainfo = ''
if self.ValidationFraction > 0.001:
vallen = len(self.recordvalloss)
if vallen > 0:
extrainfo = '\nVal Loss ' + str(round(self.recordvalloss[vallen-1],7)) + ' old Val ' + str(round(self.val_epoch,7))
self.val_epoch = self.recordvalloss[vallen-1]
else:
self.val_epoch = 0.0
extrainfo = '\n no previous validation loss'
print(startpurple+ startbold + 'Switch to Best Saved Value. Restored Epoch ' + str(trainlen-1)
+ '\nNew Loss ' + str(round(self.recordtrainloss[trainlen-1],7)) + ' old ' + str(round(Oldtraining,7))
+ extrainfo + '\nCheckpoint at ' + self.saveMinLosspath + resetfonts)
else:
print(startpurple+ startbold + '\nFinal fit is best: train ' + str(round(self.train_epoch,7)) + ' Val Loss ' + str(round(self.val_epoch,7)) + resetfonts)
return self.tfepochstep, self.recordtrainloss, self.recordvalloss, self.train_epoch, self.val_epoch
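
# Minimal usage sketch (illustrative only): how the monitor is intended to wrap a custom
# training loop. It assumes a global NaN (e.g. NaN = np.nan) is defined earlier in the
# notebook, as the constructor references it; the per-epoch calls are shown as comments
# because they need the real checkpoint object and loss histories.
demo_monitor = TensorFlowTrainingMonitor()
demo_monitor.SetControlParms(SuccessLimit=10, FailureLimit=5, BadJumpfraction=0.3)
# demo_monitor.SetCheckpointParms(checkpoint, CHECKPOINTDIR, RunName=RunName, ValidationFraction=LocationValidationFraction)
# per epoch:
# needrestore, train_epoch, val_epoch = demo_monitor.EpochEvaluate(epochcount, train_epoch, val_epoch, tfepochstep, recordtrainloss, recordvalloss)
# if needrestore:
# tfepochstep, recordtrainloss, recordvalloss, train_epoch, val_epoch = demo_monitor.RestoreBestFit()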
###Output
_____no_output_____
###Markdown
TFT Output
###Code
def TFTTestpredict(custommodel,datacollection):
"""Computes predictions for a given input dataset.
Args:
df: Input dataframe
return_targets: Whether to also return outputs aligned with predictions to
faciliate evaluation
Returns:
Input dataframe or tuple of (input dataframe, algined output dataframe).
"""
inputs = datacollection['inputs']
time = datacollection['time']
identifier = datacollection['identifier']
outputs = datacollection['outputs']
combined = None
OuterBatchDimension = inputs.shape[0]
batchsize = myTFTTools.maxibatch_size
numberoftestbatches = math.ceil(OuterBatchDimension/batchsize)
count1 = 0
for countbatches in range(0,numberoftestbatches):
count2 = min(OuterBatchDimension, count1+batchsize)
if count2 <= count1:
continue
samples = np.arange(count1,count2)
count1 += batchsize
X_test = inputs[samples,Ellipsis]
time_test = []
id_test =[]
Numinbatch = X_test.shape[0]
if myTFTTools.TFTSymbolicWindows:
X_test = X_test.numpy()
X_test = np.reshape(X_test,Numinbatch)
iseqarray = np.right_shift(X_test,16)
ilocarray = np.bitwise_and(X_test, 0b1111111111111111)
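            # Each symbolic-window entry packs two indices into a single integer:
            # the upper bits hold the sequence start (value >> 16) and the lower 16 bits
            # hold the location index; the two lines above unpack them before the full
            # windows are rebuilt from ReshapedSequencesTOT below.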
X_testFull = list()
for iloc in range(0,Numinbatch):
X_testFull.append(ReshapedSequencesTOT[ilocarray[iloc],iseqarray[iloc]:iseqarray[iloc]+Tseq])
X_test = np.array(X_testFull)
batchprediction = custommodel(X_test, time_test, id_test, training=False).numpy()
if combined is None:
combined = batchprediction
else:
combined = np.concatenate((combined, batchprediction),axis=0)
def format_outputs(prediction):
"""Returns formatted dataframes for prediction."""
reshapedprediction = prediction.reshape(prediction.shape[0], -1)
flat_prediction = pd.DataFrame(
reshapedprediction[:, :],
columns=[
't+{}-Obs{}'.format(i, j)
for i in range(myTFTTools.time_steps - myTFTTools.num_encoder_steps)
for j in range(0, myTFTTools.output_size)
])
cols = list(flat_prediction.columns)
flat_prediction['forecast_time'] = time[:,
myTFTTools.num_encoder_steps - 1, 0]
flat_prediction['identifier'] = identifier[:, 0, 0]
# Arrange in order
return flat_prediction[['forecast_time', 'identifier'] + cols]
# Extract predictions for each quantile into different entries
process_map = {
qname:
combined[Ellipsis, i * myTFTTools.output_size:(i + 1) * myTFTTools.output_size]
for i, qname in enumerate(myTFTTools.Quantilenames)
}
process_map['targets'] = outputs
return {k: format_outputs(process_map[k]) for k in process_map}
# Simple Plot of Loss from history
def finalizeTFTDL(ActualModel, recordtrainloss, recordvalloss, validationfrac, test_datacollection, modelflag, LabelFit =''):
    # Output Loss v Epoch
histlen = len(recordtrainloss)
trainloss = recordtrainloss[histlen-1]
plt.rcParams["figure.figsize"] = [8,6]
plt.plot(recordtrainloss)
if (validationfrac > 0.001) and len(recordvalloss) > 0:
valloss = recordvalloss[histlen-1]
plt.plot(recordvalloss)
else:
valloss = 0.0
current_time = timenow()
print(startbold + startred + current_time + ' ' + RunName + ' finalizeDL ' + RunComment +resetfonts)
plt.title(LabelFit + ' ' + RunName+' model loss ' + str(round(trainloss,7)) + ' Val ' + str(round(valloss,7)))
plt.ylabel('loss')
plt.xlabel('epoch')
plt.yscale("log")
plt.grid(True)
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# Setup TFT
if modelflag == 2:
global SkipDL2F, IncreaseNloc_sample, DecreaseNloc_sample
SkipDL2F = True
IncreaseNloc_sample = 1
DecreaseNloc_sample = 1
TFToutput_map = TFTTestpredict(ActualModel,test_datacollection)
VisualizeTFT(ActualModel, TFToutput_map)
else:
printexit("unsupported model " +str(modelflag))
###Output
_____no_output_____
###Markdown
TFTcustommodelControl Full TFT Network
###Code
class TFTcustommodel(tf.keras.Model):
def __init__(self, **kwargs):
super(TFTcustommodel, self).__init__(**kwargs)
self.myTFTFullNetwork = TFTFullNetwork()
def compile(self, optimizer, loss):
super(TFTcustommodel, self).compile()
if optimizer == 'adam':
self.optimizer = tf.keras.optimizers.Adam(learning_rate=myTFTTools.learning_rate)
else:
self.optimizer = tf.keras.optimizers.get(optimizer)
Dictopt = self.optimizer.get_config()
print(startbold+startred + 'Optimizer ' + resetfonts, Dictopt)
if loss == 'MSE' or loss =='mse':
self.loss_object = tf.keras.losses.MeanSquaredError()
elif loss == 'MAE' or loss =='mae':
self.loss_object = tf.keras.losses.MeanAbsoluteError()
else:
self.loss_object = loss
self.loss_tracker = tf.keras.metrics.Mean(name="loss")
self.loss_tracker.reset_states()
self.val_tracker = tf.keras.metrics.Mean(name="val")
self.val_tracker.reset_states()
return
def resetmetrics(self):
self.loss_tracker.reset_states()
self.val_tracker.reset_states()
return
def build_graph(self, shapes):
input = tf.keras.layers.Input(shape=shapes, name="Input")
return tf.keras.models.Model(inputs=[input], outputs=[self.call(input)])
@tf.function
def train_step(self, data):
if len(data) == 5:
X_train, y_train, sw_train, time_train, id_train = data
else:
X_train, y_train = data
sw_train = []
time_train = []
id_train = []
with tf.GradientTape() as tape:
predictions = self(X_train, time_train, id_train, training=True)
# loss = self.loss_object(y_train, predictions, sw_train)
loss = self.loss_object(y_train, predictions)
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
self.loss_tracker.update_state(loss)
return {"loss": self.loss_tracker.result()}
@tf.function
def test_step(self, data):
if len(data) == 5:
X_val, y_val, sw_val, time_val, id_val = data
else:
X_val, y_val = data
sw_val = []
time_train = []
id_train = []
predictions = self(X_val, time_val, id_val, training=False)
# loss = self.loss_object(y_val, predictions, sw_val)
loss = self.loss_object(y_val, predictions)
self.val_tracker.update_state(loss)
return {"val_loss": self.val_tracker.result()}
#@tf.function
def call(self, inputs, time, identifier, training=None):
predictions = self.myTFTFullNetwork(inputs, time, identifier, training=training)
return predictions
###Output
_____no_output_____
###Markdown
TFT Overall Batch Training* TIME not set explicitly* Weights allowed or not* Assumes TFTFullNetwork is full Network
###Code
def RunTFTCustomVersion():
myTFTTools.PrintTitle("Start Tensorflow")
TIME_start("RunTFTCustomVersion init")
global AnyOldValidation
UseClassweights = False
usecustomfit = True
AnyOldValidation = myTFTTools.validation
garbagecollectcall = 0
# XXX InitializeDLforTimeSeries setSeparateDLinput NOT USED
tf.keras.backend.set_floatx('float32')
# tf.compat.v1.disable_eager_execution()
myTFTcustommodel = TFTcustommodel(name ='myTFTcustommodel')
lossobject = 'MSE'
if myTFTTools.lossflag == 8:
lossobject = custom_lossGCF1
if myTFTTools.lossflag == 11:
lossobject = 'MAE'
if myTFTTools.lossflag == 12:
lossobject = tf.keras.losses.Huber(delta=myTFTTools.HuberLosscut)
myTFTcustommodel.compile(loss= lossobject, optimizer= myTFTTools.optimizer)
recordtrainloss = []
recordvalloss = []
tfrecordtrainloss = tf.Variable([], shape =tf.TensorShape(None), trainable = False)
tfrecordvalloss = tf.Variable([], shape =tf.TensorShape(None), trainable = False)
tfepochstep = tf.Variable(0, trainable = False)
TIME_stop("RunTFTCustomVersion init")
# Set up checkpoints to read or write
mycheckpoint = tf.train.Checkpoint(optimizer=myTFTcustommodel.optimizer,
model=myTFTcustommodel, tfepochstep=tf.Variable(0),
tfrecordtrainloss=tfrecordtrainloss,tfrecordvalloss=tfrecordvalloss)
TIME_start("RunTFTCustomVersion restore")
# This restores back up
if Restorefromcheckpoint:
save_path = inputCHECKPOINTDIR + inputRunName + inputCheckpointpostfix
mycheckpoint.restore(save_path=save_path).expect_partial()
tfepochstep = mycheckpoint.tfepochstep
recordvalloss = mycheckpoint.tfrecordvalloss.numpy().tolist()
recordtrainloss = mycheckpoint.tfrecordtrainloss.numpy().tolist()
trainlen = len(recordtrainloss)
extrainfo = ''
vallen = len(recordvalloss)
SavedTrainLoss = recordtrainloss[trainlen-1]
SavedValLoss = 0.0
if vallen > 0:
extrainfo = ' Val Loss ' + str(round(recordvalloss[vallen-1],7))
SavedValLoss = recordvalloss[vallen-1]
print(startbold + 'Network restored from ' + save_path + '\nLoss ' + str(round(recordtrainloss[trainlen-1],7))
+ extrainfo + ' Epochs ' + str(tfepochstep.numpy()) + resetfonts )
TFTTrainingMonitor.SetCheckpointParms(mycheckpoint,CHECKPOINTDIR,RunName = RunName,Restoredcheckpoint= True,
Restored_path = save_path, ValidationFraction = AnyOldValidation, SavedTrainLoss = SavedTrainLoss,
SavedValLoss =SavedValLoss)
else:
TFTTrainingMonitor.SetCheckpointParms(mycheckpoint,CHECKPOINTDIR,RunName = RunName,Restoredcheckpoint= False,
ValidationFraction = AnyOldValidation)
TIME_stop("RunTFTCustomVersion restore")
TIME_start("RunTFTCustomVersion analysis")
# This just does analysis
if AnalysisOnly:
if OutputNetworkPictures:
outputpicture1 = APPLDIR +'/Outputs/Model_' +RunName + '1.png'
outputpicture2 = APPLDIR +'/Outputs/Model_' +RunName + '2.png'
# TODO: also save as pdf if possible
tf.keras.utils.plot_model(myTFTcustommodel.build_graph([Tseq,NpropperseqTOT]),
show_shapes=True,
to_file = outputpicture1,
show_dtype=True,
expand_nested=True)
tf.keras.utils.plot_model(myTFTcustommodel.myTFTFullNetwork.build_graph([Tseq,NpropperseqTOT]),
show_shapes=True,
to_file = outputpicture2,
show_dtype=True,
expand_nested=True)
if myTFTTools.TFTSymbolicWindows:
finalizeTFTDL(myTFTcustommodel,recordtrainloss,recordvalloss,AnyOldValidation,TFTtest_datacollection,2, LabelFit = 'Custom TFT Fit')
else:
finalizeTFTDL(myTFTcustommodel,recordtrainloss,recordvalloss,AnyOldValidation,TFTtest_datacollection,2, LabelFit = 'Custom TFT Fit')
return
TIME_stop("RunTFTCustomVersion analysis")
TIME_start("RunTFTCustomVersion train")
# Initialize progress bars
epochsize = len(TFTtrain_datacollection["inputs"])
if AnyOldValidation > 0.001:
epochsize += len(TFTval_datacollection["inputs"])
pbar = notebook.trange(myTFTTools.num_epochs, desc='Training loop', unit ='epoch')
bbar = notebook.trange(epochsize, desc='Batch loop', unit = 'sample')
train_epoch = 0.0 # Training Loss this epoch
val_epoch = 0.0 # Validation Loss this epoch
Ctime1 = 0.0
Ctime2 = 0.0
Ctime3 = 0.0
GarbageCollect = True
# train_dataset = tf.data.Dataset.from_tensor_slices((TFTtrain_datacollection['inputs'],TFTtrain_datacollection['outputs'],TFTtrain_datacollection['active_entries']))
# val_dataset = tf.data.Dataset.from_tensor_slices((TFTval_datacollection['inputs'],TFTval_datacollection['outputs'],TFTval_datacollection['active_entries']))
OuterTrainBatchDimension = TFTtrain_datacollection['inputs'].shape[0]
OuterValBatchDimension = TFTval_datacollection['inputs'].shape[0]
print('Samples to batch Train ' + str(OuterTrainBatchDimension) + ' Val ' + str(OuterValBatchDimension))
# train_dataset = train_dataset.shuffle(buffer_size = OuterBatchDimension, reshuffle_each_iteration=True).batch(myTFTTools.minibatch_size)
# val_dataset = val_dataset.batch(myTFTTools.maxibatch_size)
np.random.seed(int.from_bytes(os.urandom(4), byteorder='little'))
trainbatchsize = myTFTTools.minibatch_size
valbatchsize = myTFTTools.maxibatch_size
numberoftrainbatches = math.ceil(OuterTrainBatchDimension/trainbatchsize)
numberofvalbatches = math.ceil(OuterValBatchDimension/valbatchsize)
for e in pbar:
myTFTcustommodel.resetmetrics()
train_lossoverbatch=[]
val_lossoverbatch=[]
if batchperepoch:
qbar = notebook.trange(epochsize, desc='Batch loop epoch ' +str(e))
# for batch, (X_train, y_train, sw_train) in enumerate(train_dataset.take(-1))
trainingorder = np.arange(0, OuterTrainBatchDimension)
np.random.shuffle(trainingorder)
count1 = 0
for countbatches in range(0,numberoftrainbatches):
count2 = min(OuterTrainBatchDimension, count1+trainbatchsize)
if count2 <= count1:
continue
samples = trainingorder[count1:count2]
count1 += trainbatchsize
X_train = TFTtrain_datacollection['inputs'][samples,Ellipsis]
y_train = TFTtrain_datacollection['outputs'][samples,Ellipsis]
sw_train = []
time_train = []
id_train = []
Numinbatch = X_train.shape[0]
# myTFTTools.TFTSymbolicWindows X_train is indexed by Batch index, 1(replace by Window), 1 (replace by properties)
if myTFTTools.TFTSymbolicWindows:
StopWatch.start('label1')
X_train = X_train.numpy()
X_train = np.reshape(X_train,Numinbatch)
iseqarray = np.right_shift(X_train,16)
ilocarray = np.bitwise_and(X_train, 0b1111111111111111)
StopWatch.stop('label1')
Ctime1 += StopWatch.get('label1', digits=4)
StopWatch.start('label3')
X_train_withSeq = list()
for iloc in range(0,Numinbatch):
X_train_withSeq.append(ReshapedSequencesTOT[ilocarray[iloc],iseqarray[iloc]:iseqarray[iloc]+Tseq])
# X_train_withSeq=[ReshapedSequencesTOT[ilocarray[iloc],iseqarray[iloc]:iseqarray[iloc]+Tseq] for iloc in range(0,Numinbatch)]
StopWatch.stop('label3')
Ctime3 += StopWatch.get('label3', digits=5)
StopWatch.start('label2')
loss = myTFTcustommodel.train_step((np.array(X_train_withSeq), y_train, sw_train, time_train,id_train))
StopWatch.stop('label2')
Ctime2 += StopWatch.get('label2', digits=4)
else:
loss = myTFTcustommodel.train_step((X_train, y_train, sw_train, time_train, id_train))
GarbageCollect = False
if GarbageCollect:
if myTFTTools.TFTSymbolicWindows:
X_train_withSeq = None
X_train = None
y_train = None
sw_train = None
time_train = None
id_train = None
if garbagecollectcall > GarbageCollectionLimit:
garbagecollectcall = 0
gc.collect()
garbagecollectcall += 1
localloss = loss["loss"].numpy()
train_lossoverbatch.append(localloss)
if batchperepoch:
qbar.update(LSTMbatch_size)
qbar.set_postfix(Loss = localloss, Epoch = e)
bbar.update(Numinbatch)
bbar.set_postfix(Loss = localloss, Epoch = e)
# End Training step for one batch
# Start Validation
if AnyOldValidation:
count1 = 0
for countbatches in range(0,numberofvalbatches):
count2 = min(OuterValBatchDimension, count1+valbatchsize)
if count2 <= count1:
continue
samples = np.arange(count1,count2)
count1 += valbatchsize
X_val = TFTval_datacollection['inputs'][samples,Ellipsis]
y_val = TFTval_datacollection['outputs'][samples,Ellipsis]
sw_val = []
# for batch, (X_val, y_val, sw_val) in enumerate(val_dataset.take(-1)):
time_val = []
id_val =[]
Numinbatch = X_val.shape[0]
# myTFTTools.TFTSymbolicWindows X_val is indexed by Batch index, 1(replace by Window), 1 (replace by properties)
if myTFTTools.TFTSymbolicWindows:
StopWatch.start('label1')
X_val = X_val.numpy()
X_val = np.reshape(X_val,Numinbatch)
iseqarray = np.right_shift(X_val,16)
ilocarray = np.bitwise_and(X_val, 0b1111111111111111)
StopWatch.stop('label1')
Ctime1 += StopWatch.get('label1', digits=4)
StopWatch.start('label3')
X_valFull = list()
for iloc in range(0,Numinbatch):
X_valFull.append(ReshapedSequencesTOT[ilocarray[iloc],iseqarray[iloc]:iseqarray[iloc]+Tseq])
StopWatch.stop('label3')
Ctime3 += StopWatch.get('label3', digits=5)
StopWatch.start('label2')
loss = myTFTcustommodel.test_step((np.array(X_valFull), y_val, sw_val, time_val, id_val))
StopWatch.stop('label2')
Ctime2 += StopWatch.get('label2', digits=4)
else:
loss = myTFTcustommodel.test_step((X_val, y_val, sw_val, time_val, id_val))
localval = loss["val_loss"].numpy()
val_lossoverbatch.append(localval)
bbar.update(Numinbatch)
bbar.set_postfix(Val_loss = localval, Epoch = e)
# End Batch
train_epoch = train_lossoverbatch[-1]
recordtrainloss.append(train_epoch)
mycheckpoint.tfrecordtrainloss = tf.Variable(recordtrainloss)
'''
line = 'Train ' + str(round(np.mean(train_lossoverbatch),5)) + ' '
count = 0
for x in train_lossoverbatch:
if count%100 == 0:
line = line + str(count) +':' + str(round(x,5)) + ' '
count += 1
print(wraptotext(line,size=180))
'''
val_epoch = 0.0
if AnyOldValidation > 0.001:
val_epoch = val_lossoverbatch[-1]
recordvalloss.append(val_epoch)
mycheckpoint.tfrecordvalloss = tf.Variable(recordvalloss)
'''
line = 'Val ' + str(round(np.mean(val_lossoverbatch),5)) + ' '
count = 0
for x in val_lossoverbatch:
if count%100 == 0:
line = line + str(count) +':' + str(round(x,5)) + ' '
count += 1
print(wraptotext(line,size=180))
'''
pbar.set_postfix(Loss = train_epoch, Val = val_epoch)
bbar.reset()
tfepochstep = tfepochstep + 1
mycheckpoint.tfepochstep.assign(tfepochstep)
# Decide on best fit
MonitorResult, train_epoch, val_epoch = TFTTrainingMonitor.EpochEvaluate(e,train_epoch, val_epoch,
tfepochstep, recordtrainloss, recordvalloss)
if MonitorResult==1:
tfepochstep, recordtrainloss, recordvalloss, train_epoch, val_epoch = TFTTrainingMonitor.RestoreBestFit() # Restore Best Fit
else:
continue
# *********************** End of Epoch Loop
TIME_stop("RunTFTCustomVersion train")
# Print Fit details
print(startbold + 'Times ' + str(round(Ctime1,5)) + ' ' + str(round(Ctime3,5)) + ' TF ' + str(round(Ctime2,5)) + resetfonts)
TFTTrainingMonitor.PrintEndofFit(TFTTransformerepochs)
# Set Best Possible Fit
TIME_start("RunTFTCustomVersion bestfit")
TIME_start("RunTFTCustomVersion bestfit FTTrainingMonitor")
tfepochstep, recordtrainloss, recordvalloss, train_epoch, val_epoch = TFTTrainingMonitor.BestPossibleFit()
TIME_stop("RunTFTCustomVersion bestfit FTTrainingMonitor")
if Checkpointfinalstate:
TIME_start("RunTFTCustomVersion bestfit Checkpointfinalstate")
savepath = mycheckpoint.save(file_prefix=CHECKPOINTDIR + RunName)
print('Checkpoint at ' + savepath + ' from ' + CHECKPOINTDIR)
TIME_stop("RunTFTCustomVersion bestfit Checkpointfinalstate")
trainlen = len(recordtrainloss)
extrainfo = ''
if AnyOldValidation > 0.001:
vallen = len(recordvalloss)
extrainfo = ' Val Epoch ' + str(vallen-1) + ' Val Loss ' + str(round(recordvalloss[vallen-1],7))
print('Train Epoch ' + str(trainlen-1) + ' Train Loss ' + str(round(recordtrainloss[trainlen-1],7)) + extrainfo)
#
TIME_start("RunTFTCustomVersion bestfit summary")
myTFTcustommodel.summary()
TIME_stop("RunTFTCustomVersion bestfit summary")
TIME_start("RunTFTCustomVersion bestfit network summary")
print('\nmyTFTcustommodel.myTFTFullNetwork **************************************')
myTFTcustommodel.myTFTFullNetwork.summary()
TIME_stop("RunTFTCustomVersion bestfit network summary")
print('\nmyTFTcustommodel.myTFTFullNetwork.TFTLSTMEncoder **************************************')
if not myTFTTools.TFTdefaultLSTM:
TIME_start("RunTFTCustomVersion bestfit TFTLSTMEncoder summary")
myTFTcustommodel.myTFTFullNetwork.TFTLSTMEncoder.summary()
TIME_stop("RunTFTCustomVersion bestfit TFTLSTMEncoder summary")
print('\nmyTFTcustommodel.myTFTFullNetwork.TFTLSTMDecoder **************************************')
TIME_start("RunTFTCustomVersion bestfit TFTLSTMDecoder summary")
myTFTcustommodel.myTFTFullNetwork.TFTLSTMEncoder.summary()
        # TODO: Gregor thinks it should be: myTFTcustommodel.myTFTFullNetwork.TFTLSTMDecoder.summary()
TIME_stop("RunTFTCustomVersion bestfit TFTLSTMDecoder summary")
print('\nmyTFTcustommodel.myTFTFullNetwork.TFTself_attn_layer **************************************')
TIME_start("RunTFTCustomVersion bestfit Network attn layer summary")
myTFTcustommodel.myTFTFullNetwork.TFTself_attn_layer.summary()
TIME_stop("RunTFTCustomVersion bestfit Network attn layer summary")
TIME_start("RunTFTCustomVersion bestfit Network attn layer attention summary")
myTFTcustommodel.myTFTFullNetwork.TFTself_attn_layer.attention.summary()
TIME_stop("RunTFTCustomVersion bestfit Network attn layer attention summary")
if OutputNetworkPictures:
outputpicture1 = APPLDIR +'/Outputs/Model_' +RunName + '1.png'
outputpicture2 = APPLDIR +'/Outputs/Model_' +RunName + '2.png'
        # Also save as PDF if possible
TIME_start("RunTFTCustomVersion bestfit Model build graph")
tf.keras.utils.plot_model(myTFTcustommodel.build_graph([Tseq,NpropperseqTOT]),
show_shapes=True, to_file = outputpicture1,
show_dtype=True,
expand_nested=True)
TIME_stop("RunTFTCustomVersion bestfit Model build graph")
TIME_start("RunTFTCustomVersion bestfit Network build graph")
tf.keras.utils.plot_model(myTFTcustommodel.myTFTFullNetwork.build_graph([Tseq,NpropperseqTOT]),
show_shapes=True, to_file = outputpicture2,
show_dtype=True,
expand_nested=True)
TIME_stop("RunTFTCustomVersion bestfit Network build graph")
TIME_start("RunTFTCustomVersion bestfit finalize")
if myTFTTools.TFTSymbolicWindows:
finalizeTFTDL(myTFTcustommodel,recordtrainloss,recordvalloss,AnyOldValidation,TFTtest_datacollection,2, LabelFit = 'Custom TFT Fit')
else:
finalizeTFTDL(myTFTcustommodel,recordtrainloss,recordvalloss,AnyOldValidation,TFTtest_datacollection,2, LabelFit = 'Custom TFT Fit')
TIME_stop("RunTFTCustomVersion bestfit finalize")
TIME_stop("RunTFTCustomVersion bestfit")
return
###Output
_____no_output_____
###Markdown
Run TFT
###Code
# Run TFT Only
TIME_start("RunTFTCustomVersion tft only")
AnalysisOnly = myTFTTools.AnalysisOnly
Dumpoutkeyplotsaspics = True
Restorefromcheckpoint = myTFTTools.Restorefromcheckpoint
Checkpointfinalstate = True
if AnalysisOnly:
Restorefromcheckpoint = True
Checkpointfinalstate = False
if Restorefromcheckpoint:
inputCHECKPOINTDIR = CHECKPOINTDIR
inputRunName = myTFTTools.inputRunName
inputCheckpointpostfix = myTFTTools.inputCheckpointpostfix
inputCHECKPOINTDIR = APPLDIR + "/checkpoints/" + inputRunName + "dir/"
batchperepoch = False # if True output a batch bar for each epoch
GlobalSpacetime = False
IncreaseNloc_sample = 1
DecreaseNloc_sample = 1
SkipDL2F = True
FullSetValidation = False
TFTTrainingMonitor = TensorFlowTrainingMonitor()
TFTTrainingMonitor.SetControlParms(SuccessLimit = 1,FailureLimit = 2)
TIME_stop("RunTFTCustomVersion tft only")
def PrintLSTMandBasicStuff(model):
myTFTTools.PrintTitle('Start TFT Deep Learning')
if myTFTTools.TFTSymbolicWindows:
print(startbold + startred + 'Symbolic Windows used to save space'+resetfonts)
else:
print(startbold + startred + 'Symbolic Windows NOT used'+resetfonts)
print('Training Locations ' + str(TrainingNloc) + ' Validation Locations ' + str(ValidationNloc) +
' Sequences ' + str(Num_Seq))
if LocationBasedValidation:
print(startbold + startred + " Location Based Validation with fraction " + str(LocationValidationFraction)+resetfonts)
if RestartLocationBasedValidation:
print(startbold + startred + " Using Validation set saved in " + RestartRunName+resetfonts)
print('\nAre futures predicted ' + str(UseFutures) + ' Custom Loss Pointer ' + str(CustomLoss) + ' Class weights used ' + str(UseClassweights))
print('\nProperties per sequence ' + str(NpropperseqTOT))
print('\n' + startbold +startpurple + 'Properties ' + resetfonts)
labelline = 'Name '
for propval in range (0,7):
labelline += QuantityStatisticsNames[propval] + ' '
print('\n' + startbold + labelline + resetfonts)
for iprop in range(0,NpropperseqTOT):
line = startbold + startpurple + str(iprop) + ' ' + InputPropertyNames[PropertyNameIndex[iprop]] + resetfonts
jprop = PropertyAverageValuesPointer[iprop]
line += ' Root ' + str(QuantityTakeroot[jprop])
for proppredval in range (0,7):
line += ' ' + str(round(QuantityStatistics[jprop,proppredval],3))
print(line)
print('\nPredictions per sequence ' + str(NpredperseqTOT))
print('\n' + startbold +startpurple + 'Predictions ' + resetfonts)
print('\n' + startbold + labelline + resetfonts)
for ipred in range(0,NpredperseqTOT):
line = startbold + startpurple + str(ipred) + ' ' + Predictionname[ipred] + ' wgt ' + str(round(Predictionwgt[ipred],3)) + resetfonts + ' '
jpred = PredictionAverageValuesPointer[ipred]
line += ' Root ' + str(QuantityTakeroot[jpred])
for proppredval in range (0,7):
line += ' ' + str(round(QuantityStatistics[jpred,proppredval],3))
print(line)
print('\n')
myTFTTools.PrintTitle('Start TFT Deep Learning')
for k in TFTparams:
print('# {} = {}'.format(k, TFTparams[k]))
TIME_start("RunTFTCustomVersion print")
runtype = ''
if Restorefromcheckpoint:
runtype = 'Restarted '
myTFTTools.PrintTitle(runtype)
PrintLSTMandBasicStuff(2)
TIME_stop("RunTFTCustomVersion print")
TIME_start("RunTFTCustomVersion A")
RunTFTCustomVersion()
myTFTTools.PrintTitle('TFT run completed')
TIME_stop("RunTFTCustomVersion A")
StopWatch.stop("total")
StopWatch.benchmark()
a="Figure out how to get name of gpu!"
#StopWatch.event(config)
#StopWatch.event(f"gpu={}")
#StopWatch.benchmark(sysinfo=False, tag="This_is_final")
StopWatch.benchmark(sysinfo=False, attributes="short")
if in_rivanna:
print("Partition is " + str(os.getenv('SLURM_JOB_PARTITION')))
print("Job ID is " + str(os.getenv("SLURM_JOB_ID")))
sys.exit(0)
###Output
_____no_output_____ |
m03_c04_interactive_visualization/m03_c04_lab_Diego_Garciar.ipynb | ###Markdown
MAT281 Applications of Mathematics in Engineering Module 03 Lab Class 04: Interactive Visualization Instructions* Fill in your personal information (name and USM student ID) in the next cell.* The scale is 0 to 4, considering only integer values.* You must push your changes to your personal course repository.* As a backup, you must send a .zip file with the format `mXX_cYY_lab_apellido_nombre.zip` to [email protected]; it must contain everything needed for every cell to run correctly, whether data, images, scripts, etc.* The following will be graded: - Solutions - Code - That Binder is properly configured. - When pressing `Kernel -> Restart Kernel and Run All Cells`, every cell must run without error.* __Submission is due at the end of this class.__ __Name__: Diego Garcia __Rol__: 201610017-2
###Code
import os
import numpy as np
import pandas as pd
import altair as alt
alt.themes.enable('opaque') # For those who use dark themes in Jupyter Lab
###Output
_____no_output_____
###Markdown
Exercise 1 (1 pt) We will once again use the data from the _European Union lesbian, gay, bisexual and transgender survey (2012)_, loading it just as in the previous lab. Using `altair`, build the following visualization: 1. Filter the data so that only the question with code `g5` is used, which corresponds to _All things considered, how satisfied would you say you are with your life these days? *_. 2. It must be a horizontal bar chart such that: 1. The percentage is shown for each group. 2. Bars are colored by the value of the answer (remember that question `g5` has numeric answers). 3. There must be one chart per country. Hint: use the `row` encoding.
###Code
daily_life = (
pd.read_csv(os.path.join("data", "LGBT_Survey_DailyLife.csv"))
.query("notes != ' [1] '")
.astype({"percentage": "int"})
.drop(columns="notes")
.rename(columns={"CountryCode": "country"})
)
daily_life.head()
alt.Chart(daily_life.query("question_code == 'g5'")).mark_bar().encode(
x= 'subset',
y='percentage',
color='answer:N',
row = 'country'
)
###Output
_____no_output_____
###Markdown
Exercise 2 (1 pt) For this part we will use an __Avocado Prices__ dataset, downloaded from [Kaggle](https://www.kaggle.com/neuromusic/avocado-prices). Context_It is a well known fact that Millenials LOVE Avocado Toast. It's also a well known fact that all Millenials live in their parents basements._Clearly, they aren't buying home because they are buying too much Avocado Toast!But maybe there's hope... if a Millenial could find a city with cheap avocados, they could live out the Millenial American Dream. ContentThis data was downloaded from the Hass Avocado Board website in May of 2018 & compiled into a single CSV. Here's how the Hass Avocado Board describes the data on their website:> The table below represents weekly 2018 retail scan data for National retail volume (units) and price. Retail scan data comes directly from retailers’ cash registers based on actual retail sales of Hass avocados. Starting in 2013, the table below reflects an expanded, multi-outlet retail data set. Multi-outlet reporting includes an aggregation of the following channels: grocery, mass, club, drug, dollar and military. The Average Price (of avocados) in the table reflects a per unit (per avocado) cost, even when multiple units (avocados) are sold in bags. The Product Lookup codes (PLU’s) in the table are only for Hass avocados. Other varieties of avocados (e.g. greenskins) are not included in this table.Some relevant columns in the dataset:* `Date` - The date of the observation* `AveragePrice` - the average price of a single avocado* `type` - conventional or organic* `year` - the year* `Region` - the city or region of the observation* `Total` Volume - Total number of avocados sold* `4046` - Total number of avocados with PLU 4046 sold* `4225` - Total number of avocados with PLU 4225 sold* `4770` - Total number of avocados with PLU 4770 sold Let's look at the dataset and format it so we can make better use of the information
###Code
paltas_raw = pd.read_csv(os.path.join("data", "avocado.csv"), index_col=0)
paltas_raw.head()
paltas = (
paltas_raw.assign(
dt_date=lambda x: pd.to_datetime(x["Date"], format="%Y-%m-%d")
)
.drop(columns=["Date", "year"])
)
paltas.head()
###Output
_____no_output_____
###Markdown
Make a line chart such that:* The horizontal axis corresponds to the date.* The vertical axis to the average price.* The color is by avocado type.
###Code
try:
alt.Chart(paltas).mark_lines().encode(
x='dt_date',
y='AveragePrice',
color='type'
)
except:
print("Exception?")
###Output
Exception?
###Markdown
`MaxRowError`? What is that? For the full details you can go [here](https://altair-viz.github.io/user_guide/faq.html). As far as we are concerned, `altair` does not only generate the pixels of a chart; it also stores the data associated with it. This error warns the user that Jupyter notebooks could end up using a lot of memory. A good practice with data like this is to generate a `json` file with the data, since `altair` is able to read its url directly. The only drawback is that it does not detect the data type automatically, so it must always be declared. Run the following cell to generate the `json` file.
###Code
paltas_url = os.path.join("data", "paltas.json")
paltas.to_json(paltas_url, orient="records")
alt.data_transformers.enable('json') # So that the url of a json file can be read directly.
###Output
_____no_output_____
###Markdown
Try generating the chart again, but this time use the url as the argument.
###Code
alt.Chart(paltas_url).mark_line().encode(
x="dt_date:T",
y="AveragePrice:Q",
color="type:N"
).properties(
width=800,
height=400
)
###Output
_____no_output_____
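###Markdown
A side note on the row limit: recent Altair versions also provide `alt.data_transformers.disable_max_rows()`, which simply switches off the 5000-row check and embeds the full dataset in the chart specification. That avoids `MaxRowError` but keeps the memory cost the `json` approach above is meant to avoid, so it is only sketched here as an alternative and is not used in this lab.
###Code
# Alternative (not executed in this lab): lift Altair's row limit instead of writing a json file.
# alt.data_transformers.disable_max_rows()
###Output
_____no_output_____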
###Markdown
Exercise 3 (2 pts) Generate a chart similar to the previous one, but this time coloring by region, i.e. a line chart such that:* The horizontal axis corresponds to the date.* The vertical axis to the average price.* The color is by region.
###Code
alt.Chart(paltas_url).mark_line().encode(
x='dt_date:T',
y='AveragePrice:Q',
color='region:N'
).properties(
width=800,
height=400
)
###Output
_____no_output_____
###Markdown
Does it seem adequate to you and/or does it provide useful information? Now, to display the same information, generate a heat map.
###Code
alt.Chart(paltas_url).mark_rect().encode(
x='dt_date:T',
y='region:N',
color='mean(AveragePrice):Q'
).properties(
width=800,
height=800
)
###Output
_____no_output_____ |
DAT-02-14-main/Homework/Unit1/brian-weiss-week-3.ipynb | ###Markdown
Project 2: Analyzing IMDb Data_Author: Kevin Markham (DC)_--- For project two, you will complete a series of exercises exploring movie rating data from IMDb. For these exercises, you will be conducting basic exploratory data analysis on IMDb's movie data, looking to answer such questions as: What is the average rating per genre? How many different actors are in a movie? This process will help you practice your data analysis skills while becoming comfortable with Pandas. Basic level
###Code
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Read in 'imdb_1000.csv' and store it in a DataFrame named movies.
###Code
movies = pd.read_csv('./data/imdb_1000.csv')
movies.head()
###Output
_____no_output_____
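###Markdown
One of the guiding questions above (how many different actors are in a movie) is not worked through later in the notebook, so a minimal sketch is included here. It assumes the `actors_list` column holds a string representation of a Python list and is meant only as an illustration.
###Code
# Sketch: parse actors_list and summarize the number of credited actors per movie
import ast
movies['actors_list'].apply(lambda s: len(ast.literal_eval(s))).describe()
###Output
_____no_output_____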
###Markdown
Check the number of rows and columns.
###Code
# Answer:
# .shape gives (number of rows, number of columns)
print(movies.shape)
###Output
_____no_output_____
###Markdown
Check the data type of each column.
###Code
# Answer:
dataTypeSeries = movies.dtypes
#print data type:
print(dataTypeSeries)
###Output
_____no_output_____
###Markdown
Calculate the average movie duration.
###Code
# Answer:
movies.duration.mean()
# answer: 120.97957099080695
###Output
_____no_output_____
###Markdown
Sort the DataFrame by duration to find the shortest and longest movies.
###Code
# Answer:
movies.sort_values('duration').head(1)   # shortest movie
# longest movie:
movies.sort_values('duration').tail(1)
###Output
_____no_output_____
###Markdown
Create a histogram of duration, choosing an "appropriate" number of bins.
###Code
# Answer:
movies.duration.plot(kind='hist', bins=20)
###Output
_____no_output_____
###Markdown
Use a box plot to display that same data.
###Code
# Answer:
movies.duration.plot(kind='box')
###Output
_____no_output_____
###Markdown
Intermediate level Count how many movies have each of the content ratings.
###Code
# Answer:
movies.content_rating.value_counts()
###Output
_____no_output_____
###Markdown
Use a visualization to display that same data, including a title and x and y labels.
###Code
# Answer:
movies.content_rating.value_counts().plot(kind='bar', title='Top 1000 Movies by Content Rating')
plt.xlabel('Content Rating')
plt.ylabel('Number of Movies')
###Output
_____no_output_____
###Markdown
Convert the following content ratings to "UNRATED": NOT RATED, APPROVED, PASSED, GP.
###Code
# Answer:
movies.content_rating.replace(['NOT RATED', 'APPROVED', 'PASSED', 'GP'], 'UNRATED', inplace=True)
###Output
_____no_output_____
###Markdown
Convert the following content ratings to "NC-17": X, TV-MA.
###Code
# Answer:
movies.content_rating.replace(['X', 'TV-MA'], 'NC-17', inplace=True)
###Output
_____no_output_____
###Markdown
Count the number of missing values in each column.
###Code
# Answer:
movies.isnull().sum()
###Output
_____no_output_____
###Markdown
If there are missing values: examine them, then fill them in with "reasonable" values.
###Code
# Answer:
#identifying misisng values
movies[movies.content_rating.isnull()]
#adding fill
movies.content_rating.fillna('UNRATED', inplace=True)
###Output
_____no_output_____
###Markdown
Calculate the average star rating for movies 2 hours or longer, and compare that with the average star rating for movies shorter than 2 hours.
###Code
# Answer:
#average star rating
movies[movies.duration >= 120].star_rating.mean()
#comparison
movies[movies.duration < 120].star_rating.mean()
###Output
_____no_output_____
###Markdown
Use a visualization to detect whether there is a relationship between duration and star rating.
###Code
# Answer:
movies.plot(kind='scatter', x='star_rating', y='duration', alpha=0.2)
###Output
_____no_output_____
###Markdown
Calculate the average duration for each genre.
###Code
# Answer:
movies = pd.read_csv('./data/imdb_1000.csv')
movies.groupby('genre').duration.mean()
# results:
# genre
# Action       126.485294
# Adventure    134.840000
# Animation     96.596774
# Biography    131.844156
# Comedy       107.602564
# Crime        122.298387
# Drama        126.539568
# Family       107.500000
# Fantasy      112.000000
# Film-Noir     97.333333
# History       66.000000
# Horror       102.517241
# Mystery      115.625000
# Sci-Fi       109.000000
# Thriller     114.200000
# Western      136.666667
# Name: duration, dtype: float64
###Output
_____no_output_____
###Markdown
Advanced level Visualize the relationship between content rating and duration.
###Code
# Answer:
#content rating:
movies.boxplot(column='duration', by='content_rating')
#duration:
movies.duration.hist(by=movies.content_rating, sharex=True)
###Output
_____no_output_____
###Markdown
Determine the top rated movie (by star rating) for each genre.
###Code
# Answer:
movies.sort_values('star_rating', ascending=False).groupby('genre').title.first()
movies.groupby('genre').title.first()
###Output
_____no_output_____
###Markdown
Check if there are multiple movies with the same title, and if so, determine if they are actually duplicates.
###Code
# Answer:
dupe_titles = movies[movies.title.duplicated()].title
movies[movies.title.isin(dupe_titles)]
###Output
_____no_output_____
###Markdown
Calculate the average star rating for each genre, but only include genres with at least 10 movies Option 1: manually create a list of relevant genres, then filter using that list
###Code
# Answer:
movies.genre.value_counts()
top_genres = ['Drama', 'Comedy', 'Action', 'Crime', 'Biography', 'Adventure', 'Animation', 'Horror', 'Mystery']
movies[movies.genre.isin(top_genres)].groupby('genre').star_rating.mean()
###Output
_____no_output_____
###Markdown
Option 2: automatically create a list of relevant genres by saving the value_counts and then filtering
###Code
# Answer:
genre_counts = movies.genre.value_counts()
top_genres = genre_counts[genre_counts >= 10].index
movies[movies.genre.isin(top_genres)].groupby('genre').star_rating.mean()
###Output
_____no_output_____
###Markdown
Option 3: calculate the average star rating for all genres, then filter using a boolean Series
###Code
# Answer:
movies.groupby('genre').star_rating.mean()[movies.genre.value_counts() >= 10]
###Output
_____no_output_____
###Markdown
Option 4: aggregate by count and mean, then filter using the count
###Code
# Answer:
genre_ratings = movies.groupby('genre').star_rating.agg(['count', 'mean'])
genre_ratings[genre_ratings['count'] >= 10]
###Output
_____no_output_____ |
.ipynb_checkpoints/Participant Data Analysis-checkpoint.ipynb | ###Markdown
Time Related Comparisons
###Code
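# Imports needed by the cells below (pandas/matplotlib for the tables and plots,
# polyfit for the trend lines further down). `df`, the participant survey dataframe
# with the TimeD1/TimeD2, Q1-7 and design columns, is assumed to have been loaded beforehand.
import pandas as pd
import matplotlib.pyplot as plt
from numpy.polynomial.polynomial import polyfit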
def convert_to_sec(time):
minutes,sec = time.split(':')
return int(minutes)*60 + int(sec)
# Convert time to sec. Currently in mm:ss format
df['TimeD1'] = df['TimeD1'].map(convert_to_sec)
df['TimeD2'] = df['TimeD2'].map(convert_to_sec)
# Plot the distribution of timings within subjects analysis
plt.plot(df['TimeD1'],'bs')
plt.plot(df['TimeD2'],'rs')
plt.legend(['Without Assistant','With Assistant'])
plt.ylabel('Time (sec)')
plt.xlabel('Participant ID')
plt.savefig('plots/time_dist.png', bbox_inches='tight')
# Mean timings for collective analysis
df.mean()[1:3].plot(kind='bar',yerr=df.std()[1:3])
plt.xticks(range(2), ['Without Assistant','With Assistant'], rotation=0)
plt.ylabel('Mean Time (sec)')
plt.savefig('plots/mean_timings.png', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Questions related to the Assistant (Q1-7)
###Code
# Select only the first 7 questions
ques_cols = ['Q1', 'Q2', 'Q3', 'Q4', 'Q5', 'Q6', 'Q7']
ques_df = df[ques_cols]
ques_df.head()
ques_df.mean()
# Showing the negative ques in red, positive ques in green
ques = ['Use Frequently', 'Easy to Use', 'Need Human Assistance', 'Functions well integrated', 'Inconsistency', 'Easy to learn', 'Cumbersome to use']
colors=['g','g','r','g','r','g','r']
ques_df.mean().plot(kind='bar', yerr=ques_df.std(), color=colors)
plt.xticks(range(7), ques)
plt.ylabel('Mean Rating (Scale: 1-3)')
plt.xlabel('Participant Questions')
plt.savefig('plots/participant_Q1-7.png', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
TODO: Show the values on the bars. Questions for comparing the designs
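A brief aside addressing the TODO above is sketched in the next cell; it assumes matplotlib >= 3.4, where `Axes.bar_label` is available. The design-comparison analysis announced in the heading continues in the cell after it.
###Code
# Sketch (assumes matplotlib >= 3.4): annotate each bar of the Q1-7 plot with its mean value
ax = ques_df.mean().plot(kind='bar', yerr=ques_df.std(), color=colors)
ax.bar_label(ax.containers[0], fmt='%.2f')
ax.set_ylabel('Mean Rating (Scale: 1-3)')
###Output
_____no_output_____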
###Code
# Extract the design related columns
design_df = df.iloc[:,-6:]
wo_df_cols = [x for x in design_df.columns if x.endswith('D1')]
wi_df_cols = [x for x in design_df.columns if x.endswith('D2')]
wo_df = design_df[wo_df_cols]
wi_df = design_df[wi_df_cols]
###Output
_____no_output_____
###Markdown
- *wi_df* is the dataframe for designs drawn with the assistant - *wo_df* is the dataframe for designs drawn without the assistant
###Code
# Strip the 'D1'/'D2' suffixes from the column names
wi_df.columns = [x[:-2] for x in wi_df.columns]
wo_df.columns = [x[:-2] for x in wo_df.columns]
# Calculate the means and std for plots
wi_df_mean = wi_df.mean()
wo_df_mean = wo_df.mean()
wi_df_std = wi_df.std()
wo_df_std = wo_df.std()
# Combining the two cases into one dataframe
design_df_mean = pd.DataFrame([wo_df_mean, wi_df_mean]).T
design_df_std = pd.DataFrame([wo_df_std, wi_df_std]).T
labels = ['Visual Appeal','Usability','Satisfaction']
design_df_mean.plot(kind='bar',yerr=design_df_std)
plt.legend(['Without Assistant','With Assistant'])
plt.xticks(range(3), labels, rotation=0)
plt.xlabel('Parameters')
plt.ylabel('Mean Rating (Scale: 1-3)')
plt.savefig('plots/participant_Q_params.png', bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Studying the relation between time for completion and quality of mockupsThe idea is to see if there exists a relation between time for completion vs the quality of mockups.
###Code
# Import data after preprocessing from Expert Data Analysis
wo_df = pd.read_csv('Data/wo_df.csv')
wi_df = pd.read_csv('Data/wi_df.csv')
numQues = 5
numParticipants = int(wo_df.shape[1] / numQues)
cols = [i for i in range(1,numParticipants+1) for j in range(numQues)]
wo_df.columns = cols
###Output
_____no_output_____
###Markdown
The quality of mockups is quantified by using a simple mean over the 5 parameters: ['Usability','Completeness','Familiarity','Attractiveness','Consistency'] used for getting the rating from independent expert evaluators.
###Code
wo_df_group = wo_df.groupby(wo_df.columns, axis=1).mean()
wo_df_mean = pd.DataFrame(wo_df_group.mean())
wo_df_mean = wo_df_mean.rename(columns={0:'mean'})
# Creating a dataframe with columns as time and mean ratings. Then sorting by time taken.
# tr1 is the prefix for time-rating-design1
# tr2 is the prefix for time-rating-design2
tr1_df = pd.concat((pd.DataFrame(df['TimeD1']), pd.DataFrame(df['Expert']), wo_df_mean), axis=1)
tr1_sorted = tr1_df.sort_values(by=['TimeD1'])
tr2_df = pd.concat((pd.DataFrame(df['TimeD2']), pd.DataFrame(df['Expert']), wo_df_mean), axis=1)
tr2_sorted = tr2_df.sort_values(by=['TimeD2'])
def plot_relation(x,y,c,xlabel,ylabel,filename):
plt.scatter(x, y, c=c)
b, m = polyfit(x, y, 1)
plt.plot(x, b + m * x, '-')
plt.xlabel(xlabel)
plt.ylabel(ylabel)
plt.savefig(filename, bbox_inches='tight')
plot_relation(x=tr1_sorted['TimeD1'], y = tr1_sorted['mean'],c=tr1_sorted['Expert'],\
xlabel='Time (sec) for designing without Assistant', ylabel='Mean Rating (Scale:1-3)',\
filename='plots/time_wo_plot.png')
plot_relation(x=tr2_sorted['TimeD2'], y = tr2_sorted['mean'],c=tr2_sorted['Expert'],\
xlabel='Time (sec) for designing with Assistant', ylabel='Mean Rating (Scale:1-3)',\
filename='plots/time_wi_plot.png')
###Output
_____no_output_____ |
src/addon/ESMPy/examples/notebooks/ungridded_dimension_regrid.ipynb | ###Markdown
ESMPy regridding with Fields containing ungridded dimensions This example demonstrates how to regrid a field with extra dimensions, such as time and vertical layers.
###Code
# conda create -n esmpy-ugrid-example -c ioos esmpy matplotlib krb5 jupyter netCDF4
# source activate esmpy-ugrid-example
# jupyter notebook
import ESMF
import numpy
###Output
_____no_output_____
###Markdown
Download data files using ESMPy utilities, if they are not downloaded already.
###Code
import os
DD = os.path.join(os.getcwd(), "ESMPy-data")
if not os.path.isdir(DD):
os.makedirs(DD)
from ESMF.util.cache_data import cache_data_file
cache_data_file(os.path.join(DD, "ll2.5deg_grid.nc"))
cache_data_file(os.path.join(DD, "T42_grid.nc"))
print('Done.')
###Output
Done.
###Markdown
Set the number of elements in the extra field dimensions
###Code
levels = 2
time = 5
###Output
_____no_output_____
###Markdown
Create two uniform global latlon grids from SCRIP-formatted files
###Code
srcgrid = ESMF.Grid(filename="ESMPy-data/ll2.5deg_grid.nc",
filetype=ESMF.FileFormat.SCRIP,
add_corner_stagger=True)
dstgrid = ESMF.Grid(filename="ESMPy-data/T42_grid.nc",
filetype=ESMF.FileFormat.SCRIP,
add_corner_stagger=True)
###Output
_____no_output_____
###Markdown
Create Fields on the center stagger locations of the Grids, specifying that they will have ungridded dimensions using the 'ndbounds' argument
###Code
srcfield = ESMF.Field(srcgrid, name='srcfield',
staggerloc=ESMF.StaggerLoc.CENTER,
ndbounds=[levels, time])
dstfield = ESMF.Field(dstgrid, name='dstfield',
staggerloc=ESMF.StaggerLoc.CENTER,
ndbounds=[levels, time])
xctfield = ESMF.Field(dstgrid, name='xctfield',
staggerloc=ESMF.StaggerLoc.CENTER,
ndbounds=[levels, time])
###Output
_____no_output_____
###Markdown
Get the coordinates of the source Grid and initialize the source Field
###Code
[lon,lat] = [0, 1]
gridXCoord = srcfield.grid.get_coords(lon, ESMF.StaggerLoc.CENTER)
gridYCoord = srcfield.grid.get_coords(lat, ESMF.StaggerLoc.CENTER)
deg2rad = 3.14159/180
for timestep in range(time):
for level in range(levels):
srcfield.data[level,timestep,:,:]=10.0*(level+timestep+1) + \
(gridXCoord*deg2rad)**2 + \
(gridXCoord*deg2rad)*\
(gridYCoord*deg2rad) + \
(gridYCoord*deg2rad)**2
###Output
_____no_output_____
###Markdown
Get the coordinates of the destination Grid and initialize the exact solution and destination Field
###Code
gridXCoord = xctfield.grid.get_coords(lon, ESMF.StaggerLoc.CENTER)
gridYCoord = xctfield.grid.get_coords(lat, ESMF.StaggerLoc.CENTER)
for timestep in range(time):
for level in range(levels):
xctfield.data[level,timestep,:,:]=10.0*(level+timestep+1) + \
(gridXCoord*deg2rad)**2 + \
(gridXCoord*deg2rad)*\
(gridYCoord*deg2rad) + \
(gridYCoord*deg2rad)**2
dstfield.data[...] = 1e20
###Output
_____no_output_____
###Markdown
Create an object to regrid data from the source to the destination Field
###Code
regrid = ESMF.Regrid(srcfield, dstfield,
regrid_method=ESMF.RegridMethod.CONSERVE,
unmapped_action=ESMF.UnmappedAction.ERROR)
###Output
_____no_output_____
###Markdown
Call the regridding operator on this Field pair
###Code
dstfield = regrid(srcfield, dstfield)
###Output
_____no_output_____
###Markdown
Display regridding results
###Code
import matplotlib.pyplot as plt
from matplotlib import animation
%matplotlib inline
lons = dstfield.grid.get_coords(0)
lats = dstfield.grid.get_coords(1)
fig = plt.figure()
ax = plt.axes(xlim=(numpy.min(lons), numpy.max(lons)),
ylim=(numpy.min(lats), numpy.max(lats)))
ax.set_xlabel("Longitude")
ax.set_ylabel("Latitude")
ax.set_title("Regrid Solution")
def animate(i):
z = dstfield.data[0,i,:,:]
cont = plt.contourf(lons, lats, z)
return cont
anim = animation.FuncAnimation(fig, animate, frames=time)
anim.save('ESMPyRegrid.mp4')
plt.show()
###Output
_____no_output_____ |
examples/Texas_example.ipynb | ###Markdown
Texas Choroshape Examples This script creates county-level choropleth maps for Texas demographic data. It creates some basic classes and walks through some use cases. City and county shapefiles were created with ArcGIS and have not been included. Colors were chosen using Color Brewer 2.0 (http://colorbrewer2.org/).All data is publicly available. The data file has been downloaded from the Texas State Data Center on 8/10/2016 (http://osd.texas.gov/Data/TPEPP/Projections/). The data comprises 2014 Population Projections with a Full 2000-10 Migration Rate.
###Code
import choroshape as cshp
import datetime as dt
import numpy as np
import pandas as pd
import geopandas as gpd
import sys, os
from six.moves import input
from Texas_mapping_objects import Texas_city_label_dict
Texas_city_label_df = Texas_city_label_dict['city_names']
%load_ext autoreload
%autoreload 2
%matplotlib inline
pd.set_option("display.max_rows",5)
###Output
_____no_output_____
###Markdown
Sets up some variables. A census api key must be specified here, as must the output path for storing the map image files.
###Code
TODAY = dt.datetime.today().strftime("%m/%d/%Y")
OUTPATH = os.path.expanduser('~/Desktop/Example_Files/')
CURR_PATH = (os.path.realpath(''))
###Output
_____no_output_____
###Markdown
Create the dataset Let's load and clean some data. First, load the file into a pandas dataframe.
###Code
TSDC_FILE = os.path.normpath(os.path.join(CURR_PATH,
'Data_Files/TSDC_PopulationProj_County_AgeGroup Yr2014 - 1.0ms.xlsx'))
pop_data = pd.read_excel(TSDC_FILE)
pop_data
###Output
_____no_output_____
###Markdown
Get the total population for each county in Texas.
###Code
total_data = pop_data[pop_data['age_group'] == 'ALL']
total_data
###Output
_____no_output_____
###Markdown
For our example program, which only serves adults, reproductive women are defined as females ages 18 to 64. We'll want to create a column, "18-64_years_of_age_and_female", with the population of reproductive women for each county. We'll do something similar for adults (18 and over) and children (under 18)
###Code
rep_women = pop_data[pop_data['age_group'].isin(
['18-24', '25-44', '45-64'])].groupby('FIPS').aggregate(np.sum).reset_index(level=0)
rep_women = rep_women[['FIPS', 'total_female']] # select
rep_women.columns = ['FIPS', '18-64_years_of_age_and_female'] # rename
rep_women
###Output
_____no_output_____
###Markdown
We'll do something similar to find the population of adults (over 18) for each county.
###Code
adults = pop_data[pop_data['age_group'].isin(
['18-24', '25-44', '45-64', '65+'])].groupby('FIPS').aggregate(
np.sum).reset_index(level=0)
adults = adults[['FIPS', 'total']] # select
adults.columns = ['FIPS', '18_years_of_age_and_older'] # rename
adults
###Output
_____no_output_____
###Markdown
And children (under 18)
###Code
children = pop_data[pop_data['age_group'] == '<18']
children = children[['FIPS', 'total']] # select
children.columns = ['FIPS', 'younger_than_18_years_of_age'] # rename
children
###Output
_____no_output_____
###Markdown
Now we'll add these columns to the totals dataset to create one county dataset with all our variables of interest.
###Code
for df in [rep_women, adults, children]:
total_data = pd.merge(total_data, df, how='left', on='FIPS')
total_data
###Output
_____no_output_____
###Markdown
We'll want to separate county info from state info.
###Code
state_only_data = total_data[total_data['FIPS'] == 0]
total_data = total_data[total_data['FIPS'] != 0]
total_data
###Output
_____no_output_____
###Markdown
Create the maps Let's create a list of the columns that we want to map:
###Code
cat_cols = ['total_anglo', 'total_black', 'total_hispanic', 'total_other',
'18-64_years_of_age_and_female', '18_years_of_age_and_older',
'younger_than_18_years_of_age']
###Output
_____no_output_____
###Markdown
We want the map template to be the same for every variable, but we want the color scheme to differ between the maps. Let's create a dict with color schemes for each map topic.
###Code
color_list = ['greens', 'purples', 'oranges', 'yellows', 'blues', 'blues', 'blues']
color_dict = dict(zip(cat_cols, color_list))
color_dict
###Output
_____no_output_____
###Markdown
In this example, we'll specify 2 shapefiles, one for county shape information and one for major cities/cities of interest in Texas. Specify the shapefile paths here:
###Code
COUNTY_SHP = os.path.normpath('')  # Enter the path for your county shapefile template; this should end in '.shp'
CITY_SHP = os.path.normpath('')  # Enter the path for your city shapefile template; this should end in '.shp'
today = dt.datetime.today().strftime("%m/%d/%Y")
footnote = 'Source: Texas State Data Center, 2014 Population Projections,\n' +\
' Full 2000-10 Migration Rate, Prepared on %s' % (today)
print(footnote)
# Column names from the city GeoDataFrame
city_geoms = 'geometry'
city_names = 'NAME'
city_info = cshp.CityInfo(CITY_SHP, 'geometry', 'NAME',
Texas_city_label_df)
###Output
Source: Texas State Data Center, 2014 Population Projections,
Full 2000-10 Migration Rate, Prepared on 11/08/2016
###Markdown
The following code 1) Creates a name for the parameter being mapped and the Title 2) Creates custom category bins with get_custom_bins--None will default to quantiles 3) Creates a choropleth data object 4) Creates a choropleth map object 5) Saves the map to a .png file
###Code
total_col = 'total'
fips_col = 'FIPS'
geofips_col = 'FIPS'
for c in cat_cols:
cat_name = c.replace('total_', '').title().replace('_', ' ')
title = 'Percent of Population who are %s by County, 2014' % cat_name
colors = color_dict[c]
# For some categories, we want to create custom bins around the state-level proportion
if c in cat_cols[4:]:
state_level = state_only_data[c]/state_only_data[total_col]
bins = cshp.get_custom_bins(state_level)
# Optional label for the legend
level_labels = {1: '(State Overall Prop.)'}
else:
bins = None
level_labels = None
total_data = cshp.fix_FIPS(total_data, fips_col, '48')
geodata = cshp.fix_FIPS(gpd.GeoDataFrame.from_file(COUNTY_SHP), geofips_col, '48')
# Data object
dataset = cshp.AreaPopDataset(total_data, geodata, fips_col, geofips_col, c,
total_col, footnote, cat_name, title,
None, 4, 2, level_labels, True)
choropleth = cshp.Choropleth(dataset, colors, city_info, OUTPATH, False, True)
# # And...make the map
choropleth.plot()
###Output
_____no_output_____ |
Big-Data-Clusters/CU9/public/content/cert-management/cer001-create-root-ca.ipynb | ###Markdown
CER001 - Generate a Root CA certificate If a Certificate Authority certificate for the test environment has never been generated, generate one using this notebook. If a Certificate Authority has been generated in another cluster, and you want to reuse the same CA for multiple clusters, then use CER002/CER003 to download and upload the already generated Root CA.- [CER002 - Download existing Root CA certificate](../cert-management/cer002-download-existing-root-ca.ipynb)- [CER003 - Upload existing Root CA certificate](../cert-management/cer003-upload-existing-root-ca.ipynb)Consider using one Root CA certificate for all non-production clusters in each environment, as this reduces the number of Root CA certificates that need to be uploaded to clients connecting to these clusters. Steps Parameters
###Code
import getpass
common_name = "SQL Server Big Data Clusters Test CA"
country_name = "US"
state_or_province_name = "Illinois"
locality_name = "Chicago"
organization_name = "Contoso"
organizational_unit_name = "Finance"
email_address = f"{getpass.getuser().lower()}@contoso.com"
days = "398" # Max supported validity period in Safari - https://www.thesslstore.com/blog/ssl-certificate-validity-will-be-limited-to-one-year-by-apples-safari-browser/
test_cert_store_root = "/var/opt/secrets/test-certificates"
###Output
_____no_output_____
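###Markdown
For orientation only: the parameters above correspond to the subject fields of an X.509 certificate. The sketch below shows roughly how a self-signed root CA with these fields could be produced with OpenSSL from Python; it assumes a local `openssl` binary is available and it is not the notebook's own procedure, which continues in the cells that follow.
###Code
# Illustrative sketch only (assumes a local `openssl` binary); this is not what the notebook
# itself executes against the cluster in later steps.
import subprocess
subject = (f"/C={country_name}/ST={state_or_province_name}/L={locality_name}"
           f"/O={organization_name}/OU={organizational_unit_name}"
           f"/CN={common_name}/emailAddress={email_address}")
subprocess.run(["openssl", "req", "-x509", "-newkey", "rsa:4096", "-sha256", "-nodes",
                "-keyout", "cakey.pem", "-out", "cacert.pem",
                "-days", days, "-subj", subject], check=True)
###Output
_____no_output_____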
###Markdown
Common functionsDefine helper functions used in this notebook.
###Code
# Define `run` function for transient fault handling, suggestions on error, and scrolling updates on Windows
import sys
import os
import re
import platform
import shlex
import shutil
import datetime
from subprocess import Popen, PIPE
from IPython.display import Markdown
retry_hints = {} # Output in stderr known to be transient, therefore automatically retry
error_hints = {} # Output in stderr where a known SOP/TSG exists which will be HINTed for further help
install_hint = {} # The SOP to help install the executable if it cannot be found
def run(cmd, return_output=False, no_output=False, retry_count=0, base64_decode=False, return_as_json=False):
"""Run shell command, stream stdout, print stderr and optionally return output
NOTES:
1. Commands that need this kind of ' quoting on Windows e.g.:
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='data-pool')].metadata.name}
Need to actually pass in as '"':
kubectl get nodes -o jsonpath={.items[?(@.metadata.annotations.pv-candidate=='"'data-pool'"')].metadata.name}
The ' quote approach, although correct when pasting into Windows cmd, will hang at the line:
`iter(p.stdout.readline, b'')`
The shlex.split call does the right thing for each platform, just use the '"' pattern for a '
"""
MAX_RETRIES = 5
output = ""
retry = False
# When running `azdata sql query` on Windows, replace any \n in """ strings, with " ", otherwise we see:
#
# ('HY090', '[HY090] [Microsoft][ODBC Driver Manager] Invalid string or buffer length (0) (SQLExecDirectW)')
#
if platform.system() == "Windows" and cmd.startswith("azdata sql query"):
cmd = cmd.replace("\n", " ")
# shlex.split is required on bash and for Windows paths with spaces
#
cmd_actual = shlex.split(cmd)
# Store this (i.e. kubectl, python etc.) to support binary context aware error_hints and retries
#
user_provided_exe_name = cmd_actual[0].lower()
# When running python, use the python in the ADS sandbox ({sys.executable})
#
if cmd.startswith("python "):
cmd_actual[0] = cmd_actual[0].replace("python", sys.executable)
# On Mac, when ADS is not launched from terminal, LC_ALL may not be set, which causes pip installs to fail
# with:
#
# UnicodeDecodeError: 'ascii' codec can't decode byte 0xc5 in position 4969: ordinal not in range(128)
#
# Setting it to a default value of "en_US.UTF-8" enables pip install to complete
#
if platform.system() == "Darwin" and "LC_ALL" not in os.environ:
os.environ["LC_ALL"] = "en_US.UTF-8"
# When running `kubectl`, if AZDATA_OPENSHIFT is set, use `oc`
#
if cmd.startswith("kubectl ") and "AZDATA_OPENSHIFT" in os.environ:
cmd_actual[0] = cmd_actual[0].replace("kubectl", "oc")
# To aid supportability, determine which binary file will actually be executed on the machine
#
which_binary = None
# Special case for CURL on Windows. The version of CURL in Windows System32 does not work to
# get JWT tokens, it returns "(56) Failure when receiving data from the peer". If another instance
# of CURL exists on the machine use that one. (Unfortunately the curl.exe in System32 is almost
# always the first curl.exe in the path, and it can't be uninstalled from System32, so here we
# look for the 2nd installation of CURL in the path)
if platform.system() == "Windows" and cmd.startswith("curl "):
path = os.getenv('PATH')
for p in path.split(os.path.pathsep):
p = os.path.join(p, "curl.exe")
if os.path.exists(p) and os.access(p, os.X_OK):
if p.lower().find("system32") == -1:
cmd_actual[0] = p
which_binary = p
break
# Find the path based location (shutil.which) of the executable that will be run (and display it to aid supportability), this
# seems to be required for .msi installs of azdata.cmd/az.cmd. (otherwise Popen returns FileNotFound)
#
# NOTE: Bash needs cmd to be the list of the space separated values hence shlex.split.
#
if which_binary == None:
which_binary = shutil.which(cmd_actual[0])
# Display an install HINT, so the user can click on a SOP to install the missing binary
#
if which_binary == None:
print(f"The path used to search for '{cmd_actual[0]}' was:")
print(sys.path)
if user_provided_exe_name in install_hint and install_hint[user_provided_exe_name] is not None:
display(Markdown(f'HINT: Use [{install_hint[user_provided_exe_name][0]}]({install_hint[user_provided_exe_name][1]}) to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)")
else:
cmd_actual[0] = which_binary
start_time = datetime.datetime.now().replace(microsecond=0)
print(f"START: {cmd} @ {start_time} ({datetime.datetime.utcnow().replace(microsecond=0)} UTC)")
print(f" using: {which_binary} ({platform.system()} {platform.release()} on {platform.machine()})")
print(f" cwd: {os.getcwd()}")
# Command-line tools such as CURL and AZDATA HDFS commands output
# scrolling progress bars, which causes Jupyter to hang forever, to
# workaround this, use no_output=True
#
    # Work around an infinite hang when a notebook generates a non-zero return code, break out, and do not wait
#
wait = True
try:
if no_output:
p = Popen(cmd_actual)
else:
p = Popen(cmd_actual, stdout=PIPE, stderr=PIPE, bufsize=1)
with p.stdout:
for line in iter(p.stdout.readline, b''):
line = line.decode()
if return_output:
output = output + line
else:
if cmd.startswith("azdata notebook run"): # Hyperlink the .ipynb file
regex = re.compile(' "(.*)"\: "(.*)"')
match = regex.match(line)
if match:
if match.group(1).find("HTML") != -1:
display(Markdown(f' - "{match.group(1)}": "{match.group(2)}"'))
else:
display(Markdown(f' - "{match.group(1)}": "[{match.group(2)}]({match.group(2)})"'))
wait = False
break # otherwise infinite hang, have not worked out why yet.
else:
print(line, end='')
if wait:
p.wait()
except FileNotFoundError as e:
if install_hint is not None:
display(Markdown(f'HINT: Use {install_hint} to resolve this issue.'))
raise FileNotFoundError(f"Executable '{cmd_actual[0]}' not found in path (where/which)") from e
exit_code_workaround = 0 # WORKAROUND: azdata hangs on exception from notebook on p.wait()
if not no_output:
for line in iter(p.stderr.readline, b''):
try:
line_decoded = line.decode()
except UnicodeDecodeError:
# NOTE: Sometimes we get characters back that cannot be decoded(), e.g.
#
# \xa0
#
# For example see this in the response from `az group create`:
#
# ERROR: Get Token request returned http error: 400 and server
# response: {"error":"invalid_grant",# "error_description":"AADSTS700082:
# The refresh token has expired due to inactivity.\xa0The token was
# issued on 2018-10-25T23:35:11.9832872Z
#
# which generates the exception:
#
# UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 179: invalid start byte
#
print("WARNING: Unable to decode stderr line, printing raw bytes:")
print(line)
line_decoded = ""
pass
else:
# azdata emits a single empty line to stderr when doing an hdfs cp, don't
# print this empty "ERR:" as it confuses.
#
if line_decoded == "":
continue
print(f"STDERR: {line_decoded}", end='')
if line_decoded.startswith("An exception has occurred") or line_decoded.startswith("ERROR: An error occurred while executing the following cell"):
exit_code_workaround = 1
# inject HINTs to next TSG/SOP based on output in stderr
#
if user_provided_exe_name in error_hints:
for error_hint in error_hints[user_provided_exe_name]:
if line_decoded.find(error_hint[0]) != -1:
display(Markdown(f'HINT: Use [{error_hint[1]}]({error_hint[2]}) to resolve this issue.'))
# Verify if a transient error, if so automatically retry (recursive)
#
if user_provided_exe_name in retry_hints:
for retry_hint in retry_hints[user_provided_exe_name]:
if line_decoded.find(retry_hint) != -1:
if retry_count < MAX_RETRIES:
print(f"RETRY: {retry_count} (due to: {retry_hint})")
retry_count = retry_count + 1
output = run(cmd, return_output=return_output, retry_count=retry_count)
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
elapsed = datetime.datetime.now().replace(microsecond=0) - start_time
# WORKAROUND: We avoid infinite hang above in the `azdata notebook run` failure case, by inferring success (from stdout output), so
# don't wait here, if success known above
#
if wait:
if p.returncode != 0:
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(p.returncode)}.\n')
else:
if exit_code_workaround !=0 :
raise SystemExit(f'Shell command:\n\n\t{cmd} ({elapsed}s elapsed)\n\nreturned non-zero exit code: {str(exit_code_workaround)}.\n')
print(f'\nSUCCESS: {elapsed}s elapsed.\n')
if return_output:
if base64_decode:
import base64
return base64.b64decode(output).decode('utf-8')
else:
return output
# Hints for tool retry (on transient fault), known errors and install guide
#
retry_hints = {'python': [ ], 'azdata': ['Endpoint sql-server-master does not exist', 'Endpoint livy does not exist', 'Failed to get state for cluster', 'Endpoint webhdfs does not exist', 'Adaptive Server is unavailable or does not exist', 'Error: Address already in use', 'Login timeout expired (0) (SQLDriverConnect)', 'SSPI Provider: No Kerberos credentials available', ], 'kubectl': ['A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond', ], }
error_hints = {'python': [['Library not loaded: /usr/local/opt/unixodbc', 'SOP012 - Install unixodbc for Mac', '../install/sop012-brew-install-odbc-for-sql-server.ipynb'], ['WARNING: You are using pip version', 'SOP040 - Upgrade pip in ADS Python sandbox', '../install/sop040-upgrade-pip.ipynb'], ], 'azdata': [['Please run \'azdata login\' to first authenticate', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['The token is expired', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Reason: Unauthorized', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Max retries exceeded with url: /api/v1/bdc/endpoints', 'SOP028 - azdata login', '../common/sop028-azdata-login.ipynb'], ['Look at the controller logs for more details', 'TSG027 - Observe cluster deployment', '../diagnose/tsg027-observe-bdc-create.ipynb'], ['provided port is already allocated', 'TSG062 - Get tail of all previous container logs for pods in BDC namespace', '../log-files/tsg062-tail-bdc-previous-container-logs.ipynb'], ['Create cluster failed since the existing namespace', 'SOP061 - Delete a big data cluster', '../install/sop061-delete-bdc.ipynb'], ['Failed to complete kube config setup', 'TSG067 - Failed to complete kube config setup', '../repair/tsg067-failed-to-complete-kube-config-setup.ipynb'], ['Data source name not found and no default driver specified', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Can\'t open lib \'ODBC Driver 17 for SQL Server', 'SOP069 - Install ODBC for SQL Server', '../install/sop069-install-odbc-driver-for-sql-server.ipynb'], ['Control plane upgrade failed. Failed to upgrade controller.', 'TSG108 - View the controller upgrade config map', '../diagnose/tsg108-controller-failed-to-upgrade.ipynb'], ['NameError: name \'azdata_login_secret_name\' is not defined', 'SOP013 - Create secret for azdata login (inside cluster)', '../common/sop013-create-secret-for-azdata-login.ipynb'], ['ERROR: No credentials were supplied, or the credentials were unavailable or inaccessible.', 'TSG124 - \'No credentials were supplied\' error from azdata login', '../repair/tsg124-no-credentials-were-supplied.ipynb'], ['Please accept the license terms to use this product through', 'TSG126 - azdata fails with \'accept the license terms to use this product\'', '../repair/tsg126-accept-license-terms.ipynb'], ], 'kubectl': [['no such host', 'TSG010 - Get configuration contexts', '../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb'], ['No connection could be made because the target machine actively refused it', 'TSG056 - Kubectl fails with No connection could be made because the target machine actively refused it', '../repair/tsg056-kubectl-no-connection-could-be-made.ipynb'], ], }
install_hint = {'azdata': [ 'SOP063 - Install azdata CLI (using package manager)', '../install/sop063-packman-install-azdata.ipynb' ], 'kubectl': [ 'SOP036 - Install kubectl command line interface', '../install/sop036-install-kubectl.ipynb' ], }
print('Common functions defined successfully.')
###Output
_____no_output_____
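###Markdown
A minimal sketch of calling the `run` helper defined above with `return_output=True`, which captures stdout as a string instead of streaming it; `kubectl version --client` is only an example command chosen for illustration.
###Code
# Example use of the `run` helper: capture the command's stdout as a string
kubectl_client_version = run('kubectl version --client', return_output=True)
print(kubectl_client_version)
###Output
_____no_output_____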
###Markdown
Get the Kubernetes namespace for the big data cluster Get the namespace of the Big Data Cluster using the kubectl command line interface. **NOTE:** If there is more than one Big Data Cluster in the target Kubernetes cluster, then either: - set \[0\] to the correct value for the big data cluster, or - set the environment variable AZDATA_NAMESPACE before starting Azure Data Studio.
###Code
# Place Kubernetes namespace name for BDC into 'namespace' variable
if "AZDATA_NAMESPACE" in os.environ:
namespace = os.environ["AZDATA_NAMESPACE"]
else:
try:
namespace = run(f'kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={{.items[0].metadata.name}}', return_output=True)
except:
from IPython.display import Markdown
print(f"ERROR: Unable to find a Kubernetes namespace with label 'MSSQL_CLUSTER'. SQL Server Big Data Cluster Kubernetes namespaces contain the label 'MSSQL_CLUSTER'.")
display(Markdown(f'HINT: Use [TSG081 - Get namespaces (Kubernetes)](../monitor-k8s/tsg081-get-kubernetes-namespaces.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [TSG010 - Get configuration contexts](../monitor-k8s/tsg010-get-kubernetes-contexts.ipynb) to resolve this issue.'))
display(Markdown(f'HINT: Use [SOP011 - Set kubernetes configuration context](../common/sop011-set-kubernetes-context.ipynb) to resolve this issue.'))
raise
print(f'The SQL Server Big Data Cluster Kubernetes namespace is: {namespace}')
###Output
_____no_output_____
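###Markdown
A minimal sketch for the multi-cluster case mentioned in the note above: it lists every namespace carrying the MSSQL_CLUSTER label so the correct index or an explicit AZDATA_NAMESPACE value can be chosen. It only reuses the `run` helper defined earlier; the jsonpath expression is standard kubectl syntax.
###Code
# List all namespaces labelled MSSQL_CLUSTER, to help pick the right one
all_bdc_namespaces = run('kubectl get namespace --selector=MSSQL_CLUSTER -o jsonpath={.items[*].metadata.name}', return_output=True)
print(f"Namespaces labelled MSSQL_CLUSTER: {all_bdc_namespaces}")
###Output
_____no_output_____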
###Markdown
Create a temporary directory to stage files
###Code
# Create a temporary directory to hold configuration files
import tempfile
temp_dir = tempfile.mkdtemp()
print(f"Temporary directory created: {temp_dir}")
###Output
_____no_output_____
###Markdown
Helper function to save configuration files to disk
###Code
# Define helper function 'save_file' to save configuration files to the temporary directory created above
import os
import io
def save_file(filename, contents):
with io.open(os.path.join(temp_dir, filename), "w", encoding='utf8', newline='\n') as text_file:
text_file.write(contents)
print("File saved: " + os.path.join(temp_dir, filename))
print("Function `save_file` defined successfully.")
###Output
_____no_output_____
###Markdown
Certificate configuration file
###Code
certificate = f"""
[ ca ]
default_ca = CA_default # The default ca section
[ CA_default ]
default_days = 1000 # How long to certify for
default_crl_days = 30 # How long before next CRL
default_md = sha256 # Use public key default MD
preserve = no # Keep passed DN ordering
x509_extensions = ca_extensions # The extensions to add to the cert
email_in_dn = no # Don't concat the email in the DN
copy_extensions = copy # Required to copy SANs from CSR to cert
[ req ]
default_bits = 2048
default_keyfile = {test_cert_store_root}/cakey.pem
distinguished_name = ca_distinguished_name
x509_extensions = ca_extensions
string_mask = utf8only
[ ca_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_default = {country_name}
stateOrProvinceName = State or Province Name (full name)
stateOrProvinceName_default = {state_or_province_name}
localityName = Locality Name (eg, city)
localityName_default = {locality_name}
organizationName = Organization Name (eg, company)
organizationName_default = {organization_name}
organizationalUnitName = Organizational Unit (eg, division)
organizationalUnitName_default = {organizational_unit_name}
commonName = Common Name (e.g. server FQDN or YOUR name)
commonName_default = {common_name}
emailAddress = Email Address
emailAddress_default = {email_address}
[ ca_extensions ]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always, issuer
basicConstraints = critical, CA:true
keyUsage = keyCertSign, cRLSign
"""
save_file("ca.openssl.cnf", certificate)
###Output
_____no_output_____
###Markdown
Get name of the ‘Running’ `controller` `pod`
###Code
# Place the name of the 'Running' controller pod in variable `controller`
controller = run(f'kubectl get pod --selector=app=controller -n {namespace} -o jsonpath={{.items[0].metadata.name}} --field-selector=status.phase=Running', return_output=True)
print(f"Controller pod name: {controller}")
###Output
_____no_output_____
###Markdown
Create folder on controller to hold Test Certificates
###Code
run(f'kubectl exec {controller} -n {namespace} -c controller -- bash -c "mkdir -p {test_cert_store_root}" ')
###Output
_____no_output_____
###Markdown
Copy certificate configuration to `controller` `pod`
###Code
import os
cwd = os.getcwd()
os.chdir(temp_dir) # Workaround kubectl bug on Windows, can't put c:\ on kubectl cp cmd line
run(f'kubectl cp ca.openssl.cnf {controller}:{test_cert_store_root}/ca.openssl.cnf -c controller -n {namespace}')
os.chdir(cwd)
###Output
_____no_output_____
###Markdown
Generate certificate
###Code
cmd = f"openssl req -x509 -config {test_cert_store_root}/ca.openssl.cnf -newkey rsa:2048 -sha256 -nodes -days {days} -out {test_cert_store_root}/cacert.pem -outform PEM -subj '/C={country_name}/ST={state_or_province_name}/L={locality_name}/O={organization_name}/OU={organizational_unit_name}/CN={common_name}'"
run(f'kubectl exec {controller} -c controller -n {namespace} -- bash -c "{cmd}"')
###Output
_____no_output_____
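###Markdown
As a sanity check, the root certificate generated above could be inspected with `openssl x509`; this is a sketch that reuses the same pod, namespace, and path variables as the cells above, and only prints the subject and validity dates.
###Code
# Inspect the CA certificate that was just generated inside the controller pod
inspect_cmd = f"openssl x509 -in {test_cert_store_root}/cacert.pem -noout -subject -dates"
run(f'kubectl exec {controller} -c controller -n {namespace} -- bash -c "{inspect_cmd}"')
###Output
_____no_output_____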
###Markdown
Clean up temporary directory for staging configuration files
###Code
# Delete the temporary directory used to hold configuration files
import shutil
shutil.rmtree(temp_dir)
print(f'Temporary directory deleted: {temp_dir}')
print("Notebook execution is complete.")
###Output
_____no_output_____ |
code/FOOD FOOD FOOD.ipynb | ###Markdown
Modeling our Food: A Model by Meg and Sparsh. Project 1
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim library
from modsim import *
from pandas import read_html
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
table2 = tables[2]
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
def plot_results(census, un, timeseries, title):
"""Plot the estimates and the model.
census: TimeSeries of population estimates
un: TimeSeries of population estimates
timeseries: TimeSeries of simulation results
title: string
"""
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
if len(timeseries):
plot(timeseries, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title=title)
un = table2.un / 1e9
census = table2.census / 1e9
empty = TimeSeries()
plot_results(census, un, empty, 'World population estimates')
from pandas import read_csv
food_filename = read_csv("data/FAOSTAT_data_9-24-2018.csv")
food_filename.columns = ['year', 'cost']
print(food_filename)
food_filename.cost = food_filename.cost/100000
plot(food_filename.year, food_filename.cost)
decorate(title = "World Food Production over Time",
xlabel = "Year",
ylabel = "Cost of food produced in 2004-2006 \n hundred-thousand US Dollars")
'''Food cost per person = 2641 (2013 dollar)
Inflation from 2006 - 2013 = 13.46%'''
first_year = food_filename.year[0]
print(first_year)
#for i in food_filename:
# food_filename.cost[i] = food_filename.cost[i] / 1.1346
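# NOTE: the commented-out loop above iterates over column names rather than rows, so it would
# not adjust each value as written. If the inflation adjustment were enabled, a vectorized form
# such as
#   food_filename.cost = food_filename.cost / 1.1346
# would likely be the intended operation (using the 13.46% inflation figure quoted above).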
###Output
1961
|
nbs/02_pytorch.transformer.ipynb | ###Markdown
Pytorch Transformer> Utilities for working with PyTorch's Transformer Helpers gen_key_padding_mask
###Code
# export
def gen_key_padding_mask(input_ids, pad_id):
''' Returns ByteTensor where True values are positions that contain pad_id.
input_ids: (bs, seq_len) returns: (bs, seq_len)
'''
device = input_ids.device
mask = torch.where(input_ids == pad_id, torch.tensor(1, device=device), torch.tensor(0, device=device)).to(device)
return mask.bool()
input_ids = torch.tensor([[12, 11, 0, 0],
[9, 1, 5, 0]])
key_padding_mask = gen_key_padding_mask(input_ids, 0)
test_eq(key_padding_mask, torch.tensor([[0, 0, 1, 1],
[0, 0, 0, 1]]).bool())
###Output
_____no_output_____
###Markdown
gen_lm_mask
###Code
# export
def gen_lm_mask(tgt_seq_len, device):
"""Generate a square mask for the sequence. The masked positions are filled with float('-inf').
Unmasked positions are filled with float(0.0).
ex: tgt_seq_len = 4
[[0., -inf, -inf, -inf],
[0., 0., -inf, -inf],
[0., 0., 0., -inf],
[0., 0., 0., 0.]])
"""
mask = (torch.triu(torch.ones(tgt_seq_len, tgt_seq_len)) == 1).transpose(0, 1)
mask = mask.float().masked_fill(mask == 0, float('-inf')).masked_fill(mask == 1, float(0.0))
return mask.to(device)
lm_mask = gen_lm_mask(4, 'cpu')
test_eq(lm_mask, torch.tensor([ [0., float('-inf'), float('-inf'), float('-inf')],
[0., 0., float('-inf'), float('-inf')],
[0., 0., 0., float('-inf')],
[0., 0., 0., 0.]]) )
###Output
_____no_output_____
###Markdown
Modules BatchFirstTransformerEncoder
###Code
# export
class BatchFirstTransformerEncoder(nn.TransformerEncoder):
'''
nn.TransformerEncoder want src be (seq_len, bs, embeded_size) and returns (seq_len, bs, embeded_size),
just change it to accept batch first input and returns
'''
def forward(self, src, *inputs, **kwargs):
''' src: (bs, enc_seq_len, embeded_size), returns: (bs, enc_seq_len, embeded_size) '''
src = src.permute(1, 0, 2) # (enc_seq_len, bs, embeded_size)
output = super().forward(src, *inputs, **kwargs) # (enc_seq_len, bs, embeded_size)
return output.permute(1, 0, 2) # (bs, enc_seq_len, embeded_size)
encoder_layer = nn.TransformerEncoderLayer(d_model=128, nhead=2)
encoder_norm = nn.LayerNorm(normalized_shape=128)
encoder = BatchFirstTransformerEncoder(encoder_layer=encoder_layer, num_layers=2, norm=encoder_norm)
src = torch.randn((16, 50, 128)) # (bs, enc_seq_len, embeded_size)
output = encoder(src) # (bs, enc_seq_len, embeded_size)
test_eq(output.shape, (16, 50, 128))
###Output
_____no_output_____
###Markdown
BatchFirstTransformerDecoder
###Code
# export
class BatchFirstTransformerDecoder(nn.TransformerDecoder):
'''
nn.TransformerDecoder want tgt be (seq_len, bs, embeded_size) and returns (seq_len, bs, embeded_size),
just change it to accept batch first input and returns
'''
def forward(self, tgt, memory, *inputs, **kwargs):
'''
tgt: (bs, dec_seq_len, embeded_size)
memory: (bs, enc_seq_len, embeded_size)
returns: (bs, dec_seq_len, embeded_size)
'''
tgt = tgt.permute(1, 0, 2) # (dec_seq_len, bs, embeded_size)
memory = memory.permute(1, 0, 2) # (enc_seq_len, bs, embeded_size)
output = super().forward(tgt, memory, *inputs, **kwargs) # (dec_seq_len, bs, embeded_size)
return output.permute(1, 0, 2) # (bs, dec_seq_len, embeded_size)
decoder_layer = nn.TransformerDecoderLayer(d_model=128, nhead=2)
decoder_norm = nn.LayerNorm(normalized_shape=128)
decoder = BatchFirstTransformerDecoder(decoder_layer, num_layers=2, norm=decoder_norm)
tgt = torch.randn((16, 40, 128)) # (bs, dec_seq_len)
memory = torch.randn((16, 50, 128)) # (bs, enc_seq_len, embeded_size)
output = decoder(tgt, memory) # (bs, dec_seq_len, embeded_size)
test_eq(output.shape, (16, 40, 128))
###Output
_____no_output_____
###Markdown
BatchFirstMultiheadAttention
###Code
# export
class BatchFirstMultiheadAttention(nn.MultiheadAttention):
'''
Pytorch wants your query, key, value be (seq_len, b, embed_dim) and return (seq_len, b, embed_dim)
But I like batch-first thing. input: (b, seq_len, embed_dim) output: (b, seq_len, embed_dim)
'''
def forward(self, query, key, value, **kwargs):
'''
- inputs:
- query: (bs, tgt_seq_len, embed_dim)
- key: (bs, src_seq_len, embed_dim)
- value: (bs, src_seq_len, embed_dim)
- outputs:
- attn_output: (bs, tgt_seq_len, embed_dim)
- attn_weight: (bs, tgt_seq_len, src_seq_len), Averaged weights that averaged over all heads
'''
attn_output, attn_weight = super().forward(query.permute(1, 0, 2), key.permute(1, 0, 2), value.permute(1, 0, 2), **kwargs)
return attn_output.permute(1, 0, 2), attn_weight
multi_attn = BatchFirstMultiheadAttention(embed_dim=128, num_heads=1, dropout=0)
query = torch.randn((3, 40, 128))
key = torch.randn((3, 50, 128))
value = torch.randn((3, 50, 128))
attn_output, attn_weight = multi_attn(query, key, value)
test_eq(attn_output.shape, (3, 40, 128))
test_eq(attn_weight.shape, (3, 40, 50))
###Output
_____no_output_____
###Markdown
CrossAttention
###Code
# export
class CrossAttention(nn.Module):
def __init__(self, embed_dim, num_heads=1, drop_p=0, num_layers=1):
super().__init__()
self.cross_attn_layers = nn.ModuleList(
[BatchFirstMultiheadAttention(embed_dim, num_heads=num_heads, dropout=drop_p) for _ in range(num_layers)]
)
def forward(self, tgt, src, src_key_padding_mask):
'''
tgt: (bs, tgt_seq_len, embed_size)
src: (bs, src_seq_len, embed_size)
src_key_padding_mask: (bs, src_seq_len)
returns: output, attn_weight
output: (bs, tgt_seq_len, embed_dim)
attn_weight: (bs, tgt_seq_len, src_seq_len)
'''
for layer in self.cross_attn_layers:
tgt, attn_weight = layer(tgt, src, src, key_padding_mask=src_key_padding_mask)
return tgt, attn_weight
tgt = torch.randn((3, 40, 768))
src = torch.randn((3, 50, 768))
src_key_padding_mask = torch.zeros((3, 50)).bool()
cross_attn = CrossAttention(embed_dim=768, num_layers=2)
cross_attn_out, cross_attn_weight = cross_attn(tgt, src, src_key_padding_mask)
test_eq(cross_attn_out.shape, (3, 40, 768))
test_eq(cross_attn_weight.shape, (3, 40, 50))
###Output
_____no_output_____
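###Markdown
A small end-to-end sketch combining the helpers defined above: build a padding mask for a padded source batch, encode it with the `encoder` instance created earlier, then decode with a causal mask from `gen_lm_mask`. The token IDs and embeddings here are made-up stand-ins used only to check shapes.
###Code
src_ids = torch.tensor([[5, 7, 9, 0], [4, 2, 0, 0]])                  # (bs, src_seq_len), 0 is padding
src_key_padding_mask = gen_key_padding_mask(src_ids, 0)               # (bs, src_seq_len)
src_emb = torch.randn(2, 4, 128)                                      # stand-in source embeddings
tgt_emb = torch.randn(2, 3, 128)                                      # stand-in target embeddings
memory = encoder(src_emb, src_key_padding_mask=src_key_padding_mask)  # (bs, src_seq_len, embed)
tgt_mask = gen_lm_mask(3, 'cpu')                                      # causal mask for decoding
out = decoder(tgt_emb, memory, tgt_mask=tgt_mask,
              memory_key_padding_mask=src_key_padding_mask)
test_eq(out.shape, (2, 3, 128))
###Output
_____no_output_____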
###Markdown
Export -
###Code
# hide
from nbdev.export import notebook2script
notebook2script()
###Output
Converted 01_data.core.ipynb.
Converted 02_pytorch.transformer.ipynb.
Converted 03_pytorch.model.ipynb.
Converted 04_optuna.ipynb.
Converted index.ipynb.
|
Sentiment Analysis/Amazon Review Sentiment Analysis with BERT.ipynb | ###Markdown
Preparing the environment
###Code
! pip3 install tf-models-official
! pip install tensorflow-text
import numpy as np
import os
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import gc # garbage collections
import bz2 # to open zipped files
import tensorflow as tf
from tensorflow import keras
from keras import models, layers
# more libraries for BERT
import tensorflow_hub as hub
import tensorflow_text as text
from official.nlp import optimization # to create AdamW optimizer
tf.get_logger().setLevel('ERROR')
! pip install kaggle
! mkdir ~/.kaggle
! cp /content/drive/MyDrive/kaggle.json ~/.kaggle/
! chmod 600 ~/.kaggle/kaggle.json
! kaggle datasets download -d bittlingmayer/amazonreviews
! unzip /content/amazonreviews.zip -d /content/amazonreviews
# read the files
train = bz2.BZ2File('/content/amazonreviews/train.ft.txt.bz2')
test = bz2.BZ2File('/content/amazonreviews/test.ft.txt.bz2')
train = train.readlines()
test = test.readlines()
# convert from raw binary strings into text files that can be parsed
train = [x.decode('utf-8') for x in train]
test = [x.decode('utf-8') for x in test]
# extract the labels
train_labels = [0 if x.split(' ')[0] == '__label__1' else 1 for x in train]
test_labels = [0 if x.split(' ')[0] =='__label__1' else 1 for x in test]
# extract the texts
train_texts = [x.split(' ', maxsplit=1)[1][:-1] for x in train]
test_texts = [x.split(' ', maxsplit=1)[1][:-1] for x in test]
# let's convert the labels to numpy arrays
# labels = np.array(train_labels)
del train, test
gc.collect()
texts = train_texts[:100000]
labels = train_labels[:100000]
val_texts = train_texts[100001:120000]
val_labels = train_labels[100001:120000]
# Choosing a BERT Model to fine-tune
bert_model_name = 'small_bert/bert_en_uncased_L-4_H-512_A-8'
tfhub_handle_encoder = 'https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/1'
tfhub_handle_preprocess = 'https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3'
print(f'BERT model selected : {tfhub_handle_encoder}')
print(f'Preprocess model auto-selected: {tfhub_handle_preprocess}')
# Preprocessing
bert_preprocess_model = hub.KerasLayer(tfhub_handle_preprocess)
# test
text_test = ['this is such an amazing movie!']
text_preprocessed = bert_preprocess_model(text_test)
print(f'Keys : {list(text_preprocessed.keys())}')
print(f'Shape : {text_preprocessed["input_word_ids"].shape}')
print(f'Word Ids : {text_preprocessed["input_word_ids"][0, :12]}')
print(f'Input Mask : {text_preprocessed["input_mask"][0, :12]}')
print(f'Type Ids : {text_preprocessed["input_type_ids"][0, :12]}')
def build_classifier_model():
text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name='text')
preprocessing_layer = hub.KerasLayer(tfhub_handle_preprocess, name='preprocessing')
encoder_inputs = preprocessing_layer(text_input)
encoder = hub.KerasLayer(tfhub_handle_encoder, trainable=True, name='BERT_encoder')
outputs = encoder(encoder_inputs)
net = outputs['pooled_output']
net = tf.keras.layers.Dropout(0.1)(net)
net = tf.keras.layers.Dense(1, activation='sigmoid', name='classifier')(net)
return tf.keras.Model(text_input, net)
model = build_classifier_model()
model.summary()
tf.keras.utils.plot_model(model)
# define the loss, metrics and optimizer
loss = tf.keras.losses.BinaryCrossentropy()
metrics = tf.metrics.BinaryAccuracy(name='accuracy')
epochs = 5
dataset = tf.data.Dataset.range(1000)
steps_per_epoch = tf.data.experimental.cardinality(dataset).numpy()
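# NOTE: `steps_per_epoch` above is derived from a placeholder Dataset of 1000 elements,
# not from the actual training data. Something like len(texts) divided by the training
# batch size would reflect the real number of optimizer steps per epoch if a tighter
# warmup/decay schedule were wanted.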
num_train_steps = steps_per_epoch * epochs
num_warmup_steps = int(0.1*num_train_steps)
init_lr = 3e-5
optimizer = optimization.create_optimizer(init_lr=init_lr,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps,
optimizer_type='adamw')
model.compile(optimizer=optimizer, loss=loss, metrics=metrics)
history = model.fit(texts, labels, epochs=epochs, validation_data=(val_texts, val_labels))
model.evaluate(test_texts[:40000], test_labels[:40000])
test_texts[:3]
model.predict(test_texts[:3])
model.save('BERT_amazon.h5')
model.save('bert_amazon.h5')
# copy the model
! cp /content/bert_amazon.h5 -d /content/drive/MyDrive
###Output
_____no_output_____ |
nlp/bert_japanese_classification_LP_FT_multiCLS.ipynb | ###Markdown
Predicting the genre of news articles (a 9-class classification problem) We transfer-learn (fine-tune) a pretrained Japanese BERT model for a classification task. Following the paper below, training is done as Linear Probing followed by Fine-tuning. - [Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution](https://arxiv.org/abs/2202.10054) Here we transfer-learn for a task with the following input/output format. - **Input**: the token ID sequence obtained by tokenizing "{title} {body}" (up to 512 tokens) - **Output**: {genre_id} where {title} is the news article title, {body} is the article body, and {genre_id} is the news category label (0-8). Preparing libraries and data Installing dependencies
###Code
!pip install -qU torch==1.7.1 torchtext==0.8.0 torchvision==0.8.2 torchaudio==0.7.2
!pip install -q transformers==4.14.0 pytorch_lightning==1.5.7 fugashi ipadic
###Output
[K |████████████████████████████████| 776.8 MB 18 kB/s
[K |████████████████████████████████| 6.9 MB 36.5 MB/s
[K |████████████████████████████████| 12.8 MB 19.0 MB/s
[K |████████████████████████████████| 7.6 MB 42.9 MB/s
[K |████████████████████████████████| 3.3 MB 13.3 MB/s
[K |████████████████████████████████| 526 kB 50.0 MB/s
[K |████████████████████████████████| 568 kB 53.3 MB/s
[K |████████████████████████████████| 13.4 MB 29.4 MB/s
[K |████████████████████████████████| 880 kB 43.0 MB/s
[K |████████████████████████████████| 3.3 MB 49.1 MB/s
[K |████████████████████████████████| 84 kB 3.1 MB/s
[K |████████████████████████████████| 596 kB 44.8 MB/s
[K |████████████████████████████████| 829 kB 51.0 MB/s
[K |████████████████████████████████| 140 kB 50.8 MB/s
[K |████████████████████████████████| 409 kB 47.5 MB/s
[K |████████████████████████████████| 1.1 MB 44.1 MB/s
[K |████████████████████████████████| 144 kB 51.3 MB/s
[K |████████████████████████████████| 94 kB 1.2 MB/s
[K |████████████████████████████████| 271 kB 49.0 MB/s
[33mWARNING: The candidate selected for download or install is a yanked version: 'transformers' candidate (version 4.14.0 at https://files.pythonhosted.org/packages/1a/6e/b64c7a875b37ed0b22b264d62613aab8782ef92249c591f806f51739281e/transformers-4.14.0-py3-none-any.whl#sha256=b641928ea8d0e6751f8f5f870059e123a444970d2b52e2ca58b20a72ee114bd5 (from https://pypi.org/simple/transformers/) (requires-python:>=3.6.0))
Reason for being yanked: Circular import when both TensorFlow and Onnx are in the env[0m
[?25h Building wheel for future (setup.py) ... [?25l[?25hdone
Building wheel for ipadic (setup.py) ... [?25l[?25hdone
Building wheel for sacremoses (setup.py) ... [?25l[?25hdone
###Markdown
Creating the working directories * data: holds the training dataset * model: holds the trained model (Linear Probing + Fine-tuning) * lp_model: holds the trained model (Linear Probing)
###Code
!mkdir -p /content/data /content/model /content/lp_model
# 事前学習済みモデル
PRETRAINED_MODEL_NAME = "cl-tohoku/bert-base-japanese-whole-word-masking"
# Linear Probing済みモデルを保存する場所
LP_MODEL_DIR = "/content/lp_model"
# Linear Probing + Fine-tuning済みモデルを保存する場所
MODEL_DIR = "/content/model"
###Output
_____no_output_____
###Markdown
Downloading the livedoor news corpus
###Code
!wget -O ldcc-20140209.tar.gz https://www.rondhuit.com/download/ldcc-20140209.tar.gz
###Output
--2022-05-24 22:57:59-- https://www.rondhuit.com/download/ldcc-20140209.tar.gz
Resolving www.rondhuit.com (www.rondhuit.com)... 59.106.19.174
Connecting to www.rondhuit.com (www.rondhuit.com)|59.106.19.174|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 8855190 (8.4M) [application/x-gzip]
Saving to: ‘ldcc-20140209.tar.gz’
ldcc-20140209.tar.g 100%[===================>] 8.44M 1.91MB/s in 4.8s
2022-05-24 22:58:06 (1.75 MB/s) - ‘ldcc-20140209.tar.gz’ saved [8855190/8855190]
###Markdown
Converting the livedoor news corpus format We convert the livedoor news corpus into TSV files with the following layout. * column 1: title * column 2: body * column 3: genre ID (0-8) The TSV files are stored under /content/data. Defining string normalization This reduces orthographic variation. Here we use a slightly modified version of [neologd's normalization](https://github.com/neologd/mecab-ipadic-neologd/wiki/Regexp.ja); see the link for details of the processing.
###Code
# https://github.com/neologd/mecab-ipadic-neologd/wiki/Regexp.ja から引用・一部改変
from __future__ import unicode_literals
import re
import unicodedata
def unicode_normalize(cls, s):
pt = re.compile('([{}]+)'.format(cls))
def norm(c):
return unicodedata.normalize('NFKC', c) if pt.match(c) else c
s = ''.join(norm(x) for x in re.split(pt, s))
s = re.sub('-', '-', s)
return s
def remove_extra_spaces(s):
s = re.sub('[ ]+', ' ', s)
blocks = ''.join(('\u4E00-\u9FFF', # CJK UNIFIED IDEOGRAPHS
'\u3040-\u309F', # HIRAGANA
'\u30A0-\u30FF', # KATAKANA
'\u3000-\u303F', # CJK SYMBOLS AND PUNCTUATION
'\uFF00-\uFFEF' # HALFWIDTH AND FULLWIDTH FORMS
))
basic_latin = '\u0000-\u007F'
def remove_space_between(cls1, cls2, s):
p = re.compile('([{}]) ([{}])'.format(cls1, cls2))
while p.search(s):
s = p.sub(r'\1\2', s)
return s
s = remove_space_between(blocks, blocks, s)
s = remove_space_between(blocks, basic_latin, s)
s = remove_space_between(basic_latin, blocks, s)
return s
def normalize_neologd(s):
s = s.strip()
s = unicode_normalize('0-9A-Za-z。-゚', s)
def maketrans(f, t):
return {ord(x): ord(y) for x, y in zip(f, t)}
s = re.sub('[˗֊‐‑‒–⁃⁻₋−]+', '-', s) # normalize hyphens
s = re.sub('[﹣-ー—―─━ー]+', 'ー', s) # normalize choonpus
s = re.sub('[~∼∾〜〰~]+', '〜', s) # normalize tildes (modified by Isao Sonobe)
s = s.translate(
maketrans('!"#$%&\'()*+,-./:;<=>?@[¥]^_`{|}~。、・「」',
'!”#$%&’()*+,-./:;<=>?@[¥]^_`{|}〜。、・「」'))
s = remove_extra_spaces(s)
s = unicode_normalize('!”#$%&’()*+,-./:;<>?@[¥]^_`{|}〜', s) # keep =,・,「,」
s = re.sub('[’]', '\'', s)
s = re.sub('[”]', '"', s)
return s
###Output
_____no_output_____
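###Markdown
A quick check of the normalizer defined above; the sample string is made up for demonstration only. Extra spaces between Japanese characters and full-width symbols should be normalized away.
###Code
# Example call to the normalizer defined above
print(normalize_neologd("検索 エンジン 自作 入門 を 買い ました!!!"))
###Output
_____no_output_____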
###Markdown
Extracting information We extract the title, body, and genre (9 classes) of each news article.
###Code
import tarfile
import re
target_genres = ["dokujo-tsushin",
"it-life-hack",
"kaden-channel",
"livedoor-homme",
"movie-enter",
"peachy",
"smax",
"sports-watch",
"topic-news"]
def remove_brackets(text):
text = re.sub(r"(^【[^】]*】)|(【[^】]*】$)", "", text)
return text
def normalize_text(text):
assert "\n" not in text and "\r" not in text
text = text.replace("\t", " ")
text = text.strip()
text = normalize_neologd(text)
text = text.lower()
return text
def read_title_body(file):
next(file)
next(file)
title = next(file).decode("utf-8").strip()
title = normalize_text(remove_brackets(title))
body = normalize_text(" ".join([line.decode("utf-8").strip() for line in file.readlines()]))
return title, body
genre_files_list = [[] for genre in target_genres]
all_data = []
with tarfile.open("ldcc-20140209.tar.gz") as archive_file:
for archive_item in archive_file:
for i, genre in enumerate(target_genres):
if genre in archive_item.name and archive_item.name.endswith(".txt"):
genre_files_list[i].append(archive_item.name)
for i, genre_files in enumerate(genre_files_list):
for name in genre_files:
file = archive_file.extractfile(name)
title, body = read_title_body(file)
title = normalize_text(title)
body = normalize_text(body)
if len(title) > 0 and len(body) > 0:
all_data.append({
"title": title,
"body": body,
"genre_id": i
})
###Output
_____no_output_____
###Markdown
Splitting the data We split the dataset into train/dev/test at a 70% : 15% : 15% ratio. * train data: used for training * dev data: used for accuracy evaluation during training * test data: used to evaluate the accuracy of the trained model
###Code
import random
from tqdm import tqdm
random.seed(1234)
random.shuffle(all_data)
def to_line(data):
title = data["title"]
body = data["body"]
genre_id = data["genre_id"]
assert len(title) > 0 and len(body) > 0
return f"{title}\t{body}\t{genre_id}\n"
data_size = len(all_data)
train_ratio, dev_ratio, test_ratio = 0.7, 0.15, 0.15
with open(f"data/train.tsv", "w", encoding="utf-8") as f_train, \
open(f"data/dev.tsv", "w", encoding="utf-8") as f_dev, \
open(f"data/test.tsv", "w", encoding="utf-8") as f_test:
for i, data in tqdm(enumerate(all_data)):
line = to_line(data)
if i < train_ratio * data_size:
f_train.write(line)
elif i < (train_ratio + dev_ratio) * data_size:
f_dev.write(line)
else:
f_test.write(line)
###Output
7334it [00:00, 78174.26it/s]
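###Markdown
Counting the lines of each split is a quick way to confirm the 70/15/15 ratio; this sketch assumes the same Colab-style shell environment and relative data paths used above.
###Code
!wc -l data/train.tsv data/dev.tsv data/test.tsv
###Output
_____no_output_____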
###Markdown
Check the generated data. Format: {title}\t{body}\t{genre_id}
###Code
!head -3 data/test.tsv
###Output
nttドコモ、ジョジョの奇妙な冒険25周年スマホ「jojo l-06d」を発表!荒木飛呂彦氏監修コンテンツが満載の全部入り[optimus_report] オラララオラオラオラオラオラオラオラオラオラオラ!nttドコモは16日、今夏に発売する予定の新モデルや新しく開始するサービスなどを発表する「2012年夏モデル新商品・新サービス発表会」を開催し、人気マンガ「ジョジョの奇妙な冒険」の連載25周年を記念した限定モデル「jojo l-06d」(lgエレクトロニクス製)を発表しています。発売時期は2012年8月を予定しています。jojo l-06dは限定1万5,000台の限定モデルで、5インチサイズの大型ディスプレイを搭載したxi対応androidスマートフォン「optimus vu l-06d」をベースに、原作者・荒木飛呂彦氏が監修したコラボレーションモデルです。荒木氏は監修のほか、jojo l-06dのためだけの書き下ろしイラスト&サインが入っており、ジョジョ好きにはたまらないコンテンツが満載です。コンテンツには、荒木氏が書き下ろした壁紙を含むジョジョの人気イラストの壁紙やライブ壁紙を多数プリインストール。さらに、6種類のきせかえテーマと組み合わせることで、自分だけのお気に入りのホーム画面を設定可能になっています。また、ジョジョ第3部に登場するカーレースゲーム「f-mega」もプリインストールされており、花京院とダービー弟の名勝負を体験できます。さらに、お気に入りのスタンドと合成できるカメラアプリや、トリッシュの電卓、ウェザー・リポートウィジェット、イギーのマチキャラというように作品中に登場するキャラクターによる各種機能、「ジョジョ」の名台詞を織り交ぜたオリジナルの予測変換辞書、オリジナルデコメ絵文字、デコメテンプレートなども搭載。この他、画面サイズが4:3でほぼ文庫サイズのl-06dの端末機能を活かして、特別編集のカラー版コミック第1巻〜12巻も内蔵されています。機能的にも、optimus vu l-06dと同等で、高速データ通信規格lteによるサービス「xi(クロッシィ)」による下り最大75mbpsおよび上り最大25mbpsの高速データ通信や1.5ghzデュアルコアcpu、5インチxga(769×1024ドット)ips液晶、おサイフケータイ(felica)、ワンセグ、nottv、赤外線、防水などの多くに対応。ボディカラーは、optimus vu l-06dがブラックなのに対し、jojo l-06dはホワイトとなっており、背面にはジョリーンが描かれています。 ■ 主なスペック機種名jojo l-06d発売時期2012年8月サイズ(約mm)140×90×9.4重量(約)176g連続通話3g(約分)340 gsm(約分)240連続待受lte(約時間)240 3g(約時間)300 gsm(約時間)240ディスプレイサイズ(インチ)5.0種類tft液晶解像度 ×ga発色数(色)1677万アウトカメラ8mインカメラ1.3mチップセットapq8060 cpuクロック数1.5ghzコア数dual ram 1gb rom 16gb外部メモリー-バッテリー(容量mah)2000急速充電-os android 4.0指紋センサー-防水ipx5、7防塵-おサイフケータイ ◯ ワンセグ ◯ gsm ◯ lte(xi) ◯ 赤外線 ◯ gps ◯ dlna ◯ mhl ◯ おくだけ充電-bluetooth 3.0+hsテザリング(最大同時接続数) ◯(8台)nottv(連続視聴時間約分) ◯ simカードmicrosim本体色jojo white記事執筆:s-max編集部 ■関連リンク・エスマックス(s-max)・エスマックス(s-max)smaxjp on twitter・報道発表資料:2012夏モデルに19機種を開発|お知らせ|nttドコモ・2012夏モデルの主な特長|製品|nttドコモ 6
家電もスマートに節電を!電気を総合的にマネージメントする仕組みがスゴイ スマートフォンにスマートtvなど何にでも“スマート"が付く世の中になりつつあるようだが、家電の分野もこれからは“スマート"なスマート家電になっていくようだ。日本では東日本大震災以降、電力が直近の課題となっているが、原子力や火力に変わり太陽光など再生可能エネルギーが増えていくことは確実だ。これらと合わせ、節電などのために各機器の制御や、使用電力の可視化なども課題になっている。富士通が5月17日から18日まで開催している富士通フォーラム2012では、これらの動きに合わせたスマート家電関連の「スマートシティ」がひとつの目玉になっている。■送電網と電力の制御「スマートグリッド」という言葉を聞いたことがあるだろう。今後、一般家庭に於ける太陽光発電など、小規模な発電が様々な場所で行われるようになると、発電所からの電力に加えて、そうした小規模発電によって生まれた電気のやり取りなどをインテリジェンスに行える送電網の最適化が必要となる。その仕組みがスマートグリッドである。スマートグリッドで各家庭に於ける小規模発電の電気の配分と発電所からの送電を最適化したあとに必要になるのが、実際に電力を使用する家庭や事業所内などの電力の制御だ。無駄な電力やピーク時の電力を削減するためには、実際にどの程度電力を使用しているかの可視化が必要になる。様々な機器を自動制御すれば、ユーザーは生活レベルを落とすことなく、自動的に節電が可能となる。それらを家庭内で制御するのがhems(home energy management system)という仕組みだ。さらにビルの電力制御のbemsなどもあり、これらの複雑なシステムを連携させて行くことが今後の省エネの課題とされている。富士通sspfその中で、家庭内の家電制御に関係するhemsを実現する規格がechonet liteだ。この規格は日本企業が中心のエコーネットコンソーシアムが策定した物で、国際標準にもなっている。日本の家電製品にはこのechonet liteを採用した物が登場するようだが、これらの対応製品が出ても、それを制御するための機器などをうまく連携しなければ意味がない。富士通フォーラムに合わせて発表されたsspf(スマートセンシングプラットフォーム)v10は、echonet liteや様々な機器などを連携するためのプラットフォームで、これを使えば省エネなどのマネージメントシステムを容易に構築できる物となっている。 ■将来の省エネ家電選びエンドユーザーがこの製品を購入するわけではないし、今回紹介した規格などが今後国際的に普及するかどうかは、まだわからない。しかし、今後の家電製品などはこのような標準規格に対応し、うまく連携できるようにした製品が続々と登場してくる予定で、10年、20年単位で状況も一変しているかもしれない。特に、エアコンや冷蔵庫など長期間使用するような家電製品を使用する際は、これらの規格に対応した物かどうかも考慮して購入の検討をする必要が出てきそうだ。住宅の購入やリフォームの際にも、これらを実現できるスマートハウスに対応出来るようにしておく必要があるのかもしれない。 ■エコーネットコンソーシアム ■富士通sspf上倉賢@kamikura[digi2(デジ通)]digi2は「デジタル通」の略です。現在のデジタル機器は使いこなしが難しくなっています。皆さんがデジタル機器の「通」に近づくための情報を、皆さんよりすこし通な執筆陣が提供します。 1
デスクトップ代わりに使える低価格ノートレノボのideapad z480はivy bridege搭載 巷はwwdcで発表されたアップルの新型ノート「macbook air/pro」の話題で持ち切りだ。特にretinaディスプレイを搭載したmacbook proは、解像度が2880×1800ドットという超高解像度なので従来の感覚を大きく凌駕する高精細さが魅力だ。pcで作業をしているイラストレータさんや漫画家さん、アニメーターさんなどがこぞって購入したとしてもおかしくないだろう。そうは言ってもretinaディスプレイモデルの18万4800円からという値段を考えると、cpuをcore i7の高クロックモデルにしてメモリーを倍に増やしてssdを768mbにすると余裕で30万円近くになってしまう。確かにそれだけの価値はあるのだが、この厳しい経済状態でかつ先立つものが無い人にとって30万円という金額は、おいそれとねん出できるものではないだろう。macbook pro並みとはいかなくてもivy bridge搭載で2kg台のwindowsノートpcなら、選択肢はいくらでもある。中でもコストパフォーマンスに優れるのが本日発表されたレノボのノート「ideapad z480」だろう。実売で6万円台からと安価ながら第3世代のintel core i5シリーズ(ivy bridge)を搭載しているのが特長だ。単純計算だがmacbook proの最低構成価格で何と3台購入できてしまうという安さが魅力だ。グラファイトグレー、チェリーレッド、コーラルブルーの3色のカラバリが用意されている。極端な話、18万4800円をどう使うかといった選択の問題にした場合、macbook proを標準構成で買うか、それともideapad z480を家族用に3台購入して個別に使うといった選択もある。独身なのであれば、1台買えば12万円近い節約になる。このスペックならデスクトップ代わりでも十分役に立ってくれるだろう。macbook proという高級機の購入は、“持てる喜び"や“ライフスタイルの充実"といった点で否定するものではないが、ideapad z480という“実を取る"という選択だって全然アリだと思うのだ。主な仕様(量販店モデル)cpu:intel core i5-3210m(2.5ghz、最高3.1ghz)・グラフィック:intel hd graphics 4000・メインメモリー:4gバイト(最大8gバイト)・hdd:500gバイト・光学ドライブ:dvdスーパーマルチドライブ・液晶:14インチグレアhd wled液晶(1366×768ドット)・ワイヤレスインターフェイス:802.11n無線lan、bluetooth v4.0・有線lan:10base-t/100base-tx・5 in 1メディア・カード・リーダー・重量:2.3kg(バッテリー含む)・os:windows 7 home premium sp1 64bit版 ■ideapad z480 ■lenovo lenovo ideapad z470シリーズ14インチa4ノートpcルビーピンク1022-64jクチコミを見る 1
###Markdown
Defining the classes needed for training We use PyTorch / PyTorch-Lightning / Transformers for training.
###Code
import argparse
import glob
import os
import json
import time
import logging
import random
import re
from itertools import chain
from string import punctuation
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader
import pytorch_lightning as pl
from transformers import (
AdamW,
AutoModel,
AutoTokenizer,
get_linear_schedule_with_warmup
)
# 乱数シードの設定
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
if torch.cuda.is_available():
torch.cuda.manual_seed_all(seed)
set_seed(42)
# GPU利用有無
USE_GPU = torch.cuda.is_available()
# 各種ハイパーパラメータ
args_dict = dict(
data_dir="/content/data", # データセットのディレクトリ
# model_name_or_path=PRETRAINED_MODEL_NAME,
# tokenizer_name_or_path=PRETRAINED_MODEL_NAME,
# learning_rate=1e-3,
# weight_decay=0.0,
# adam_epsilon=1e-8,
# warmup_steps=0,
# gradient_accumulation_steps=1,
# max_input_length=512,
# max_target_length=4,
# train_batch_size=8,
# eval_batch_size=8,
# num_train_epochs=4,
n_gpu=1 if USE_GPU else 0,
early_stop_callback=False,
fp_16=False,
# opt_level='O1',
max_grad_norm=1.0,
seed=42,
)
###Output
_____no_output_____
###Markdown
TSV dataset class Reads a TSV-format file as a dataset. The format is "{title}\t{body}\t{genre_id}".
###Code
class TsvDataset(Dataset):
def __init__(self, tokenizer, data_dir, type_path, input_max_len=512):
self.file_path = os.path.join(data_dir, type_path)
self.input_max_len = input_max_len
self.tokenizer = tokenizer
self.inputs = []
self.labels = []
self._build()
def __len__(self):
return len(self.inputs)
def __getitem__(self, index):
source_ids = self.inputs[index]["input_ids"].squeeze()
source_mask = self.inputs[index]["attention_mask"].squeeze()
label = self.labels[index].squeeze()
return {"source_ids": source_ids, "source_mask": source_mask,
"label": label}
def _make_record(self, title, body, genre_id):
# ニュース分類タスク用の入出力形式に変換する。
input = f"{title} {body}"
target = int(genre_id)
return input, target
def _build(self):
with open(self.file_path, "r", encoding="utf-8") as f:
for line in f:
line = line.strip().split("\t")
assert len(line) == 3
assert len(line[0]) > 0
assert len(line[1]) > 0
assert len(line[2]) > 0
title = line[0]
body = line[1]
genre_id = line[2]
input, target = self._make_record(title, body, genre_id)
tokenized_inputs = self.tokenizer.batch_encode_plus(
[input], max_length=self.input_max_len, truncation=True,
padding="max_length", return_tensors="pt"
)
label = torch.LongTensor([target])
self.inputs.append(tokenized_inputs)
self.labels.append(label)
###Output
_____no_output_____
###Markdown
As a quick check, load the test data (test.tsv) and look at the tokenization results.
###Code
# トークナイザー(SentencePiece)モデルの読み込み
tokenizer = AutoTokenizer.from_pretrained(PRETRAINED_MODEL_NAME, is_fast=True)
# テストデータセットの読み込み
train_dataset = TsvDataset(tokenizer, args_dict["data_dir"], "test.tsv",
input_max_len=512)
###Output
_____no_output_____
###Markdown
Let's look at the first record of the test data.
###Code
for data in train_dataset:
print("A. 入力データの元になる文字列")
print(tokenizer.decode(data["source_ids"]))
print()
print("B. 入力データ(Aの文字列がトークナイズされたトークンID列)")
print(data["source_ids"])
print()
print("C. 出力データ")
print(data["label"])
break
###Output
A. 入力データの元になる文字列
[CLS] ntt ドコモ 、 ジョジョ の 奇妙 な 冒険 25 周年 スマ ホ 「 jojo l - 06 d 」 を 発表! 荒木 飛 呂 彦氏 監修 コンテンツ が 満載 の 全部 入り [ optimus _ report ] オラララオラオラオラオラオラオラオラオラオラオラ! ntt ドコモ は 16 日 、 今夏 に 発売 する 予定 の 新 モデル や 新しく 開始 する サービス など を 発表 する 「 2012 年 夏 モデル 新 商品 ・ 新 サービス 発表 会 」 を 開催 し 、 人気 マンガ 「 ジョジョ の 奇妙 な 冒険 」 の 連載 25 周年 を 記念 し た 限定 モデル 「 jojo l - 06 d 」 ( lg エレクトロニクス 製 ) を 発表 し て い ます 。 発売 時期 は 2012 年 8 月 を 予定 し て い ます 。 jojo l - 06 d は 限定 1 万 5, 000 台 の 限定 モデル で 、 5 インチ サイズ の 大型 ディスプレイ を 搭載 し た xi 対応 android スマート フォン 「 optimus vu l - 06 d 」 を ベース に 、 原作 者 ・ 荒木 飛 呂 彦氏 が 監修 し た コラボレーション モデル です 。 荒木 氏 は 監修 の ほか 、 jojo l - 06 d の ため だけ の 書き下ろし イラスト & サイン が 入っ て おり 、 ジョジョ 好き に は たまらない コンテンツ が 満載 です 。 コンテンツ に は 、 荒木 氏 が 書き下ろし た 壁紙 を 含む ジョジョ の 人気 イラスト の 壁紙 や ライブ 壁紙 を 多数 プリインストール 。 さらに 、 6 種類 の きせ か え テーマ と 組み合わせる こと で 、 自分 だけ の お気に入り の ホーム 画面 を 設定 可能 に なっ て い ます 。 また 、 ジョジョ 第 3 部 に 登場 する カーレースゲーム 「 f - mega 」 も プリインストール さ れ て おり 、 花京院 と ダービー 弟 の 名 勝負 を 体験 でき ます 。 さらに 、 お気に入り の スタンド と 合成 できる カメラアプリ や 、 トリッシュ の 電卓 、 ウェザー・リポートウィジェット 、 イギー の マチ キャラ という よう に 作品 中 に 登場 する キャラクター による 各種 機能 、 「 ジョジョ 」 の 名 台詞 を 織り 交ぜ た オリジナル の 予測 変換 辞書 、 オリジナル デ コメ 絵文字 、 デコメテンプレート など も 搭載 。 この 他 、 画面 サイズ が 4 : 3 で ほぼ 文庫 サイズ の l - 06 d の 端末 機能 を 活かし て 、 特別 編集 の カラー 版 コミック 第 1 巻 〜 12 巻 も 内蔵 さ れ て い ます 。 機能 的 に も 、 optimus v [SEP]
B. 入力データ(Aの文字列がトークナイズされたトークンID列)
tensor([ 2, 1751, 8810, 10731, 6, 1007, 3806, 5, 11896, 18,
6219, 626, 3162, 4430, 331, 36, 7124, 28538, 29753, 28538,
3285, 61, 8460, 1267, 38, 11, 602, 679, 15484, 787,
7253, 5951, 29083, 11391, 7319, 14, 26512, 5, 9156, 1207,
4314, 21348, 28566, 2241, 1368, 1679, 4821, 13261, 4118, 21578,
28485, 28485, 14658, 14658, 14658, 14658, 14658, 14658, 14658, 14658,
14658, 14658, 679, 1751, 8810, 10731, 9, 379, 32, 6,
744, 29465, 7, 580, 34, 1484, 5, 147, 1317, 49,
9217, 650, 34, 1645, 64, 11, 602, 34, 36, 908,
19, 1428, 1317, 147, 2442, 35, 147, 1645, 602, 136,
38, 11, 682, 15, 6, 1571, 8433, 36, 1007, 3806,
5, 11896, 18, 6219, 38, 5, 2037, 626, 3162, 11,
1488, 15, 10, 2089, 1317, 36, 7124, 28538, 29753, 28538,
3285, 61, 8460, 1267, 38, 23, 3285, 28782, 24262, 338,
24, 11, 602, 15, 16, 21, 2610, 8, 580, 1419,
9, 908, 19, 134, 37, 11, 1484, 15, 16, 21,
2610, 8, 7124, 28538, 29753, 28538, 3285, 61, 8460, 1267,
9, 2089, 17, 429, 76, 228, 1444, 551, 5, 2089,
1317, 12, 6, 76, 5499, 4401, 5, 2264, 11545, 11,
1389, 15, 10, 3908, 28535, 1277, 3513, 13295, 6724, 5464,
36, 21348, 28566, 2241, 1368, 2940, 28663, 3285, 61, 8460,
1267, 38, 11, 2475, 7, 6, 2461, 104, 35, 15484,
787, 7253, 5951, 29083, 14, 11391, 15, 10, 10440, 1317,
2992, 8, 15484, 643, 9, 11391, 5, 825, 6, 7124,
28538, 29753, 28538, 3285, 61, 8460, 1267, 5, 82, 687,
5, 17743, 4307, 1514, 10361, 14, 1577, 16, 206, 6,
1007, 3806, 3596, 7, 9, 6918, 28469, 3721, 7319, 14,
26512, 2992, 8, 7319, 7, 9, 6, 15484, 643, 14,
17743, 10, 3057, 29399, 11, 1310, 1007, 3806, 5, 1571,
4307, 5, 3057, 29399, 49, 1789, 3057, 29399, 11, 1542,
3898, 217, 25461, 28467, 8, 604, 6, 101, 1897, 5,
322, 28616, 29, 1723, 2206, 13, 19935, 45, 12, 6,
1040, 687, 5, 21780, 5, 1283, 4180, 11, 1374, 519,
7, 58, 16, 21, 2610, 8, 106, 6, 1007, 3806,
97, 48, 129, 7, 656, 34, 1640, 5843, 3698, 36,
1044, 61, 14824, 23197, 38, 28, 3898, 217, 25461, 28467,
26, 20, 16, 206, 6, 1172, 28750, 28861, 13, 9829,
1782, 5, 125, 7157, 11, 4946, 203, 2610, 8, 604,
6, 21780, 5, 9438, 13, 3873, 392, 3579, 16422, 49,
6, 2942, 1203, 5, 327, 30241, 6, 1260, 1312, 28472,
28479, 2636, 1731, 14017, 6, 88, 1500, 5, 96, 28555,
5699, 140, 124, 7, 403, 51, 7, 656, 34, 1480,
250, 3845, 1197, 6, 36, 1007, 3806, 38, 5, 125,
11027, 11, 13185, 27721, 10, 2313, 5, 7055, 4618, 16043,
6, 2313, 148, 4979, 1644, 6277, 6, 148, 28539, 28542,
1247, 19665, 64, 28, 1389, 8, 70, 375, 6, 4180,
4401, 14, 57, 266, 48, 12, 1691, 4329, 4401, 5,
3285, 61, 8460, 1267, 5, 6032, 1197, 11, 10318, 16,
6, 1403, 2028, 5, 3994, 623, 5115, 97, 17, 1226,
1143, 197, 1226, 28, 8406, 26, 20, 16, 21, 2610,
8, 1197, 81, 7, 28, 6, 21348, 28566, 2241, 1368,
2940, 3])
C. 出力データ
tensor(6)
###Markdown
Training class We train with [PyTorch-Lightning](https://github.com/PyTorchLightning/pytorch-lightning). PyTorch-Lightning is a framework that lets you write the typical machine learning boilerplate concisely.
###Code
import os
import json
from torch import nn
class BertFineTuner(pl.LightningModule):
def __init__(self, hparams):
super().__init__()
self.params = hparams
# 事前学習済みモデルの読み込み
self.model = AutoModel.from_pretrained(hparams.model_name_or_path)
config = self.model.config
if hparams.freeze_transformer:
for param in self.model.parameters():
param.requires_grad = False
self.num_labels = hparams.num_labels
config.num_labels = hparams.num_labels
self.max_cls_depth = 6 # 後半6層のCLSトークンの埋め込みベクトルを特徴量に利用
self.output_linear = nn.Linear(self.max_cls_depth * config.hidden_size, self.num_labels)
if os.path.exists(hparams.model_name_or_path):
# ローカルファイルシステムに学習済みパラメータがあれば読み込む
output_linear_state_dict = torch.load(os.path.join(hparams.model_name_or_path, "output_linear.bin"))
self.output_linear.load_state_dict(output_linear_state_dict)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
# トークナイザーの読み込み
self.tokenizer = AutoTokenizer.from_pretrained(hparams.tokenizer_name_or_path, do_lower_case=True, is_fast=True)
def forward(self, input_ids, attention_mask=None, labels=None):
"""順伝搬"""
output_states = self.model(
input_ids,
attention_mask=attention_mask,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
output_attentions=None,
output_hidden_states=True,
return_dict=True,
)
token_embeddings = output_states[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
hidden_states = output_states["hidden_states"]
output_vectors = []
# cls tokens
for i in range(1, self.max_cls_depth + 1):
cls_token = hidden_states[-1 * i][:, 0]
output_vectors.append(cls_token)
output_vector = torch.cat(output_vectors, dim=1)
output_vector = self.dropout(output_vector)
logits = self.output_linear(output_vector)
outputs = (logits,) + output_states[2:]
if labels is not None:
if self.num_labels == 1:
loss_fct = nn.MSELoss()
loss = loss_fct(logits.view(-1), labels.view(-1))
else:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
return outputs # (loss), logits, (hidden_states), (attentions)
def _step(self, batch):
"""ロス計算"""
labels = batch["label"]
outputs = self(
input_ids=batch["source_ids"],
attention_mask=batch["source_mask"],
labels=labels
)
loss = outputs[0]
return loss
def training_step(self, batch, batch_idx):
"""訓練ステップ処理"""
loss = self._step(batch)
self.log("train_loss", loss)
return {"loss": loss}
def validation_step(self, batch, batch_idx):
"""バリデーションステップ処理"""
loss = self._step(batch)
self.log("val_loss", loss)
return {"val_loss": loss}
def validation_epoch_end(self, outputs):
"""バリデーション完了処理"""
avg_loss = torch.stack([x["val_loss"] for x in outputs]).mean()
self.log("val_loss", avg_loss, prog_bar=True)
def test_step(self, batch, batch_idx):
"""テストステップ処理"""
loss = self._step(batch)
self.log("test_loss", loss)
return {"test_loss": loss}
def test_epoch_end(self, outputs):
"""テスト完了処理"""
avg_loss = torch.stack([x["test_loss"] for x in outputs]).mean()
self.log("test_loss", avg_loss, prog_bar=True)
def configure_optimizers(self):
"""オプティマイザーとスケジューラーを作成する"""
model = self.model
no_decay = ["bias", "LayerNorm.weight"]
optimizer_grouped_parameters = [
{
"params": [p for n, p in model.named_parameters()
if not any(nd in n for nd in no_decay)],
"weight_decay": self.params.weight_decay,
},
{
"params": [p for n, p in model.named_parameters()
if any(nd in n for nd in no_decay)],
"weight_decay": 0.0,
},
]
optimizer = AdamW(optimizer_grouped_parameters,
lr=self.params.learning_rate,
eps=self.params.adam_epsilon)
scheduler = get_linear_schedule_with_warmup(
optimizer, num_warmup_steps=self.params.warmup_steps,
num_training_steps=self.t_total
)
return [optimizer], [{"scheduler": scheduler, "interval": "step", "frequency": 1}]
def get_dataset(self, tokenizer, type_path, args):
"""データセットを作成する"""
return TsvDataset(
tokenizer=tokenizer,
data_dir=args.data_dir,
type_path=type_path,
input_max_len=args.max_input_length)
def setup(self, stage=None):
"""初期設定(データセットの読み込み)"""
if stage == 'fit' or stage is None:
train_dataset = self.get_dataset(tokenizer=self.tokenizer,
type_path="train.tsv", args=self.params)
self.train_dataset = train_dataset
val_dataset = self.get_dataset(tokenizer=self.tokenizer,
type_path="dev.tsv", args=self.params)
self.val_dataset = val_dataset
self.t_total = (
(len(train_dataset) // (self.params.train_batch_size * max(1, self.params.n_gpu)))
// self.params.gradient_accumulation_steps
* float(self.params.num_train_epochs)
)
def train_dataloader(self):
"""訓練データローダーを作成する"""
return DataLoader(self.train_dataset,
batch_size=self.params.train_batch_size,
drop_last=True, shuffle=True, num_workers=4)
def val_dataloader(self):
"""バリデーションデータローダーを作成する"""
return DataLoader(self.val_dataset,
batch_size=self.params.eval_batch_size,
num_workers=4)
def save(self, output_dir):
torch.save(self.output_linear.state_dict(), os.path.join(output_dir, "output_linear.bin"))
self.model.save_pretrained(output_dir)
###Output
_____no_output_____
###Markdown
Running the transfer learning
###Code
# 学習に用いるハイパーパラメータを設定する
args_dict.update({
"max_input_length": 512, # 入力文の最大トークン数
"train_batch_size": 64,
"eval_batch_size": 8,
"num_train_epochs": 8,
"num_labels": 9, # ラベルのカテゴリ数
"model_name_or_path": PRETRAINED_MODEL_NAME,
"tokenizer_name_or_path": PRETRAINED_MODEL_NAME,
"learning_rate": 1e-2, # タスクに応じて要調整
"weight_decay": 0.0,
"adam_epsilon": 1e-8,
"warmup_steps": 30,
"gradient_accumulation_steps": 1,
"freeze_transformer": True,
})
args = argparse.Namespace(**args_dict)
# checkpoint_callback = pl.callbacks.ModelCheckpoint(
# "/content/checkpoints",
# monitor="val_loss", mode="min", save_top_k=1
# )
train_params = dict(
accumulate_grad_batches=args.gradient_accumulation_steps,
gpus=args.n_gpu,
max_epochs=args.num_train_epochs,
precision= 16 if args.fp_16 else 32,
# amp_level=args.opt_level,
gradient_clip_val=args.max_grad_norm,
# checkpoint_callback=checkpoint_callback,
)
# Linear Probingの実行
model = BertFineTuner(args)
trainer = pl.Trainer(**train_params)
trainer.fit(model)
# 最終エポックのモデルを保存
model.tokenizer.save_pretrained(LP_MODEL_DIR)
model.save(LP_MODEL_DIR)
del model
# 学習に用いるハイパーパラメータを設定する
args_dict.update({
"max_input_length": 512, # 入力文の最大トークン数
"train_batch_size": 8,
"eval_batch_size": 8,
"num_train_epochs": 4,
"num_labels": 9, # ラベルのカテゴリ数
"model_name_or_path": LP_MODEL_DIR,
"tokenizer_name_or_path": LP_MODEL_DIR,
"learning_rate": 1.4e-5, # タスクに応じて要調整(線形探索ではなく値を指数(例えば2倍)で変えて最適値を探索するといいでしょう)
"weight_decay": 0.0,
"adam_epsilon": 1e-8,
"warmup_steps": 30,
"gradient_accumulation_steps": 1,
"freeze_transformer": False,
})
args = argparse.Namespace(**args_dict)
# checkpoint_callback = pl.callbacks.ModelCheckpoint(
# "/content/checkpoints",
# monitor="val_loss", mode="min", save_top_k=1
# )
train_params = dict(
accumulate_grad_batches=args.gradient_accumulation_steps,
gpus=args.n_gpu,
max_epochs=args.num_train_epochs,
precision= 16 if args.fp_16 else 32,
# amp_level=args.opt_level,
gradient_clip_val=args.max_grad_norm,
# checkpoint_callback=checkpoint_callback,
)
# Fine-tuningの実行
model = BertFineTuner(args)
trainer = pl.Trainer(**train_params)
trainer.fit(model)
# 最終エポックのモデルを保存
model.tokenizer.save_pretrained(MODEL_DIR)
model.save(MODEL_DIR)
del model
###Output
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
--------------------------------------------
0 | model | BertModel | 110 M
1 | output_linear | Linear | 41.5 K
2 | dropout | Dropout | 0
--------------------------------------------
110 M Trainable params
0 Non-trainable params
110 M Total params
442.635 Total estimated model params size (MB)
###Markdown
Loading the trained model
###Code
class BertModelForClassification(nn.Module):
def __init__(self, model_name_or_path, num_labels):
super().__init__()
# 事前学習済みモデルの読み込み
self.model = AutoModel.from_pretrained(model_name_or_path)
config = self.model.config
self.num_labels = num_labels
self.max_cls_depth = 6 # 後半6層のCLSトークンの埋め込みベクトルを特徴量に利用
self.output_linear = nn.Linear(self.max_cls_depth * config.hidden_size, self.num_labels)
output_linear_state_dict = torch.load(os.path.join(model_name_or_path, "output_linear.bin"))
self.output_linear.load_state_dict(output_linear_state_dict)
self.dropout = nn.Dropout(config.hidden_dropout_prob)
def forward(self, input_ids, attention_mask=None, labels=None):
output_states = self.model(
input_ids,
attention_mask=attention_mask,
token_type_ids=None,
position_ids=None,
head_mask=None,
inputs_embeds=None,
output_attentions=None,
output_hidden_states=True,
return_dict=True,
)
token_embeddings = output_states[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
hidden_states = output_states["hidden_states"]
output_vectors = []
# cls tokens
for i in range(1, self.max_cls_depth + 1):
cls_token = hidden_states[-1 * i][:, 0]
output_vectors.append(cls_token)
output_vector = torch.cat(output_vectors, dim=1)
output_vector = self.dropout(output_vector)
logits = self.output_linear(output_vector)
outputs = (logits,) + output_states[2:]
if labels is not None:
if self.num_labels == 1:
loss_fct = nn.MSELoss()
loss = loss_fct(logits.view(-1), labels.view(-1))
else:
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))
outputs = (loss,) + outputs
return outputs # (loss), logits, (hidden_states), (attentions)
import torch
from torch.utils.data import Dataset, DataLoader
from transformers import AutoModel, AutoTokenizer
# トークナイザー
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, do_lower_case=True, is_fast=True)
# 学習済みモデル
trained_model = BertModelForClassification(model_name_or_path=MODEL_DIR,
num_labels=args.num_labels)
# GPUの利用有無
USE_GPU = torch.cuda.is_available()
if USE_GPU:
trained_model.cuda()
###Output
_____no_output_____
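###Markdown
A minimal sketch of classifying a single piece of text with the loaded model, using the same tokenization settings as elsewhere in this notebook; the sample sentence is a made-up placeholder, not data from the corpus.
###Code
trained_model.eval()
sample_text = "最新スマートフォンのカメラ性能をレビューしました"  # hypothetical example input
encoded = tokenizer(sample_text, max_length=512, truncation=True,
                    padding="max_length", return_tensors="pt")
input_ids = encoded["input_ids"].cuda() if USE_GPU else encoded["input_ids"]
attention_mask = encoded["attention_mask"].cuda() if USE_GPU else encoded["attention_mask"]
with torch.no_grad():
    logits = trained_model(input_ids=input_ids, attention_mask=attention_mask)[0]
print("predicted genre_id:", logits.argmax(dim=1).item())
###Output
_____no_output_____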
###Markdown
Evaluating prediction accuracy on the test data
###Code
import textwrap
from tqdm.auto import tqdm
from sklearn import metrics
# テストデータの読み込み
test_dataset = TsvDataset(tokenizer, args_dict["data_dir"], "test.tsv",
input_max_len=args.max_input_length)
test_loader = DataLoader(test_dataset, batch_size=32, num_workers=4)
trained_model.eval()
outputs = []
confidences = []
targets = []
with torch.no_grad():
for batch in tqdm(test_loader):
input_ids = batch['source_ids']
input_mask = batch['source_mask']
if USE_GPU:
input_ids = input_ids.cuda()
input_mask = input_mask.cuda()
outs = trained_model(input_ids=input_ids,
attention_mask=input_mask)
logits = outs[0]
pred_label = logits.argmax(dim=1, keepdim=True)
conf = logits.softmax(dim=1).gather(dim=1, index=pred_label).squeeze().cpu().numpy().tolist()
pred_label = pred_label.squeeze().cpu().numpy().tolist()
target = batch["label"].tolist()
outputs.extend(pred_label)
confidences.extend(conf)
targets.extend(target)
###Output
_____no_output_____
###Markdown
accuracy
###Code
metrics.accuracy_score(targets, outputs)
###Output
_____no_output_____
###Markdown
Per-label accuracy. [Meaning of accuracy, precision, recall, and f1-score](http://ibisforest.org/index.php?F値)
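For reference, the per-class metrics reported below follow the standard definitions (not specific to this model), with $TP$, $FP$, $FN$ the true-positive, false-positive and false-negative counts for a class:
$$\text{precision} = \frac{TP}{TP+FP}, \qquad \text{recall} = \frac{TP}{TP+FN}, \qquad F_1 = \frac{2\cdot\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}$$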
###Code
print(metrics.classification_report(targets, outputs))
###Output
precision recall f1-score support
0 0.98 0.93 0.95 130
1 0.97 0.93 0.95 121
2 0.92 0.93 0.93 123
3 0.90 0.85 0.88 82
4 0.94 0.96 0.95 129
5 0.91 0.95 0.93 141
6 0.97 0.98 0.97 127
7 0.99 0.97 0.98 127
8 0.93 0.97 0.95 120
accuracy 0.95 1100
macro avg 0.94 0.94 0.94 1100
weighted avg 0.95 0.95 0.95 1100
###Markdown
Minimum and maximum of the confidence scores
###Code
min(confidences), max(confidences)
###Output
_____no_output_____ |
course_content/case_study/Case Study A.ipynb | ###Markdown
Case Study A This task will give you hands-on experience of solving a machine learning problem from start to finish. In this notebook you will be introduced to a new type of model which can be used for both classification and regression purposes. It is based on a powerful algorithm which is frequently used in machine learning. The task will entail a new data set, and you will make choices about how to process the data and build the model using the experience gained in the chapters of this course. K-Nearest Neighbour Algorithm The principle behind the K-nearest neighbour (KNN) algorithm is that data which have *'similar'* values are likely to have the same class/target value. The KNN uses something briefly discussed earlier called the *'data space'* of features. This is where the data is represented by a vector, where each feature/attribute is a dimension in space. For two features we have a 2D plot, three a 3D plot, etc. (don't worry, you don't have to imagine higher numbers in a real space!). A key feature of the model is the distance calculation, which is used to work out which data points are the *'nearest'* to our new point. The model uses the Euclidean distance to measure how far apart two data points are across the $n$ different dimensions. The distance between data points $\textbf{p}$ and $\textbf{q}$ is given as:$$d(\textbf{p},\textbf{q}) = \sqrt{\sum_{i=1}^{n}(p_i - q_i)^2} $$Where $i$ denotes the $i$'th dimension. An example for calculating the distance between two data points with two features $x$ and $y$ would be:$$d(\textbf{p},\textbf{q}) = \sqrt{(p_x - q_x)^2 + (p_y - q_y)^2} $$What we as machine learning practitioners need to do is find an appropriate number for the value $K$. $K$ determines how many neighbours we are going to take into account to determine the label of our new data point. We calculate the distance between all our data points and the new point, then select the K nearest. Our prediction for the new data point is then whichever class the majority of neighbours have. For example, if we have $K=10$ and 7 of the nearest data points are Class A and 3 are Class B, our new data point will be predicted as Class A. Below is an illustration of the KNN classification. For our test data point, if we take $K = 1$ then our point is classified as Class B. However, this doesn't quite look right, as the majority of the data points on that side of the data space are Class A. If we take a larger value, $K = 4$, then the neighbours are majority Class A. This shows that choosing the right value of $K$ is really important when using this model. The best value for $K$ will be a result of properties of the data. We often use a grid search, as shown in Chapter 4, to find $K$. For a regression task the KNN regressor will predict the continuous value computed from the nearest neighbours to the test data point. Key Point The optimal value of $K$ is highly data dependent. You should try to understand how your classes are distributed with relation to different variables in order to help understand what might be a good scale for $K$. Typically, large values of $K$ help reduce the effect of 'noisy' data. However, this can reduce the model's ability to determine distinctions in boundaries in the data. The KNN classifier is implemented in **`sklearn`** using the class **`sklearn.neighbors.KNeighborsClassifier`**. The argument **`n_neighbors`** is equivalent to $K$ in our description.
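As a quick illustration of the classifier described above, here is a minimal sketch using made-up toy data (the arrays below are illustrative only and are not part of the case study data set):
```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy data: two features per sample, two classes (0 and 1)
X = np.array([[1.0, 1.2], [0.8, 1.0], [1.1, 0.9],   # class 0 cluster
              [3.0, 3.2], [3.1, 2.9], [2.8, 3.0]])  # class 1 cluster
y = np.array([0, 0, 0, 1, 1, 1])

# n_neighbors is the K discussed above
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)

# The three nearest neighbours vote on the class of each new point
print(knn.predict([[1.0, 1.1]]))  # expected: [0]
print(knn.predict([[2.9, 3.1]]))  # expected: [1]

# The same Euclidean distance the model uses can be checked directly
print(np.linalg.norm(X[0] - np.array([1.0, 1.1])))
```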
Resampling with `sklearn` In Chapter 4 we mentioned different methods of dealing with class imbalances, one of which was to use an argument within the selected model. However, not all models have the `"class_weight"` argument, so we sometimes need to re-weight our data manually. Using the `sklearn.utils` resample utility allows us to easily change the distribution of our data set. We have two options when resampling our distribution: we can either *downsample* the majority class, which reduces the number of the most prevalent class to that of the minority class, or, on the other hand, we can *upsample* the minority class, where we create new repeated records of the minority class so that there are the same number as the most prevalent class. Below is a brief introduction to the syntax of resampling with `resample`; we will be downsampling. More detail on the function can be found [here](https://scikit-learn.org/stable/modules/generated/sklearn.utils.resample.html).
```python
from sklearn.utils import resample

# Separate out the classes into different dataframes
majority_class_df = our_dataframe[our_dataframe["target_class"]==0]
minority_class_df = our_dataframe[our_dataframe["target_class"]==1]

# Find how many are in the minority class
number_in_minority_class = len(minority_class_df.index)

# Create a new object with the resampled majority class.
majority_class_downsampled = resample(majority_class_df,
                                      replace=False,                       # sample without replacement to prevent repeated data
                                      n_samples=number_in_minority_class,  # to match minority class
                                      random_state=123)                    # reproducible results

# Join the resampled and original majority and minority classes
balanced_data = pd.concat([majority_class_downsampled, minority_class_df], axis=0, sort=True)
balanced_data = balanced_data.reset_index(drop=True)

# Display new class counts
balanced_data["target_class"].value_counts()
```
Task You have been given a data set which contains information about bank customers. Your task is to build a model using the K-nearest neighbour classifier that will predict, based on the data given, whether or not the customer stopped using the service. Use the data set `"../../data/churn.csv"`. The data set has fifteen initial features with the **`Attrition_Flag`** attribute being the target variable. The data contains missing values, unique values, and a range of data types which will require preparing. Your goal is to use the material in this course to achieve a weighted average F1 classification score of greater than XXX on an 80:20 train-test split. Attempt to complete the whole problem before looking at the model answer. Can you improve on the model answer's score? Does logistic regression perform better or worse on this problem?
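For completeness, the *upsampling* alternative mentioned above can be sketched in the same way (a minimal sketch reusing the illustrative `our_dataframe` example from the snippet above):
```python
import pandas as pd
from sklearn.utils import resample

# Separate out the classes into different dataframes
majority_class_df = our_dataframe[our_dataframe["target_class"] == 0]
minority_class_df = our_dataframe[our_dataframe["target_class"] == 1]

# Upsample the minority class with replacement until it matches the majority class size
minority_class_upsampled = resample(minority_class_df,
                                    replace=True,                      # sample with replacement (records repeat)
                                    n_samples=len(majority_class_df),  # to match majority class
                                    random_state=123)                  # reproducible results

balanced_data = pd.concat([majority_class_df, minority_class_upsampled], axis=0, sort=True)
balanced_data = balanced_data.reset_index(drop=True)

# Display new class counts
balanced_data["target_class"].value_counts()
```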
###Code
# import your libraries
# import your data and explore its properties
# handle missing data
# engineer your features
# scale your features
# select your features
# split your data
# train your model
# evaluate your model
# improve your model
###Output
_____no_output_____ |
Project2-KC.ipynb | ###Markdown
Project: Regional Growth with Gapminder World Table of Contents: Introduction, Data Wrangling, Exploratory Data Analysis, Conclusions. Introduction **1. Which regions had higher rates of urban growth in 2017?** **2. Which indicator has a stronger relationship with the Urban Growth (UG) indicator? Is the relationship between urban growth and population growth stable?** **3. How have the indicators behaved in the last 10 years?** **The main objectives are to find trends between the selected metrics and discover the regions that stood out in UG in 2017. Dataframes from Gapminder World, with updates from the source:** **Urban population growth (annual %)** Primary source: World Bank. Category: Population, Subcategory: Urbanization. **HDI (Human Development Index):** Primary source: UNDP (United Nations Development Programme) + update (up to 2017). Category: Society. HDI is an index used to rank countries by level of "human development". It contains three dimensions: health level, educational level and living standard. (http://wikiprogress.org/articles/initiatives/human-development-index/) **Inequality index (Gini)** Primary source: The World Bank. Category: Economy, Subcategory: Poverty & inequality. "In economics, the Gini coefficient (/ˈdʒiːni/ JEE-nee), sometimes called Gini index, or Gini ratio, is a measure of statistical dispersion intended to represent the income or wealth distribution of a nation's residents, and is the most commonly used measurement of inequality." (https://en.wikipedia.org/wiki/Gini_coefficient_) **Population growth (annual %)** Primary source: The World Bank. Category: Population, Subcategory: Population growth. The population growth indicator is used to check its behavior along with its rates, in comparison with the Urban growth indicator.
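For reference, the Gini coefficient quoted above can be written (standard definition based on the relative mean absolute difference, not taken from the Gapminder documentation) for incomes $x_1,\dots,x_n$ with mean $\bar{x}$ as:
$$G = \frac{\sum_{i=1}^{n}\sum_{j=1}^{n}|x_i - x_j|}{2n^{2}\bar{x}}$$
A value of 0 corresponds to perfect equality, and values closer to 1 indicate higher inequality.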
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Data Wrangling General Properties In this first section, I import the dataframes,and make a first analysis of them checking the years covered and the null cells.Since the HDI's dataset is outdated, I update it with data from a dataset updated.The datasets with few or some null cells are filled with their means.The data with many null cells (more than 80%) are deleted.
###Code
#Import the dataframes
urb_growth_orig = pd.read_csv('urban_population_growth_annual_percent.csv')
gini_orig = pd.read_csv('gini.csv')
pop_growth_orig = pd.read_csv('population_growth_annual_percent.csv')
hdi = pd.read_csv('hdi_human_development_index.csv')
hdi_update = pd.read_csv('HDI-Copy of 2018_all_indicators.csv')
#Verify the independent variable (urban_population_growth_annual_percent)
urb_growth_orig.head() ###1960-2017
#urb_growth_orig.info() ### some null cells
###Output
_____no_output_____
###Markdown
UG:1960-2017, some null cells
###Code
#Check the dataset of hdi
hdi.head()
#hdi.info()
#Check the dataset of update for hdi
hdi_update.head()
#print(hdi_update)
#hdi_update.info()
###Output
_____no_output_____
###Markdown
HDI: 1990-2015, many null cells. HDI update: 1990-2017 & 9999, many null cells. I found differences and repetitions between hdi and its update, and hdi has many null cells. In these cases, I keep the hdi dataset.
###Code
#Check the dataset of Gini
gini_orig.head() ###1800-2040
#gini_orig.shape
#check null cells
gini_orig.info() ###no null cells (it seems)
#gini_orig.apply(lambda x: x.count(), axis=1)
gini_orig.isnull().sum(axis=1)
###Output
_____no_output_____
###Markdown
Gini: 1800-2040, no null cells
###Code
#Check the dataset of pop Growth
pop_growth_orig.head() ###1960-2017
#pop_growth_orig.info() ### some null cells
###Output
_____no_output_____
###Markdown
pop_growth_orig: 1960-2017, some null cells Data Cleaning
###Code
# After discussing the structure of the data and any problems that need to be
# cleaned, perform those cleaning steps in the second part of this section.
#Clean/fill the independent variable (urban_growth)
#Fill the missing values with the mean and checking
urb_growth_orig.fillna(urb_growth_orig.mean(), inplace=True)
urb_growth_orig.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 194 entries, 0 to 193
Data columns (total 59 columns):
country 194 non-null object
1960 194 non-null float64
1961 194 non-null float64
1962 194 non-null float64
1963 194 non-null float64
1964 194 non-null float64
1965 194 non-null float64
1966 194 non-null float64
1967 194 non-null float64
1968 194 non-null float64
1969 194 non-null float64
1970 194 non-null float64
1971 194 non-null float64
1972 194 non-null float64
1973 194 non-null float64
1974 194 non-null float64
1975 194 non-null float64
1976 194 non-null float64
1977 194 non-null float64
1978 194 non-null float64
1979 194 non-null float64
1980 194 non-null float64
1981 194 non-null float64
1982 194 non-null float64
1983 194 non-null float64
1984 194 non-null float64
1985 194 non-null float64
1986 194 non-null float64
1987 194 non-null float64
1988 194 non-null float64
1989 194 non-null float64
1990 194 non-null float64
1991 194 non-null float64
1992 194 non-null float64
1993 194 non-null float64
1994 194 non-null float64
1995 194 non-null float64
1996 194 non-null float64
1997 194 non-null float64
1998 194 non-null float64
1999 194 non-null float64
2000 194 non-null float64
2001 194 non-null float64
2002 194 non-null float64
2003 194 non-null float64
2004 194 non-null float64
2005 194 non-null float64
2006 194 non-null float64
2007 194 non-null float64
2008 194 non-null float64
2009 194 non-null float64
2010 194 non-null float64
2011 194 non-null float64
2012 194 non-null float64
2013 194 non-null float64
2014 194 non-null float64
2015 194 non-null float64
2016 194 non-null float64
2017 194 non-null float64
dtypes: float64(58), object(1)
memory usage: 89.5+ KB
###Markdown
checking: no null cells
###Code
#Clean hdi_update (filtering and merging dataframes)
#Rename the main column to be the same as the main dataset hdi.
hdi_update = hdi_update.rename(columns={'country_name': 'country'})
hdi_update.head()
#Select data of updates to the indicator hdi and check
#Filter Pandas Dataframe By Values of Column
hdi_update.drop(hdi_update[hdi_update['indicator_id'] != 137506].index, inplace=True)
hdi_update
#hdi_update.info()
#Remove unnecessary columns and checking, keeping with the 10 last years
hdi_update.drop(hdi_update.iloc[:, 0:4], axis=1, inplace=True) #first 4
hdi_update.head()
#Remove column '9999' (the last one)
hdi_update.drop(hdi_update.iloc[:, -1:], axis=1, inplace=True)
hdi_update.head()
#Choose just the countries and the years that are missing in hdi
hdi_update.drop(hdi_update.iloc[:, 1:-2], axis=1, inplace=True) #years before 2016
hdi_update.head()
hdi.head()
hdi_update=hdi_update.rename(columns={'country_name': 'country'})
hdi_update.head()
# merge hdi and hdi_updates and assigning the name hdi_new
hdi_new = pd.merge(hdi,hdi_update, on ='country')
print(hdi_new)
# Drop columns with + or = to 20% of values missing
hdi_new.dropna(thresh=0.8*len(hdi_new), axis=1, inplace=True)
hdi_new.head()
#Fill the missing values with the mean and checking
hdi_new.fillna(hdi_new.mean(), inplace=True)
hdi_new.info()
#hdi_new
###Output
<class 'pandas.core.frame.DataFrame'>
Int64Index: 162 entries, 0 to 161
Data columns (total 20 columns):
country 162 non-null object
1999 162 non-null float64
2000 162 non-null float64
2001 162 non-null float64
2002 162 non-null float64
2003 162 non-null float64
2004 162 non-null float64
2005 162 non-null float64
2006 162 non-null float64
2007 162 non-null float64
2008 162 non-null float64
2009 162 non-null float64
2010 162 non-null float64
2011 162 non-null float64
2012 162 non-null float64
2013 162 non-null float64
2014 162 non-null float64
2015 162 non-null float64
2016 162 non-null float64
2017 162 non-null float64
dtypes: float64(19), object(1)
memory usage: 26.6+ KB
###Markdown
hdi_new: 1999-2017, no null cells
###Code
#Clean pop growth
#Fill the missing values with the mean and check
pop_growth_orig.fillna(pop_growth_orig.mean(), inplace=True)
pop_growth_orig.info()
#pop_growth_orig
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 194 entries, 0 to 193
Data columns (total 59 columns):
country 194 non-null object
1960 194 non-null float64
1961 194 non-null float64
1962 194 non-null float64
1963 194 non-null float64
1964 194 non-null float64
1965 194 non-null float64
1966 194 non-null float64
1967 194 non-null float64
1968 194 non-null float64
1969 194 non-null float64
1970 194 non-null float64
1971 194 non-null float64
1972 194 non-null float64
1973 194 non-null float64
1974 194 non-null float64
1975 194 non-null float64
1976 194 non-null float64
1977 194 non-null float64
1978 194 non-null float64
1979 194 non-null float64
1980 194 non-null float64
1981 194 non-null float64
1982 194 non-null float64
1983 194 non-null float64
1984 194 non-null float64
1985 194 non-null float64
1986 194 non-null float64
1987 194 non-null float64
1988 194 non-null float64
1989 194 non-null float64
1990 194 non-null float64
1991 194 non-null float64
1992 194 non-null float64
1993 194 non-null float64
1994 194 non-null float64
1995 194 non-null float64
1996 194 non-null float64
1997 194 non-null float64
1998 194 non-null float64
1999 194 non-null float64
2000 194 non-null float64
2001 194 non-null float64
2002 194 non-null float64
2003 194 non-null float64
2004 194 non-null float64
2005 194 non-null float64
2006 194 non-null float64
2007 194 non-null float64
2008 194 non-null float64
2009 194 non-null float64
2010 194 non-null float64
2011 194 non-null float64
2012 194 non-null float64
2013 194 non-null float64
2014 194 non-null float64
2015 194 non-null float64
2016 194 non-null float64
2017 194 non-null float64
dtypes: float64(58), object(1)
memory usage: 89.5+ KB
###Markdown
Exploratory Data Analysis 1) Analysis of Urban Growth comparing countries. Parameters: bar graph, 50 samples of countries, Urban Growth, 2017. With the bar graph, I could get an overview of this indicator in some countries in 2017.
###Code
#Check after clean
hdi_new.head() ###1990-2017
#hdi_new.info() ### no null cells
# Draw a bar graph from 50 samples of countries, considering the independent variable (urban growth)
# To get 50 random rows sorting
urb_growth_orig_sample = urb_growth_orig.sample(n = 50).sort_values(by='country')
urb_growth_orig_sample
#print a bar graph
urb_growth_orig_sample.plot(kind='bar',
x='country',
y='2017',
figsize=(20,6),
title='Urban growth in 2017: top 50 countries',
color='b')
###Output
_____no_output_____
###Markdown
Findings: With the bar graph I could get an overview of this indicator in some countries in 2017. Analyzing, in two runs, the countries with UG higher than 3.5, I found that most of the countries with higher UG are in Africa, mainly in East Africa. **Run 1:** Bahrain: in the Persian Gulf; Congo: Central Africa; Mozambique: Southern Africa; Niger: West Africa; Somalia: East Africa; Tanzania: East Africa; Uganda: East Africa. **Run 2:** Cameroon: Central Africa; Ethiopia: East Africa; Gambia: West Africa; Mauritania: Northwest Africa; Senegal: West Africa; South Sudan: North Africa; Tanzania: East Africa; Uganda: East Africa; Zambia: East Africa. (Source: Wikipedia) 2) Comparison of two indicators Parameters: 3 scatter plots: **Urban growth vs HDI**, **Urban growth vs Gini**, and **Urban growth vs Population growth, 2017**.
###Code
urb_growth_orig.head()
#Compare the relationship between variables (independent vs dependent:urban growth vs hdi)
#Create a table to scatter
#Filter and rename to urb_growth(2017)
urb_growth_orig_2017 = urb_growth_orig.drop(urb_growth_orig.iloc[:, 1:-1], axis=1)
urb_growth_orig_2017.rename(columns={'2017': 'urb_growth_2017'},inplace = True)
urb_growth_orig_2017
#Filter
hdi_2017=hdi_new.drop(hdi_new.iloc[:, 1:-1], axis=1)
hdi_2017.rename(columns={'2017': 'hdi_2017'},inplace = True)
hdi_2017
#merge dfs originals urban and hdi
ug_hdi_2017 = pd.merge(urb_growth_orig_2017,hdi_2017, on ='country')
#print(ug_hdi_2017)
ug_hdi_2017
#urb_growth_orig_2017.head()
urb_growth_orig.shape
#Graph urban growth vs hdi
x = ug_hdi_2017['urb_growth_2017']
y = ug_hdi_2017['hdi_2017']
plt.scatter(x,y, color='b')
plt.xlabel('Urban growth')
plt.ylabel('HDI')
plt.title('Urban growth vs HDI (2017)')
plt.show()
gini_orig.head()
# UG vs gini
gini_orig_2017=gini_orig.drop(gini_orig.iloc[:,1:-24], axis=1)
gini_orig_2017.drop(gini_orig_2017.iloc[:,-23:], axis=1, inplace=True)
gini_orig_2017.rename(columns={'2017': 'gini_2017'},inplace = True)
gini_orig_2017
#merge dfs originals urban and gini
ug_gini_2017 = pd.merge(urb_growth_orig_2017,gini_orig_2017, on ='country')
print(ug_gini_2017)
# UG vs PG
pop_growth_orig_2017 = pop_growth_orig.drop(pop_growth_orig.iloc[:,1:-1], axis=1)
pop_growth_orig_2017.rename(columns={'2017': 'pop_growth_2017'}, inplace=True)
pop_growth_orig_2017.head()
#merge dfs originals urban and pop growth
ug_pop_2017 = pd.merge(urb_growth_orig_2017,pop_growth_orig_2017, on ='country')
ug_pop_2017.head()
#Graph urban growth vs pop growth
x = ug_pop_2017['urb_growth_2017']
y = ug_pop_2017['pop_growth_2017']
plt.scatter(x,y, color='b')
plt.xlabel('Urban growth')
plt.ylabel('Population growth')
plt.title('Urban growth vs Population growth (2017)')
plt.show()
###Output
_____no_output_____
###Markdown
urb_growth and pop_growth are checked just to see whether there are many outliers, and to look at their behaviour along their rates. Since both are based on population growth, they cannot be compared. Findings: Urban growth vs HDI - moderate, linear and negative relationship (inversely proportional). Urban growth vs Gini - weak, linear and positive relationship (directly proportional). Urban growth vs pop growth - strong, linear and positive relationship (directly proportional). Therefore: There is some relationship between urban growth (UG) and HDI: the higher the UG, the lower the HDI. The strong relationship between UG and population growth (PG) was shown, which was expected since the UG is based on the PG. However, for Urban growth vs pop growth, the higher the indexes, the weaker the relationship between them; on the other hand, the rarer the occurrences of outliers. 3) Comparison of the evolution of four indicators (Evolution of metrics) Parameters: a line graph with the 4 indicators, 2007-2017.
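To put numbers on the relationships eyeballed above, one option (a minimal sketch reusing the merged dataframes already built in this notebook: `ug_hdi_2017`, `ug_gini_2017`, `ug_pop_2017`) is to compute Pearson correlation coefficients:
```python
# Quantify the relationships observed in the scatter plots above
print(ug_hdi_2017[['urb_growth_2017', 'hdi_2017']].corr())
print(ug_gini_2017[['urb_growth_2017', 'gini_2017']].corr())
print(ug_pop_2017[['urb_growth_2017', 'pop_growth_2017']].corr())
```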
###Code
urb_growth_country = urb_growth_orig.set_index('country')
#Transpose df
urb_growth_transposed = urb_growth_country.transpose()
#Discard unneeded data:
urb_growth_germany_transposed = pd.DataFrame(urb_growth_transposed.Germany)
#Rename column
urb_growth_germany_transposed=urb_growth_germany_transposed.rename(columns={'Germany': 'ug_in_germany'})
urb_growth_germany_transposed.head()
hdi_country=hdi_new.set_index('country') #set country as index
#Transpose df
hdi_transposed = hdi_country.transpose()
#Discard unneeded data:
hdi_germany_transposed = pd.DataFrame(hdi_transposed.Germany)
#Rename column
hdi_germany_transposed = hdi_germany_transposed.rename(columns={'Germany': 'hdi_in_germany'})
hdi_germany_transposed.head()
gini_orig
gini_country=gini_orig.set_index('country') #set country as index
#Transpose df
gini_transposed = gini_country.transpose()
#Discard unneeded data:
gini_germany_transposed = pd.DataFrame(gini_transposed.Germany)
#Rename column
gini_germany_transposed = gini_germany_transposed.rename(columns={'Germany': 'gini_in_germany'})
gini_germany_transposed.head()
pop_growth_country=pop_growth_orig.set_index('country') #set country as index
#Transpose df
pop_growth_transposed = pop_growth_country.transpose()
#Discard unneeded data:
pop_growth_germany_transposed = pd.DataFrame(pop_growth_transposed.Germany)
#Rename column
pop_growth_germany_transposed = pop_growth_germany_transposed.rename(columns={'Germany': 'pg_in_germany'})
pop_growth_germany_transposed.head()
#normalize gini dropping rows
gini_germany_transposed=gini_germany_transposed[gini_germany_transposed.index > '1959']
gini_germany_transposed=gini_germany_transposed[gini_germany_transposed.index < '2018']
gini_germany_transposed
#df[df.Name != 'Alisa']
# merge indicators MA
frames1=[urb_growth_germany_transposed, gini_germany_transposed , hdi_germany_transposed, pop_growth_germany_transposed]
#indicators_germany = pd.concat(frames1, axis=1, sort=False)
indicators_germany = pd.concat(frames1, join='outer', axis=1,sort=False)
#indicators_germany = pd.concat(frames1, join='outer', axis=1,sort=False).fillna('nan')
indicators_germany.info()
#ug: 1960-2017
#gini
#hdi
#pg #1960-2017
#include columns of moving average by 10 years
indicators_germany['germany_ug_10years']=urb_growth_germany_transposed.ug_in_germany.rolling(10).mean().shift()
indicators_germany['germany_hdi_10years']=hdi_germany_transposed.hdi_in_germany.rolling(10).mean()
indicators_germany['germany_gini_10years']=gini_germany_transposed.gini_in_germany.rolling(10).mean()
indicators_germany['germany_pg_10years']=pop_growth_germany_transposed.pg_in_germany.rolling(10).mean()
#indicators_germany.head(60)
indicators_germany_MA10 = indicators_germany.drop(['ug_in_germany', 'gini_in_germany','hdi_in_germany', 'pg_in_germany'], 1)
#drop unneeded rows
indicators_germany_MA10 = indicators_germany_MA10[indicators_germany_MA10.index > '1969']
#indicators_germany_MA10.head(60)
#Show a line graph of indicators considering moving average for 10 years
#Set the dimension of the figure
plt.figure(figsize=(15,7))
#List the values of the indexes
x=indicators_germany_MA10.index.values.tolist()
#Set the indicatorrs to be plotted
y1=indicators_germany_MA10['germany_ug_10years']
y2=indicators_germany_MA10['germany_hdi_10years']
y3=indicators_germany_MA10['germany_gini_10years']
y4=indicators_germany_MA10['germany_pg_10years']
#Set the limits of y
plt.ylim(-2,35)
#Set the indicators and their labels
plt.plot(x, y1, label = "Urban growth")
plt.plot(x, y2, label = "HDI")
plt.plot(x, y3, label = "Gini")
plt.plot(x, y4, label = "Population growth")
#Set the labels to the axis
plt.xlabel('Years')
plt.ylabel('Indicators')
#Rotate values in the x axis
plt.xticks(x, rotation=40)
#Set the title of the graph
plt.title('Evolution of the indicators considering moving average for 10 years (1970-2017)')
#Set the legend
plt.legend()
#Plot
plt.show()
#Line graph from the last 10 years # parameters manually selected
plt.figure(figsize=(10,7))
x = [2008,2009,2010,2011,2012,2013,2014,2015,2016,2017]
# line 1 points
y1 = [5.466,5.584,5.716,5.956,6.092,6.048,5.932,5.776,5.626,5.474]
# line 2 points
y2 = [0.4646,0.4784,0.4880,0.4924,0.4970,0.5022,0.5052,0.5074,0.5178,0.5208]
y3 = [39.033333,38.766667,38.483333,38.200000,37.983333,37.833333,37.750000,37.783333,37.816667,37.866667]
y4 = [3.315000,3.433330,3.556667,3.660000,3.713333,3.705000,3.635000,3.535000,3.438333,3.346667]
plt.plot(x, y1, label = "Urban growth")
plt.plot(x, y2, label = "HDI")
plt.plot(x, y3, label = "Gini")
plt.plot(x, y4, label = "Population growth")
# naming the x axis
plt.xlabel('Years')
# naming the y axis
plt.ylabel('Indicators')
# giving a title to my graph
plt.title('Evolution of the indicators (2007-2017)')
# show a legend on the plot
plt.legend()
# function to show the plot
plt.show()
###Output
_____no_output_____ |
02_linters_en_python.ipynb | ###Markdown
*Linters* in *Python*. A *linter* (a "lint remover") is a tool that lets you check that code adheres to the language's style rules and best practices. The most advanced linters are even able to analyze code quality based on specific patterns. https://www.sonarlint.org/ ```flake8```. [```flake8```](https://flake8.pycqa.org/en/latest/) is a linter that checks that code complies with the style rules defined in [*PEP-8*](https://pep8.org/). The ```flake8``` documentation is available at: https://flake8.pycqa.org/en/latest/manpage.html
###Code
!pip install flake8
###Output
_____no_output_____
###Markdown
Running ```flake8``` from the command line. ```flake8``` can be run as a command from the terminal: ```flake8 <path>``` Where: * ```<path>``` is the path to the directory or file to be analyzed. If the path points not to a single script but to a directory, ```flake8``` will pick up every file with the ```.py``` extension starting from the terminal's current directory as well as its subdirectories. The ```flake8``` command lists every non-conformity in the code in the following way: ```<path>:<line>:<column>: <code> <description>``` Where: * ```<path>``` is the path of the analyzed script. * ```<line>``` is the line number where the non-conformity is found. * ```<column>``` is the column number where the non-conformity is found. * ```<code>``` is a string made up of a character followed by a number. * The character is an ```E``` when it is an error, or a ```W``` when it is a warning. * The number corresponds to the error code. * ```<description>``` is a text describing the error. **Example:** The script ```src/02/noconforme.py``` contains code that, while being valid *Python* syntax, does not comply with several style rules defined in *PEP-8*:
``` python
#! /usr/bin/env python3
print("Esta es una línea de código que es demasiado larga y no cumple con el PEP-8, el cual indica que no debe de pasar de 80 caracteres")
#Usamos tabuladores en vez de espacios.
for i in range(3):
    print(i)
```
* The following cell will run the script ```src/02/noconforme.py```.
###Code
%run src/02/noconfome.py
###Output
_____no_output_____
###Markdown
* The following cell will run the ```flake8``` command on the ```src/02``` directory.
###Code
!flake8 src/02
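# flake8's checks can also be tuned from the command line; for example (illustrative flags, not part of the original notebook):
# !flake8 --max-line-length=100 --ignore=W191 src/02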
###Output
_____no_output_____
###Markdown
```pylint```. [```pylint```](https://www.pylint.org/) is an advanced linter capable of detecting not only non-conformities with *PEP-8*, but also a large number of coding errors. The ```pylint``` documentation can be consulted at: https://pylint.readthedocs.io/en/latest/index.html
###Code
!pip install pylint
###Output
_____no_output_____
###Markdown
Running ```pylint``` from the command line. ```pylint``` can be run as a command from the terminal: ```pylint <path>``` Where: * ```<path>``` is the path to the directory or file to be analyzed, which will be treated as a package or a module. **Example:** * The following cell runs the ```pylint``` command to analyze the script ```src/02/noconfome.py```.
###Code
!pylint src/02/noconfome.py
###Output
_____no_output_____
###Markdown
The *iPython* extension ```pycodestyle-magic```
###Code
!pip install pycodestyle pycodestyle_magic
%load_ext pycodestyle_magic
%%pycodestyle
#! /usr/bin/env python3
print("Esta es una línea de código que es demasiado larga y no cumple con el PEP-8, el cual indica que no debe de pasar de 80 caracteres")
#Usamos tabuladores en vez de espacios.
for i in range(3):
print(i)
###Output
_____no_output_____ |
analyses/primer-design-3/index-design-3.ipynb | ###Markdown
SwabSeq Primer Design Here we will design 1536 unique i5 and i7 indices. This will enable the end user to detect any "index hopping" between wells. They will satisfy the following criteria: - Minimum Levenshtein Distance of 3 between any two indices - No homopolymer repeats > 2 nt - Minimized homo- and hetero-dimerization potential. First, let's generate the pool of all potential 10 nt indices satisfying our Levenshtein Distance constraints. From [Ashlock et al. 2012](https://doi.org/10.1016/j.biosystems.2012.06.005), we know there are exactly 11,743 of them. This will take ~30 mins, so we've provided the pre-calculated set.
###Code
import random
import itertools
import multiprocessing
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from collections import Counter
from collections import defaultdict
# external depends
import Levenshtein
import primer3
def conways_lexicode(all_idx, d=3):
"""
    Performs John Conway's (RIP) Lexicode Algorithm (Ashlock 2012). Given an ordered set of
    indices (all_idx) and an empty set (pool), select an index from all_idx and place it
in pool if it is at an edit distance of d or more from every index in pool.
Obviously, this is an inefficient way of calculating things, but the
complexity of edit metric spaces means this is almost as good as it gets...
Reference:
https://doi.org/10.1016/j.biosystems.2012.06.005
Input:
        all_idx - an iterator that contains all the indices
d - the minimum distance a idx must be to be included in pool
Output:
pool - a list of all the barcodes satisfying the constraint
"""
pool = []
pool.append(next(all_idx))
for test_idx in all_idx:
for pool_idx in pool:
if Levenshtein.distance(test_idx, pool_idx) < d:
break
else:
pool.append(test_idx)
return pool
# Generate the indicies
# ten_mers = (''.join(x) for x in itertools.product('ACGT', repeat=10))
# ten_mers_d3 = conways_lexicode(ten_mers, d=3)
# Instead, load the indices
ten_mers_d3 = []
with open('./ten-mer_d3.txt', 'r') as f:
for line in f:
ten_mers_d3.append(line.rstrip())
###Output
_____no_output_____
###Markdown
While these indices are far apart, we still need to remove the repetitive and highly GC-rich species
###Code
ten_mers_filter = []
for idx in ten_mers_d3:
gc = (idx.count('G') + idx.count('C')) / len(idx)
if gc > 0.65 or gc < 0.35:
continue
elif any(len(list(x)) > 2 for _,x in itertools.groupby(idx)):
continue
# elif idx[:2] == 'GG':
# continue
else:
ten_mers_filter.append(idx)
print(f'Primers remaining after filter - {len(ten_mers_filter)}')
print(f'Possible unique pairs after filter - {len(ten_mers_filter)//2}')
###Output
Primers remaining after filter - 5299
Possible unique pairs after filter - 2649
###Markdown
Primer Sets SwabSeq has 2 different primer sets - one for SARS-CoV2 (S2) and one for the housekeeping gene RPP30. It should be noted that in the genbank files, our primers are oriented as follows:
```
s2:    P5_Forward----------Reverse_P7
rpp30: P5_Reverse----------Forward_P7
```
We will ignore the "Forward"/"Reverse" primer distinction, and instead orient our indices based on the P5/P7 sequences, as this is what the sequencer will do regardless.
###Code
# illumina amplicons must start and end with these sequences
p5 = 'AATGATACGGCGACCACCGAGATCTACAC'
p7 = 'CAAGCAGAAGACGGCATACGAGAT'
s2_f = 'GCTGGTGCTGCAGCTTATTA'
s2_r = 'AGGGTCAAGTGCACAGTCTA'
rpp30_r = 'GAGCGGCTGTCTCCACAAGT'
rpp30_f = 'AGATTTGGACCTGCGAGCG'
# capture all of the primers that will be in each well
# data structure -> {orientation:{idx:{s2:seq, rpp30:seq}, ...}}
s2_dict = defaultdict(lambda: defaultdict(dict))
for idx in ten_mers_filter:
# note eric & aaron paired designed the amplicons as follows:
# s2: P5_Forward----------Reverse_P7
# rpp30: P5_Reverse----------Forward_P7
s2_dict['F'][idx] = {'S2':f'{p5}{idx}{s2_f}', 'RPP30':f'{p5}{idx}{rpp30_r}'}
s2_dict['R'][idx] = {'S2':f'{p7}{idx}{s2_r}', 'RPP30':f'{p7}{idx}{rpp30_f}'}
###Output
_____no_output_____
###Markdown
Since the calculations take some time, we can mash all of these primer sets together and use `multiprocessing` to parallelize the tasks. We'll also drop the data into a `Pandas` `DataFrame` to make visualization/etc easier.
###Code
def calc_self_props(idx_dict):
"""
Use Primer3 to calculate hairpin and self-dimer properties. Note we have to
trim sequences to 60 nt (removing the 5' end) to work with Primer3. While
not ideal, this is probably safe as primer dimers come from the 3' end and
this is part of the common region of the primer.
Input:
idx_dict - must have primer:str key:val pair
Output:
out_dict - Adds haripin and self-dimer Tm and dG
Requires:
primer3-py - pip install primer3-py
"""
# be good - avoid side-effects
out_dict = {key:val for key,val in idx_dict.items()}
hairpin = primer3.calcHairpin(out_dict['primer'][-60:]).todict()
self = primer3.calcHomodimer(out_dict['primer'][-60:]).todict()
out_dict['hairpin_dg'] = hairpin['dg']
out_dict['hairpin_tm'] = hairpin['tm']
out_dict['self_dg'] = self['dg']
out_dict['self_tm'] = self['tm']
return(out_dict)
# flatten the dict for easy proc and export to pandas
# {orientation:{idx:{s2:seq, rpp30:seq}, ...}} -> {orientation:xx, idx:xx, s2:xx, rpp30:xx}
flat_s2 = []
for orientation, idx_dict in s2_dict.items():
for idx, well_dict in idx_dict.items():
for primer_set, seq in well_dict.items():
flat_s2.append({'orientation':orientation, 'idx':idx, 'set':primer_set, 'primer':seq})
with multiprocessing.Pool() as pool:
self_props = pd.DataFrame(pool.map(calc_self_props, flat_s2))
###Output
_____no_output_____
###Markdown
Let's visualize the properties to get a sense for what's going on with our potential primers. First the hairpin dG's
###Code
g = sns.FacetGrid(self_props, row='orientation', col='set', margin_titles=True)
g.map(sns.distplot, "hairpin_dg", kde=False)
###Output
_____no_output_____
###Markdown
We can appreciate that the hairpin dG's are binned to a single value. Next, the self-dimer dG's
###Code
g = sns.FacetGrid(self_props, row='orientation', col='set', margin_titles=True)
g.map(sns.distplot, "self_dg", kde=False)
###Output
_____no_output_____
###Markdown
Here we have more of a spread in dG's. With the S2 primers being more likely to self-dimerize. Finding Optimal PairsLet's drop the bottom 5% from all of these distributions, and take the intersection within each orientation. The goal here being to find a common set of indicies that work well for either the forward or reverse orientations. We can then compare across orientations (excluding identical indices) to find 1536 forward and 1536 reverse primers that have minimized self- and cross-reactivity.
###Code
cutoffs = self_props.groupby(['set', 'orientation']).agg(
self_dg_cutoff=('self_dg', lambda x: np.quantile(x, 0.05)),
hairpin_dg_cutoff=('hairpin_dg', lambda x: np.quantile(x, 0.05))
)
cutoffs
###Output
_____no_output_____
###Markdown
Let's find indices that pass our cutoffs in all our primer sets
###Code
good_idx = (pd.merge(self_props, cutoffs, on=['set','orientation'])
.query('hairpin_dg > hairpin_dg_cutoff & self_dg > self_dg_cutoff')
.groupby(['orientation', 'idx'])
.size()
.reset_index(name='counts')
.query('counts == 2')
)
good_idx.groupby('orientation').size()
###Output
_____no_output_____
###Markdown
Next, we'll take the intersection of these two sets of barcodes to get pairs that are good in either the forward or reverse configuration. Since the well heterodimer calculation takes a while, we'll randomly sample 500,000 pairs from all possible combinations. Given the previous filtering steps and Levenshtein distance constraint, we are pretty confident these will be good primers. One can easily scale this up to ~1,000,000 draws, but some initial profiling suggests that the dG values all get binned onto a single point (which is reasonable given that only 10 nt of the entire ~60 nt primer is changing).
###Code
%%time
f_set = set(good_idx.query('orientation == "F"')['idx'])
r_set = set(good_idx.query('orientation == "R"')['idx'])
good_set = f_set.intersection(r_set)
print(f'Barcodes remaining - {len(good_set)}')
print(f'Possible unique pairs remaining - {len(good_set)//2}')
# ensure we get the same sampling each time
# sort good_set since dictionaries have no order
# for whatever reason random.shuffle would not obey the seed
combos = list(itertools.combinations(sorted(good_set), 2))
rand_idx = random.Random(42).sample(range(len(combos)), len(combos))
combos_shuf = [combos[x] for x in rand_idx]
print('Random Test! The two lists should be the same...')
print(f'[1609253, 6865793, 4057987, 8030505]\n{rand_idx[-4:]}')
sample_depth = 500000
pairs = combos_shuf[:sample_depth]
###Output
Barcodes remaining - 4060
Possible unique pairs remaining - 2030
Random Test! The two lists should be the same...
[1609253, 6865793, 4057987, 8030505]
[1609253, 6865793, 4057987, 8030505]
CPU times: user 11.3 s, sys: 424 ms, total: 11.8 s
Wall time: 11.8 s
###Markdown
Now run the well cross-hybridization calculation. Should take ~10 mins on 64 cores.
###Code
%%time
def calc_well_heterodimers(well_combo):
"""
Generate all combinations of possible primer pairs within one well.
Calculate the dG and report the min value.
Input:
well_combo: [(f_index, {s2:seq, rpp30:seq}), (r_index, {s2:seq, rpp30:seq})]
Output:
(f_index, r_index, dG)
Depends:
primer3 - pip install primer3-py
"""
idx, dict_store = list(zip(*well_combo))
primers = itertools.chain.from_iterable(x.values() for x in dict_store)
# pairs = itertools.permutations(primers, 2)
pairs = itertools.combinations(primers, 2)
dG = min(primer3.calcHeterodimer(x[-60:], y).dg for x,y in pairs)
return((*idx, dG))
s2_wells = [((f, s2_dict['F'][f]), (r, s2_dict['R'][r])) for f,r in pairs]
with multiprocessing.Pool() as pool:
s2_well_dG = pool.map(calc_well_heterodimers, s2_wells)
sns.distplot([x[2] for x in s2_well_dG])
###Output
_____no_output_____
###Markdown
There seem to be two different populations. Given the depth of sampling, this is not likely driven by specific indices, but rather by the primers themselves. Let's sort by dG and iterate through these pairs in a greedy fashion, ensuring that the forward and reverse primers never get used twice.
###Code
# since there will be a lot of ties, first sort by f index then sort by dG
# this should ensure that we get the same set of primers everytime
s2_well_dG.sort(key=lambda x: x[0])
s2_well_dG.sort(key=lambda x: x[2], reverse=True)
print(f'Length of input - {len(s2_well_dG)}')
final_pairs = []
used_idx = set()
for f, r, dG in s2_well_dG:
if f in used_idx or r in used_idx:
continue
else:
final_pairs.append((f, r, dG))
used_idx.update((f,r))
print(f'Total unique index pairs found - {len(final_pairs)}')
###Output
Length of input - 500000
Total unique index pairs found - 2024
###Markdown
Let's see what our final dG distribution looks like
###Code
sns.distplot([x[2] for x in final_pairs])
###Output
_____no_output_____
###Markdown
Looks good! Now grab as many primer sets as you need and output to a nice tsv
###Code
# write them out together for debugging
with open('./primer-set.tsv', 'w') as out_file:
for f, r, dG in final_pairs:
# output as: f_idx, f_primer, set, r_idx, r_primer
print(f'{f}\t{s2_dict["F"][f]["RPP30"]}\tRPP30\t{r}\t{s2_dict["R"][r]["RPP30"]}', file=out_file)
print(f'{f}\t{s2_dict["F"][f]["S2"]}\tS2\t{r}\t{s2_dict["R"][r]["S2"]}', file=out_file)
# separate out for ordering
with open('./rpp30_f.tsv', 'w') as f_file:
with open('./rpp30_r.tsv', 'w') as r_file:
for i, (f, r, dG) in enumerate(final_pairs):
print(f'rpp30_F_{i+1}_{f}\t{s2_dict["F"][f]["RPP30"]}', file=f_file)
print(f'rpp30_R_{i+1}_{r}\t{s2_dict["R"][r]["RPP30"]}', file=r_file)
with open('./s2_f.tsv', 'w') as f_file:
with open('./s2_r.tsv', 'w') as r_file:
for i, (f, r, dG) in enumerate(final_pairs):
print(f's2_F_{i+1}_{f}\t{s2_dict["F"][f]["S2"]}', file=f_file)
print(f's2_R_{i+1}_{r}\t{s2_dict["R"][r]["S2"]}', file=r_file)
###Output
_____no_output_____ |
KDDCUP99_08.ipynb | ###Markdown
KDD Cup 1999 Data http://kdd.ics.uci.edu/databases/kddcup99/kddcup99.html
###Code
import sklearn
import pandas as pd
from sklearn import preprocessing
from sklearn.utils import resample
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import Pipeline
import time
from sklearn.metrics import confusion_matrix, classification_report
from sklearn.externals import joblib
from sklearn.utils import resample
print('The scikit-learn version is {}.'.format(sklearn.__version__))
col_names = ["duration","protocol_type","service","flag","src_bytes",
"dst_bytes","land","wrong_fragment","urgent","hot","num_failed_logins",
"logged_in","num_compromised","root_shell","su_attempted","num_root","num_file_creations",
"num_shells","num_access_files","num_outbound_cmds","is_host_login","is_guest_login","count",
"srv_count","serror_rate","srv_serror_rate","rerror_rate","srv_rerror_rate","same_srv_rate",
"diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count",
"dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate",
"dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate",
"dst_host_rerror_rate","dst_host_srv_rerror_rate","label"]
data = pd.read_csv("kddcup.data_10_percent", header=None, names = col_names)
###Output
_____no_output_____
###Markdown
Preprocessing: Categorization
###Code
data.label.value_counts()
data['label2'] = data.label.where(data.label.str.contains('normal'),'atack')
data.label2.value_counts()
data['label3'] = data.label.copy()
data.loc[data.label.str.contains('back|land|neptune|pod|smurf|teardrop'),'label3'] = 'DoS'
data.loc[data.label.str.contains('buffer_overflow|loadmodule|perl|rootkit'),'label3'] = 'U2R'
data.loc[data.label.str.contains('ftp_write|guess_passwd|imap|multihop|phf|spy|warezclient|warezmaster'),'label3'] = 'R2L'
data.loc[data.label.str.contains('ipsweep|nmap|portsweep|satan'),'label3'] = 'Probe'
data.label3.value_counts()
###Output
_____no_output_____
###Markdown
Sampling
###Code
data = resample(data,n_samples=5000,random_state=0)
data.shape
###Output
_____no_output_____
###Markdown
Numeric encoding
###Code
le_protocol_type = preprocessing.LabelEncoder()
le_protocol_type.fit(data.protocol_type)
data.protocol_type=le_protocol_type.transform(data.protocol_type)
le_service = preprocessing.LabelEncoder()
le_service.fit(data.service)
data.service = le_service.transform(data.service)
le_flag = preprocessing.LabelEncoder()
le_flag.fit(data.flag)
data.flag = le_flag.transform(data.flag)
data.describe()
data.shape
###Output
_____no_output_____
###Markdown
Separating the labels
###Code
y_train_1 = data.label.copy()
y_train_2 = data.label2.copy()
y_train_3 = data.label3.copy()
x_train = data.drop(['label','label2','label3'],axis=1)
x_train.shape
y_train_1.shape
y_train_2.shape
y_train_3.shape
###Output
_____no_output_____
###Markdown
Standardization
###Code
ss = preprocessing.StandardScaler()
ss.fit(x_train)
x_train = ss.transform(x_train)
col_names2 = ["duration","protocol_type","service","flag","src_bytes",
"dst_bytes","land","wrong_fragment","urgent","hot","num_failed_logins",
"logged_in","num_compromised","root_shell","su_attempted","num_root","num_file_creations",
"num_shells","num_access_files","num_outbound_cmds","is_host_login","is_guest_login","count",
"srv_count","serror_rate","srv_serror_rate","rerror_rate","srv_rerror_rate","same_srv_rate",
"diff_srv_rate","srv_diff_host_rate","dst_host_count","dst_host_srv_count",
"dst_host_same_srv_rate","dst_host_diff_srv_rate","dst_host_same_src_port_rate",
"dst_host_srv_diff_host_rate","dst_host_serror_rate","dst_host_srv_serror_rate",
"dst_host_rerror_rate","dst_host_srv_rerror_rate"]
pd.DataFrame(x_train,columns=col_names2).describe()
###Output
_____no_output_____
###Markdown
Training
###Code
pca = PCA(copy=True, iterated_power='auto', n_components=3, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
pca.fit(x_train)
x_train2 = pca.transform(x_train)
x_train2.shape
clf = SVC(C=10.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape=None, degree=3, gamma=1.0, kernel='poly',
max_iter=-1, probability=False, random_state=None, shrinking=True,
tol=0.001, verbose=False)
t1=time.perf_counter()
clf.fit(x_train2,y_train_2)
t2=time.perf_counter()
print(t2-t1,"秒")
###Output
38.69440309900165 秒
###Markdown
Prediction
###Code
t1=time.perf_counter()
pred=clf.predict(x_train2)
t2=time.perf_counter()
print(t2-t1,"秒")
print(classification_report(y_train_2, pred))
print(confusion_matrix(y_train_2, pred))
###Output
precision recall f1-score support
atack 1.00 0.99 0.99 3985
normal. 0.94 0.99 0.97 1015
avg / total 0.99 0.99 0.99 5000
[[3926 59]
[ 6 1009]]
###Markdown
Training
###Code
pca2 = PCA(copy=True, iterated_power='auto', n_components=4, random_state=None,
svd_solver='auto', tol=0.0, whiten=False)
clf2 = SVC(C=1.0, cache_size=200, class_weight=None, coef0=0.0,
decision_function_shape=None, degree=3, gamma=0.17782794100389229,
kernel='rbf', max_iter=-1, probability=False, random_state=None,
shrinking=True, tol=0.001, verbose=False)
pca2.fit(x_train)
x_train3 = pca.transform(x_train)
t1=time.perf_counter()
clf2.fit(x_train3,y_train_3)
t2=time.perf_counter()
print(t2-t1,"秒")
###Output
0.05409338200115599 秒
###Markdown
Prediction
###Code
t1=time.perf_counter()
pred2=clf2.predict(x_train3)
t2=time.perf_counter()
print(t2-t1,"秒")
print(classification_report(y_train_3, pred2))
print(confusion_matrix(y_train_3, pred2))
###Output
precision recall f1-score support
DoS 1.00 0.99 1.00 3928
Probe 1.00 0.80 0.89 44
R2L 1.00 0.08 0.15 12
U2R 1.00 1.00 1.00 1
normal. 0.97 1.00 0.98 1015
avg / total 0.99 0.99 0.99 5000
[[3904 0 0 0 24]
[ 5 35 0 0 4]
[ 3 0 1 0 8]
[ 0 0 0 1 0]
[ 3 0 0 0 1012]]
|
Kaggle_scatterplots.ipynb | ###Markdown
###Code
import pandas as pd
pd.plotting.register_matplotlib_converters()
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
print("Setup Complete")
from google.colab import drive
drive.mount('/content/gdrive')
import os
os.environ['KAGGLE_CONFIG_DIR'] = "/content/gdrive/My Drive/Kaggle"
# /content/gdrive/My Drive/Kaggle is the path where kaggle.json is present in the Google Drive
#changing the working directory
%cd /content/gdrive/My Drive/Kaggle
#Check the present working directory using pwd command
!kaggle datasets download -d alexisbcook/data-for-datavis
!ls
#unzipping the zip files and deleting the zip files
!unzip \*.zip && rm *.zip
# Path of the file to read
insurance_filepath = "/content/gdrive/My Drive/Kaggle/insurance.csv"
# Read the file into a variable insurance_data
insurance_data = pd.read_csv(insurance_filepath)
insurance_data.head()
sns.scatterplot(x=insurance_data['bmi'], y=insurance_data['charges'])
sns.regplot(x=insurance_data['bmi'], y=insurance_data['charges'])
sns.scatterplot(x=insurance_data['bmi'], y=insurance_data['charges'], hue=insurance_data['smoker'])
sns.lmplot(x="bmi", y="charges", hue="smoker", data=insurance_data)
plt.figure(figsize=(20,5))
plt.title("Smoker vs Non-smoker")
sns.swarmplot(x=insurance_data['smoker'],
y=insurance_data['charges'])
###Output
_____no_output_____ |
notebooks/Exponential Smoothing Real Data.ipynb | ###Markdown
Exponential Smoothing Real Data
###Code
# install and load necessary packages
!pip install seaborn
!pip install --upgrade --no-deps statsmodels
import pyspark
from datetime import datetime
import seaborn as sns
import sys
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import os
print('Python version ' + sys.version)
print('Spark version: ' + pyspark.__version__)
%env DH_CEPH_KEY = DTG5R3EEWN9JBYJZH0DF
%env DH_CEPH_SECRET = pdcEGFERILlkRDGrCSxdIMaZVtNCOKvYP4Gf2b2x
%env DH_CEPH_HOST = http://storage-016.infra.prod.upshift.eng.rdu2.redhat.com:8080
%env METRIC_NAME = kubelet_docker_operations_latency_microseconds
%env LABEL =
label = os.getenv("LABEL")
where_labels = {}#"metric.group=route.openshift.io"}
metric_name = str(os.getenv("METRIC_NAME"))
print(metric_name)
###Output
env: DH_CEPH_KEY=DTG5R3EEWN9JBYJZH0DF
env: DH_CEPH_SECRET=pdcEGFERILlkRDGrCSxdIMaZVtNCOKvYP4Gf2b2x
env: DH_CEPH_HOST=http://storage-016.infra.prod.upshift.eng.rdu2.redhat.com:8080
env: METRIC_NAME=kubelet_docker_operations_latency_microseconds
env: LABEL=
kubelet_docker_operations_latency_microseconds
###Markdown
Establish Connection to Spark Cluster Set configuration so that the Spark Cluster communicates with Ceph and reads a chunk of data.
###Code
import string
import random
# Set the configuration
# random string for instance name
inst = ''.join(random.choices(string.ascii_uppercase + string.digits, k=4))
AppName = inst + ' - Ceph S3 Prometheus JSON Reader'
conf = pyspark.SparkConf().setAppName(AppName).setMaster('spark://spark-cluster.dh-prod-analytics-factory.svc:7077')
print("Application Name: ", AppName)
# specify number of nodes need (1-5)
conf.set("spark.cores.max", "88")
# specify Spark executor memory (default is 1gB)
conf.set("spark.executor.memory", "400g")
# Set the Spark cluster connection
sc = pyspark.SparkContext.getOrCreate(conf)
# Set the Hadoop configurations to access Ceph S3
import os
(ceph_key, ceph_secret, ceph_host) = (os.getenv('DH_CEPH_KEY'), os.getenv('DH_CEPH_SECRET'), os.getenv('DH_CEPH_HOST'))
ceph_key = 'DTG5R3EEWN9JBYJZH0DF'
ceph_secret = 'pdcEGFERILlkRDGrCSxdIMaZVtNCOKvYP4Gf2b2x'
ceph_host = 'http://storage-016.infra.prod.upshift.eng.rdu2.redhat.com:8080'
sc._jsc.hadoopConfiguration().set("fs.s3a.access.key", ceph_key)
sc._jsc.hadoopConfiguration().set("fs.s3a.secret.key", ceph_secret)
sc._jsc.hadoopConfiguration().set("fs.s3a.endpoint", ceph_host)
#Get the SQL context
sqlContext = pyspark.SQLContext(sc)
#Read the Prometheus JSON BZip data
jsonUrl = "s3a://DH-DEV-PROMETHEUS-BACKUP/prometheus-openshift-devops-monitor.1b7d.free-stg.openshiftapps.com/"+metric_name+"/"
jsonFile = sqlContext.read.option("multiline", True).option("mode", "PERMISSIVE").json(jsonUrl)
import pyspark.sql.functions as F
from pyspark.sql.types import StringType
from pyspark.sql.types import IntegerType
from pyspark.sql.types import TimestampType
# create function to convert POSIX timestamp to local date
def convert_timestamp(t):
return datetime.fromtimestamp(float(t))
def format_df(df):
#reformat data by timestamp and values
df = df.withColumn("values", F.explode(df.values))
df = df.withColumn("timestamp", F.col("values").getItem(0))
df = df.withColumn("values", F.col("values").getItem(1))
# drop null values
df = df.na.drop(subset=["values"])
# cast values to int
df = df.withColumn("values", df.values.cast("int"))
#df = df.withColumn("timestamp", df.values.cast("int"))
# define function to be applied to DF column
udf_convert_timestamp = F.udf(lambda z: convert_timestamp(z), TimestampType())
df = df.na.drop(subset=["timestamp"])
# convert timestamp values to datetime timestamp
df = df.withColumn("timestamp", udf_convert_timestamp("timestamp"))
# drop null values
df = df.na.drop(subset=["values"])
# calculate log(values) for each row
#df = df.withColumn("log_values", F.log(df.values))
return df
def extract_from_json(json, name, select_labels, where_labels):
#Register the created SchemaRDD as a temporary variable
json.registerTempTable(name)
#Filter the results into a data frame
query = "SELECT values"
# check if select labels are specified and add query condition if appropriate
if len(select_labels) > 0:
query = query + ", " + ", ".join(select_labels)
query = query + " FROM " + name
# check if where labels are specified and add query condition if appropriate
if len(where_labels) > 0:
query = query + " WHERE " + " AND ".join(where_labels)
print("SQL QUERRY: ", query)
df = sqlContext.sql(query)
#sample data to make it more manageable
#data = data.sample(False, fraction = 0.05, seed = 0)
# TODO: get rid of this hack
#df = sqlContext.createDataFrame(df.head(1000), df.schema)
return format_df(df)
if label != "":
select_labels = ['metric.' + label]
else:
select_labels = []
where_labels = {"metric.quantile='0.9'","metric.hostname='free-stg-master-03fb6'"}
# get data and format
df = extract_from_json(jsonFile, metric_name, select_labels, where_labels)
select_labels = []
df_pd = df.toPandas()
df_pd = df_pd[["values","timestamp"]]
df_pd
df_pd.sort_values(by='timestamp')
df_pd.set_index("timestamp")
train_frame = df_pd[0 : int(0.7*len(df_pd))]
test_frame = df_pd[int(0.7*len(df_pd)) : ]
sc.stop()
df_pd_trimmed = df_pd[df_pd["timestamp"] > datetime(2018,6,16,3,14)]
df_pd_trimmed = df_pd_trimmed[df_pd_trimmed["timestamp"] < datetime(2018,6,21,3,14)]
train_frame = df_pd_trimmed[0 : int(0.7*len(df_pd_trimmed))]
test_frame = df_pd_trimmed[int(0.7*len(df_pd_trimmed)) : ]
train_frame += 1
###Output
_____no_output_____
###Markdown
Triple Exponential Smoothing (Holt-Winters Method). Inspiration: https://github.com/statsmodels/statsmodels/blob/master/examples/notebooks/exponential_smoothing.ipynb
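For reference, the additive form of triple exponential smoothing (the standard Holt-Winters recursions, with smoothing parameters $\alpha$, $\beta$, $\gamma$, season length $m$ and forecast horizon $h$) updates a level, a trend and a seasonal component:
$$\begin{aligned}\ell_t &= \alpha\,(y_t - s_{t-m}) + (1-\alpha)\,(\ell_{t-1} + b_{t-1})\\ b_t &= \beta\,(\ell_t - \ell_{t-1}) + (1-\beta)\,b_{t-1}\\ s_t &= \gamma\,(y_t - \ell_{t-1} - b_{t-1}) + (1-\gamma)\,s_{t-m}\\ \hat{y}_{t+h\mid t} &= \ell_t + h\,b_t + s_{t+h-m(\lfloor (h-1)/m\rfloor + 1)}\end{aligned}$$
The multiplicative variant used further below replaces the additive seasonal terms with multiplicative (ratio) ones.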
###Code
!pip install patsy
from statsmodels.tsa.api import ExponentialSmoothing, SimpleExpSmoothing, Holt
import matplotlib.pyplot as plt
import pandas as pd
df_series = pd.Series(train_frame["values"], name='values',index=train_frame["timestamp"])
df_series
ax=df_series.plot(title="Original Data")
ax.set_ylabel("Value")
plt.show()
fit1 = SimpleExpSmoothing(df_series).fit(smoothing_level=0.2,optimized=False)
fcast1 = fit1.forecast(3).rename(r'$\alpha=0.2$')
ax = df_series.plot(marker='o', color='black', figsize=(12,8))
plt.show()
ax1 = fcast1.plot(marker='o', color='blue', legend=True, figsize=(12,8))
fit1.fittedvalues.plot(marker='o', ax=ax1, color='green')
plt.show()
fit2 = ExponentialSmoothing(df_series, seasonal_periods=4, trend='add', seasonal='add').fit(use_boxcox=True)
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$\gamma$",r"$l_0$","$b_0$","SSE"])
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'smoothing_seasonal', 'initial_level', 'initial_slope']
results["Additive"] = [fit2.params[p] for p in params] + [fit2.sse]
ax = df_series.plot(figsize=(10,6), marker='o', color='black', title="Forecasts from Holt-Winters' additive method" )
fit2.fittedvalues.plot(ax=ax, style='--', color='green')
fit2.forecast(8).plot(ax=ax, style='--', marker='o', color='green', legend=True)
plt.show()
fit2 = ExponentialSmoothing(df_series, seasonal_periods=4, trend='add', seasonal='mult').fit(use_boxcox=True)
results=pd.DataFrame(index=[r"$\alpha$",r"$\beta$",r"$\phi$",r"$\gamma$",r"$l_0$","$b_0$","SSE"])
params = ['smoothing_level', 'smoothing_slope', 'damping_slope', 'smoothing_seasonal', 'initial_level', 'initial_slope']
results["Additive"] = [fit1.params[p] for p in params] + [fit1.sse]
ax = df_series.plot(figsize=(10,6), marker='o', color='black', title="Forecasts from Holt-Winters' multiplicative method" )
fit2.fittedvalues.plot(ax=ax, style='--', color='purple')
fit2.forecast(8).plot(ax=ax, style='--', marker='o', color='purple', legend=True)
plt.show()
###Output
/opt/app-root/lib/python3.6/site-packages/statsmodels/tsa/base/tsa_model.py:171: ValueWarning: No frequency information was provided, so inferred frequency S will be used.
% freq, ValueWarning)
|
Notebooks/RealTime.ipynb | ###Markdown
Dependancies
###Code
import mediapipe as mp
import cv2 as ocv
###Output
_____no_output_____
###Markdown
Variable Control Sliders
###Code
#mp_holistic Parameters
static_image_mode = False
model_complexity = 2 #set to 1 if on weaker hardware
smooth_landmarks = True
enable_segmentation = False
smooth_segmentation = False
holistic_min_detection_confidence = 0.5
holistic_min_tracking_confidence = 0.5
#Landmark Colour Control
#OpenCv window control
flip_image = False
exit_ = False
###Output
_____no_output_____
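###Markdown
The cell above defines these controls as plain constants. If actual interactive sliders are wanted (as the heading suggests), a minimal sketch with ipywidgets could look like the cell below. This is an illustration only: it assumes ipywidgets is installed, and the slider variable names are hypothetical; the rest of the notebook keeps using the plain constants.
###Code
from IPython.display import display
import ipywidgets as widgets

# Hypothetical slider objects; initial values mirror the constants defined above.
detection_conf_slider = widgets.FloatSlider(
    value=holistic_min_detection_confidence, min=0.0, max=1.0, step=0.05,
    description='det conf')
tracking_conf_slider = widgets.FloatSlider(
    value=holistic_min_tracking_confidence, min=0.0, max=1.0, step=0.05,
    description='track conf')
display(detection_conf_slider, tracking_conf_slider)
# The current values would then be read via detection_conf_slider.value etc.
###Output
_____no_output_____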
###Markdown
Pose Detection
###Code
mp_drawing = mp.solutions.drawing_utils
mp_drawing_styles = mp.solutions.drawing_styles
mp_holistic = mp.solutions.holistic
###Output
_____no_output_____
###Markdown
Camera control Brute Force Search For Available CamerasDocuments on Extra VideoCapture Api Preferences
###Code
# Search For Available Cameras (Run only to find which port is in use)
for i in range(1600):
cap = ocv.VideoCapture(i)
    ok, image = cap.read()
    if ok:
print(i)
cap.release()
###Output
_____no_output_____
###Markdown
Colour conversion and Pose Model Processing
###Code
def mediapipe_opencv_transform(image,mp_model):
image = ocv.cvtColor(image, ocv.COLOR_BGR2RGB) # Color space transform from ocv to mediapipe
image.flags.writeable = False #Set Image Array to read only(immutable)
results = mp_model.process(image) #Run model on the image array
image.flags.writeable = True #Set Image Array to be writable again(mutable)
image = ocv.cvtColor(image, ocv.COLOR_RGB2BGR) # Color space transform from mediapipe to ocv
return image,results
###Output
_____no_output_____
###Markdown
Drawing Landmarks
###Code
def landmarks(image, results):
mp_drawing.draw_landmarks(image,results.right_hand_landmarks,mp_holistic.HAND_CONNECTIONS,landmark_drawing_spec=mp_drawing_styles.get_default_pose_landmarks_style())
mp_drawing.draw_landmarks(image,results.left_hand_landmarks,mp_holistic.HAND_CONNECTIONS,landmark_drawing_spec=mp_drawing_styles.get_default_pose_landmarks_style())
pass
###Output
_____no_output_____
###Markdown
Video Capture
###Code
vidcap = ocv.VideoCapture(1400)
with mp_holistic.Holistic(
static_image_mode = static_image_mode,
model_complexity = model_complexity,
smooth_landmarks = smooth_landmarks,
smooth_segmentation = smooth_segmentation,
min_detection_confidence=holistic_min_detection_confidence,
min_tracking_confidence=holistic_min_tracking_confidence)\
as holistic:
while vidcap.isOpened():
# Camera input
# success is the boolean and image is the video frame output
success, image = vidcap.read()
# Run Model on Input and draw landmarks
image, results = mediapipe_opencv_transform(image,holistic)
landmarks(image,results)
# Selfie mode control
if ocv.waitKey(5) & 0xFF == ord('f'):
flip_image = not flip_image
# uncomment to test flip state
# print(flip_image)
if flip_image:
image = ocv.flip(image, 1)
# Camera Video Feed is just an arbitrary window name
ocv.imshow('Camera Video Feed', image)
# Exit Feed (using q key)
# reason for 0xff is waitKey() returns 32 bit integer but key input(Ascii) is 8 bit so u want rest of 32 to be 0 as 0xFF = 11111111 and & is bitwise operator
if ocv.waitKey(5) & 0xFF == ord('q'):
exit_ = not exit_
if exit_:
break
vidcap.release()
#exit_ reset to False is here because if you dont rerun the notebook and rather rerun the cell exit would be set to true
exit_ = False
ocv.destroyAllWindows()
results.left_hand_landmarks
###Output
_____no_output_____ |
old/parse.ipynb | ###Markdown
Parse the raw match exports into per-summoner, per-role statistics (normalised per minute of game time), then save the result to pickle and CSV.
###Code
NUMBER_OF_LINES = 217883
champion_roles = pull_data()
champions_mapper = {champion.id: champion.name for champion in cass.get_champions("EUW")}
summoners = {}
summoners_columns_mapper = {
'total_games': 0,
'wins': 1
}
role_names = ['TOP', 'JUNGLE', 'MIDDLE', 'BOTTOM', 'UTILITY']
columns_by_role = ['kills', 'deaths', 'assists', 'gold_earned', 'total_damage_dealt_to_champions',
'total_minions_killed', 'vision_score', 'vision_wards_bought', 'total_games', 'wins']
index = len(summoners_columns_mapper)
for role_name in role_names:
for column in columns_by_role:
column_key = role_name + '_' + column
summoners_columns_mapper[column_key] = index
index += 1
columns_mapper = {}
index = 0
with open('data/raw_data/match_all_merged.csv', encoding='utf8') as infile:
for line in infile:
split = line.rstrip('\n').split(';')
if index == 0:
columns_mapper = {key: value for value, key in enumerate(split)}
index += 1
continue
queue_id = float(split[columns_mapper['queueId']])
if queue_id != 420:
index += 1
continue
game_duration = float(split[columns_mapper['gameDuration']])
participant_identities = json.loads(split[columns_mapper['participantIdentities']]\
.replace('\'', '\"'))
participants = json.loads(split[columns_mapper['participants']]\
.replace('\'', '\"')\
.replace('False', '0')\
.replace('True', '1'))
champions = []
for participant in participants:
champions.append(participant['championId'])
roles = list(get_roles(champion_roles, champions[0:5]).items())
roles += list(get_roles(champion_roles, champions[5:10]).items())
for participantIdentity, participant, role in zip(participant_identities, participants, roles):
summoner_id = participantIdentity['player']['summonerId']
role_name = role[0]
participant_stats = participant['stats']
win = participant_stats['win']
kills = participant_stats['kills']
deaths = participant_stats['deaths']
assists = participant_stats['assists']
gold_earned = participant_stats['goldEarned']
total_damage_dealt_to_champions = participant_stats['totalDamageDealtToChampions']
total_minions_killed = participant_stats['totalMinionsKilled']
vision_score = participant_stats['visionScore']
vision_wards_bought = participant_stats['visionWardsBoughtInGame']
if summoner_id not in summoners:
summoners[summoner_id] = {key: 0 for key in summoners_columns_mapper}
summoners[summoner_id]['wins'] += win
summoners[summoner_id]['total_games'] += 1
summoners[summoner_id][role_name + '_wins'] += win
summoners[summoner_id][role_name + '_total_games'] += 1
summoners[summoner_id][role_name + '_kills'] += kills / game_duration * 60
summoners[summoner_id][role_name + '_deaths'] += deaths / game_duration * 60
summoners[summoner_id][role_name + '_assists'] += assists / game_duration * 60
summoners[summoner_id][role_name + '_gold_earned'] += gold_earned / game_duration * 60
summoners[summoner_id][role_name + '_total_damage_dealt_to_champions'] += total_damage_dealt_to_champions / game_duration * 60
summoners[summoner_id][role_name + '_total_minions_killed'] += total_minions_killed / game_duration * 60
summoners[summoner_id][role_name + '_vision_score'] += vision_score / game_duration * 60
summoners[summoner_id][role_name + '_vision_wards_bought'] += vision_wards_bought / game_duration * 60
clear_output(wait = True)
print(f'{index} / {NUMBER_OF_LINES}')
index += 1
for summoner in summoners.values():
for role_name in role_names:
total_games = summoner[role_name + '_total_games']
if total_games == 0:
total_games += 1
summoner[role_name + '_wins'] /= total_games
summoner[role_name + '_kills'] /= total_games
summoner[role_name + '_deaths'] /= total_games
summoner[role_name + '_assists'] /= total_games
summoner[role_name + '_gold_earned'] /= total_games
summoner[role_name + '_total_damage_dealt_to_champions'] /= total_games
summoner[role_name + '_total_minions_killed'] /= total_games
summoner[role_name + '_vision_score'] /= total_games
summoner[role_name + '_vision_wards_bought'] /= total_games
print(f'Number of summoners: {len(summoners)}')
print('Saving to pickle...')
with open('data/processed_data/summoners.pickle', 'wb') as handle:
pickle.dump(summoners, handle, protocol=pickle.HIGHEST_PROTOCOL)
print('Saved to \'data/processed_data/summoners.pickle\'')
print('Saving to csv...')
pd.DataFrame.from_dict(data=summoners, orient='index').to_csv('data/processed_data/summoners.csv', header=True)
print('Saved to \'data/processed_data/summoners.csv\'')
###Output
217880 / 217883
Number of summoners: 43583
Saving to pickle...
Saved to 'data/processed_data/summoners.pickle'
Saving to csv...
Saved to 'data/processed_data/summoners.csv'
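###Markdown
A small usage sketch (assuming the paths written by the cell above): reload the saved per-summoner table for downstream analysis.
###Code
import pandas as pd

summoners_df = pd.read_csv('data/processed_data/summoners.csv', index_col=0)
print(summoners_df.shape)
summoners_df.head()
###Output
_____no_output_____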
|
code/ion-svm_kaggle.ipynb | ###Markdown
Hi everyone! This is my first Kaggle Competition and Kernel. I tried working with Support Vector Machines and achieved a very high F1 macro score with them. I am sharing my results below. Dataset used: https://www.kaggle.com/cdeotte/data-without-drift I have used 5 different SVM models. For more details and detailed plots, go here: https://www.kaggle.com/cdeotte/one-feature-model-0-930/output The above link explains how 5 different models were used to create synthetic data. I do not have much experience with Machine Learning, so I have naturally explained things in a simple manner. Enjoy!
###Code
import tensorflow as tf
import numpy as np
from sklearn.metrics import f1_score
###Output
_____no_output_____
###Markdown
The below cell will input data and store them in numpy arrays. There are 5 models: Model 0, Model 1...Model 4. They are our estimations of the original respective models used to generate respective batches:1. Model 0: * **Training Batches** 0,1 * **Testing Batches** 0,3,8,10,11,12,13,14,15,16,17,18,19 * **Maximum Open Channels**: 12. Model 1: * **Training Batches** 2,6 * **Testing Batches** 4 * **Maximum Open Channels**: 13. Model 2: * **Training Batches** 3,7 * **Testing Batches** 1,9 * **Maximum Open Channels**: 34. Model 3: * **Training Batches** 4,9 * **Testing Batches** 5,7 * **Maximum Open Channels**: 105. Model 4: * **Training Batches** 5,8 * **Testing Batches** 2,6 * **Maximum Open Channels**: 5
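As a compact reference, the mapping above can be written out explicitly (a convenience sketch transcribed from the list; the later cells hard-code the same information):
###Code
# Mapping transcribed from the description above (training/test batches and max open channels per model)
model_train_batches = {0: [0, 1], 1: [2, 6], 2: [3, 7], 3: [4, 9], 4: [5, 8]}
model_test_batches = {
    0: [0, 3, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
    1: [4],
    2: [1, 9],
    3: [5, 7],
    4: [2, 6],
}
max_open_channels = {0: 1, 1: 1, 2: 3, 3: 10, 4: 5}
###Output
_____no_output_____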
###Code
data_path = '/kaggle/input/data-without-drift/'
train_data_file = data_path + 'train_clean.csv'
test_data_file = data_path + 'test_clean.csv'
def get_data(filename, train=True):
if(train):
with open(filename) as training_file:
split_size = 10
data = np.loadtxt(training_file, delimiter=',', skiprows=1)
signal = data[:,1]
channels = data[:,2]
signal = np.array_split(signal, split_size)
channels = np.array_split(channels, split_size)
data = None
return np.array(signal), np.array(channels)
else:
with open(filename) as training_file:
split_size = 4
data = np.loadtxt(training_file, delimiter=',', skiprows=1)
signal = data[:,1]
signal = np.array_split(signal, split_size)
data = None
return np.array(signal)
train_signal , train_channels = get_data(train_data_file)
test_signal = get_data(test_data_file, train=False)
test_model_signal = np.zeros((5,1000000))
test_model_channel = np.zeros((5,1000000))
test_model_signal[0][:500000] = train_signal[0].flatten()
test_model_signal[0][500000:] = train_signal[1].flatten()
test_model_signal[1][:500000] = train_signal[2].flatten()
test_model_signal[1][500000:] = train_signal[6].flatten()
test_model_signal[2][:500000] = train_signal[3].flatten()
test_model_signal[2][500000:] = train_signal[7].flatten()
test_model_signal[3][:500000] = train_signal[4].flatten()
test_model_signal[3][500000:] = train_signal[9].flatten()
test_model_signal[4][:500000] = train_signal[5].flatten()
test_model_signal[4][500000:] = train_signal[8].flatten()
test_model_channel[0][:500000] = train_channels[0].flatten()
test_model_channel[0][500000:] = train_channels[1].flatten()
test_model_channel[1][:500000] = train_channels[2].flatten()
test_model_channel[1][500000:] = train_channels[6].flatten()
test_model_channel[2][:500000] = train_channels[3].flatten()
test_model_channel[2][500000:] = train_channels[7].flatten()
test_model_channel[3][:500000] = train_channels[4].flatten()
test_model_channel[3][500000:] = train_channels[9].flatten()
test_model_channel[4][:500000] = train_channels[5].flatten()
test_model_channel[4][500000:] = train_channels[8].flatten()
###Output
_____no_output_____
###Markdown
The specs below refer to the specifications of each SVM model, namely C and gamma. You need a basic understanding of what an SVM is to understand the math behind these parameters. They were evaluated using a grid search for hyperparameter tuning. Refer to the documentation of sklearn.svm.SVC for more details. Below, the model is trained on the first 400000 entries and validated on the next 100000 entries. The remaining 500000 are unused. You could undersample and oversample to generate a well-balanced dataset, but the approach below also works.
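The grid search itself is not shown in this kernel; a minimal sketch of how C and gamma could be tuned with scikit-learn is given below (illustrative parameter values, and the fit call is commented out because it should be run on a small subsample for speed):
###Code
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = {'C': [0.1, 0.5, 1, 5, 10], 'gamma': [0.01, 0.1, 1]}
grid_search = GridSearchCV(SVC(kernel='rbf'), param_grid,
                           scoring='f1_macro', cv=3, n_jobs=-1)
# Example (commented out: x and y are only built in the training loop below,
# and a full grid search on 500k rows would be very slow):
# grid_search.fit(x[:20000], y[:20000])
# print(grid_search.best_params_, grid_search.best_score_)
###Output
_____no_output_____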
###Code
from sklearn.svm import SVC
models = []
specs = [[1.2,1],[0.1,1],[0.5,1],[7,0.01],[10,0.1]]
for k in range (5):
print("starting training model no: ", k)
x = test_model_signal[k].flatten()
y = test_model_channel[k].flatten()
y = np.array(y).astype(int)
x = np.expand_dims(np.array(x),-1)
model = SVC(kernel = 'rbf', C=specs[k][0],gamma = specs[k][1])
samples= 400000
    # trains in 10 sub-batches for speed; note that SVC.fit is not incremental, so each call retrains from scratch and the final model reflects only the last sub-batch
for i in range(10):
model.fit(x[i*samples//10:(i+1)*samples//10],y[i*samples//10:(i+1)*samples//10])
y_pred = model.predict(x[400000:500000])
y_true = y[400000:500000]
print(f1_score(y_true, y_pred, average=None))
print(f1_score(y_true, y_pred, average='macro'))
models.append(model)
###Output
starting training model no: 0
[0.99942855 0.97204194]
0.9857352455148416
starting training model no: 1
[0.98950849 0.99642754]
0.9929680178508269
starting training model no: 2
[0.97255454 0.97828286 0.9816969 0.98735409]
0.9799720959170334
starting training model no: 3
[0.8 0.82511211 0.84218399 0.8534202 0.86819845 0.87489464
0.87845751 0.8801643 0.88030013 0.87362792]
0.8576359244670078
starting training model no: 4
[0.94567404 0.95925495 0.96636798 0.97028753 0.97310097 0.9759043 ]
0.9650982952316117
###Markdown
The following is the testing process. Each batch is of length 100000, which can easily be seen by plotting the signal values. The model for each batch can be determined manually, or by calculating the average of all entries in each batch and matching it with the averages of the training batches.
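A sketch of the "match by average" idea mentioned above (an added illustration with hypothetical variable names; models with similar signal levels may not be separated this way, so the hard-coded list below is what is actually used):
###Code
# Mean signal level of each training model (arrays assembled earlier in this notebook)
train_means = np.array([test_model_signal[k].mean() for k in range(5)])

# Suggest a model for each 100k test batch by nearest mean
suggested_model_ref = []
for pec in range(20):
    batch = test_signal.flatten()[pec * 100000:(pec + 1) * 100000]
    suggested_model_ref.append(int(np.argmin(np.abs(train_means - batch.mean()))))
print(suggested_model_ref)
###Output
_____no_output_____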
###Code
model_ref = [0,2,4,0,1,3,4,3,0,2,0,0,0,0,0,0,0,0,0,0]
y_pred_all = np.zeros((2000000))
for pec in range(20):
print("starting prediction of test batch no: ", pec)
x_test = test_signal.flatten()[pec*100000:(pec+1)*100000]
x_test = np.expand_dims(np.array(x_test),-1)
test_pred = models[model_ref[pec]].predict(x_test)
y_pred_1 = np.array(test_pred).astype(int)
y_pred_all[pec*100000:(pec+1)*100000] = y_pred_1
y_pred_all = np.array(y_pred_all).astype(int)
###Output
starting prediction of test batch no: 0
starting prediction of test batch no: 1
starting prediction of test batch no: 2
starting prediction of test batch no: 3
starting prediction of test batch no: 4
starting prediction of test batch no: 5
starting prediction of test batch no: 6
starting prediction of test batch no: 7
starting prediction of test batch no: 8
starting prediction of test batch no: 9
starting prediction of test batch no: 10
starting prediction of test batch no: 11
starting prediction of test batch no: 12
starting prediction of test batch no: 13
starting prediction of test batch no: 14
starting prediction of test batch no: 15
starting prediction of test batch no: 16
starting prediction of test batch no: 17
starting prediction of test batch no: 18
starting prediction of test batch no: 19
###Markdown
The following is a good estimation of the LB. It is known that the first 600000 rows, i.e. the first 6 batches of the test data, are used for the public leaderboard, so we evaluate the results on 6 batches of validation data from the corresponding models.
###Code
model_ref = [0,0,1,2,3,4]
y_valid = np.zeros((1000000))
y_pred = np.zeros((1000000))
for k in range(6):
x = train_signal[k].flatten()
y = train_channels[k].flatten()
y = np.array(y).astype(int)
x = np.expand_dims(np.array(x),-1)
model = models[model_ref[k]]
y_pred[k*100000:(k+1)*100000] = model.predict(x[400000:500000])
y_valid[k*100000:(k+1)*100000]=y[400000:500000]
print(f1_score(y_valid, y_pred, average=None))
print(f1_score(y_valid, y_pred, average='macro'))
###Output
[0.99917212 0.99094251 0.9779509 0.97846288 0.9647749 0.94008288
0.87489464 0.87845751 0.8801643 0.88030013 0.87362792]
0.9308027905289973
###Markdown
The following writes the testing predictions into csv file for submission:
###Code
import pandas as pd
sub = pd.read_csv('/kaggle/input/liverpool-ion-switching/sample_submission.csv')
sub.iloc[:,1] = y_pred_all
sub.to_csv('submission.csv',index=False,float_format='%.4f')
print("saved the file")
###Output
saved the file
|
tutorial-contents-notebooks/.ipynb_checkpoints/203_activation-checkpoint.ipynb | ###Markdown
203 ActivationView more, visit my tutorial page: https://morvanzhou.github.io/tutorials/My Youtube Channel: https://www.youtube.com/user/MorvanZhouDependencies:* torch: 0.1.11* matplotlib
###Code
import torch
import torch.nn.functional as F
from torch.autograd import Variable
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Firstly generate some fake data
###Code
x = torch.linspace(-5, 5, 200) # x data (tensor), shape=(200, 1)
x = Variable(x)
x_np = x.data.numpy() # numpy array for plotting
###Output
_____no_output_____
###Markdown
Following are popular activation functions
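For reference, the functions computed below are $\mathrm{ReLU}(x)=\max(0,x)$, $\sigma(x)=\frac{1}{1+e^{-x}}$, $\tanh(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}$ and $\mathrm{softplus}(x)=\ln(1+e^{x})$.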
###Code
y_relu = F.relu(x).data.numpy()
y_sigmoid = torch.sigmoid(x).data.numpy()
y_tanh = F.tanh(x).data.numpy()
y_softplus = F.softplus(x).data.numpy()
# y_softmax = F.softmax(x)
# softmax is a special kind of activation function, it is about probability
# and will make the sum as 1.
###Output
C:\Users\morvanzhou\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\functional.py:1006: UserWarning: nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.
warnings.warn("nn.functional.sigmoid is deprecated. Use torch.sigmoid instead.")
C:\Users\morvanzhou\AppData\Local\Programs\Python\Python36\lib\site-packages\torch\nn\functional.py:995: UserWarning: nn.functional.tanh is deprecated. Use torch.tanh instead.
warnings.warn("nn.functional.tanh is deprecated. Use torch.tanh instead.")
###Markdown
Plot to visualize these activation functions
###Code
%matplotlib inline
plt.figure(1, figsize=(8, 6))
plt.subplot(221)
plt.plot(x_np, y_relu, c='red', label='relu')
plt.ylim((-1, 5))
plt.legend(loc='best')
plt.subplot(222)
plt.plot(x_np, y_sigmoid, c='red', label='sigmoid')
plt.ylim((-0.2, 1.2))
plt.legend(loc='best')
plt.subplot(223)
plt.plot(x_np, y_tanh, c='red', label='tanh')
plt.ylim((-1.2, 1.2))
plt.legend(loc='best')
plt.subplot(224)
plt.plot(x_np, y_softplus, c='red', label='softplus')
plt.ylim((-0.2, 6))
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____ |
Kaggle_Challenge_X4_XGBoost.ipynb | ###Markdown
###Code
# Installs
#%%capture
!pip install --upgrade category_encoders plotly
!pip install xgboost
# Imports
import os, sys
os.chdir('/content')
!git init .
!git remote add origin https://github.com/LambdaSchool/DS-Unit-2-Kaggle-Challenge.git
!git pull origin master
!pip install -r requirements.txt
os.chdir('module1')
# Imports
import pandas as pd
import numpy as np
import math
import sklearn
sklearn.__version__
from sklearn.model_selection import train_test_split
# Import the models
from sklearn.linear_model import LogisticRegressionCV
from sklearn.pipeline import make_pipeline
# Import encoder and scaler and imputer
import category_encoders as ce
from sklearn.preprocessing import StandardScaler
from sklearn.impute import SimpleImputer
# Import random forest classifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier
def main_program():
def wrangle(X):
# Wrangles train, validate, and test sets
X = X.copy()
# Convert date_recorded to datetime
X['date_recorded'] = pd.to_datetime(X['date_recorded'], infer_datetime_format=True)
# Extract components from date_recorded and drop the original column
X['year_recorded'] = X['date_recorded'].dt.year
X['month_recorded'] = X['date_recorded'].dt.month
X['day_recorded'] = X['date_recorded'].dt.day
X = X.drop(columns='date_recorded')
# Engineer new feature years - construction_year to date_recorded
X.loc[X['construction_year'] == 0, 'construction_year'] = np.nan
X['years'] = X['year_recorded'] - X['construction_year']
# Remove latitude outliers
X['latitude'] = X['latitude'].replace(-2e-08, np.nan)
        # Features with many zeros are likely NaNs
cols_with_zeros = ['construction_year', 'longitude', 'latitude', 'gps_height',
'population', 'amount_tsh']
for col in cols_with_zeros:
X[col] = X[col].replace(0, np.nan)
# Impute mean for years
X.loc[X['years'].isna(), 'years'] = X['years'].mean()
#X.loc[X['pump_age'].isna(), 'pump_age'] = X['pump_age'].mean()
# Impute mean for longitude and latitude based on region
average_lat = X.groupby('region').latitude.mean().reset_index()
average_long = X.groupby('region').longitude.mean().reset_index()
shinyanga_lat = average_lat.loc[average_lat['region'] == 'Shinyanga', 'latitude']
shinyanga_long = average_long.loc[average_long['region'] == 'Shinyanga', 'longitude']
X.loc[(X['region'] == 'Shinyanga') & (X['latitude'] > -1), ['latitude']] = shinyanga_lat[17]
X.loc[(X['region'] == 'Shinyanga') & (X['longitude'].isna()), ['longitude']] = shinyanga_long[17]
mwanza_lat = average_lat.loc[average_lat['region'] == 'Mwanza', 'latitude']
mwanza_long = average_long.loc[average_long['region'] == 'Mwanza', 'longitude']
X.loc[(X['region'] == 'Mwanza') & (X['latitude'] > -1), ['latitude']] = mwanza_lat[13]
X.loc[(X['region'] == 'Mwanza') & (X['longitude'].isna()) , ['longitude']] = mwanza_long[13]
#X.loc[X['amount_tsh'].isna(), 'amount_tsh'] = 0
# Clean installer
X['installer'] = X['installer'].str.lower()
X['installer'] = X['installer'].str[:4]
X['installer'].value_counts(normalize=True)
tops = X['installer'].value_counts()[:15].index
X.loc[~X['installer'].isin(tops), 'installer'] = 'other'
# Bin lga
#tops = X['lga'].value_counts()[:10].index
#X.loc[~X['lga'].isin(tops), 'lga'] = 'Other'
# Bin subvillage
tops = X['subvillage'].value_counts()[:25].index
X.loc[~X['subvillage'].isin(tops), 'subvillage'] = 'Other'
# Impute mean for a feature based on latitude and longitude
def latlong_conversion(feature, pop, long, lat):
radius = 0.1
radius_increment = 0.3
if math.isnan(pop):
pop_temp = 0
while pop_temp <= 1 and radius <= 2:
lat_from = lat - radius
lat_to = lat + radius
long_from = long - radius
long_to = long + radius
df = X[(X['latitude'] >= lat_from) &
(X['latitude'] <= lat_to) &
(X['longitude'] >= long_from) &
(X['longitude'] <= long_to)]
pop_temp = df[feature].mean()
radius = radius + radius_increment
else:
pop_temp = pop
if np.isnan(pop_temp):
new_pop = X_train[feature].mean()
else:
new_pop = pop_temp
return new_pop
X.loc[X['population'].isna(), 'population'] = X['population'].mean()
#X['population'] = X.apply(lambda x: latlong_conversion('population', x['population'], x['longitude'], x['latitude']), axis=1)
# Impute mean for tsh based on mean of source_class/basin/waterpoint_type_group
#def tsh_calc(tsh, source, base, waterpoint):
# if math.isnan(tsh):
# if (source, base, waterpoint) in tsh_dict:
# new_tsh = tsh_dict[source, base, waterpoint]
# return new_tsh
# else:
# return tsh
# return tsh
#temp = X[~X['amount_tsh'].isna()].groupby(['source_class',
# 'basin',
# 'waterpoint_type_group'])['amount_tsh'].mean()
#tsh_dict = dict(temp)
#X['amount_tsh'] = X.apply(lambda x: tsh_calc(x['amount_tsh'], x['source_class'], x['basin'], x['waterpoint_type_group']), axis=1)
# Drop unneeded columns
unusable_variance = ['recorded_by', 'id', 'num_private', 'wpt_name']
X = X.drop(columns=unusable_variance)
return X
# Merge train_features.csv & train_labels.csv
train = pd.merge(pd.read_csv('../data/tanzania/train_features.csv'),
pd.read_csv('../data/tanzania/train_labels.csv'))
# Read test_features.csv & sample_submission.csv
test = pd.read_csv('../data/tanzania/test_features.csv')
sample_submission = pd.read_csv('../data/tanzania/sample_submission.csv')
# Split train into train & val. Make val the same size as test.
target = 'status_group'
train, val = train_test_split(train, test_size=len(test),
stratify=train[target], random_state=42)
# Wrangle train, validate, and test sets in the same way
train = wrangle(train)
val = wrangle(val)
test = wrangle(test)
# Arrange data into X features matrix and y target vector
X_train = train.drop(columns=target)
y_train = train[target]
X_val = val.drop(columns=target)
y_val = val[target]
X_test = test
# Make pipeline!
pipeline = make_pipeline(
ce.OrdinalEncoder(),
SimpleImputer(strategy='mean'),
XGBClassifier(max_depth=6, n_estimators=1400, learning_rate=0.05)
)
#(n_estimators=1400,
# random_state=42,
# min_samples_split=5,
# min_samples_leaf=1,
# max_features='auto',
# max_depth=30,
# bootstrap=True,
# n_jobs=-1,
# verbose = 1)
#)
# Fit on train, score on val
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_val)
print('Validation Accuracy', accuracy_score(y_val, y_pred))
#pd.set_option('display.max_rows', 200)
#model = pipeline.named_steps['randomforestclassifier']
#encoder = pipeline.named_steps['ordinalencoder']
#encoded_columns = encoder.transform(X_train).columns
#importances = pd.Series(model.feature_importances_, encoded_columns)
#importances.sort_values(ascending=False)
assert all(X_test.columns == X_train.columns)
y_pred = pipeline.predict(X_test)
submission = sample_submission.copy()
submission['status_group'] = y_pred
submission.to_csv('/content/submission-f3.csv', index=False)
main_program()
#for i in range(3, 10):
# main_program(i)
#for i in range(1, 6):
# i = i * 5
# print('lga bins: ', i)
# main_program(i)
#i = 25
#for j in range(2, 6):
# j = j * 5
# for k in range(2, 6):
# k = k * 5
# print('installer bins: ', i, 'funder bins: ', j,'subvillage bins: ', k)
# main_program( i, j, k)
#pd.set_option('display.max_rows', 200)
#model = pipeline.named_steps['randomforestclassifier']
#encoder = pipeline.named_steps['ordinalencoder']
#encoded_columns = encoder.transform(X_train).columns
#importances = pd.Series(model.feature_importances_, encoded_columns)
#importances.sort_values(ascending=False)
#assert all(X_test.columns == X_train.columns)
#y_pred = pipeline.predict(X_test)
#submission = sample_submission.copy()
#submission['status_group'] = y_pred
#submission.to_csv('/content/submission-f3.csv', index=False)
###Output
_____no_output_____ |
notebooks/gbank_to_fasta_conversion_dev.ipynb | ###Markdown
Have to add gene info to feature parser in genbank_parsers.py
###Code
from Bio import SeqIO
from Bio import SeqFeature
for k,v in feat1.iteritems():
for ft in v:
if ft.type == 'gene':
gen_quals = ft.qualifiers
gene = gen_quals['gene']
print(gen_quals)
#if 'note' in gen_quals:
# note = gen_quals['note']
# print(note)
###Output
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['trnL-UAA'], 'gene': ['trnL']}
{'note': ['trnF-GAA'], 'gene': ['trnF']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu (UAA)'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['trnL-UAA'], 'gene': ['trnL']}
{'note': ['trnF-GAA'], 'gene': ['trnF']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu; contains intron and exon'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu; tRNA-Leu(UAA)'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; contains intron and exon'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu; contains intron'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; tRNA-Leu(UAA)'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu (UAA)'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['trnL-UAA'], 'gene': ['trnL']}
{'note': ['trnF-GAA'], 'gene': ['trnF']}
{'gene': ['trnL']}
{'note': ['contains intron and exon'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['trnL-UAA'], 'gene': ['trnL']}
{'note': ['trnF-GAA'], 'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; contains intron and exon sequence'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; contains intron and exon sequence'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['trnL-UAA'], 'gene': ['trnL']}
{'note': ['trnF-GAA'], 'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu; contains intron and exon'], 'gene': ['trnL']}
{'note': ['tRNA-Leu (UAA)'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu; contains intron and exon'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; trnL (UAA); contains intron and exon'], 'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['trnL-UAA'], 'gene': ['trnL']}
{'note': ['trnF-GAA'], 'gene': ['trnF']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; contains intron and exon'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; tRNA-Leu(UAA)'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; contains intron and exon sequence'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; contains intron and exon sequence'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'gene': ['trnL']}
{'gene': ['trnF']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; tRNA-Leu(UAA)'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu'], 'gene': ['trnL']}
{'note': ['tRNA-Leu; includes trnL intron'], 'gene': ['trnL']}
{'gene': ['trnL']}
{'gene': ['trnF']}
###Markdown
**Now added, will proceed from parser**
###Code
%load_ext autoreload
%autoreload 2
new_info = parse_gb_feature_dict(feat1)
# now working
gene_info = new_info[2]
gene_info['Salvia_pratensis:667668288']['note']
###Output
_____no_output_____
###Markdown
Add option for using gene name after sequence name
###Code
# now add options for both description and change the sequence name
d = 'no' # for description
a = 'yes' # for accession
n = 'yes' # for gene note
g = 'yes' # for GI
G = 'yes' # for gene
output = open('test_out6.fa', 'w')
for k,v in seqs.iteritems():
if a == 'yes':
name = annots[k]['seq_name']
epithet = k.split(':')[0]
newname = epithet+':'+name
elif g == 'yes':
newname = k.replace(':',':gi')
else:
newname = k
seqline = newname
if G == 'yes':
gene = gene_info[k]['gene']
genestr = ''
for gen in gene:
genestr = genestr+' '+gen
if n == 'yes':
if 'note' in gene_info[k]:
notes = gene_info[k]['note']
for nt in notes:
genestr = genestr + ' '+nt
seqline = seqline + ' '+genestr # doing it like this allows adding both gene and description
if d == 'yes':
descript = annots[k]['description']
seqline = seqline + ' '+descript
#final_seqname = newname+' '+descript
#else:
# seqline = newname
output.write('>'+seqline+'\n'+v+'\n')
output.close()
###Output
_____no_output_____ |
example_dense_vae.ipynb | ###Markdown
The purpose of this notebook is to set up a simple VAE that can learn poses of cubes.@yvan june 9 2018 Setup
###Code
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils import data
from torch.autograd import Variable
from torch import optim
from torchvision.datasets import ImageFolder
from torchvision.utils import make_grid
from torchvision.transforms import transforms
# constants
NEPOCHS = 100
BATCH_SIZE = 64
SEED = 1337
IMG_DIM = 64
ncpus = !nproc
NCPUS = int(ncpus[0])
IMG_PATH = '/home/yvan/data_load/imgs_jpg_clpr_128/'
torch.cuda.is_available(), torch.__version__, torch.cuda.device_count(), torch.cuda.device(0)
t = transforms.Compose([transforms.ToTensor()])
cube_dataset = ImageFolder(IMG_PATH, transform=t)
cube_loader = data.DataLoader(dataset=cube_dataset,
batch_size=BATCH_SIZE,
shuffle=False,
num_workers=NCPUS)
###Output
_____no_output_____
###Markdown
Create a simple VAE model.
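The loss implemented below is the usual negative ELBO with a squared-error reconstruction term:
$$\mathcal{L} = \lVert x - \hat{x} \rVert^2 \;-\; \frac{1}{2}\sum_j \left(1 + \log \sigma_j^2 - \mu_j^2 - \sigma_j^2\right),$$
where the latent code is sampled with the reparameterization trick $z = \mu + \sigma \odot \epsilon$, $\epsilon \sim \mathcal{N}(0, I)$; this matches `loss_func` and `reparameterize` in the cell below.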
###Code
# thanks to https://github.com/L1aoXingyu/pytorch-beginner/blob/master/08-AutoEncoder/Variational_autoencoder.py
# for his implementation of a simple VAE
class Vae(nn.Module):
def __init__(self):
super(Vae,self).__init__()
# encoder dense layers
self.dense1 = nn.Linear(3*IMG_DIM*IMG_DIM, int(IMG_DIM*IMG_DIM/50))
# two output vectors, one for mu, and one for log variance.
self.dense11 = nn.Linear(int(IMG_DIM*IMG_DIM/50), 10)
self.dense12 = nn.Linear(int(IMG_DIM*IMG_DIM/50), 10)
# decoder dense layers
self.dense3 = nn.Linear(10, int(IMG_DIM*IMG_DIM/50))
self.dense4 = nn.Linear(int(IMG_DIM*IMG_DIM/50), IMG_DIM*IMG_DIM*3)
def encode(self, x):
x = F.relu(self.dense1(x))
return self.dense11(x), self.dense12(x)
def reparameterize(self, mu, logvar):
std = logvar.mul(0.5).exp_()
if torch.cuda.is_available():
epsilon = torch.cuda.FloatTensor(std.size()).normal_()
else:
epsilon = torch.FloatTensor(std.size()).normal_()
epsilon = Variable(epsilon)
return epsilon.mul(std).add(mu)
def decode(self, x):
x = F.relu(self.dense3(x))
return F.sigmoid(self.dense4(x))
def forward(self, x):
mu, logvar = self.encode(x)
z = self.reparameterize(mu, logvar)
return self.decode(z), mu, logvar
def loss_func(reconstructed, original, mu, logvar):
mse = nn.MSELoss(size_average=False)(reconstructed, original)
kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
return mse + kld
model = Vae()
if torch.cuda.is_available():
model.cuda()
optimizer = optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(NEPOCHS):
model.train()
for i, batch in enumerate(cube_loader):
# get and flatten img, move to gpu
img, _ = batch
img = img.view(img.size(0), -1)
img = Variable(img)
if torch.cuda.is_available():
img = img.cuda()
# run our vae
reconstructed_batch, mu, logvar = model(img)
loss = loss_func(reconstructed_batch, img, mu, logvar)
# backprop the loss
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f'epoch{epoch}:{loss.item()}')
model.eval()
sample = torch.randn(64, 10).cuda()
sample = model.decode(sample).cpu()
firstimg = sample.view(64, 3, 64, 64)
grid = make_grid(firstimg)
grid = transforms.Compose([transforms.ToPILImage()])(grid)
grid
imggrid = make_grid(batch[0])
imggrid = transforms.Compose([transforms.ToPILImage()])(imggrid)
imggrid
###Output
_____no_output_____ |
ti_edgeai_tidl_tools_tflite.ipynb | ###Markdown
EdgeAI TIDL Tools - THIS IS NOT VALIDATED YET, as the Colab runtime starts with Python 3.7 by default. Introduction: This notebook follows the steps mentioned in the [edgeai-tidl-tools](https://github.com/TexasInstruments/edgeai-tidl-tools) GitHub repository and prepares a PC environment for model compilation and PC emulation of inference. The compiled models can be copied to an EVM/SK board for validation/benchmarking of DL inference on the target device. Prepare Colab Runtime: The default version used by the Colab instance is Ubuntu 18.04 with Python 3.7. The sections below set up the right Python 3.6 and pip3 versions.
###Code
!sudo update-alternatives --set python3 /usr/bin/python3.6
!curl https://bootstrap.pypa.io/pip/3.6/get-pip.py -o get-pip.py
!python3 get-pip.py --force-reinstall
###Output
update-alternatives: using /usr/bin/python3.6 to provide /usr/bin/python3 (python3) in manual mode
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 2108k 100 2108k 0 0 11.5M 0 --:--:-- --:--:-- --:--:-- 11.5M
Looking in indexes: https://pypi.org/simple, https://us-python.pkg.dev/colab-wheels/public/simple/
Collecting pip<22.0
Downloading pip-21.3.1-py3-none-any.whl (1.7 MB)
|████████████████████████████████| 1.7 MB 4.2 MB/s
[?25hCollecting setuptools
Downloading setuptools-59.6.0-py3-none-any.whl (952 kB)
|████████████████████████████████| 952 kB 38.2 MB/s
[?25hCollecting wheel
Downloading wheel-0.37.1-py2.py3-none-any.whl (35 kB)
Installing collected packages: wheel, setuptools, pip
Successfully installed pip-21.3.1 setuptools-59.6.0 wheel-0.37.1
[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv[0m
###Markdown
Clone and Set Up Edgeai-TIDL-Tools: The setup below is configured for the J7 device with Python3 API examples only. For the C++ examples, refer to the documentation in the repo and update the setup command accordingly.
###Code
!git clone https://github.com/TexasInstruments/edgeai-tidl-tools.git
!export DEVICE=j7 && cd edgeai-tidl-tools && source ./setup.sh --skip_arm_gcc_download --skip_cpp_deps
import tflite_runtime.interpreter as tflite
import time
import os
import sys
import numpy as np
import PIL
from PIL import Image, ImageFont, ImageDraw, ImageEnhance
import re
import platform
os.environ["TIDL_TOOLS_PATH"]= "/content/edgeai-tidl-tools/tidl_tools"
os.environ["LD_LIBRARY_PATH"]= "/content/edgeai-tidl-tools/tidl_tools"
os.environ["TIDL_RT_PERFSTATS"] = "1"
os.environ["DEVICE"] = "j7"
sys.path.append("/content/edgeai-tidl-tools/examples/osrt_python")
from common_utils import *
tidl_tools_path = os.environ["TIDL_TOOLS_PATH"]
required_options = {
"tidl_tools_path":"/content/edgeai-tidl-tools/tidl_tools",
"artifacts_folder":"/content/edgeai-tidl-tools/model-artifacts",
}
output_images_folder = "/content/edgeai-tidl-tools/outputs"
calib_images = ['/content/edgeai-tidl-tools/test_data/airshow.jpg',
'/content/edgeai-tidl-tools/test_data/ADE_val_00001801.jpg']
class_test_images = ['/content/edgeai-tidl-tools/test_data/airshow.jpg']
od_test_images = ['/content/edgeai-tidl-tools/test_data/ADE_val_00001801.jpg']
seg_test_images = ['/content/edgeai-tidl-tools/test_data/ADE_val_00001801.jpg']
DEVICE = os.environ["DEVICE"]
def infer_image(interpreter, image_files, config):
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
floating_model = input_details[0]['dtype'] == np.float32
batch = input_details[0]['shape'][0]
height = input_details[0]['shape'][1]
width = input_details[0]['shape'][2]
channel = input_details[0]['shape'][3]
new_height = height #valid height for modified resolution for given network
new_width = width #valid width for modified resolution for given network
imgs = []
# copy image data in input_data if num_batch is more than 1
shape = [batch, new_height, new_width, channel]
input_data = np.zeros(shape)
for i in range(batch):
imgs.append(Image.open(image_files[i]).convert('RGB').resize((new_width, new_height), PIL.Image.LANCZOS))
temp_input_data = np.expand_dims(imgs[i], axis=0)
input_data[i] = temp_input_data[0]
if floating_model:
input_data = np.float32(input_data)
for mean, scale, ch in zip(config['mean'], config['std'], range(input_data.shape[3])):
input_data[:,:,:, ch] = ((input_data[:,:,:, ch]- mean) * scale)
else:
input_data = np.uint8(input_data)
config['mean'] = [0, 0, 0]
config['std'] = [1, 1, 1]
interpreter.resize_tensor_input(input_details[0]['index'], [batch, new_height, new_width, channel])
interpreter.allocate_tensors()
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
outputs = [interpreter.get_tensor(output_detail['index']) for output_detail in output_details]
return imgs, outputs, new_height, new_width
tf_models_configs = {
# TFLite RT OOB Models
'cl-tfl-mobilenet_v1_1.0_224' : {
'model_path' : os.path.join("/content/edgeai-tidl-tools/models", 'mobilenet_v1_1.0_224.tflite'),
'source' : {'model_url': 'http://software-dl.ti.com/jacinto7/esd/modelzoo/latest/models/vision/classification/imagenet1k/tf1-models/mobilenet_v1_1.0_224.tflite', 'opt': True},
'mean': [127.5, 127.5, 127.5],
'std' : [1/127.5, 1/127.5, 1/127.5],
'num_images' : 3,
'num_classes': 1001,
'session_name' : 'tflitert',
'model_type': 'classification'
}
}
model = 'cl-tfl-mobilenet_v1_1.0_224'
download_model(tf_models_configs, model)
config = tf_models_configs[model]
if config['model_type'] == 'classification':
test_images = class_test_images
elif config['model_type'] == 'od':
test_images = od_test_images
elif config['model_type'] == 'seg':
test_images = seg_test_images
#set delegate options
delegate_options = {}
delegate_options.update(required_options)
delegate_options.update(optional_options)
# stripping off the ss-tfl- from model namne
delegate_options['artifacts_folder'] = delegate_options['artifacts_folder'] + '/' + model + '/'
if config['model_type'] == 'od':
delegate_options['object_detection:meta_layers_names_list'] = config['meta_layers_names_list'] if ('meta_layers_names_list' in config) else ''
delegate_options['object_detection:meta_arch_type'] = config['meta_arch_type'] if ('meta_arch_type' in config) else -1
os.makedirs(delegate_options['artifacts_folder'], exist_ok=True)
for root, dirs, files in os.walk(delegate_options['artifacts_folder'], topdown=False):
[os.remove(os.path.join(root, f)) for f in files]
[os.rmdir(os.path.join(root, d)) for d in dirs]
input_image = calib_images
numFrames = config['num_images']
if numFrames > delegate_options['advanced_options:calibration_frames']:
numFrames = delegate_options['advanced_options:calibration_frames']
c_interpreter = tflite.Interpreter(model_path=config['model_path'], \
experimental_delegates=[tflite.load_delegate(os.path.join(tidl_tools_path, 'tidl_model_import_tflite.so'), delegate_options)])
# run Compilation
for i in range(numFrames):
start_index = i%len(input_image)
input_details = c_interpreter.get_input_details()
batch = input_details[0]['shape'][0]
input_images = []
#for batch > 1 input images will be more than one in single input tensor
for j in range(batch):
input_images.append(input_image[(start_index+j)%len(input_image)])
imgs, output, new_height, new_width = infer_image(c_interpreter, input_images, config)
interpreter = tflite.Interpreter(model_path=config['model_path'], \
experimental_delegates=[tflite.load_delegate('libtidl_tfl_delegate.so', delegate_options)])
# run Inference
numFrames = 1
for i in range(numFrames):
start_index = i%len(test_images)
input_details = interpreter.get_input_details()
batch = input_details[0]['shape'][0]
input_images = []
#for batch > 1 input images will be more than one in single input tensor
for j in range(batch):
input_images.append(test_images[(start_index+j)%len(test_images)])
imgs, output, new_height, new_width = infer_image(interpreter, input_images, config)
images = []
if config['model_type'] == 'classification':
for j in range(batch):
classes, image = get_class_labels(output[0][j],imgs[j])
images.append(image)
print("\n", classes)
elif config['model_type'] == 'od':
for j in range(batch):
classes, image = det_box_overlay(output, imgs[j], config['od_type'])
images.append(image)
elif config['model_type'] == 'seg':
for j in range(batch):
classes, image = seg_mask_overlay(output[0][j], imgs[j])
images.append(image)
else:
print("Not a valid model type")
for j in range(batch):
output_file_name = "py_out_"+model+'_'+os.path.basename(input_images[j])
print("\nSaving image to ", output_images_folder)
if not os.path.exists(output_images_folder):
os.makedirs(output_images_folder)
images[j].save(output_images_folder + output_file_name, "JPEG")
###Output
_____no_output_____ |
fill_time_gaps_Medium.ipynb | ###Markdown
Fill in gaps in a time table
###Code
import pandas as pd
data = pd.read_csv("calls_server_across_time.csv").drop(["Unnamed: 0"], axis = 1)
data
df_org = data.copy(deep=True)
df_org['TimeGenerated'] = df_org['Time'].astype(object)
# Step 1: Convert Time to datetime object
df_org.TimeGenerated = pd.to_datetime(df_org.TimeGenerated, format="%Y/%m/%d %H:%M:%S")
# Step 2: Create an index that goes from your minimum time to your maximum time and create a time
# at the frequency you want. In our case, we want a new data point every minute => fre='min'
idx = pd.period_range(min(df_org.TimeGenerated), max(df_org.TimeGenerated), freq='min').strftime('%Y/%m/%d %H:%M:%S')
idx = pd.to_datetime(idx)
# Step 3: Build a dataframe based on this index
df_date = pd.DataFrame()
df_date['TimeGenerated'] = idx
# Step 4: Merge this dataframe to your original dataset. If there is no point in your original dataset then
# you should expect to have an NA value in the merge. Those points are exactly the points where we have no failed
# calls logged. Therefore we replace NA values (i.e. no failed calls) with a value of 0.
df_merge = pd.merge(left = df_org, right = df_date, on='TimeGenerated', how = 'right').fillna(0)
df_merge = df_merge.loc[:, ['TimeGenerated', 'Calls']]
df_merge
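# Sanity check (an added illustration, not part of the original post): after the
# merge no gap larger than one minute should remain between consecutive timestamps.
gaps = df_merge.sort_values('TimeGenerated')['TimeGenerated'].diff().dropna()
assert gaps.max() <= pd.Timedelta(minutes=1)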
# Save the above in a csv file
df_merge.to_csv("calls_server_across_time_filled.csv", sep=",")
###Output
_____no_output_____ |
_notebooks/2022-01-30-create_3d_art.ipynb | ###Markdown
"Create 3d nft art"> "Generate 3d wall art"- toc: true- branch: master- badges: true- comments: true- author: Riyadh Uddin- categories: [python, jupyter, tensorflow, nft, art, GAN]
###Code
!pip3 install numpy-stl
import os
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.colors as colors
import numpy as np
from stl import mesh
# Define the 8 vertices of the cube
vertices = np.array([\
[-1, -1, -1],
[+1, -1, -1],
[+1, +1, -1],
[-1, +1, -1],
[-1, -1, +1],
[+1, -1, +1],
[+1, +1, +1],
[-1, +1, +1]])
# Define the 12 triangles composing the cube
faces = np.array([\
[0,3,1],
[1,3,2],
[0,4,7],
[0,7,3],
[4,5,6],
[4,6,7],
[5,1,2],
[5,2,6],
[2,3,6],
[3,7,6],
[0,1,5],
[0,5,4]])
# Create the mesh
cube = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, f in enumerate(faces):
for j in range(3):
print(vertices[f[j],:])
cube.vectors[i][j] = vertices[f[j]]
# Write the mesh to file "cube.stl"
cube.save('cube.stl')
from PIL import Image
import matplotlib.pyplot as plt
im = Image.open("data/images/ctl.jpg")
plt.imshow(im)
grey_img = Image.open('data/images/ctl.jpg').convert('L')
plt.imshow(grey_img)
import numpy as np
from stl import mesh
# Define the 4 vertices of the flat plate
vertices = np.array([\
[-1, -1, -1],
[+1, -1, -1],
[+1, +1, -1],
[-1, +1, -1]])
# Define the 2 triangles composing the plate
faces = np.array([\
[1,2,3],
[3,1,0]
])
# Create the mesh
cube = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
for i, f in enumerate(faces):
for j in range(3):
cube.vectors[i][j] = vertices[f[j],:]
# Write the mesh to file "cube.stl"
cube.save('surface.stl')
grey_img = Image.open('data/images/ctl.jpg').convert('L')
max_size=(500,500)
max_height=10
min_height=0
#height=0 for minPix
#height=maxHeight for maxPix
grey_img.thumbnail(max_size)
imageNp = np.array(grey_img)
maxPix=imageNp.max()
minPix=imageNp.min()
print(imageNp)
(ncols,nrows)=grey_img.size
vertices=np.zeros((nrows,ncols,3))
for x in range(0, ncols):
for y in range(0, nrows):
pixelIntensity = imageNp[y][x]
z = (pixelIntensity * max_height) / maxPix
#print(imageNp[y][x])
vertices[y][x]=(x, y, z)
faces=[]
for x in range(0, ncols - 1):
for y in range(0, nrows - 1):
# create face 1
vertice1 = vertices[y][x]
vertice2 = vertices[y+1][x]
vertice3 = vertices[y+1][x+1]
face1 = np.array([vertice1,vertice2,vertice3])
# create face 2
vertice1 = vertices[y][x]
vertice2 = vertices[y][x+1]
vertice3 = vertices[y+1][x+1]
face2 = np.array([vertice1,vertice2,vertice3])
faces.append(face1)
faces.append(face2)
print(f"number of faces: {len(faces)}")
facesNp = np.array(faces)
# Create the mesh
surface = mesh.Mesh(np.zeros(facesNp.shape[0], dtype=mesh.Mesh.dtype))
for i, f in enumerate(faces):
for j in range(3):
surface.vectors[i][j] = facesNp[i][j]
# Write the mesh to file "cube.stl"
surface.save('surface.stl')
print(surface)
a = np.zeros((3, 3))
a[:,0]=3
print(a[:,0])
print(a)
! pip install jupyter-cadquery==2.2.1 matplotlib
# run cadquery 2.2.1 example jupyter notebook
import cadquery as cq
import cadquery.freecad_impl as cadfc
import matplotlib.pyplot as plt
! pip install vpython
from vpython import *
sphere()
from vpython import *
scene = canvas() # This is needed in Jupyter notebook and lab to make programs easily rerunnable
b = box(pos=vec(-4,2,0), color=color.red)
c1 = cylinder(pos=b.pos, radius=0.1, axis=vec(0,1.5,0), color=color.yellow)
s = sphere(pos=vec(4,-4,0), radius=0.5, color=color.green)
c2 = cylinder(pos=s.pos, radius=0.1, axis=vec(0,1.5,0), color=color.yellow)
t1 = text(text='box', pos=c1.pos+c1.axis, align='center', height=0.5,
color=color.yellow, billboard=True, emissive=True)
t2 = text(text='sphere', pos=c2.pos+c2.axis, align='center', height=0.5,
color=color.yellow, billboard=True, emissive=True)
t3 = text(text='Faces forward', pos=vec(-4,0,0),
color=color.cyan, billboard=True, emissive=True)
box(pos=t3.start, size=0.1*vec(1,1,1), color=color.red)
t4 = text(text='Regular text', pos=vec(-4,-1,0), depth=0.5, color=color.yellow,
start_face_color=color.red, end_face_color=color.green)
box(pos=t4.start, size=0.1*vec(1,1,1), color=color.red)
scene.caption = """<b>3D text can be "billboard" text -- always facing you.</b>
Note that the "Regular text" has different colors on the front, back and sides.
Right button drag or Ctrl-drag to rotate "camera" to view scene.
To zoom, drag with middle button or Alt/Option depressed, or use scroll wheel.
On a two-button mouse, middle is left + right.
Touch screen: pinch/extend to zoom, swipe or two-finger rotate."""
import psutil  # psutil was not imported above; needed for the battery query
psutil.sensors_battery()
###Output
_____no_output_____ |
notebooks/simulation/NewtonMethod.ipynb | ###Markdown
Newton's method. A recurrence that converges to $\sqrt{2}$:$$a_{n + 1} = \frac{1}{2}\left( a_n + \frac{2}{a_n} \right),\quad a_0 > 0$$
###Code
a = 1.0
print(a)
for n in range(10):
a = 0.5 * (a + 2/a)
print(a)
###Output
1.0
1.5
1.4166666666666665
1.4142156862745097
1.4142135623746899
1.414213562373095
1.414213562373095
1.414213562373095
1.414213562373095
1.414213562373095
1.414213562373095
###Markdown
Newton's method. For a function $f(x)$ we want to find a solution of $f(x) = 0$. The sequence is computed by the recurrence$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
###Code
def f(x):
return x**2 - 2
def df(x):
return 2*x
###Output
_____no_output_____
###Markdown
Set the initial value and the number of iterations
###Code
x = 2.0
N = 10
###Output
_____no_output_____
###Markdown
Generate the sequence with Newton's method
###Code
print(x)
for n in range(N):
x = x - f(x) / df(x)
print(x)
###Output
2.0
1.5
1.4166666666666667
1.4142156862745099
1.4142135623746899
1.4142135623730951
1.414213562373095
1.4142135623730951
1.414213562373095
1.4142135623730951
1.414213562373095
###Markdown
With the initial value $x = 2$, the iterates already agree to $1.41421356$ by the 4th step. From the 5th step onward two values simply alternate, so the iteration appears to have converged sufficiently. It is inefficient to keep iterating when no improvement in accuracy is seen, so let us add a stopping criterion. Here we stop the iteration and return the approximate solution once the absolute value of the residual falls below a threshold, or report an error if the stopping condition is not met within a fixed number of iterations. The increment of x, the relative residual, or a combination of these could also be used as the stopping criterion.
###Code
def Newton(x, f, df, eps=1.0e-8, maxiter=10):
y = x
for n in range(maxiter):
y = y - f(y) / df(y)
if (abs(f(y)) < eps):
return y
print("収束しなかった")
Newton(2, f, df, eps=1e-14)
###Output
_____no_output_____
###Markdown
There is a limit to the attainable accuracy. In double precision we cannot hope to push the residual down to $10^{-16}$.
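As an added illustration, double-precision machine epsilon is about $2.2 \times 10^{-16}$, so asking for a residual of $10^{-16}$ is at the very edge of what float64 arithmetic can represent:
###Code
import sys
print(sys.float_info.epsilon)  # ~2.220446049250313e-16 for IEEE 754 double precision
###Output
_____no_output_____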
###Code
Newton(2, f, df, eps=1e-16)
###Output
did not converge
###Markdown
Using a package. For details, see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.newton.html (scipy.optimize.newton)
###Code
from scipy.optimize import newton
newton(f, 2)
###Output
_____no_output_____
###Markdown
Complex Newton's method. Newton's method can also be extended to complex functions. As an example, let us find a cube root of $1$.
###Code
def f(z):
return z**3 - 1
def df(z):
return 3*z**2
Newton(-1+1.j, f, df)
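# Added sketch (illustration only): different starting points converge to different
# cube roots of 1, namely 1 and exp(+-2*pi*i/3); which root wins depends on the
# basin of attraction (the Newton fractal).
for z0 in (2.0 + 0.0j, -1 + 1j, -1 - 1j):
    print(z0, "->", Newton(z0, f, df))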
###Output
_____no_output_____ |
notebooks/taxon/taxon_eda.ipynb | ###Markdown
Count taxons within journeys Setup
###Code
def unique_taxon_flat_unique(taxon_list):
return sum(Counter(set([t for taxon in taxon_list for t in taxon.split(",")])).values())
def unique_taxon_nested_unique(taxon_list):
return sum(Counter(set([taxon for taxon in taxon_list])).values())
def unique_taxon_flat_pages(taxon_list):
return sum(Counter([t for taxon in taxon_list for t in taxon.split(",")]).values())
def unique_taxon_nested_pages(taxon_list):
return sum(Counter([taxon for taxon in taxon_list]).values())
df.iloc[0].Sequence
target = df.Taxon_List.iloc[1]
print(target)
print(unique_taxon_flat_unique(target))
print(unique_taxon_nested_unique(target))
print(unique_taxon_flat_pages(target))
print(unique_taxon_nested_pages(target))
df['taxon_flat_unique'] = df['Taxon_List'].map(unique_taxon_flat_unique)
df['taxon_nested_unique'] = df['Taxon_List'].map(unique_taxon_nested_unique)
df['taxon_flat_pages'] = df['Taxon_List'].map(unique_taxon_flat_pages)
df['taxon_nested_pages'] = df['Taxon_List'].map(unique_taxon_nested_pages)
df.describe().drop("count").applymap(lambda x: format(x,"f"))
df.describe().drop("count").applymap(lambda x: '%.2f' % x)
df[df.taxon_flat_unique == 429].Taxon_List.values
df[df.taxon_flat_unique == 0].Sequence.values
def taxon_split(taxon_list):
return [t for taxon in taxon_list for t in taxon.split(",")]
#### Build list of unique taxons, excluding "other"
taxon_counter = Counter()
for tup in df.itertuples():
taxons = taxon_split(tup.Taxon_List)
for taxon in taxons:
taxon_counter[taxon]+=1
len(taxon_counter)
list(taxon_counter.keys())[0:10]
taxon_counter.most_common(10)
taxon_df = pd.read_csv("taxon_level_df.tsv",sep='\t')
###Output
_____no_output_____
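###Markdown
Added sketch (not in the original notebook): a small literal example to illustrate how the four counting functions differ; `example_taxon_list` is a made-up input.
###Code
example_taxon_list = ["a,b", "b,c", "a,b"]
print(unique_taxon_flat_unique(example_taxon_list))    # 3 -> unique taxons after splitting: {a, b, c}
print(unique_taxon_nested_unique(example_taxon_list))  # 2 -> unique unsplit strings: {"a,b", "b,c"}
print(unique_taxon_flat_pages(example_taxon_list))     # 6 -> all split taxons, counting repeats
print(unique_taxon_nested_pages(example_taxon_list))   # 3 -> all unsplit strings, counting repeats
###Output
_____no_output_____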
###Markdown
Assign unique parent taxons per journey
###Code
df['subpaths'] = df['Page_List'].map(prep.subpaths_from_list)
for val in df[['Page_List','subpaths']].iloc[0].values:
pprint.pprint(val)
print("\n====")
###Output
_____no_output_____
###Markdown
Create new subpaths where each element is a (page, parent taxon titles) pair.
###Code
def get_taxon_name(taxon_id):
if taxon_id in taxon_df.content_id.values:
return taxon_df[taxon_df.content_id==taxon_id].iloc[0].title
else:
return None
def taxon_title(taxon_id_list):
return [get_taxon_name(taxon_id) for taxon_id in taxon_id_list]
def subpaths_from_pcd_list(pcd_list):
return [[(page,taxon_title(taxons)), (pcd_list[i + 1][0],taxon_title(pcd_list[i + 1][1]))]
for i, (page,taxons) in enumerate(pcd_list) if i < len(pcd_list) - 1]
test_journey = df[df.PageSeq_Length>4].iloc[0]
pprint.pprint([p for p,_ in test_journey.Taxon_Page_List])
for i,element in enumerate(subpaths_from_pcd_list(test_journey.Taxon_Page_List)):
print(i,element,"\n====")
df['taxon_subpaths'] = df['Taxon_Page_List'].map(subpaths_from_pcd_list)
# taxon_title(df.Taxon_Page_List.iloc[0][0][1])
# def add_to_taxon_dict(diction,taxon_list):
# for taxon in taxon_list:
# if taxon not in diction.keys():
# diction[taxon] = get_taxon_name(taxon)
# df.Taxon_Page_List.iloc[0][0][1]
# df.Taxon_Page_List.iloc[0][1][1]
# taxon_name = {}
# add_to_taxon_dict(taxon_name,df.Taxon_Page_List.iloc[0][0][1]+df.Taxon_Page_List.iloc[0][1][1])
# taxon_name
# df.shape
# print(datetime.datetime.now().strftime("[%H:%M:%S]"))
###Output
_____no_output_____
###Markdown
Graph viz Build a page-to-page transition graph from the journeys, colouring nodes by their taxon titles.
###Code
def add_page_taxon(diction,key,value):
if key not in diction.keys():
diction[key] = value
adjacency_list = {}
adjacency_counter = Counter()
freq_filter = 1000
dupe_count = 0
page_taxon_title = {}
for i,tup in enumerate(df.sort_values(by="Occurrences",ascending=False).itertuples()):
# for page,taxon in tup.Taxon_Page_List:
for subpath in subpaths_from_pcd_list(tup.Taxon_Page_List):
start = subpath[0][0]
end = subpath[1][0]
# print(subpath[0][1]+subpath[1][1])
adjacency_counter [(start,end)] += tup.Occurrences
if start!=end and adjacency_counter[(start,end)] >= freq_filter:
add_page_taxon(page_taxon_title,start,subpath[0][1])
add_page_taxon(page_taxon_title,end,subpath[1][1])
if start in adjacency_list.keys():
if end not in adjacency_list[start]:
adjacency_list[start].append(end)
else:
adjacency_list[start] = [end]
if len(adjacency_list)>1000:
break
if i%30000==0:
print(datetime.datetime.now().strftime("[%H:%M:%S]"),"ind",i)
print(len(adjacency_list))
len(adjacency_list)
list(adjacency_list.items())[0:10]
list(page_taxon_title.items())[0:10]
for page,taxons in page_taxon_title.items():
page_taxon_title[page] = "_".join([taxon if taxon is not None else "None" for taxon in taxons])
###Output
_____no_output_____
###Markdown
Set up colors
###Code
N = len(page_taxon_title.values())
HSV_tuples = [(x*1.0/N, 0.5, 0.5) for x in range(N)]
RGB_tuples = map(lambda x: colorsys.hsv_to_rgb(*x), HSV_tuples)
RGB_tuples = list(RGB_tuples)
taxon_color = {taxon:RGB_tuples[i] for i,taxon in enumerate(page_taxon_title.values())}
digraph = nx.DiGraph()
for node,out_nodes in adjacency_list.items():
color = taxon_color[page_taxon_title[node]]
digraph.add_node(node,taxon=page_taxon_title[node],color=color)
for o_node in out_nodes:
color = taxon_color[page_taxon_title[o_node]]
digraph.add_node(o_node,taxon=page_taxon_title[o_node],color=color)
digraph.add_edge(node,o_node)
digraph.edges()
edges = digraph.edges()
color_map = [data['color'] for _,data in digraph.nodes(data=True)]
pos = nx.nx_agraph.graphviz_layout(digraph, prog='neato')
nx.draw(digraph, pos, node_size=20, fontsize=12, edges=edges, node_color=color_map)
plt.show()
###Output
_____no_output_____ |
demo/02. Methods/04. WeightedDEW.ipynb | ###Markdown
7. WeightedDEW This notebook shows how to run the WeightedDEW method on a ViMMS dataset
###Code
%matplotlib inline
%load_ext autoreload
%autoreload 2
import sys
sys.path.append('../..')
from pathlib import Path
from vimms.MassSpec import IndependentMassSpectrometer
from vimms.Controller import WeightedDEWController
from vimms.Environment import Environment
from vimms.Common import *
###Output
_____no_output_____
###Markdown
Load the data
###Code
data_dir = os.path.abspath(os.path.join(os.getcwd(),'..','..','tests','fixtures'))
dataset = load_obj(os.path.join(data_dir, 'QCB_22May19_1.p'))
###Output
_____no_output_____
###Markdown
Run the WeightedDEW Controller
###Code
rt_range = [(0, 1440)]
min_rt = rt_range[0][0]
max_rt = rt_range[0][1]
min_ms1_intensity = 5000
mz_tol = 10
r = 20
N = 10
t0 = 20
isolation_width = 1
mass_spec = IndependentMassSpectrometer(POSITIVE, dataset)
controller = WeightedDEWController(POSITIVE, N, isolation_width, mz_tol,
r,min_ms1_intensity, exclusion_t_0 = t0, log_intensity = True)
# create an environment to run both the mass spec and controller
env = Environment(mass_spec, controller, min_rt, max_rt, progress_bar=True)
# set the log level to WARNING so we don't see too many messages when environment is running
set_log_level_warning()
# run the simulation
env.run()
###Output
(1440.000s) ms_level=2: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉| 1439.8000000001498/1440 [00:26<00:00, 53.51it/s]
###Markdown
Simulated results are saved to the following .mzML file and can be viewed in tools like ToppView or using other mzML file viewers.
###Code
set_log_level_debug()
mzml_filename = 'weighteddew_controller.mzML'
out_dir = os.path.join(os.getcwd(), 'results')
env.write_mzML(out_dir, mzml_filename)
###Output
2021-08-30 15:04:38.995 | DEBUG | vimms.Environment:write_mzML:149 - Writing mzML file to C:\Users\joewa\Work\git\vimms\demo\02. Methods\results\weighteddew_controller.mzML
2021-08-30 15:04:49.935 | DEBUG | vimms.Environment:write_mzML:152 - mzML file successfully written!
|
Ipynb/data_analytics_ipynb/wifi_lin_reg_mean_nointercept.ipynb | ###Markdown
Research Practicum This notebook contains a model which uses the mean value per hour per day per room. GET DATA
###Code
#import pandas package to read and merge csv files
import pandas as pd
#import csv package for reading from and writing to csv files
import csv
# Import package numpy for numeric computing
import numpy as np
# Import package matplotlib for visualisation/plotting
import matplotlib.pyplot as plt
%matplotlib inline
# read data from csv file into a data frame
# code is now OS agnostic
import os
a = '..' # removed slash
b = 'cleaned_data' # removed slash
c = 'full.csv'
print(os.path.join(a, b, c))
wifi_df = pd.read_csv(os.path.join(a, b, c), names=['room', 'event_time', 'ass', 'auth'])
# check data loaded into data frame correctly
wifi_df.head()
# check data loaded into data frame correctly
wifi_df.tail()
###Output
_____no_output_____
###Markdown
CLEAN DATA Convert timestamp to epoch time.
###Code
import time
from dateutil.parser import parse
def convert_to_epoch(df, column):
    '''function that reads in a dataframe with a column containing values in timestamp format and converts those values to epoch format
requires module time and parse function from dateutil.parser
    parameters
----------
df is a dataframe
    column is a string that denotes the name of the column containing values in timestamp format
'''
#for loop that iterates through each row in the dataframe
for i in range(df.shape[0]):
# variable 'x' is assigned the value from the column and row 'i'
x = df[column][i]
# variable 'y' is assigned the result of variable 'x' passed through the parse method
y = parse(x)
# variable 'epoch' is assigned 'y' value converted to epoch time
epoch = int(time.mktime(y.timetuple()))
# set column value to value of variable 'epoch'
df.set_value(i, column, epoch)
return df
convert_to_epoch(wifi_df, 'event_time')
## Original code used to create convert_to_epoch() function above
#import time
#from dateutil.parser import parse
#for i in range(wifi_log_data.shape[0]):
# x = wifi_log_data["event_time"][i]
# y = parse(x)
# epoch = int(time.mktime(y.timetuple()))
# wifi_log_data.set_value(i,"event_time",epoch)
###Output
_____no_output_____
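###Markdown
Added note (sketch, not part of the original notebook): the row-by-row loop above can be slow on large frames; pandas can do the conversion in a vectorised way, assuming the timestamps are in a format pandas can parse. Note that `time.mktime` interprets timestamps in local time, so the two approaches can differ by the UTC offset. `wifi_epoch_demo` is an illustrative name and does not affect `wifi_df`.
###Code
# vectorised alternative: parse the whole column at once, then convert to seconds since epoch
wifi_epoch_demo = pd.read_csv(os.path.join(a, b, c), names=['room', 'event_time', 'ass', 'auth'])
wifi_epoch_demo['event_time'] = pd.to_datetime(wifi_epoch_demo['event_time']).astype('int64') // 10**9
wifi_epoch_demo.head()
###Output
_____no_output_____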
###Markdown
Clean Room Identifiers
###Code
def room_number(df, room_column):
'''function that reads in a dataframe with a column containing room information in the format 'campus > building > roomcode-xxx'
and replaces the values in the column with just the room ID which is the last character of the string in that column.
'''
# for loop that iterates through each row in the df
for i in range(df.shape[0]):
# selects last character of the string in the room_column which is the room ID
df.set_value(i, room_column, df[room_column][i][-1:])
return df
room_number(wifi_df, 'room')
wifi_df.head()
###Output
_____no_output_____
###Markdown
Add building.
###Code
wifi_df['building'] = 'school of computer science'
wifi_df.head()
###Output
_____no_output_____
###Markdown
Clean Occupancy Data
###Code
# put survey data in a dataframe
a = '..' # removed slash
b = 'cleaned_data' # removed slash
c = 'survey_data.csv'
print(os.path.join(a, b, c))
occupancy_df = pd.read_csv(os.path.join(a, b, c))
occupancy_df.head()
# delete column 'Unnamed: 0'
del occupancy_df['Unnamed: 0']
occupancy_df.head()
###Output
_____no_output_____
###Markdown
Convert EPOCH time into human-readable format.
###Code
# convert 'event_time' values from EPOCH to DATETIME
wifi_df['event_time'] = pd.to_datetime(wifi_df.event_time, unit='s')
# use event_time as dataframe index
wifi_df.set_index('event_time', inplace=True)
wifi_df.head()
# create two new columns, event_hour and event_day
wifi_df['event_hour'] = wifi_df.index.hour
wifi_df['event_day'] = wifi_df.index.day
wifi_df.head()
# convert 'event_time' values from EPOCH to DATETIME
occupancy_df['event_time'] = pd.to_datetime(occupancy_df.event_time, unit='s')
# use event_time as dataframe index
occupancy_df.set_index('event_time', inplace=True)
occupancy_df.head()
# create two new columns, event_hour and event_day
occupancy_df['event_hour'] = occupancy_df.index.hour
occupancy_df['event_day'] = occupancy_df.index.day
occupancy_df.head()
###Output
_____no_output_____
###Markdown
DATA ANALYSIS Survey data contains one recorded value per room, per day, per hour. Here, we take the max reading per hour, per day, per room.
###Code
df_max_conn = wifi_df.groupby(['room', 'event_day', 'event_hour'], as_index=False).mean()
df_max_conn.tail()
# merge data into single dataframe
df_max_conn['room'] = df_max_conn['room'].astype(int)
full_df = pd.merge(df_max_conn, occupancy_df, on=['room', 'event_day', 'event_hour'], how='inner')
full_df.head(15)
# add column for number of estimated occupants based on room capacity * occupancy rate
def estimate_occ(df,room, occupancy_rate):
    '''function that calculates the estimated number of room occupants
parameters
----------
df is a dataframe with columns room and occupancy_rate
room is a string denoting a column in df that contains INT values representing room IDs
occupancy_rate is a string denoting a column in df that contains DECIMAL values that represent the estimated room occupancy rate
'''
#for loop that iterates through each row of the df
for i in range(df.shape[0]):
#room two and three have capacity of 90
if df[room][i] == 2 or df[room][i] == 3:
# calculate estimated occupants for row, assign to variable 'est'
est = df[occupancy_rate][i] * 90
#set value in new column
df.set_value(i, 'est_occupants', est)
        #room four has a capacity of 220
elif df[room][i] == 4:
est = df[occupancy_rate][i] * 220
df.set_value(i, 'est_occupants', est)
else:
raise ValueError('Incorrect room number:', df[room][i])
estimate_occ(full_df, 'room', 'occupancy')
full_df.head()
# look at correlations for estimated occupants and associated devices
full_df[['ass', 'auth', 'est_occupants']].corr()
fig, axs = plt.subplots(1, 2, sharey=True)
full_df.plot(kind='scatter', x='ass', y='est_occupants', label='%.3f'
% full_df[['ass', 'est_occupants']].corr().as_matrix()[0,1], ax=axs[0], figsize=(15, 8))
full_df.plot(kind='scatter', x='auth', y='est_occupants', label='%.3f'
% full_df[['auth', 'est_occupants']].corr().as_matrix()[0,1], ax=axs[1])
###Output
_____no_output_____
###Markdown
Linear Regression Model
###Code
import statsmodels.formula.api as sm
# can also use associated but higher correlation with authenticated
lm = sm.ols(formula='est_occupants ~ auth', data=full_df).fit()
print(lm.params)
print(lm.summary())
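# Added sketch (not part of the original notebook): the prediction step below uses only
# lm.params['auth'], i.e. it treats the fit as if it had no intercept. To fit that
# no-intercept model explicitly, the intercept can be dropped in the formula with '- 1':
lm_noint = sm.ols(formula='est_occupants ~ auth - 1', data=full_df).fit()
print(lm_noint.params)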
full_df['prediction_max'] = None
for i in range(full_df.shape[0]):
full_df.set_value(i, 'prediction_max', full_df['auth'][i] * lm.params['auth'])
# add column to dataframe for prediction category
full_df['cat_predict'] = None
def set_occupancy_category(df, room, linear_predict, cat_predict):
'''function that converts linear predictions to a defined category and updates the dataframe passed through
Parameters
----------
df: a dataframe
room: a string that is the column in df containing room id values of type INT
linear_predict: a string that is the column in df containing linear predictions
    cat_predict: a string that is the column in df that will contain category predictions
'''
for i in range(df.shape[0]):
# assign room capacity
if df[room][i] == 2 or df[room][i] == 3:
cap = 90
elif df[room][i] == 4:
cap = 200
# calculate the occupancy rate and assign to variable 'ratio'
ratio = df[linear_predict][i]/ cap
# assign category based on ratio
if ratio < 0.13:
cat = 0.0
elif ratio < 0.38:
cat = 0.25
elif ratio < 0.5:
cat = 0.5
elif ratio < 0.88:
cat = 0.75
else:
cat = 1.0
# set category value in df
df.set_value(i, cat_predict, cat)
set_occupancy_category(full_df, 'room', 'prediction_max', 'cat_predict')
full_df.head()
###Output
_____no_output_____
###Markdown
Check accuracy of model according to survey data
###Code
full_df['accurate'] = None
for i in range(full_df.shape[0]):
full_df.set_value(i, 'accurate', 1 if full_df['occupancy'][i] == full_df['cat_predict'][i] else 0)
full_df.head()
accuracy = full_df['accurate'].sum()/full_df.shape[0]
accuracy
###Output
_____no_output_____ |
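###Markdown
A possible follow-up (added sketch, not in the original notebook): cross-tabulating the surveyed occupancy category against the predicted category shows where the model tends to miss, rather than only the overall accuracy.
###Code
pd.crosstab(full_df['occupancy'], full_df['cat_predict'])
###Output
_____no_output_____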
adam_api_repo_curve_anomaly_detection/.ipynb_checkpoints/jiang_run_notebook-checkpoint.ipynb | ###Markdown
==========================================================================================
app = Flask(__name__)

@app.route('/', methods=['GET', 'POST'])
def main():
    return render_template("template.html",
                           universe_indices=universe_indices,
                           len_universe_indices=len(universe_indices),
                           indices_repo=indices_dt,
                           len_indices_repo=len(indices_dt),
                           indices_mnemo=indices_mnemo,
                           indices_ric=indices_ric,
                           repo=repo,
                           repo_json=json.dumps(repo.tolist()),
                           repo_decoded=repo_decoded,
                           repo_decoded_json=json.dumps(repo_decoded),
                           rmses_repo=rmses_repo,
                           rmses_repo_json=json.dumps(rmses_repo.tolist()),
                           labels=json.dumps(time_series.tolist()),
                           outliers_indices=outliers_indices,
                           len_outliers_indices=len(outliers_indices),
                           accurate_indices=accurate_indices,
                           len_accurate_indices=len(accurate_indices),
                           seuil=seuil,
                           time_zone=time_zone,
                           today_time_exact=today_time_exact,
                           dates_repo=dates_repo)

@app.route('/stop', methods=['GET', 'POST'])
def shutdown():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()
    return 'Server shutting down...'

app.run(host='0.0.0.0', port=8000, threaded=True)
###Code
len(repo_decoded)
len(repo)
len(dates_repo)
plt.rcParams['axes.facecolor'] = 'white'
def plotCurve(dataSerieOriginal, dataSerieInterpolated, title):
refSize=5
plt.plot(dataSerieOriginal, 'o')
plt.plot(dataSerieInterpolated)
plt.xlabel("Maturity (Year)", fontsize=2*refSize, labelpad=3*refSize)
plt.ylabel("Rate (%)", fontsize=2*refSize, labelpad=3*refSize)
plt.tick_params(axis='x', labelsize=2*refSize, pad=int(refSize/2))
plt.tick_params(axis='y', labelsize=2*refSize, pad=int(refSize/2))
plt.title(title, fontsize = 2*refSize)
plt.show()
time_series.shape
#plot outliers
for k in outliers_indices:
date = time_series[k]
repoCurve = repo[k]
pred = repo_decoded[k]
#plt.figure()
sCleaned = pd.Series(pred,
index = list(map(lambda x : x/365,
list(filter(lambda x: x >= 90, date)))) )
sOrigin = pd.Series([repoCurve[x] for x in range(len(repoCurve)) if 90 <= date[x]],
index = list(map(lambda x : x/365,
list(filter(lambda x: x >= 90, date)))) )
plotCurve(sOrigin,sCleaned, dates_repo[k])
#max_error
#filter
#%matplotlib
#plot rmses and outliers points
sRMSE = pd.Series(rmses_repo, index = [datetime.strptime(d , '%d-%b-%Y') for d in dates_repo ])
sOutlier = pd.Series([rmses_repo[i] for i in outliers_indices ],
index = [datetime.strptime(dates_repo[i] , '%d-%b-%Y') for i in outliers_indices ])
plt.plot(sOutlier.sort_index(), 'o', color='k')
plt.plot(sRMSE.sort_index(), color='royalblue')
plt.show()
inLiers = list(filter(lambda x: rmses_repo[x] <= 0.01, range(len(rmses_repo))))
#plot inliers
for k in inLiers:
date = time_series[k]
repoCurve = repo[k]
pred = repo_decoded[k]
sCleaned = pd.Series(pred,
index = list(map(lambda x : x/365,
list(filter(lambda x: x >= 90, date)))) )
sOrigin = pd.Series([repoCurve[x] for x in range(len(repoCurve)) if 90 <= date[x]],
index = list(map(lambda x : x/365,
list(filter(lambda x: x >= 90, date)))) )
plotCurve(sOrigin,sCleaned, dates_repo[k])
ind1 = inLiers[4]
date1 = time_series[ind1]
repoCurve1 = repo[ind1]
pred1 = repo_decoded[ind1]
sCleaned1 = pd.Series(pred1,
index = list(map(lambda x : x/365,
list(filter(lambda x: x >= 90, date1)))) )
sOrigin1 = pd.Series([repoCurve1[x] for x in range(len(repoCurve1)) if 90 <= date1[x]],
index = list(map(lambda x : x/365,
list(filter(lambda x: x >= 90, date1)))) )
plotCurve(sOrigin1,sCleaned1, dates_repo[ind1])
ind2 = outliers_indices[13]
date2 = time_series[ind2]
repoCurve2 = repo[ind2]
pred2 = repo_decoded[ind2]
sCleaned2 = pd.Series(pred2,
index = list(map(lambda x : x/365,
list(filter(lambda x: x >= 90, date2)))) )
sOrigin2 = pd.Series([repoCurve2[x] for x in range(len(repoCurve2)) if 90 <= date2[x]],
index = list(map(lambda x : x/365,
list(filter(lambda x: x >= 90, date2)))) )
plotCurve(sOrigin2,sCleaned2, dates_repo[ind2])
date3 = [0.334247,
0.583562,
1.082192,
2.079452,
3.076712,
4.073973,
5.090411,
6.087671,
7.084932,
9.076712,
11.093151,
15.093151]
repoCurve3 = [-0.39,
-0.43,
-0.45,
-0.47,
-0.49,
-0.49,
-0.49,
-0.49,
-0.49,
-0.49,
-0.49,
-0.50]
pred3 = [-0.389610,
-0.432510,
-0.458510,
-0.477504,
-0.483631,
-0.484594,
-0.484436,
-0.484704,
-0.485662,
-0.488890,
-0.492566,
-0.498653]
sCleaned3 = pd.Series(pred3, index = date3 )
sOrigin3 = pd.Series(repoCurve3, index = date3 )
plotCurve(sOrigin3,sCleaned3, datetime(2013, 10, 3, 0, 0))
refSizeMulti=5
plt.plot(sOrigin1, 'o')
plt.plot(sOrigin2, '^')
plt.plot(sOrigin3, 'kx')
plt.xlabel("Maturity (Year)", fontsize=2*refSizeMulti, labelpad=3*refSizeMulti)
plt.ylabel("Rate (%)", fontsize=2*refSizeMulti, labelpad=3*refSizeMulti)
plt.tick_params(axis='x', labelsize=2*refSizeMulti, pad=int(refSizeMulti/2))
plt.tick_params(axis='y', labelsize=2*refSizeMulti, pad=int(refSizeMulti/2))
plt.title("Initial repo curve points", fontsize = 2*refSizeMulti)
plt.show()
###Output
_____no_output_____ |
Program 1 - FindS/.ipynb_checkpoints/pgm1-checkpoint.ipynb | ###Markdown
Program 1 - FIND-S
###Code
# FIND-S: start with the most specific hypothesis and generalise it
# just enough to cover every positive training example.
import csv
with open('pgm1.csv') as fh:
    reader = csv.reader(fh)
    # most specific hypothesis: '0' means "no value accepted yet"
    h = ['0','0','0','0','0','0']
    for row in reader:
        # FIND-S ignores negative examples (the last column is the label)
        if row[-1] == 'no':
            print(h)
            continue
        for i in range(len(row)-1):
            # attribute disagrees with the hypothesis -> generalise it to '?'
            if h[i] != row[i] and h[i] != '0':
                h[i] = '?'
            # first positive example fills in the attribute value
            elif h[i] == '0':
                h[i] = row[i]
        print(h)
###Output
['sunny', 'warm', 'normal', 'strong', 'warm', 'same']
['sunny', 'warm', '?', 'strong', 'warm', 'same']
['sunny', 'warm', '?', 'strong', 'warm', 'same']
['sunny', 'warm', '?', 'strong', '?', '?']
|
dS-ML/week1/Data Visualization Jupyter Notebook.ipynb | ###Markdown
Table of Content Introduction to Visualisation Libraries
1. **[Plots using Matplotlib ](matplotlib)**
2. **[Plots using Seaborn ](seaborn)**
**There are different visualization libraries in Python that provide an interface for drawing various graphics. Some of the most widely used libraries are Matplotlib, Seaborn, and Plotly.** Import the required libraries
###Code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
# to suppress warnings
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Seaborn library provides a variety of datasets. Plot different visualization plots using various libraries for the 'tips' dataset.
###Code
# load the 'tips' dataset from seaborn
tips_data = sns.load_dataset('tips')
# display head() of the dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
1. Plots using Matplotlib Matplotlib is a Python 2D plotting library. Many libraries are built on top of it and use its functions in the backend. pyplot is a subpackage of matplotlib that provides a MATLAB-like way of plotting. matplotlib.pyplot is the most widely used module because it is very simple to use and generates plots quickly. **How to install Matplotlib?** 1. You can use `!pip install matplotlib` 1.1 Line Plot A line graph is the simplest plot that displays the relationship between one independent and one dependent variable. In this plot, the points are joined by straight line segments.
###Code
# data
import numpy as np
X = np.linspace(1,20,100)
Y = np.exp(X)
# line plot
plt.plot(X,Y)
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
From the plot, it can be observed that as 'X' is increasing there is an exponential increase in Y. **The above plot can be represented not only by a solid line, but also a dotted line with varied thickness. The points can be marked explicitly using any symbol.**
###Code
# data
X = np.linspace(1,20,100)
Y = np.exp(X)
# line plot
# the argument 'r*' plots each point as a red '*'
plt.plot(X,Y, 'r*')
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
We can change the colors or shapes of the data points. There can be multiple line plots in one plot. Let's plot three plots together in a single graph. Also, add a plot title.
###Code
# data
X = np.linspace(1,20,100)
Y1 = X
Y2 = np.square(X)
Y3 = np.sqrt(X)
# line plot
plt.plot(X,Y1,'r', X,Y2,'b', X,Y3,'g')
# add title to the plot
plt.title('Line Plot')
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
1.2 Scatter Plot A scatter plot is a set of points plotted on horizontal and vertical axes. The scatter plot can be used to study the correlation between the two variables. One can also detect the extreme data points using a scatter plot.
###Code
# check the head() of the tips dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
Plot the scatter plot for the variables 'total_bill' and 'tip'
###Code
# data
X = tips_data['total_bill']
Y = tips_data['tip']
# plot the scatter plot
plt.scatter(X,Y)
# add the axes labels to the plot
plt.xlabel('total_bill')
plt.ylabel('tip')
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
We can add different colors, opacity, and shape of data points. Let's add these customizations in the above plot.
###Code
# plot the scatter plot for the variables 'total_bill' and 'tip'
X = tips_data['total_bill']
Y = tips_data['tip']
# plot the scatter plot
# s is for shape, c is for colour, alpha is for opacity (0 < alpha < 1)
plt.scatter(X, Y, s = np.array(Y)**2, c= 'green', alpha= 0.8)
# add title
plt.title('Scatter Plot')
# add the axes labels to the plot
plt.xlabel('total_bill')
plt.ylabel('tip')
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
The bubbles with greater radius display that the tip amount is more as compared to the bubbles with less radius. 1.3 Bar Plot A bar plot is used to display categorical data with bars with lengths proportional to the values that they represent. The comparison between different categories of a categorical variable can be done by studying a bar plot. In the vertical bar plot, the X-axis displays the categorical variable and Y-axis contains the values corresponding to different categories.
###Code
# check the head() of the tips dataset
tips_data.head()
# the variable 'smoker' is categorical
# check categories in the variable
set(tips_data['smoker'])
# bar plot to get the count of smokers and non-smokers in the data
# kind='bar' plots a bar plot
# 'rot = 0' returns the categoric labels horizontally
tips_data.smoker.value_counts().plot(kind='bar', rot = 0)
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
Let's add the count of smokers and non-smokers, axes labels and title to the above plot
###Code
# bar plot to get the count of smokers and non-smokers in the data
# kind='bar' plots a bar plot
# 'rot = 0' returns the categoric labels horizontally
# 'color' can be used to add a specific colour
tips_data.smoker.value_counts().plot(kind='bar', rot = 0, color = 'green')
# plt.text() adds the text to the plot
# x and y are positions on the axes
# s is the text to be added
plt.text(x = -0.05, y = tips_data.smoker.value_counts()[1]+1, s = tips_data.smoker.value_counts()[1])
plt.text(x = 0.98, y = tips_data.smoker.value_counts()[0]+2, s = tips_data.smoker.value_counts()[0])
# add title and axes labels
plt.title('Bar Plot')
plt.xlabel('Smoker')
plt.ylabel('Count')
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
From the bar plot, it can be interpreted that the proportion of non-smokers is more in the data 1.4 Pie Plot Pie plot is a graphical representation of univariate data. It is a circular graph divided into slices displaying the numerical proportion. For the categorical variable, each slice of the pie plot corresponds to each of the categories.
###Code
# check the head() of the tips dataset
tips_data.head()
# categories in the 'day' variable
tips_data.day.value_counts()
# plot the occurrence of different days in the dataset
# 'autopct' displays the percentage upto 1 decimal place
# 'radius' sets the radius of the pie plot
plt.pie(tips_data.day.value_counts(), autopct = '%.1f%%', radius = 1.2, labels = ['Sat', 'Sun','Thur','Fri'])
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
From the above pie plot, it can be seen that the data has a high proportion for Saturday followed by Sunday. **Exploded pie plot** is a plot in which one or more sectors are separated from the disc
###Code
# plot the occurrence of different days in the dataset
# exploded pie plot
plt.pie(tips_data.day.value_counts(), autopct = '%.1f%%', radius = 1.2, labels = ['Sat', 'Sun','Thur','Fri'],
explode = [0,0,0,0.5])
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
**Donut pie plot** is a type of pie plot in which there is a hollow center representing a doughnut.
###Code
# plot the occurrence of different days in the dataset
# pie plot
plt.pie(tips_data.day.value_counts(), autopct = '%.1f%%', radius = 1.2, labels = ['Sat', 'Sun','Thur','Fri'])
# add a circle at the center
circle = plt.Circle( (0,0), 0.5, color='white')
plot = plt.gcf()
plot.gca().add_artist(circle)
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
1.5 Histogram A histogram is used to display the distribution and spread of the continuous variable. One axis represents the range of variable and the other axis shows the frequency of the data points. In a histogram, there are no gaps between the bars.
###Code
# check the head() of the tips dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
In the tips dataset, 'tip' is a continuous variable. Let's plot the histogram to understand its distribution.
###Code
# plot the histogram
# specify the number of bins, using 'bins' parameter
plt.hist(tips_data['tip'], bins= 5)
# add the graph title and axes labels
plt.title('Distribution of tip amount')
plt.xlabel('tip')
plt.ylabel('Frequency')
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
From the above plot, we can see that the tip amount is positively skewed. 1.6 Box Plot Boxplot is a way to visualize the five-number summary of the variable. The five-number summary includes the numerical quantities like minimum, first quartile (Q1), median (Q2), third quartile (Q3), and maximum. Boxplot gives information about the outliers in the data. Detecting and removing outliers is one of the most important steps in exploratory data analysis. Boxplots also tells about the distribution of the data.
###Code
# check the head() of the tips dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
Plot the boxplot of 'total_bill' to check the distribution and presence of outliers in the variable.
###Code
# plot a distribution of total bill
plt.boxplot(tips_data['total_bill'])
# add labels for five number summary
plt.text(x = 1.1, y = tips_data['total_bill'].min(), s ='min')
plt.text(x = 1.1, y = tips_data.total_bill.quantile(0.25), s ='Q1')
plt.text(x = 1.1, y = tips_data['total_bill'].median(), s ='median (Q2)')
plt.text(x = 1.1, y = tips_data.total_bill.quantile(0.75), s ='Q3')
plt.text(x = 1.1, y = tips_data['total_bill'].max(), s ='max')
# add the graph title and axes labels
plt.title('Boxplot of Total Bill Amount')
plt.ylabel('Total bill')
# display the plot
plt.show()
###Output
_____no_output_____
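###Markdown
Added sketch (not in the original notebook): the same five-number summary can also be read off numerically, which is handy for cross-checking the boxplot; `five_number_summary` is an illustrative name.
###Code
five_number_summary = {
    'min': tips_data['total_bill'].min(),
    'Q1': tips_data['total_bill'].quantile(0.25),
    'median (Q2)': tips_data['total_bill'].median(),
    'Q3': tips_data['total_bill'].quantile(0.75),
    'max': tips_data['total_bill'].max()
}
five_number_summary
###Output
_____no_output_____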
###Markdown
The above boxplot clearly shows the presence of outliers above the horizontal line. We can add an arrow to showcase the outliers. Also, the median (Q2) is represented by the orange line, which is near to Q1 rather than Q3. This shows that the total bill is positively skewed.
###Code
# plot a distribution of total bill
plt.boxplot(tips_data['total_bill'])
# add labels for five number summary
plt.text(x = 1.1, y = tips_data['total_bill'].min(), s ='min')
plt.text(x = 1.1, y = tips_data.total_bill.quantile(0.25), s ='Q1')
plt.text(x = 1.1, y = tips_data['total_bill'].median(), s ='median (Q2)')
plt.text(x = 1.1, y = tips_data.total_bill.quantile(0.75), s ='Q3')
plt.text(x = 1.1, y = tips_data['total_bill'].max(), s ='max')
# add an arrow (annonate) to show the outliers
plt.annotate('Outliers', xy = (0.97,45),xytext=(0.7, 44), arrowprops = dict(facecolor='black', arrowstyle = 'simple'))
# add the graph title and axes labels
plt.title('Boxplot of Total Bill Amount')
plt.ylabel('Total bill')
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
2. Plots using Seaborn Seaborn is a Python visualization library based on matplotlib. The library provides a high-level interface for plotting statistical graphics. As the library uses matplotlib in the backend, we can use the functions in matplotlib along with functions in seaborn.Various functions in the seaborn library allow us to plot complex and advance statistical plots like linear/higher-order regression, univariate/multivariate distribution, violin, swarm, strip plots, correlations and so on. **How to install Seaborn?**1. You can use-`!pip install seaborn` 2.1 Strip Plot The strip plot resembles a scatterplot when one variable is categorical. This plot can help study the underlying distribution.
###Code
# check the head() of the tips dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
Plot a strip plot to check the relationship between the variables 'tip' and 'time'
###Code
# strip plot
sns.stripplot(y = 'tip', x = 'time', data = tips_data)
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
It can be seen that the tip amount is more at dinner time than at lunchtime. But the above plot is unable to display the spread of the data. We can plot the points with spread using the 'jitter' parameter in the stripplot function.
###Code
# strip plot with jitter to spread the points
sns.stripplot(y = 'tip', x = 'time', data = tips_data, jitter = True)
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
The plot shows that for most of the observations the tip amount is in the range 1 to 3 irrespective of the time. 2.2 Swarm Plot The swarm plot is similar to the strip plot but it avoids the overlapping of the points. This can give a better representation of the distribution of the data.
###Code
# check the head() of the tips dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
Plot the swarm plot for the variables 'tip' and 'time'.
###Code
# swarm plot
sns.swarmplot(y = 'tip', x = 'time', data = tips_data)
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
The above plot gives a good representation of the tip amount for the time. It can be seen that the tip amount is 2 for most of the observations. We can see that the swarm plot gives a better understanding of the variables than the strip plot. We can add another categorical variable in the above plot by using a parameter 'hue'.
###Code
# swarm plot with one more categorical variable 'day'
sns.swarmplot(y = 'tip', x = 'time', data = tips_data, hue = 'day')
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
The plot shows that the tip was collected at lunchtime only on Thursday and Friday. The amount of tips collected at dinner time on Saturday is the highest. 2.3 Violin Plot The violin plot is similar to a box plot that features a kernel density estimation of the underlying distribution. The plot shows the distribution of numerical variables across categories of one (or more) categorical variables such that those distributions can be compared.
###Code
# check the head() of the tips dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
Let's draw a violin plot for the numerical variable 'total_bill' and a categorical variable 'day'.
###Code
# violin plot
sns.violinplot(y = 'total_bill', x = 'day', data = tips_data)
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
The above violin plot shows that the total bill distribution is nearly the same for different days. We can add another categorical variable 'sex' to the above plot to get an insight into the bill amount distribution based on days as well as gender.
###Code
# set the figure size
plt.figure(figsize = (8,5))
# violin plot with addition of the variable 'sex'
# 'split = True' draws half plot for each of the category of 'sex'
sns.violinplot(y = 'total_bill', x = 'day', data = tips_data, hue = 'sex', split = True)
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
There is no significant difference in the distribution of bill amount and sex. 2.4 Pair Plot The pair plot gives a pairwise distribution of variables in the dataset. pairplot() function creates a matrix such that each grid shows the relationship between a pair of variables. On the diagonal axes, a plot shows the univariate distribution of each variable.
###Code
# check the head() of the tips dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
Plot a pair plot for the tips dataset
###Code
# set the figure size
plt.figure(figsize = (8,8))
# plot a pair plot
sns.pairplot(tips_data)
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
The above plot shows the relationship between all the numerical variables. 'total_bill' and 'tip' has a positive linear relationship with each other. Also, 'total_bill' and 'tip' are positively skewed. 'size' has a significant impact on the 'total_bill', as the minimum bill amount is increasing with an increasing number of customers (size). 2.5 Distribution Plot A seaborn provides a distplot() function which is used to visualize a distribution of the univariate variable. This function uses matplotlib to plot a histogram and fit a kernel density estimate (KDE).
###Code
# check the head() of the tips dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
Lets plot a distribution plot of 'total_bill'
###Code
# plot a distribution plot
sns.distplot(tips_data['total_bill'])
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
We can interpret from the above plot that the total bill amount is in the range of 10 to 20 for a large number of observations. The distribution plot can be used to visualize the total bill for different times of the day.
###Code
# iterate the distplot() function over the time
# list of time
time = ['Lunch', 'Dinner']
# iterate through time
for i in time:
subset = tips_data[tips_data['time'] == i]
# Draw the density plot
# 'hist = False' will not plot a histogram
# 'kde = True' plots density curve
sns.distplot(subset['total_bill'], hist = False, kde = True,
kde_kws = {'shade':True},
label = i)
###Output
_____no_output_____
###Markdown
It can be seen that the distribution plot for lunch is more right-skewed than a plot for dinner. This implies that the customers are spending more on dinner rather than lunch. 2.6 Count Plot Count plot shows the count of observations in each category of a categorical variable. We can add another variable using a parameter 'hue'.
###Code
# check the head() of the tips dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
Let us plot the count of observations for each day based on time
###Code
# count of observations for each day based on time
# set 'time' as hue parameter
sns.countplot(data = tips_data, x = 'day', hue = 'time')
# display the plot
plt.show()
###Output
_____no_output_____
###Markdown
All the observations recorded on Saturday and Sunday are for dinner. Observations for lunch on Thursday is highest among lunchtime. 2.7 Heatmap Heatmap is a two-dimensional graphical representation of data where the individual values that are contained in a matrix are represented as colors. Each square in the heatmap shows the correlation between variables on each axis. Correlation is a statistic that measures the degree to which two variables move with each other.
###Code
# check the head() of the tips dataset
tips_data.head()
###Output
_____no_output_____
###Markdown
Compute correlation between the variables using .corr() function. Plot a heatmap of the correlation matrix.
###Code
# compute correlation
corr_matrix = tips_data.corr()
corr_matrix
# plot heatmap
# 'annot=True' returns the correlation values
sns.heatmap(corr_matrix, annot = True)
# display the plot
plt.show()
###Output
_____no_output_____ |
Visualisation/.ipynb_checkpoints/Moria_Presentation-checkpoint.ipynb | ###Markdown
Disclaimer: This slideshow is from the draft report from AI for Good Simulator on the COVID-19 situation in Moria camp, Lesbos, Greece. The insights are preliminary and they are subject to future model fixes and improvements. AI for Good Simulator Model Report for Moria Camp Structure of the slideshow:
* Overview
* Unmitigated epidemic
* Intervention scenarios
* Supplementary information
1. Overview This presentation extracts the major insights from the detailed report we have produced from the epidemiology simulations from the tailor-made compartment models. We estimated peak counts, the timing of peak counts as well as cumulative counts for new symptomatic cases, hospitalisation demand person-days, critical care demand person-days and deaths for an unmitigated epidemic. Then we compare the results with different combinations of intervention strategies in place to:
* Have a realistic estimate of the clinic capacity, PPE, ICU transfer and other supplies and logistical measures needed
* Compare the potential efficacies of different interventions and prioritise the ones that are going to help contain the virus.
2. Unmitigated COVID-19 Epidemic Trajectory Here we assume the epidemic spreads through the camp without any non-pharmaceutical intervention in place. The peak incidence (number of cases), the timing and the cumulative case counts are all presented as interquartile range values (25%-75% quantiles), which respectively represent the optimistic and pessimistic estimates of the spread of the virus given the uncertainty in parameters estimated from epidemiological studies. In the simulations we explore a basic reproduction number from 1 to 7, which covers estimates from European and Asian settings up to nearly what has been estimated for a high population density location like a cruise ship. 2.1 Peak day and peak incidence of the epidemic
###Code
incidence_table_all=incidence_all_table(baseline);incidence_table_all
###Output
_____no_output_____
###Markdown
The peak number of infections is likely to be in the thousands, which could easily overwhelm the care capacity of the normal clinics; the peak occurs about one and a half months after the virus first appears in the camp. The more optimistic news is that, according to the hospitalisation, critical-condition and death incidence estimated from the best information currently available, the peak hospitalisation demand will be 40-70 patients a day, and the death estimate assumes that patients who require critical care will receive appropriate treatment from the 6 ICU beds that are currently available on Lesbos. The incidence of death could be as high as the peak critical care demand if oxygen therapy and similar treatment are not available to the refugee camp residents.
###Code
plot_by_age_all(baseline)
###Output
_____no_output_____
###Markdown
2.2 Cumulative counts of the incidences
###Code
comulative_table_all=cumulative_all_table(baseline);comulative_table_all
###Output
_____no_output_____
###Markdown
Looking at the cumulative counts over the course of the simulation, which spans 200 days since the arrival of the virus, more than one third of the camp residents are expected to be symptomatically infected by the virus, and the total hospitalisation demand will be over 1750 person-days, which places a huge demand on the hospital. This can be translated into projected medical costs or clinician time if the cost and time required to treat one patient for a day are known.
###Code
count_table_age=cumulative_age_table(baseline);count_table_age
###Output
_____no_output_____ |
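###Markdown
Added sketch (not part of the report): once a per-day treatment cost is known, the person-day estimates above translate directly into a projected medical cost. The cumulative figure is the one quoted above; the per-day cost below is a hypothetical placeholder.
###Code
hospital_person_days = 1750      # cumulative hospitalisation demand quoted above
cost_per_person_day = 100        # hypothetical cost per patient per day (placeholder value)
print(hospital_person_days * cost_per_person_day)
###Output
_____no_output_____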