path | concatenated_notebook
---|---
deep-learning/Logistic & Softmax Function in Neural Networks.ipynb | ###Markdown
Import numpy, matplotlib
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(seed=1)
plt.xkcd()
###Output
_____no_output_____
###Markdown
Define the Logistic function
The standard logistic function has an easily calculated derivative:$${\displaystyle f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{1+e^{x}}}}$$$${\displaystyle {\frac {d}{dx}}f(x)={\frac {e^{x}\cdot (1+e^{x})-e^{x}\cdot e^{x}}{(1+e^{x})^{2}}}}$$$${\displaystyle {\frac {d}{dx}}f(x)={\frac {e^{x}}{(1+e^{x})^{2}}}=f(x)(1-f(x))}$$The derivative of the logistic function has the property that:$${\displaystyle {\frac {d}{dx}}f(x)={\frac {d}{dx}}f(-x).}$$
In Statistics and Machine Learning
Logistic functions are used in several roles in statistics. For example, they are the cumulative distribution function of the logistic family of distributions, and, somewhat simplified, they are used to model the chance a chess player has of beating an opponent in the Elo rating system. More specific examples now follow.
Logistic regression
Logistic functions are used in logistic regression to model how the probability $p$ of an event may be affected by one or more explanatory variables: an example would be to have the model ${\displaystyle p=f(a+bx)}$, where $x$ is the explanatory variable, $a$ and $b$ are model parameters to be fitted, and $f$ is the standard logistic function. Logistic regression and other log-linear models are also commonly used in machine learning. A generalisation of the logistic function to multiple inputs is the softmax activation function, used in multinomial logistic regression. Another application of the logistic function is in the Rasch model, used in item response theory. In particular, the Rasch model forms a basis for maximum likelihood estimation of the locations of objects or persons on a continuum, based on collections of categorical data, for example the abilities of persons on a continuum based on responses that have been categorized as correct and incorrect.
Neural networks
Logistic functions are often used in neural networks to introduce nonlinearity into the model and/or to clamp signals to within a specified range. A popular neural net element computes a linear combination of its input signals and applies a bounded logistic function to the result; this model can be seen as a "smoothed" variant of the classical threshold neuron. A common choice for the activation or "squashing" function, used to clip large-magnitude signals and keep the response of the neural network bounded, is$${\displaystyle g(h)={\frac {1}{1+e^{-2\beta h}}}}$$which is a logistic function. These relationships result in simplified implementations of artificial neural networks with artificial neurons. Practitioners note that sigmoidal functions which are antisymmetric about the origin (e.g. the hyperbolic tangent) lead to faster convergence when training networks with backpropagation. The logistic function is itself the derivative of another proposed activation function, the softplus.
Source: [Logistic Function](https://en.wikipedia.org/wiki/Logistic_function)
###Code
def logist(z):
return 1 / (1 + np.exp(-z))
###Output
_____no_output_____
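###Markdown
As a quick sanity check of the identities above, $f'(x) = f(x)(1-f(x))$ and the evenness of the derivative, the cell below compares a finite-difference derivative of `logist` with the closed form. The small `num_deriv` helper is introduced here only for this check.
###Code
# Finite-difference check of f'(x) = f(x)*(1 - f(x)) and of f'(x) = f'(-x).
def num_deriv(f, x, eps=1e-6):
    return (f(x + eps) - f(x - eps)) / (2 * eps)

xs = np.linspace(-4, 4, 9)
assert np.allclose(num_deriv(logist, xs), logist(xs) * (1 - logist(xs)), atol=1e-6)
assert np.allclose(num_deriv(logist, xs), num_deriv(logist, -xs), atol=1e-6)
###Output
_____no_output_____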
###Markdown
Let's plot it
###Code
z = np.linspace(-5, 5)
plt.plot(z, logist(z), 'c-')
plt.xlabel('$z$')
plt.ylabel(r'$\sigma(z)$')
plt.title('Logistic Function')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Derivative of a Logistic Function
The standard logistic function has an easily calculated derivative:$${\displaystyle f(x)={\frac {1}{1+e^{-x}}}={\frac {e^{x}}{1+e^{x}}}}$$$${\displaystyle {\frac {d}{dx}}f(x)={\frac {e^{x}\cdot (1+e^{x})-e^{x}\cdot e^{x}}{(1+e^{x})^{2}}}}$$$${\displaystyle {\frac {d}{dx}}f(x)={\frac {e^{x}}{(1+e^{x})^{2}}}=f(x)(1-f(x))}$$The derivative of the logistic function has the property that:$${\displaystyle {\frac {d}{dx}}f(x)={\frac {d}{dx}}f(-x).}$$
###Code
# Gradient descent warm-up on f(x) = x**4 - 3*x**3 (so df/dx = 4*x**3 - 9*x**2);
# from calculus, the local minimum is expected at x = 9/4.
cur_x = 6 # The algorithm starts at x=6
gamma = 0.01 # step size multiplier
precision = 0.00001
previous_step_size = cur_x
def df(x):
return 4 * x**3 - 9 * x**2
while previous_step_size > precision:
prev_x = cur_x
cur_x += -gamma * df(prev_x)
previous_step_size = abs(cur_x - prev_x)
print("The local minimum occurs at %f" % cur_x)
def logist_derivative(z):
return logist(z) * (1 - logist(z) )
z = np.linspace(-5,5,100)
plt.plot(z, logist_derivative(z), 'r-')
plt.xlabel('$z$')
plt.ylabel('$\\frac{\\partial \\sigma(z)}{\\partial z}$')
plt.title('Derivative of the logistic function')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Cross-Entropy
The output of the model $y = \sigma(z)$ can be interpreted as the probability $y$ that input $z$ belongs to one class $(t=1)$, or the probability $1-y$ that $z$ belongs to the other class $(t=0)$ in a two-class classification problem. We note this down as: $P(t=1| z) = \sigma(z) = y$. The neural network model will be optimized by maximizing the likelihood that a given set of parameters $\theta$ of the model results in a prediction of the correct class of each input sample. The parameters $\theta$ transform each input sample $i$ into an input to the logistic function $z_{i}$. The likelihood maximization can be written as:$$\underset{\theta}{\text{argmax}}\; \mathcal{L}(\theta|t,z) = \underset{\theta}{\text{argmax}} \prod_{i=1}^{n} \mathcal{L}(\theta|t_i,z_i)$$The likelihood $\mathcal{L}(\theta|t,z)$ can be rewritten as the joint probability of generating $t$ and $z$ given the parameters $\theta$: $P(t,z|\theta)$. Since $P(A,B) = P(A|B)*P(B)$ this can be written as:$$P(t,z|\theta) = P(t|z,\theta)P(z|\theta)$$Since we are not interested in the probability of $z$ we can reduce this to: $\mathcal{L}(\theta|t,z) = P(t|z,\theta) = \prod_{i=1}^{n} P(t_i|z_i,\theta)$. Since $t_i$ is a Bernoulli variable, and the probability $P(t| z) = y$ is fixed for a given $\theta$, we can rewrite this as:$$\begin{split}P(t|z) = \prod_{i=1}^{n} P(t_i=1|z_i)^{t_i} * (1 - P(t_i=1|z_i))^{1-t_i} \\ = \prod_{i=1}^{n} y_i^{t_i} * (1 - y_i)^{1-t_i} \end{split}$$Since the logarithm is a monotonically increasing function, we can instead optimize the log-likelihood function $\underset{\theta}{\text{argmax}}\; log \mathcal{L}(\theta|t,z)$. This maximum will be the same as the maximum of the regular likelihood function. The log-likelihood function can be written as:$$\begin{split} log \mathcal{L}(\theta|t,z) = log \prod_{i=1}^{n} y_i^{t_i} * (1 - y_i)^{1-t_i} \\ = \sum_{i=1}^{n} t_i log(y_i) + (1-t_i) log(1 - y_i)\end{split}$$Minimizing the negative of this function (minimizing the negative log-likelihood) corresponds to maximizing the likelihood. This error function $\xi(t,y)$ is typically known as the cross-entropy error function (also known as log-loss):$$\begin{split}\xi(t,y) = - log \mathcal{L}(\theta|t,z) \\ = - \sum_{i=1}^{n} \left[ t_i log(y_i) + (1-t_i)log(1-y_i) \right] \\ = - \sum_{i=1}^{n} \left[ t_i log(\sigma(z_i)) + (1-t_i)log(1-\sigma(z_i)) \right]\end{split}$$This function looks complicated, but besides the derivation above there are a couple of intuitions for why it is used as a cost function for logistic regression. First of all it can be rewritten as:$$ \xi(t_i,y_i) = \begin{cases} -log(y_i) & \text{if } t_i = 1 \\ -log(1-y_i) & \text{if } t_i = 0 \end{cases}$$which, in the case of $t_i=1$, is $0$ if $y_i=1$ $(-log(1)=0)$ and goes to infinity as $y_i \rightarrow 0$ $(\underset{y \rightarrow 0}{\text{lim}} -log(y) = +\infty)$. The reverse happens if $t_i=0$. So what we end up with is a cost function that is $0$ if the probability of predicting the correct class is $1$, and goes to infinity as the probability of predicting the correct class goes to $0$. Notice that the cost function $\xi(t,y)$ is equal to the negative log probability that $z$ is classified as its correct class: $-log(P(t=1| z)) = -log(y)$, $-log(P(t=0| z)) = -log(1-y)$. By minimizing the negative log probability, we maximize the log probability.
And since $t$ can only be $0$ or $1$, we can write $\xi(t,y)$ as: $$ \xi(t,y) = -t * log(y) - (1-t) * log(1-y) $$which gives $\xi(t,y) = - \sum_{i=1}^{n} \left[ t_i log(y_i) + (1-t_i)log(1-y_i) \right]$ if we sum over all $n$ samples. Another reason to use the cross-entropy function is that in simple logistic regression it results in a convex cost function, whose global minimum is easy to find. Note that this is not necessarily the case anymore in multilayer neural networks.
Derivative of the cross-entropy cost function for the logistic function
The derivative ${\partial \xi}/{\partial y}$ of the cost function with respect to its input can be calculated as:$$\begin{split}\frac{\partial \xi}{\partial y} = \frac{\partial (-t * log(y) - (1-t)* log(1-y))}{\partial y} = \frac{\partial (-t * log(y))}{\partial y} + \frac{\partial (- (1-t)*log(1-y))}{\partial y} \\ = -\frac{t}{y} + \frac{1-t}{1-y} = \frac{y-t}{y(1-y)}\end{split}$$This derivative gives a nice formula when used to calculate the derivative of the cost function with respect to the inputs of the classifier, ${\partial \xi}/{\partial z}$, since the derivative of the logistic function is ${\partial y}/{\partial z} = y (1-y)$:$$\frac{\partial \xi}{\partial z} = \frac{\partial y}{\partial z} \frac{\partial \xi}{\partial y} = y (1-y) \frac{y-t}{y(1-y)} = y-t $$
Softmax Function
The logistic output function described above can only be used for classification between two target classes $t=1$ and $t=0$. It can be generalized to output a multiclass categorical probability distribution by the softmax function. This softmax function $\varsigma$ takes as input a $C$-dimensional vector $\mathbf{z}$ and outputs a $C$-dimensional vector $\mathbf{y}$ of real values between $0$ and $1$. It is a normalized exponential and is defined as:$$ y_c = \varsigma(\mathbf{z})_c = \frac{e^{z_c}}{\sum_{d=1}^C e^{z_d}} \quad \text{for} \; c = 1 \cdots C$$The denominator $\sum_{d=1}^C e^{z_d}$ acts as a normalizer that makes sure $\sum_{c=1}^C y_c = 1$. As the output layer of a neural network, the softmax function can be represented graphically as a layer with $C$ neurons. We can write the probabilities that the class is $t=c$ for $c = 1 \ldots C$ given input $\mathbf{z}$ as:$$ \begin{bmatrix} P(t=1 | \mathbf{z}) \\\vdots \\P(t=C | \mathbf{z}) \\\end{bmatrix}= \begin{bmatrix} \varsigma(\mathbf{z})_1 \\\vdots \\\varsigma(\mathbf{z})_C \\\end{bmatrix}= \frac{1}{\sum_{d=1}^C e^{z_d}}\begin{bmatrix} e^{z_1} \\\vdots \\e^{z_C} \\\end{bmatrix}$$where $P(t=c | \mathbf{z})$ is thus the probability that the class is $c$ given the input $\mathbf{z}$. The probabilities $P(t=1|\mathbf{z})$ for an example system with 2 classes ($t=1$, $t=2$) and input $\mathbf{z} = [z_1, z_2]$ are shown in the figure below. The other probability, $P(t=2|\mathbf{z})$, is complementary.
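Before building that figure, here is a quick numerical check of the result derived above, $\partial \xi/\partial z = y - t$, using the `logist` function defined earlier; the `xent` helper below exists only for this check.
###Code
# Numerical check of d(xi)/dz = y - t for the logistic + cross-entropy pair.
def xent(z, t):
    y = logist(z)
    return -np.sum(t * np.log(y) + (1 - t) * np.log(1 - y))

z_check = np.array([-2.0, -0.5, 0.0, 1.5])
t_check = np.array([0.0, 1.0, 1.0, 0.0])
eps = 1e-6
numeric_grad = np.array([
    (xent(z_check + eps * e_i, t_check) - xent(z_check - eps * e_i, t_check)) / (2 * eps)
    for e_i in np.eye(len(z_check))
])
assert np.allclose(numeric_grad, logist(z_check) - t_check, atol=1e-6)
###Output
_____no_output_____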
###Code
from matplotlib.colors import colorConverter, ListedColormap
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
###Output
_____no_output_____
###Markdown
Define Softmax function
###Code
def softmax(z):
return np.exp(z) / np.sum(np.exp(z))
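# Sanity check: softmax returns a proper probability distribution
# (non-negative entries that sum to 1) for an arbitrary input vector.
_example = softmax(np.array([1.0, 2.0, 3.0]))
assert np.all(_example >= 0) and np.isclose(np.sum(_example), 1.0)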
# Plot the softmax output for 2 dimensions for both classes
# Plot the output as a function of the inputs
# Define a vector of input values for which we want to plot the output
nb_of_zs = 200
zs = np.linspace(-10, 10, num=nb_of_zs) # input
zs_1, zs_2 = np.meshgrid(zs, zs) # generate grid
y = np.zeros((nb_of_zs, nb_of_zs, 2)) # initialize output
# Fill the output matrix for each combination of input z's
for i in range(nb_of_zs):
for j in range(nb_of_zs):
y[i,j,:] = softmax(np.asarray([zs_1[i,j], zs_2[i,j]]))
# Plot the cost function surfaces for both classes
fig = plt.figure()
# Plot the cost function surface for t=1
ax = fig.add_subplot(projection='3d')  # fig.gca(projection=...) was removed in newer matplotlib
surf = ax.plot_surface(zs_1, zs_2, y[:,:,0], linewidth=0, cmap=cm.coolwarm_r)
ax.view_init(elev=30, azim=70)
cbar = fig.colorbar(surf)
ax.set_xlabel('$z_1$', fontsize=15)
ax.set_ylabel('$z_2$', fontsize=15)
ax.set_zlabel('$y_1$', fontsize=15)
ax.set_title(r'$P(t=1|\mathbf{z})$')
cbar.ax.set_ylabel(r'$P(t=1|\mathbf{z})$', fontsize=15)
plt.grid()
plt.show()
###Output
_____no_output_____ |
quantization/notebooks/imagenet_v2/mobilenet.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved. Licensed under the MIT License. Mobilenet v2 Quantization with ONNX Runtime on CPU In this tutorial, we will load a mobilenet v2 model pretrained with [PyTorch](https://pytorch.org/), export the model to ONNX, quantize it and run it with ONNX Runtime, and convert the ONNX models to ORT format for ONNX Runtime Mobile. 0. Prerequisites If you have Jupyter Notebook, you can run this notebook directly with it. You may need to install or upgrade [PyTorch](https://pytorch.org/), [OnnxRuntime](https://microsoft.github.io/onnxruntime/), and other required packages. Otherwise, you can set up a new environment. First, install [Anaconda](https://www.anaconda.com/distribution/). Then open an Anaconda prompt window and run the following commands:
```console
conda create -n cpu_env python=3.8
conda activate cpu_env
conda install jupyter
jupyter notebook
```
The last command will launch Jupyter Notebook, and we can open this notebook in the browser to continue. 0.1 Install packages
Let's install the necessary packages to start the tutorial. We will install PyTorch 1.8, OnnxRuntime 1.8, the latest ONNX, and Pillow.
###Code
# Install or upgrade PyTorch 1.8.0 and OnnxRuntime 1.8 for CPU-only.
import sys
!{sys.executable} -m pip install --upgrade torch==1.8.0 torchvision==0.9.0 torchaudio===0.8.0 -f https://download.pytorch.org/whl/torch_stable.html
!{sys.executable} -m pip install --upgrade onnxruntime==1.8.0
!{sys.executable} -m pip install --upgrade onnx
!{sys.executable} -m pip install --upgrade pillow
###Output
_____no_output_____
###Markdown
1 Download pretrained model and export to ONNX In this step, we load a pretrained mobilenet v2 model and export it to ONNX. 1.1 Load the pretrained model
Use the torchvision API to load the pretrained mobilenet_v2 model.
###Code
from torchvision import models, datasets, transforms as T
mobilenet_v2 = models.mobilenet_v2(pretrained=True)
###Output
_____no_output_____
###Markdown
1.2 Export the model to ONNX
Use the PyTorch ONNX export API to export the model.
###Code
import torch
image_height = 224
image_width = 224
x = torch.randn(1, 3, image_height, image_width, requires_grad=True)
torch_out = mobilenet_v2(x)
# Export the model
torch.onnx.export(mobilenet_v2, # model being run
x, # model input (or a tuple for multiple inputs)
"mobilenet_v2_float.onnx", # where to save the model (can be a file or file-like object)
export_params=True, # store the trained parameter weights inside the model file
opset_version=12, # the ONNX version to export the model to
do_constant_folding=True, # whether to execute constant folding for optimization
input_names = ['input'], # the model's input names
output_names = ['output']) # the model's output names
###Output
_____no_output_____
###Markdown
1.3 Sample Execution with ONNXRuntime Run a sample with the full-precision ONNX model. First, implement the preprocessing.
###Code
from PIL import Image
import numpy as np
import onnxruntime
import torch
def preprocess_image(image_path, height, width, channels=3):
image = Image.open(image_path)
    image = image.resize((width, height), Image.LANCZOS)  # ANTIALIAS was removed in newer Pillow; LANCZOS is the same filter
image_data = np.asarray(image).astype(np.float32)
image_data = image_data.transpose([2, 0, 1]) # transpose to CHW
mean = np.array([0.079, 0.05, 0]) + 0.406
std = np.array([0.005, 0, 0.001]) + 0.224
for channel in range(image_data.shape[0]):
image_data[channel, :, :] = (image_data[channel, :, :] / 255 - mean[channel]) / std[channel]
image_data = np.expand_dims(image_data, 0)
return image_data
###Output
_____no_output_____
###Markdown
Download the ImageNet labels and load them
###Code
# Download ImageNet labels
!curl -o imagenet_classes.txt https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt
# Read the categories
with open("imagenet_classes.txt", "r") as f:
categories = [s.strip() for s in f.readlines()]
###Output
_____no_output_____
###Markdown
Run the example with ONNXRuntime
###Code
session_fp32 = onnxruntime.InferenceSession("mobilenet_v2_float.onnx")
def softmax(x):
"""Compute softmax values for each sets of scores in x."""
e_x = np.exp(x - np.max(x))
return e_x / e_x.sum()
def run_sample(session, image_file, categories):
output = session.run([], {'input':preprocess_image(image_file, image_height, image_width)})[0]
output = output.flatten()
output = softmax(output) # this is optional
top5_catid = np.argsort(-output)[:5]
for catid in top5_catid:
print(categories[catid], output[catid])
run_sample(session_fp32, 'cat.jpg', categories)
###Output
_____no_output_____
###Markdown
2 Quantize the model with ONNXRuntime In this step, we load the full-precision model and quantize it with the ONNX Runtime quantization tool, then show the model size comparison between the full-precision and quantized models. Finally, we run the same sample with the quantized model. 2.1 Implement a CalibrationDataReader
A CalibrationDataReader takes in calibration data and generates inputs for the model.
###Code
from onnxruntime.quantization import quantize_static, CalibrationDataReader, QuantType
import os
def preprocess_func(images_folder, height, width, size_limit=0):
image_names = os.listdir(images_folder)
if size_limit > 0 and len(image_names) >= size_limit:
batch_filenames = [image_names[i] for i in range(size_limit)]
else:
batch_filenames = image_names
unconcatenated_batch_data = []
for image_name in batch_filenames:
image_filepath = images_folder + '/' + image_name
image_data = preprocess_image(image_filepath, height, width)
unconcatenated_batch_data.append(image_data)
batch_data = np.concatenate(np.expand_dims(unconcatenated_batch_data, axis=0), axis=0)
return batch_data
class MobilenetDataReader(CalibrationDataReader):
def __init__(self, calibration_image_folder):
self.image_folder = calibration_image_folder
self.preprocess_flag = True
self.enum_data_dicts = []
self.datasize = 0
def get_next(self):
if self.preprocess_flag:
self.preprocess_flag = False
nhwc_data_list = preprocess_func(self.image_folder, image_height, image_width, size_limit=0)
self.datasize = len(nhwc_data_list)
self.enum_data_dicts = iter([{'input': nhwc_data} for nhwc_data in nhwc_data_list])
return next(self.enum_data_dicts, None)
###Output
_____no_output_____
###Markdown
2.2 Quantize the model Since we cannot upload the full calibration data set for copyright reasons, we only demonstrate with a few example images. You need to use your own calibration data set in practice.
###Code
# change it to your real calibration data set
calibration_data_folder = "calibration_imagenet"
dr = MobilenetDataReader(calibration_data_folder)
quantize_static('mobilenet_v2_float.onnx',
'mobilenet_v2_uint8.onnx',
dr)
print('ONNX full precision model size (MB):', os.path.getsize("mobilenet_v2_float.onnx")/(1024*1024))
print('ONNX quantized model size (MB):', os.path.getsize("mobilenet_v2_uint8.onnx")/(1024*1024))
###Output
_____no_output_____
###Markdown
2.3 Run the model with OnnxRuntime
###Code
session_quant = onnxruntime.InferenceSession("mobilenet_v2_uint8.onnx")
run_sample(session_quant, 'cat.jpg', categories)
###Output
_____no_output_____
###Markdown
3 Convert the models to ORT format This step is optional: we will convert the `mobilenet_v2_float.onnx` and `mobilenet_v2_uint8.onnx` models to ORT format, to be used in mobile applications. If you intend to run these models using ONNX Runtime Mobile Execution Providers such as [NNAPI Execution Provider](https://www.onnxruntime.ai/docs/reference/execution-providers/NNAPI-ExecutionProvider.html) or [CoreML Execution Provider](https://www.onnxruntime.ai/docs/reference/execution-providers/CoreML-ExecutionProvider.html), please set the `optimization_level` of the conversion to `basic`. If you intend to run these models using CPU only, please set the `optimization_level` of the conversion to `all`. For further details, please see [Converting ONNX models to ORT format](https://www.onnxruntime.ai/docs/how-to/mobile/model-conversion.html).
###Code
!{sys.executable} -m onnxruntime.tools.convert_onnx_models_to_ort --optimization_level basic ./
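# For CPU-only deployment, the markdown above recommends the "all" optimization
# level instead; the equivalent command (left commented out here) would be:
# !{sys.executable} -m onnxruntime.tools.convert_onnx_models_to_ort --optimization_level all ./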
###Output
_____no_output_____ |
Chapter02/.ipynb_checkpoints/Exercise 16-checkpoint.ipynb | ###Markdown
Exercise 16: Implementing a Stack in Python 1. Define an empty stack and load the JSON file.
###Code
import pandas as pd
df = pd.read_json(r'Chapter02/users.json')
df
stack = []
###Output
_____no_output_____
###Markdown
2. Push each user's email from the DataFrame onto the stack.
###Code
output = df.apply(lambda row : stack.append(row["email"]), axis=1)
stack
###Output
_____no_output_____
###Markdown
3. Use the append method to add an element to the stack.
###Code
stack.append("[email protected]")
stack
###Output
_____no_output_____
###Markdown
4. Read a value from our stack using the pop method.
###Code
tos = stack.pop()
tos
###Output
_____no_output_____
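###Markdown
Note that pop both returns and removes the top element (LIFO order). To look at the current top without removing it, index the end of the list instead; a small sketch, assuming the stack is still non-empty:
###Code
# Peek at the top of the stack without removing it.
top_of_stack = stack[-1]
###Output
_____no_output_____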
###Markdown
5. Append another email to the stack
###Code
stack.append("[email protected]")
stack
###Output
_____no_output_____ |
notebooks/templates/single-column-tropics.ipynb | ###Markdown
Water budget Single Column Model
Run the column model and plot some outputs
###Code
!ncks -O -d x,0,,32 -d y,32 -d time,100.0,120.0 {data} subset.nc
%run -m uwnet.columns {model} subset.nc {column_path}
import numpy as np
import xarray as xr

cols = xr.open_dataset(column_path)
qt_levels = np.r_[:11] * 2
sl_levels = np.r_[:11] * 10 + 270
cols.QT.squeeze().plot.contourf(y='z', levels=qt_levels, col='x', col_wrap=3)
cols.SLI.squeeze().plot.contourf(y='z', levels=sl_levels, col='x', col_wrap=3)
cols.FSLINN.squeeze().plot(y='z', col='x', col_wrap=3)
cols.FQTNN.squeeze().plot(y='z', col='x', col_wrap=3)
###Output
_____no_output_____
###Markdown
Clean up
###Code
%cd {old_wd}
!rm -rf {cwd}
###Output
_____no_output_____ |
1_1_automatic_evaluation_fibonacci_recursive.ipynb | ###Markdown
**i. Colab hardware and software specs:**
- n1-highmem-2 instance
- 2 vCPU @ 2.3GHz
- 13GB RAM
- 100GB free space
- idle cut-off 90 minutes
- maximum lifetime 12 hours
###Code
# Colab hardware info (processor and memory):
# !cat /proc/cpuinfo
# !cat /proc/meminfo
# !lscpu
!lscpu | egrep 'Model name|Socket|Thread|NUMA|CPU\(s\)'
print("---------------------------------")
!free -m
# Colab OS structure and version
!ls -a
print("---------------------------------")
!ls -l /
print("---------------------------------")
!lsb_release -a
###Output
. .. .config sample_data
---------------------------------
total 92
drwxr-xr-x 1 root root 4096 Jun 15 13:28 bin
drwxr-xr-x 2 root root 4096 Apr 24 2018 boot
drwxr-xr-x 1 root root 4096 Jun 15 13:37 content
drwxr-xr-x 1 root root 4096 Jun 21 13:18 datalab
drwxr-xr-x 5 root root 360 Jun 23 05:31 dev
drwxr-xr-x 1 root root 4096 Jun 23 05:31 etc
drwxr-xr-x 2 root root 4096 Apr 24 2018 home
drwxr-xr-x 1 root root 4096 Jun 15 13:29 lib
drwxr-xr-x 2 root root 4096 Jun 15 13:19 lib32
drwxr-xr-x 1 root root 4096 Jun 15 13:19 lib64
drwxr-xr-x 2 root root 4096 Sep 21 2020 media
drwxr-xr-x 2 root root 4096 Sep 21 2020 mnt
drwxr-xr-x 1 root root 4096 Jun 15 13:31 opt
dr-xr-xr-x 170 root root 0 Jun 23 05:31 proc
drwx------ 1 root root 4096 Jun 23 05:31 root
drwxr-xr-x 1 root root 4096 Jun 15 13:22 run
drwxr-xr-x 1 root root 4096 Jun 15 13:28 sbin
drwxr-xr-x 2 root root 4096 Sep 21 2020 srv
dr-xr-xr-x 12 root root 0 Jun 23 05:31 sys
drwxr-xr-x 4 root root 4096 Jun 17 13:53 tensorflow-1.15.2
drwxrwxrwt 1 root root 4096 Jun 23 05:31 tmp
drwxr-xr-x 1 root root 4096 Jun 21 13:18 tools
drwxr-xr-x 1 root root 4096 Jun 15 13:31 usr
drwxr-xr-x 1 root root 4096 Jun 23 05:31 var
---------------------------------
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
###Markdown
**ii. Cloning IntPy repository:**
- https://github.com/claytonchagas/intpy_dev.git
###Code
!git clone https://github.com/claytonchagas/intpy_dev.git
!ls -a
print("---------------------------------")
%cd intpy_dev/
!ls -a
print("---------------------------------")
!git branch
print("---------------------------------")
#!git log --pretty=oneline --abbrev-commit
#!git log --all --decorate --oneline --graph
###Output
. .. .config intpy_dev sample_data
---------------------------------
/content/intpy_dev
. .gitignore setup.py
.. intpy stats_colab.py
fibonacci_iterative.py power_recursive.py .vscode
fibonacci_recursive.py quicksort_recursive_fixed.py
.git quicksort_recursive_random.py
---------------------------------
* [32mmain[m
---------------------------------
###Markdown
**iii. Fibonacci's evolutions and cutoff by approach**
- Evaluating recursive fibonacci code and its cutoff by approach
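Conceptually, the intra-run cache plays the same role as classic memoization of the recursive calls. The sketch below illustrates that idea using only the standard library's `functools.lru_cache`; it is not IntPy itself, and the actual `fibonacci_recursive.py` in the cloned repository may differ.
###Code
# Illustration only: memoizing the naive recursive Fibonacci collapses its
# exponential number of calls down to a linear number, which is the effect
# the cache-based runs measured below benefit from.
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
###Output
_____no_output_____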
###Code
!ls -a
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
!rm -rf output_iii.dat
print("--no-cache execution")
!for i in {1..37}; do python fibonacci_recursive.py $i --no-cache >> output_iii.dat; rm -rf .intpy; done
print("done!")
print("only intra cache")
!for i in {1..37}; do python fibonacci_recursive.py $i -v v01x >> output_iii.dat; rm -rf .intpy; done
print("done!")
print("full cache")
!for i in {1..37}; do python fibonacci_recursive.py $i -v v01x >> output_iii.dat; done
print("done!")
import matplotlib.pyplot as plt
import numpy as np
f1 = open("output_iii.dat", "r")
data1 = []
dataf1 = []
for x in f1.readlines()[3:148:4]:  # every 4th line (starting at index 3) holds the elapsed time
data1.append(float(x))
f1.close()
for datas1 in data1:
dataf1.append(round(datas1, 3))
print(dataf1)
f2 = open("output_iii.dat", "r")
data2 = []
dataf2 = []
for x in f2.readlines()[151:296:4]:
data2.append(float(x))
f2.close()
for datas2 in data2:
dataf2.append(round(datas2, 3))
print(dataf2)
f3 = open("output_iii.dat", "r")
data3 = []
dataf3 = []
for x in f3.readlines()[299:444:4]:
data3.append(float(x))
f3.close()
for datas3 in data3:
dataf3.append(round(datas3, 3))
print(dataf3)
x = np.arange(1,38)
#plt.style.use('classic')
fig, (ax1, ax2) = plt.subplots(1, 2)
fig.set_figheight(5)
fig.set_figwidth(14)
fig.suptitle("Fibonacci's evolutions and cutoff by approach", fontweight='bold')
ax1.plot(x, dataf1, "tab:blue", label="no-cache")
ax1.plot(x, dataf2, "tab:orange", label="intra cache")
ax1.plot(x, dataf3, "tab:green", label="full cache")
#ax1.set_title("Fibonacci's evolutions and cutoff by approach")
ax1.set_xlabel("Fibonacci's Series Value")
ax1.set_ylabel("Time in seconds")
ax1.grid()
lex = ax1.legend()
ax2.plot(x, dataf2, "tab:orange", label="intra cache")
ax2.plot(x, dataf3, "tab:green", label="full cache")
#ax2.set_title("Quicksort's random evolutions and cutoff by approach")
ax2.set_xlabel("Fibonacci's Series Value")
ax2.set_ylabel("Time in seconds")
ax2.grid()
lex = ax2.legend()
plt.show()
###Output
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.001, 0.001, 0.002, 0.002, 0.004, 0.006, 0.011, 0.016, 0.025, 0.04, 0.066, 0.108, 0.189, 0.272, 0.522, 0.712, 1.16, 1.931, 3.028, 4.903, 8.379]
[0.017, 0.031, 0.04, 0.05, 0.062, 0.071, 0.082, 0.089, 0.095, 0.112, 0.119, 0.126, 0.137, 0.14, 0.152, 0.167, 0.169, 0.185, 0.188, 0.2, 0.207, 0.218, 0.238, 0.237, 0.243, 0.264, 0.27, 0.278, 0.281, 0.287, 0.314, 0.316, 0.341, 0.334, 0.343, 0.358, 0.355]
[0.016, 0.023, 0.014, 0.014, 0.015, 0.014, 0.014, 0.014, 0.013, 0.014, 0.014, 0.014, 0.014, 0.013, 0.015, 0.015, 0.014, 0.014, 0.014, 0.014, 0.014, 0.015, 0.014, 0.015, 0.016, 0.014, 0.014, 0.013, 0.013, 0.014, 0.013, 0.014, 0.02, 0.014, 0.013, 0.014, 0.013]
###Markdown
**iv. Fibonacci 200, 100 and 50 recursive, three mixed trials**
- Evaluating the recursive Fibonacci code with inputs 200, 100, and 50: three trials and plots.
- First trial: inputs 200, 100, and 50, no inter-cache (baseline).
- Second trial: inputs 200, 100, and 50, with intra and inter-cache, analyzing the cache's behavior with different inputs.
- Third trial: inputs 50, 100, and 200, with intra and inter-cache, analyzing the cache's behavior with different inputs in a different order from the previous run.
###Code
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
!rm -rf output_iv.dat
print("First running, Fibonacci 200: value and time in sec")
!python fibonacci_recursive.py 200 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
print("Second running, Fibonacci 100: value and time in sec")
!python fibonacci_recursive.py 100 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
print("Third running, Fibonacci 50: value and time in sec")
!python fibonacci_recursive.py 50 -v v01x | tee -a output_iv.dat
print("---------------------------------")
###Output
---------------------------------
Cleaning up cache
First running, Fibonacci 200: value and time in sec
['v01x']
False
280571172992510140037611932413038677189525
1.9410288039999841
---------------------------------
Cleaning up cache
Second running, Fibonacci 100: value and time in sec
['v01x']
False
354224848179261915075
0.96284990099997
---------------------------------
Cleaning up cache
Third running, Fibonacci 50: value and time in sec
['v01x']
False
12586269025
0.5233045470000093
---------------------------------
###Markdown
- Second trial: with inter and intra-cache, inputs: 200, 100 and 50.
###Code
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
print("First running, Fibonacci 200: value and time in sec")
!python fibonacci_recursive.py 200 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Second running, Fibonacci 100: value and time in sec")
!python fibonacci_recursive.py 100 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Third running, Fibonacci 50: value and time in sec")
!python fibonacci_recursive.py 50 -v v01x | tee -a output_iv.dat
print("---------------------------------")
###Output
---------------------------------
Cleaning up cache
First running, Fibonacci 200: value and time in sec
['v01x']
False
280571172992510140037611932413038677189525
2.0088238820000015
---------------------------------
Second running, Fibonacci 100: value and time in sec
['v01x']
False
354224848179261915075
0.0042586389999996754
---------------------------------
Third running, Fibonacci 50: value and time in sec
['v01x']
False
12586269025
0.0074632739999742626
---------------------------------
###Markdown
- Third trial: with inter and intra-cache, inputs: 50, 100 and 200.
###Code
print("---------------------------------")
print("Cleaning up cache")
!rm -rf .intpy
print("First running, Fibonacci 50: value and time in sec")
!python fibonacci_recursive.py 50 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Second running, Fibonacci 100: value and time in sec")
!python fibonacci_recursive.py 100 -v v01x | tee -a output_iv.dat
print("---------------------------------")
print("Third running, Fibonacci 200: value and time in sec")
!python fibonacci_recursive.py 200 -v v01x | tee -a output_iv.dat
print("---------------------------------")
###Output
---------------------------------
Cleaning up cache
First running, Fibonacci 50: value and time in sec
['v01x']
False
12586269025
0.5096050450000007
---------------------------------
Second running, Fibonacci 100: value and time in sec
['v01x']
False
354224848179261915075
0.5089021019999791
---------------------------------
Third running, Fibonacci 200: value and time in sec
['v01x']
False
280571172992510140037611932413038677189525
0.9777138999999693
---------------------------------
###Markdown
- Plotting the comparison: first graph.
###Code
import numpy as np
f4 = open("output_iv.dat", "r")
fib200 = []
fib100 = []
fib50 = []
data4 = []
dataf4 = []
for x in f4.readlines()[3::4]:
data4.append(float(x))
f4.close()
for datas4 in data4:
dataf4.append(round(datas4, 6))
print(dataf4)
fib200 = [dataf4[0], dataf4[3], dataf4[8]]
print(fib200)
fib100 = [dataf4[1], dataf4[4], dataf4[7]]
print(fib100)
fib50 = [dataf4[2], dataf4[5], dataf4[6]]
print(fib50)
running3to5 = ['1st trial: cache intra', '2nd trial: cache inter-intra/desc', '3rd trial: cache inter-intra/asc']
y = np.arange(len(running3to5))
width = 0.40
z = ['Fib 200', 'Fib 100', 'Fib 50']
list_color_z = ['blue', 'orange', 'green']
zr = ['Fib 50', 'Fib 100', 'Fib 200']
list_color_zr = ['green', 'orange', 'blue']
t1=[dataf4[0], dataf4[1], dataf4[2]]
t2=[dataf4[3], dataf4[4], dataf4[5]]
t3=[dataf4[6], dataf4[7], dataf4[8]]
fig, (ax1, ax2, ax3) = plt.subplots(1, 3, sharey=True, figsize=(11,5))
rects1 = ax1.bar(z, t1,width, label='1st trial', color=list_color_z)
rects2 = ax2.bar(z, t2, width, label='2nd trial', color=list_color_z)
rects3 = ax3.bar(zr, t3, width, label='3rd trial', color=list_color_zr)
ax1.set_ylabel('Time in seconds', fontweight='bold')
ax1.set_xlabel('1st trial: cache intra', fontweight='bold')
ax2.set_xlabel('2nd trial: cache inter-intra/desc', fontweight='bold')
ax3.set_xlabel('3rd trial: cache inter-intra/asc', fontweight='bold')
ax2.set_title('Fibonacci recursive 200, 100 and 50 v0.1.x', fontweight='bold')
for index, datas in enumerate(t1):
ax1.text(x=index, y=datas, s=t1[index], ha = 'center', va = 'bottom', fontweight='bold')
for index, datas in enumerate(t2):
ax2.text(x=index, y=datas, s=t2[index], ha = 'center', va = 'bottom', fontweight='bold')
for index, datas in enumerate(t3):
ax3.text(x=index, y=datas, s=t3[index], ha = 'center', va = 'bottom', fontweight='bold')
ax1.grid(axis='y')
ax2.grid(axis='y')
ax3.grid(axis='y')
fig.tight_layout()
plt.savefig('chart_iv_fib_50_100_200_v01x.png')
plt.show()
###Output
[1.941029, 0.96285, 0.523305, 2.008824, 0.004259, 0.007463, 0.509605, 0.508902, 0.977714]
[1.941029, 2.008824, 0.977714]
[0.96285, 0.004259, 0.508902]
[0.523305, 0.007463, 0.509605]
###Markdown
**1. Fast execution, all versions (v0.1.x and from v0.2.1.x to v0.2.7.x)**
**1.1 Fast execution: only intra-cache**
**1.1.1 Fast execution: only intra-cache => experiment's executions**
###Code
!rm -rf .intpy;\
rm -rf stats_intra.dat;\
echo "IntPy only intra-cache";\
experimento=fibonacci_recursive.py;\
param=200;\
echo "Experiment: $experimento";\
echo "Params: $param";\
for i in v01x v021x v022x v023x v024x v025x v026x v027x;\
do rm -rf output_intra_$i.dat;\
rm -rf .intpy;\
echo "---------------------------------";\
echo "IntPy version $i";\
for j in {1..5};\
do echo "Execution $j";\
rm -rf .intpy;\
python $experimento $param -v $i >> output_intra_$i.dat;\
echo "Done execution $j";\
done;\
echo "Done IntPy version $i";\
done;\
echo "---------------------------------";\
echo "---------------------------------";\
echo "Statistics evaluation:";\
for k in v01x v021x v022x v023x v024x v025x v026x v027x;\
do echo "Statistics version $k" >> stats_intra.dat;\
echo "Statistics version $k";\
python stats_colab.py output_intra_$k.dat;\
python stats_colab.py output_intra_$k.dat >> stats_intra.dat;\
echo "---------------------------------";\
done;\
###Output
IntPy only intra-cache
Experiment: fibonacci_recursive.py
Params: 200
---------------------------------
IntPy version v01x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v01x
---------------------------------
IntPy version v021x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v021x
---------------------------------
IntPy version v022x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v022x
---------------------------------
IntPy version v023x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v023x
---------------------------------
IntPy version v024x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v024x
---------------------------------
IntPy version v025x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v025x
---------------------------------
IntPy version v026x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v026x
---------------------------------
IntPy version v027x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v027x
---------------------------------
---------------------------------
Statistics evaluation:
Statistics version v01x
[1.9585485990000393, 1.9303937939999969, 1.9337598459999867, 1.9907464240000081, 2.0101031239999543]
Max: 2.0101031239999543
Min: 1.9303937939999969
Mean: 1.964710357399997
Median: 1.9585485990000393
Standard deviation: 0.035042502656368074
Variance: 0.0012279769924215633
---------------------------------
Statistics version v021x
[0.16864294700002347, 0.1727353250000192, 0.17285318299997243, 0.1694440370000052, 0.16818873699997994]
Max: 0.17285318299997243
Min: 0.16818873699997994
Mean: 0.17037284580000006
Median: 0.1694440370000052
Standard deviation: 0.0022560445192595126
Variance: 5.089736872880884e-06
---------------------------------
Statistics version v022x
[0.15953562300001067, 0.15662962300001482, 0.15866848500002106, 0.162498888000016, 0.15987674100000504]
Max: 0.162498888000016
Min: 0.15662962300001482
Mean: 0.15944187200001353
Median: 0.15953562300001067
Standard deviation: 0.0021242715816943147
Variance: 4.512529752794065e-06
---------------------------------
Statistics version v023x
[0.16182657799998879, 0.1635703709999916, 0.16864982000004147, 0.16004521799999338, 0.1724069330000475]
Max: 0.1724069330000475
Min: 0.16004521799999338
Mean: 0.16529978400001255
Median: 0.1635703709999916
Standard deviation: 0.005108786578092383
Variance: 2.609970030049688e-05
---------------------------------
Statistics version v024x
[0.1772083460000431, 0.16639901000002055, 0.18560098199998265, 0.16749308999999357, 0.18210272800001803]
Max: 0.18560098199998265
Min: 0.16639901000002055
Mean: 0.17576083120001157
Median: 0.1772083460000431
Standard deviation: 0.008589859757520642
Variance: 7.378569065387258e-05
---------------------------------
Statistics version v025x
[0.16189941300001465, 0.16556844900003398, 0.16473034599999892, 0.16195332999996026, 0.16679373499999883]
Max: 0.16679373499999883
Min: 0.16189941300001465
Mean: 0.16418905460000133
Median: 0.16473034599999892
Standard deviation: 0.002192088525830008
Variance: 4.805252105075577e-06
---------------------------------
Statistics version v026x
[0.16346602700002677, 0.15904201799997963, 0.1649260780000077, 0.16070193800004517, 0.1666358929999774]
Max: 0.1666358929999774
Min: 0.15904201799997963
Mean: 0.16295439080000734
Median: 0.16346602700002677
Standard deviation: 0.0030855706525152633
Variance: 9.520746251663468e-06
---------------------------------
Statistics version v027x
[0.16950696999998627, 0.1660535479999794, 0.1714465819999873, 0.1692190109999956, 0.1648336100000165]
Max: 0.1714465819999873
Min: 0.1648336100000165
Mean: 0.16821194419999302
Median: 0.1692190109999956
Standard deviation: 0.0027030525883911137
Variance: 7.3064932956079e-06
---------------------------------
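###Markdown
The summary values above (max, min, mean, median, standard deviation, variance) can be reproduced from a list of timings with the standard library. The sketch below is only an illustration of that computation, inferred from the printed output; it is not the actual stats_colab.py from the repository.
###Code
# Illustrative summary of a list of elapsed times, mirroring the fields printed above.
import statistics

def summarize(times):
    return {
        "Max": max(times),
        "Min": min(times),
        "Mean": statistics.mean(times),
        "Median": statistics.median(times),
        "Standard deviation": statistics.stdev(times),
        "Variance": statistics.variance(times),
    }
###Output
_____no_output_____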
###Markdown
**1.1.2 Fast execution: only intra-cache => charts generation**
###Code
%matplotlib inline
import matplotlib.pyplot as plt
versions = ['v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v026x', 'v027x']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:brown']
filev = "f_intra_"
data = "data_intra_"
dataf = "dataf_intra_"
for i, j in zip(versions, colors):
filev_version = filev+i
data_version = data+i
dataf_version = dataf+i
file_intra = open("output_intra_"+i+".dat", "r")
data_intra = []
dataf_intra = []
for x in file_intra.readlines()[3::4]:
data_intra.append(float(x))
file_intra.close()
#print(data_intra)
for y in data_intra:
dataf_intra.append(round(y, 5))
print(i+": ",dataf_intra)
running1_1 = ['1st', '2nd', '3rd', '4th', '5th']
plt.figure(figsize = (10, 5))
plt.bar(running1_1, dataf_intra, color =j, width = 0.4)
plt.grid(axis='y')
for index, datas in enumerate(dataf_intra):
plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold')
plt.xlabel("Running only with intra cache "+i, fontweight='bold')
plt.ylabel("Time in seconds", fontweight='bold')
plt.title("Chart "+i+" intra - Fibonacci 200 recursive - with intra cache, no inter cache - IntPy "+i+" version", fontweight='bold')
plt.savefig("chart_intra_"+i+".png")
plt.close()
#plt.show()
import matplotlib.pyplot as plt
file_intra = open("stats_intra.dat", "r")
data_intra = []
for x in file_intra.readlines()[5::8]:
data_intra.append(round(float(x[8::]), 5))
file_intra.close()
print(data_intra)
versions = ["0.1.x", "0.2.1.x", "0.2.2.x", "0.2.3.x", "0.2.4.x", "0.2.5.x", "0.2.6.x", "0.2.7.x"]
#colors =['royalblue', 'forestgreen', 'orangered', 'purple', 'skyblue', 'lime', 'lightgrey', 'tan']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:brown']
plt.figure(figsize = (10, 5))
plt.bar(versions, data_intra, color = colors, width = 0.7)
plt.grid(axis='y')
for index, datas in enumerate(data_intra):
plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold')
plt.xlabel("Median for 5 executions in each version, intra cache", fontweight='bold')
plt.ylabel("Time in seconds", fontweight='bold')
plt.title("Fibonacci 200 recursive, cache intra-running, comparison of all versions", fontweight='bold')
plt.savefig('compare_median_intra.png')
plt.close()
#plt.show()
###Output
[1.95855, 0.16944, 0.15954, 0.16357, 0.17721, 0.16473, 0.16347, 0.16922]
###Markdown
**1.2 Fast execution: full cache -> intra and inter-cache**
**1.2.1 Fast execution: full cache -> intra and inter-cache => experiment's executions**
###Code
!rm -rf .intpy;\
rm -rf stats_full.dat;\
echo "IntPy full cache -> intra and inter-cache";\
experimento=fibonacci_recursive.py;\
param=200;\
echo "Experiment: $experimento";\
echo "Params: $param";\
for i in v01x v021x v022x v023x v024x v025x v026x v027x;\
do rm -rf output_full_$i.dat;\
rm -rf .intpy;\
echo "---------------------------------";\
echo "IntPy version $i";\
for j in {1..5};\
do echo "Execution $j";\
python $experimento $param -v $i >> output_full_$i.dat;\
echo "Done execution $j";\
done;\
echo "Done IntPy version $i";\
done;\
echo "---------------------------------";\
echo "---------------------------------";\
echo "Statistics evaluation:";\
for k in v01x v021x v022x v023x v024x v025x v026x v027x;\
do echo "Statistics version $k" >> stats_full.dat;\
echo "Statistics version $k";\
python stats_colab.py output_full_$k.dat;\
python stats_colab.py output_full_$k.dat >> stats_full.dat;\
echo "---------------------------------";\
done;\
###Output
IntPy full cache -> intra and inter-cache
Experiment: fibonacci_recursive.py
Params: 200
---------------------------------
IntPy version v01x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v01x
---------------------------------
IntPy version v021x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v021x
---------------------------------
IntPy version v022x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v022x
---------------------------------
IntPy version v023x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v023x
---------------------------------
IntPy version v024x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v024x
---------------------------------
IntPy version v025x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v025x
---------------------------------
IntPy version v026x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v026x
---------------------------------
IntPy version v027x
Execution 1
Done execution 1
Execution 2
Done execution 2
Execution 3
Done execution 3
Execution 4
Done execution 4
Execution 5
Done execution 5
Done IntPy version v027x
---------------------------------
---------------------------------
Statistics evaluation:
Statistics version v01x
[1.9162730599999804, 0.004912373000024672, 0.004316976999973576, 0.004245891999971718, 0.004312153999990187]
Max: 1.9162730599999804
Min: 0.004245891999971718
Mean: 0.3868120911999881
Median: 0.004316976999973576
Standard deviation: 0.8549947164981567
Variance: 0.7310159652397633
---------------------------------
Statistics version v021x
[0.16443456500002185, 0.004482439000014438, 0.004170815999998467, 0.004154612000036195, 0.004190809000022]
Max: 0.16443456500002185
Min: 0.004154612000036195
Mean: 0.03628664820001859
Median: 0.004190809000022
Standard deviation: 0.0716369904889001
Variance: 0.005131858406306764
---------------------------------
Statistics version v022x
[0.16392684400000235, 0.03555968299997403, 0.03567801399998416, 0.032346548999953484, 0.03567528899998251]
Max: 0.16392684400000235
Min: 0.032346548999953484
Mean: 0.06063727579997931
Median: 0.03567528899998251
Standard deviation: 0.05775822737296328
Variance: 0.0033360128292669244
---------------------------------
Statistics version v023x
[0.16948383199996897, 0.004013834000033967, 0.004081790000043384, 0.004021523999995225, 0.004204149000031521]
Max: 0.16948383199996897
Min: 0.004013834000033967
Mean: 0.03716102580001461
Median: 0.004081790000043384
Standard deviation: 0.0739707366360648
Variance: 0.005471669878482057
---------------------------------
Statistics version v024x
[0.16743824700000687, 0.19402358899998262, 0.2050414419999811, 0.19426615599996921, 0.18930978200000936]
Max: 0.2050414419999811
Min: 0.16743824700000687
Mean: 0.19001584319998982
Median: 0.19402358899998262
Standard deviation: 0.013875717793874037
Variance: 0.00019253554429523255
---------------------------------
Statistics version v025x
[0.16070399100004806, 0.0072144050000133575, 0.009981879999998, 0.008153317999983756, 0.006955135000055179]
Max: 0.16070399100004806
Min: 0.006955135000055179
Mean: 0.03860174580001967
Median: 0.008153317999983756
Standard deviation: 0.06826755249460258
Variance: 0.004660458723603319
---------------------------------
Statistics version v026x
[0.16869466899998997, 0.004984024999998837, 0.004811555000003409, 0.004618606000008185, 0.004933829999970385]
Max: 0.16869466899998997
Min: 0.004618606000008185
Mean: 0.037608536999994155
Median: 0.004933829999970385
Standard deviation: 0.07327951084577994
Variance: 0.0053698867097967794
---------------------------------
Statistics version v027x
[0.18906311700004608, 0.00421043300002566, 0.004189640999982203, 0.004146002999959819, 0.004259054999977252]
Max: 0.18906311700004608
Min: 0.004146002999959819
Mean: 0.0411736497999982
Median: 0.00421043300002566
Standard deviation: 0.08267273545225413
Variance: 0.006834781187158398
---------------------------------
###Markdown
**1.2.2 Fast execution: full cache -> intra and inter-cache => charts generation**
###Code
%matplotlib inline
import matplotlib.pyplot as plt
versions = ['v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v026x', 'v027x']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:brown']
filev = "f_full_"
data = "data_full_"
dataf = "dataf_full_"
for i, j in zip(versions, colors):
filev_version = filev+i
data_version = data+i
dataf_version = dataf+i
file_full = open("output_full_"+i+".dat", "r")
data_full = []
dataf_full = []
for x in file_full.readlines()[3::4]:
data_full.append(float(x))
file_full.close()
for y in data_full:
dataf_full.append(round(y, 5))
print(i+": ",dataf_full)
running1_1 = ['1st', '2nd', '3rd', '4th', '5th']
plt.figure(figsize = (10, 5))
plt.bar(running1_1, dataf_full, color =j, width = 0.4)
plt.grid(axis='y')
for index, datas in enumerate(dataf_full):
plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold')
plt.xlabel("Running full cache "+i, fontweight='bold')
plt.ylabel("Time in seconds", fontweight='bold')
plt.title("Chart "+i+" full - Fibonacci 200 recursive - with intra and inter cache - IntPy "+i+" version", fontweight='bold')
plt.savefig("chart_full_"+i+".png")
plt.close()
#plt.show()
import matplotlib.pyplot as plt
file_full = open("stats_full.dat", "r")
data_full = []
for x in file_full.readlines()[5::8]:
data_full.append(round(float(x[8::]), 5))
file_full.close()
print(data_full)
versions = ["0.1.x", "0.2.1.x", "0.2.2.x", "0.2.3.x", "0.2.4.x", "0.2.5.x", "0.2.6.x", "0.2.7.x"]
#colors =['royalblue', 'forestgreen', 'orangered', 'purple', 'skyblue', 'lime', 'lightgrey', 'tan']
colors = ['tab:blue', 'tab:orange', 'tab:green', 'tab:purple', 'tab:grey', 'tab:olive', 'tab:cyan', 'tab:brown']
plt.figure(figsize = (10, 5))
plt.bar(versions, data_full, color = colors, width = 0.7)
plt.grid(axis='y')
for index, datas in enumerate(data_full):
plt.text(x=index, y=datas, s=datas, ha = 'center', va = 'bottom', fontweight='bold')
plt.xlabel("Median for 5 executions in each version, full cache", fontweight='bold')
plt.ylabel("Time in seconds", fontweight='bold')
plt.title("Fibonacci 200 recursive, cache intra and inter-running, comparison of all versions", fontweight='bold')
plt.savefig('compare_median_full.png')
plt.close()
#plt.show()
###Output
[0.00432, 0.00419, 0.03568, 0.00408, 0.19402, 0.00815, 0.00493, 0.00421]
###Markdown
**1.3 Displaying charts for all versions**
**1.3.1 Only intra-cache charts**
###Code
versions = ['v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v026x', 'v027x']
from IPython.display import Image, display
for i in versions:
display(Image("chart_intra_"+i+".png"))
print("=====================================================================================")
###Output
_____no_output_____
###Markdown
**1.3.2 Full cache charts -> intra and inter-cache**
###Code
versions = ['v01x', 'v021x', 'v022x', 'v023x', 'v024x', 'v025x', 'v026x', 'v027x']
from IPython.display import Image, display
for i in versions:
display(Image("chart_full_"+i+".png"))
print("=====================================================================================")
###Output
_____no_output_____
###Markdown
**1.3.3 Only intra-cache: median comparison chart of all versions**
###Code
from IPython.display import Image, display
display(Image("compare_median_intra.png"))
###Output
_____no_output_____
###Markdown
**1.3.4 Full cache -> intra and inter-cache: median comparison chart of all versions**
###Code
from IPython.display import Image, display
display(Image("compare_median_full.png"))
###Output
_____no_output_____
###Markdown
**1.3.5 IntPy Fibonacci 50 - raw execution OK (no cache): 1h31min15sec**
###Code
!wget -nv https://github.com/claytonchagas/intpy_prod/raw/main/intpy_raw_50_1h31m15s_ok.jpg
from IPython.display import Image, display
display(Image("intpy_raw_50_1h31m15s_ok.jpg", width=720))
###Output
2021-06-23 05:37:25 URL:https://raw.githubusercontent.com/claytonchagas/intpy_prod/main/intpy_raw_50_1h31m15s_ok.jpg [68757/68757] -> "intpy_raw_50_1h31m15s_ok.jpg" [1]
###Markdown
**1.3.6 IntPy Fibonacci 100 - raw execution NO OK (no cache): 14h43min30sec**
###Code
!wget -nv https://github.com/claytonchagas/intpy_prod/raw/main/intpy_raw_100_14h43m30s_NO_ok.jpg
from IPython.display import Image, display
display(Image("intpy_raw_100_14h43m30s_NO_ok.jpg", width=720))
###Output
2021-06-23 05:37:26 URL:https://raw.githubusercontent.com/claytonchagas/intpy_prod/main/intpy_raw_100_14h43m30s_NO_ok.jpg [67582/67582] -> "intpy_raw_100_14h43m30s_NO_ok.jpg" [1]
###Markdown
**1.3.7 IntPy Fibonacci 200 - no execution (no cache): inf**
###Code
!wget -nv https://github.com/claytonchagas/intpy_prod/raw/main/intpy_raw_200_NO_exec_inf.jpg
from IPython.display import Image, display
display(Image("intpy_raw_200_NO_exec_inf.jpg", width=720))
###Output
2021-06-23 05:37:26 URL:https://raw.githubusercontent.com/claytonchagas/intpy_prod/main/intpy_raw_200_NO_exec_inf.jpg [65325/65325] -> "intpy_raw_200_NO_exec_inf.jpg" [1]
|
Chapter 2 - The TensorFlow Way/Operations using eager execution.ipynb | ###Markdown
Getting ready
###Code
import tensorflow as tf
import numpy as np
###Output
2022-01-18 12:20:05.836583: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0
###Markdown
How to do it
###Code
x_vals = np.array([1., 3., 5., 7., 9.])
x_data = tf.Variable(x_vals, dtype=tf.float32)  # input values as a TensorFlow variable
m_const = tf.constant(3.)                       # constant multiplier
operation = tf.multiply(x_data, m_const)        # eager op: evaluated immediately
for result in operation:
    print(result.numpy())                       # each element converted back to a NumPy scalar
###Output
3.0
9.0
15.0
21.0
27.0
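###Markdown
Because eager execution evaluates operations immediately, the same result can also be obtained with plain Python operators and pulled back into NumPy with `.numpy()`; a minimal check:
###Code
# Operator overloading gives the same elementwise product as tf.multiply above.
result = (x_data * m_const).numpy()
assert np.allclose(result, x_vals * 3.0)
###Output
_____no_output_____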
|
fr_basic_huggingface.ipynb | ###Markdown
Installation on Google Colab
The Transformers package is not installed by default on Google Colab, so let's install it with pip:
###Code
!pip install transformers[sentencepiece]
###Output
Collecting transformers[sentencepiece]
[?25l Downloading https://files.pythonhosted.org/packages/b5/d5/c6c23ad75491467a9a84e526ef2364e523d45e2b0fae28a7cbe8689e7e84/transformers-4.8.1-py3-none-any.whl (2.5MB)
[K |████████████████████████████████| 2.5MB 25.9MB/s
[?25hRequirement already satisfied: packaging in /usr/local/lib/python3.7/dist-packages (from transformers[sentencepiece]) (20.9)
Requirement already satisfied: filelock in /usr/local/lib/python3.7/dist-packages (from transformers[sentencepiece]) (3.0.12)
Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.7/dist-packages (from transformers[sentencepiece]) (1.19.5)
Collecting tokenizers<0.11,>=0.10.1
[?25l Downloading https://files.pythonhosted.org/packages/d4/e2/df3543e8ffdab68f5acc73f613de9c2b155ac47f162e725dcac87c521c11/tokenizers-0.10.3-cp37-cp37m-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_12_x86_64.manylinux2010_x86_64.whl (3.3MB)
[K |████████████████████████████████| 3.3MB 36.5MB/s
[?25hRequirement already satisfied: importlib-metadata; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from transformers[sentencepiece]) (4.5.0)
Requirement already satisfied: requests in /usr/local/lib/python3.7/dist-packages (from transformers[sentencepiece]) (2.23.0)
Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.7/dist-packages (from transformers[sentencepiece]) (4.41.1)
Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.7/dist-packages (from transformers[sentencepiece]) (2019.12.20)
Collecting sacremoses
[?25l Downloading https://files.pythonhosted.org/packages/75/ee/67241dc87f266093c533a2d4d3d69438e57d7a90abb216fa076e7d475d4a/sacremoses-0.0.45-py3-none-any.whl (895kB)
[K |████████████████████████████████| 901kB 32.8MB/s
[?25hCollecting huggingface-hub==0.0.12
Downloading https://files.pythonhosted.org/packages/2f/ee/97e253668fda9b17e968b3f97b2f8e53aa0127e8807d24a547687423fe0b/huggingface_hub-0.0.12-py3-none-any.whl
Requirement already satisfied: pyyaml in /usr/local/lib/python3.7/dist-packages (from transformers[sentencepiece]) (3.13)
Collecting sentencepiece==0.1.91; extra == "sentencepiece"
[?25l Downloading https://files.pythonhosted.org/packages/f2/e2/813dff3d72df2f49554204e7e5f73a3dc0f0eb1e3958a4cad3ef3fb278b7/sentencepiece-0.1.91-cp37-cp37m-manylinux1_x86_64.whl (1.1MB)
[K |████████████████████████████████| 1.1MB 24.4MB/s
[?25hRequirement already satisfied: protobuf; extra == "sentencepiece" in /usr/local/lib/python3.7/dist-packages (from transformers[sentencepiece]) (3.12.4)
Requirement already satisfied: pyparsing>=2.0.2 in /usr/local/lib/python3.7/dist-packages (from packaging->transformers[sentencepiece]) (2.4.7)
Requirement already satisfied: zipp>=0.5 in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->transformers[sentencepiece]) (3.4.1)
Requirement already satisfied: typing-extensions>=3.6.4; python_version < "3.8" in /usr/local/lib/python3.7/dist-packages (from importlib-metadata; python_version < "3.8"->transformers[sentencepiece]) (3.7.4.3)
Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/dist-packages (from requests->transformers[sentencepiece]) (2021.5.30)
Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/dist-packages (from requests->transformers[sentencepiece]) (1.24.3)
Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.7/dist-packages (from requests->transformers[sentencepiece]) (2.10)
Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.7/dist-packages (from requests->transformers[sentencepiece]) (3.0.4)
Requirement already satisfied: six in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers[sentencepiece]) (1.15.0)
Requirement already satisfied: click in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers[sentencepiece]) (7.1.2)
Requirement already satisfied: joblib in /usr/local/lib/python3.7/dist-packages (from sacremoses->transformers[sentencepiece]) (1.0.1)
Requirement already satisfied: setuptools in /usr/local/lib/python3.7/dist-packages (from protobuf; extra == "sentencepiece"->transformers[sentencepiece]) (57.0.0)
Installing collected packages: tokenizers, sacremoses, huggingface-hub, sentencepiece, transformers
Successfully installed huggingface-hub-0.0.12 sacremoses-0.0.45 sentencepiece-0.1.91 tokenizers-0.10.3 transformers-4.8.1
###Markdown
Sentiment analysis in English: in this article we will use the high-level pipeline interface, which makes it very easy to use pre-trained transformer models. We just need to tell the pipeline what to do, and optionally which model to use for that task. Here we will do sentiment analysis in English, so we select the `sentiment-analysis` task and keep the default model:
###Code
from transformers import pipeline
classifier = pipeline("sentiment-analysis")
###Output
_____no_output_____
###Markdown
The pipeline is ready, and we can now use it:
###Code
classifier(["this is a great tutorial, thank you",
"your content just sucks"])
###Output
_____no_output_____
###Markdown
We sent two sentences through the pipeline. The first is classified as positive and the second as negative, with a very high confidence level. Now let's see what happens if we send sentences in French:
###Code
classifier(["Ton tuto est vraiment bien",
"il est complètement nul"])
###Output
_____no_output_____
###Markdown
This time the classification does not work: the second sentence is classified as positive. That is no surprise: the default model for the sentiment-analysis task was trained on English text, so it does not understand French. Sentiment analysis in Dutch, German, French, Spanish and Italian: so what can you do if you want to work with text in another language, say French? You just have to search the hub for a [French classification model](https://huggingface.co/models?filter=fr&pipeline_tag=text-classification&sort=downloads). Several models are available, and I decided to select [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment). We can specify this model when creating our `sentiment-analysis` pipeline:
###Code
multilang_classifier = pipeline("sentiment-analysis",
model="nlptown/bert-base-multilingual-uncased-sentiment")
multilang_classifier(["Ton tuto est vraiment bien",
"il est complètement nul"])
###Output
_____no_output_____
###Markdown
And it works! The second sentence is now correctly classified as very negative. You may be wondering why the confidence for the first sentence is lower; I am fairly sure it is because that sentence also gets a high score for "4 stars". Now let's try a real review found on Google for a restaurant near my place:
###Code
import pprint
sentence="Contente de pouvoir retourner au restaurant... Quelle déception... L accueil peu chaleureux... Un plat du jour plus disponible à 12h45...rien à me proposer à la place... Une pizza pas assez cuite et pour finir une glace pleine de glaçons... Et au gout très fade... Je pensais que les serveuses seraient plus aimable à l idée de retrouver leur clientèle.. Dommage"
pprint.pprint(sentence)
multilang_classifier([sentence])
###Output
_____no_output_____
###Markdown
2 stars! On Google Reviews this review has 1 star. That is not a bad prediction, and I think the score must be fairly high for "1 star" as well. English-to-French translation: let's try a bit of translation, from English to French. Again we search the hub, and we end up with this pipeline:
###Code
en_to_fr = pipeline("translation_en_to_fr",
model="Helsinki-NLP/opus-mt-en-fr")
en_to_fr("your tutorial is really good")
###Output
_____no_output_____
###Markdown
That works well. Let's translate in the other direction. For that, we need to change the task and the model:
###Code
fr_to_en = pipeline("translation_fr_to_en",
model="Helsinki-NLP/opus-mt-fr-en")
fr_to_en("ton tutoriel est super")
###Output
_____no_output_____
###Markdown
Perfect! Zero-shot classification in French: nowadays, very large deep learning models are trained on huge datasets collected from the Internet. These models already know a lot, so they do not need to learn much more. Typically, such models can be fine-tuned for a specific use case, like text classification, with a very small additional task-specific dataset; this is called *few-shot learning*. Sometimes we can even do *zero-shot learning*: specific tasks can be performed without any task-specific training at all. That is what we are going to do now. We search the hub for a French *zero-shot* classification model, and we create the pipeline:
###Code
classifier = pipeline("zero-shot-classification",
model="BaptisteDoyen/camembert-base-xlni")
###Output
_____no_output_____
###Markdown
In the example below, I provide a sentence to classify, and I also specify the candidate categories. It is important to note that the model **has not been trained with these categories**; you can change them at will!
###Code
sequence = "Colin est en train d'écrire un article au sujet du traitement du langage naturel"
candidate_labels = ["science","politique","education", "news"]
classifier(sequence, candidate_labels)
###Output
_____no_output_____
###Markdown
The predicted probabilities look reasonable. This sentence is indeed about science, news and education, and has nothing to do with politics. Now let's try this:
###Code
sequence = "Laurent Wauquiez reconduit à la tête de la région Rhône-Alpes-Auvergne à la suite du deuxième tour des élections."
candidate_labels = ["politique", "musique"]
classifier(sequence, candidate_labels)
###Output
_____no_output_____
###Markdown
This time the `politique` category does come out on top. Feel free to try other sentences and other categories. You can also switch models if you want to do zero-shot classification in English or in another language. Summarization in French: summarizing text is an interesting use case for transformers. Here we use a model trained on a dataset obtained by scraping [https://actu.orange.fr/](https://actu.orange.fr/), again found on the Hugging Face hub:
###Code
summarizer = pipeline("summarization",
model="moussaKam/barthez-orangesum-title")
###Output
_____no_output_____
###Markdown
Let's take the first two paragraphs of an article about Covid-19 from Le Monde:
###Code
import pprint
sentence = "La pandémie ne marque pas le pas. Le variant Delta poursuit son essor planétaire au grand dam de pays impatients de retrouver une vie normale. La pandémie a fait près de quatre millions de morts dans le monde depuis que le bureau de l’Organisation mondiale de la santé (OMS) en Chine a fait état de l’apparition de la maladie fin décembre 2019, selon un bilan établi par l’Agence France-Presse (AFP) à partir de sources officielles, lundi à 12 heures. Les Etats-Unis sont le pays le plus touché tant en nombre de morts (603 967) que de cas. Le Brésil, qui compte 513 474 morts, est suivi par l’Inde (396 730), le Mexique (232 564) et le Pérou (191 899), le pays qui déplore le plus de morts par rapport à sa population. Ces chiffres, qui reposent sur les bilans quotidiens des autorités nationales de santé, sont globalement sous-évalués. L’Organisation mondiale de la santé (OMS) estime que le bilan de la pandémie pourrait être deux à trois fois plus élevé que celui officiellement calculé."
pprint.pprint(sentence)
summarizer(sentence, max_length=80)
###Output
_____no_output_____
###Markdown
The summary is quite terse, but rather good. Named entity recognition in French: named entity recognition (NER) can serve as the basis for many interesting applications. For example, one could analyse financial reports looking for dates, prices and company names. Let's see how to do that. Here we use a French equivalent of BERT, called CamemBERT, fine-tuned for NER:
###Code
ner = pipeline("token-classification", model="Jean-Baptiste/camembert-ner")
nes = ner("Colin est parti à Saint-André acheter de la mozzarella")
pprint.pprint(nes)
###Output
[{'end': 5,
'entity': 'PER',
'index': 1,
'score': 0.94243556,
'start': 0,
'word': '▁Colin'},
{'end': 23,
'entity': 'LOC',
'index': 5,
'score': 0.99605554,
'start': 17,
'word': '▁Saint'},
{'end': 24,
'entity': 'LOC',
'index': 6,
'score': 0.9967083,
'start': 23,
'word': '-'},
{'end': 29,
'entity': 'LOC',
'index': 7,
'score': 0.99609375,
'start': 24,
'word': 'André'}]
###Markdown
We need to do a bit of post-processing to aggregate named entities of the same type. Here is a simple algorithm to do it (it can certainly be improved!)
###Code
cur = None
agg = []
for ne in nes:
entity=ne['entity']
if entity != cur:
if cur is None:
cur = entity
if agg:
print(cur, ner.tokenizer.convert_tokens_to_string(agg))
agg = []
cur = entity
agg.append(ne['word'])
print(cur, ner.tokenizer.convert_tokens_to_string(agg))
###Output
PER Colin
LOC Saint-André
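###Markdown
Added note: recent releases of Transformers can do this grouping for you through the pipeline's `aggregation_strategy` argument. A minimal sketch, assuming a version of transformers where this option is available:
###Code
# Let the pipeline merge sub-tokens of the same entity itself
ner_grouped = pipeline("token-classification",
                       model="Jean-Baptiste/camembert-ner",
                       aggregation_strategy="simple")
pprint.pprint(ner_grouped("Colin est parti à Saint-André acheter de la mozzarella"))
###Output
_____no_output_____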
|
Spring_2019/LB26/Pandas_dataframe_tutorial.ipynb | ###Markdown
Data Analysis with Pandas Dataframe **Pandas** is a popular library for manipulating vectors, tables, and time series. We will frequently use Pandas data structures instead of the built-in Python data structures, as they provide much richer functionality. Also, Pandas is **fast**, which makes working with large datasets easier. Check out the official pandas website at [http://pandas.pydata.org/](http://pandas.pydata.org/). Pandas provides three data structures: * the **series**, which represents a single column of data, similar to a Python list; the series is the most fundamental data structure in Pandas; * the **data frame**, which represents multiple series of data; * the **panel**, which represents multiple data frames. Today we will mainly work with the data frame (a tiny illustration of series and data frames follows the import below).
###Code
import pandas as pd
###Output
_____no_output_____
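###Markdown
As a quick added illustration (not part of the original tutorial data), a series is a single labelled column and a data frame bundles several series that share an index:
###Code
# A toy series and data frame, just to show the two structures
speeds = pd.Series([0.1, 0.5, 0.9], name='speed')
toy_frame = pd.DataFrame({'speed': speeds, 'valid': [True, False, True]})
toy_frame
###Output
_____no_output_____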
###Markdown
Data I/O Data cleaning...
###Code
glad = pd.read_csv('./GLAD_15min_filtered_S1_41days_sample.csv')
glad_orig = glad.copy()  # keep an untouched copy, used by later cells as glad_orig
glad
glad.shape
###Output
_____no_output_____
###Markdown
First and last five rows.
###Code
glad_orig.head()
glad_orig.tail()
###Output
_____no_output_____
###Markdown
Add or delete columns and write data to a .csv file with a single line of code.
###Code
import numpy as np
np.zeros(240000)
glad['temperature'] = np.zeros(240000)
glad.head()
del glad['vel_Error']
del glad['Pos_Error']
glad.head()
glad.to_csv('./test.csv')
glad.to_csv?
glad.to_csv('./test_without_index.csv', index = False)
###Output
_____no_output_____
###Markdown
Indexing and Slicing .iloc[ ]: indexing by position; .loc[ ]: indexing by label (the index value)
###Code
glad.iloc[0]
###Output
_____no_output_____
###Markdown
The indexer also accepts a slice or an array of positions.
###Code
glad_orig.iloc[:10]
###Output
_____no_output_____
###Markdown
Access the underlying data as a NumPy array using .values
###Code
glad_orig.iloc[0].values
###Output
_____no_output_____
###Markdown
In this case, indexing by position may not be practical. Instead, we can designate the 'ID' column as the index. It is a common operation to pick a column as the index to work on. When indexing the dataframe, explicitly designate the rows and columns, even if only with a colon (':').
###Code
glad_id = glad.set_index('ID')
glad_id.head()
glad_id.loc['CARTHE_021']
###Output
_____no_output_____
###Markdown
Use .values to access the data stored in the dataframe.
###Code
lat = glad_id.loc['CARTHE_021', 'Latitude'].values
lat
lon = glad_id.loc['CARTHE_021', 'Longitude'].values
lon
###Output
_____no_output_____
###Markdown
Plotting with matplotlib and cartopy
###Code
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
plt.figure(figsize = (6, 8))
min_lat, max_lat = 23, 30.5
min_lon, max_lon = -91.5, -85
ax = plt.axes(projection = ccrs.PlateCarree())
ax.set_extent([min_lon, max_lon, min_lat, max_lat], ccrs.PlateCarree())
ax.coastlines(resolution = '50m', color = 'black')
ax.gridlines(crs = ccrs.PlateCarree(), draw_labels = True, color = 'grey')
ax.plot(lon, lat)
###Output
_____no_output_____
###Markdown
How to plot every drifter trajectory, aka a spaghetti plot? Grouping Data Frames In order to aggregate the data of each drifter, we can use the group-by method. We can specify which column to group by. In this case, 'ID' is the natural choice.
###Code
drifter_grouped = glad.groupby('ID')
###Output
_____no_output_____
###Markdown
A dictionary is a collection of items that is unordered, changeable, and indexed. Each item can be of a different type, such as a number, a string, a list, etc.
###Code
drifter_grouped.groups
###Output
_____no_output_____
###Markdown
The keys of each group are accessible just like dictionary keys.
###Code
drifter_grouped.groups.keys()
###Output
_____no_output_____
###Markdown
You can access the items of a dictionary by referring to their key names inside square brackets.
###Code
drifter_grouped.groups['CARTHE_021']
###Output
_____no_output_____
###Markdown
Iterate over the dictionary above to access the coordinates of each drifter.
###Code
drifter_ids = drifter_grouped.groups.keys()
for drifter_id in drifter_ids:
print(drifter_id)
glad_id.head()
plt.figure(figsize = (6, 8))
min_lat, max_lat = 23, 30.5
min_lon, max_lon = -91.5, -85
ax = plt.axes(projection = ccrs.PlateCarree())
ax.set_extent([min_lon, max_lon, min_lat, max_lat], ccrs.PlateCarree())
ax.coastlines(resolution = '50m', color = 'black')
ax.gridlines(crs = ccrs.PlateCarree(), draw_labels = True, color = 'grey')
for drifter_id in drifter_ids:
lon = glad_id.loc[drifter_id, 'Longitude'].values
lat = glad_id.loc[drifter_id, 'Latitude'].values
ax.plot(lon, lat)
###Output
_____no_output_____
###Markdown
Select data in a certain time period. Set the date as the index.
###Code
glad_date = glad_orig.set_index('Date')
glad_date.head()
###Output
_____no_output_____
###Markdown
the "Date" index is Datetime Index
###Code
glad_date.index
###Output
_____no_output_____
###Markdown
pd.date_range will give us a DatetimeIndex.
###Code
date_range = pd.date_range(start = '2012-07-22', end = '2012-08-05')
glad_date.loc[date_range,:]
###Output
_____no_output_____
###Markdown
Use the .strftime() method to convert the DatetimeIndex into a plain Index of date strings.
###Code
date_range = pd.date_range(start = '2012-07-22', end = '2012-08-05').strftime("%Y-%m-%d")
date_range
glad_selected = glad_date.loc[date_range,:]
###Output
_____no_output_____ |
genre_classification_dl.ipynb | ###Markdown
Training an initial model - Inception network
###Code
class Inception(torch.nn.Module):
def __init__(self, dataset, pretrained=True):
super(Inception, self).__init__()
num_classes = 50 if dataset=="ESC" else 10
self.model = models.inception_v3(pretrained=pretrained, aux_logits=False)
self.model.fc = torch.nn.Linear(2048, num_classes)
def forward(self, x):
output = self.model(x)
return output
device = torch.device("cpu")
model = Inception("GTZAN", True).to(device)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 30, gamma=0.1)
train(model, device, train_loader, val_loader, optimizer, loss_fn, scheduler=scheduler, epochs=10, model_name="inception")
with open("models/inception_10epochs_87acc.pkl", 'rb') as f:
final_model = pickle.load(f)
###Output
_____no_output_____
###Markdown
Setting up an intercept hook on the last layer
###Code
activation = {}
def get_activation(name):
def hook(model, input, output):
activation[name] = output.detach()
return hook
final_model.model.avgpool.register_forward_hook(get_activation('avgpool')); # a 2048-element vector
###Output
_____no_output_____
###Markdown
Recording track embeddings
###Code
embeddings = {}
def fill_embeddings(embeddings, data_loader):
for data, target, filepaths in tqdm(data_loader):
_ = final_model(data)
for embedding, filepath in zip(activation['avgpool'], filepaths):
filename = filepath.split('/')[-1][:-4]
embeddings[filename] = embedding.reshape(embedding.shape[0])
fill_embeddings(embeddings, train_loader)
fill_embeddings(embeddings, val_loader)
with open(f"embeddings/audio_embeddings.pkl", 'wb') as f:
pickle.dump(embeddings, f)
def get_embedding(filename):
with open("embeddings/audio_embeddings.pkl", 'rb') as f:
embeddings = pickle.load(f)
return embeddings[filename]
get_embedding('reggae.00032')
###Output
_____no_output_____
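###Markdown
Added illustration: with the embeddings saved, we can compare two tracks directly, for instance with cosine similarity. The second filename below is hypothetical; use any two keys present in the pickle.
###Code
import torch
a = get_embedding('reggae.00032')
b = get_embedding('reggae.00033')  # hypothetical key, assumed to be in the pickle
# Cosine similarity between the two 2048-dimensional track embeddings
print(torch.nn.functional.cosine_similarity(a, b, dim=0).item())
###Output
_____no_output_____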
###Markdown
(Cut material for now) My netwurk.
###Code
class MyNetwurk(torch.nn.Module):
def __init__(self, input_size, num_classes):
super(MyNetwurk, self).__init__()
c, _, _ = input_size
self.convlayers = torch.nn.Sequential(
torch.nn.Conv2d(c, 6, (3, 3)),
torch.nn.BatchNorm2d(6),
torch.nn.ReLU(inplace=True),
torch.nn.MaxPool2d((2, 2), stride=2),
torch.nn.Conv2d(6, 16, (3, 3)),
torch.nn.BatchNorm2d(16),
torch.nn.ReLU(inplace=True),
torch.nn.MaxPool2d((2, 2), stride=2),
torch.nn.Conv2d(16, 64, (3, 3)),
torch.nn.BatchNorm2d(64),
torch.nn.ReLU(inplace=True),
torch.nn.MaxPool2d((2, 2), stride=2),
)
self.fc = torch.nn.Sequential(
torch.nn.Linear(256, 120),
torch.nn.BatchNorm1d(120),
torch.nn.ReLU(inplace=True),
torch.nn.Linear(120, 60),
torch.nn.BatchNorm1d(60),
torch.nn.ReLU(inplace=True),
torch.nn.Linear(60, num_classes),
)
def forward(self, x):
x = self.convlayers(x)
x = x.view(x.shape[0], -1)
x = self.fc(x)
return x
###Output
_____no_output_____ |
sessions/perceptron.ipynb | ###Markdown
PerceptronsA perceptron is a simple supervised learning linear binary classifier. It is a very simple, single layer neural network.
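For reference (an added note), the perceptron learns by nudging its weights whenever a training sample is misclassified; with learning rate $\eta$ the update is roughly
$$w \leftarrow w + \eta\,(y_i - \hat{y}_i)\,x_i, \qquad b \leftarrow b + \eta\,(y_i - \hat{y}_i).$$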
###Code
import numpy as np, pandas as pd, matplotlib.pyplot as plt
from sklearn import linear_model, metrics, model_selection, preprocessing
from utils import plot_decision
###Output
_____no_output_____
###Markdown
Load and prep the data
###Code
# load the iris data
df = pd.read_csv('data/iris.csv')
df['species_label'], _ = pd.factorize(df['species'])
df.head()
# select features
y = df['species_label']
X = df[['petal_length', 'petal_width']]
# split data randomly into 70% training and 30% test
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.3, random_state=0)
# standardize the features
sc = preprocessing.StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
###Output
_____no_output_____
###Markdown
Train the model and make predictions
###Code
# train a perceptron
ppn = linear_model.Perceptron(n_iter=40, eta0=0.1, random_state=0)
ppn.fit(X_train_std, y_train)
# use the trained perceptron to make predictions with the test data
y_pred = ppn.predict(X_test_std)
###Output
_____no_output_____
###Markdown
Evaluate the model's performance
###Code
# how did our model perform?
count_misclassified = (y_test != y_pred).sum()
print('Misclassified samples: {}'.format(count_misclassified))
accuracy = metrics.accuracy_score(y_test, y_pred)
print('Accuracy: {:.2f}'.format(accuracy))
# visualize the model's decision regions to see how it separates the samples
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))
plot_decision(X=X_combined_std, y=y_combined, classifier=ppn)
plt.xlabel('petal length (standardized)')
plt.ylabel('petal width (standardized)')
plt.legend(loc='upper left')
plt.show()
# same thing, but this time identify the points that constituted the test data set
test_idx = range(len(y_train), len(y_combined))
plot_decision(X=X_combined_std, y=y_combined, classifier=ppn, test_idx=test_idx)
plt.xlabel('petal length (standardized)')
plt.ylabel('petal width (standardized)')
plt.legend(loc='upper left')
plt.show()
###Output
_____no_output_____ |
src/BankStatusClassificationAutoML.ipynb | ###Markdown
YouTube - https://www.youtube.com/c/NikBearBrown GitHub - https://github.com/nikbearbrown/Visual_Analytics Kaggle - https://www.kaggle.com/nikbearbrown Klee.ai (Visual AI) - http://klee.ai AutoML with H2O.ai_Lessons from Kaggle – Ensemble ML and Feature Engineering_99.9% of high ranking Kaggle submissions shared two approaches. Stacking and feature engineering. In this notebook, we will use indivdual models and stacked models to predict lift. Stacking is a type of ensemble, creating a ”super-model” by combining many complementary models.We will use generate thousands on individual models, select the best models and combine the best models into a ”super-model” to predict lift._Models and hyperparamter optimization_A model is an algorithm with a given set of hyperparamters. For example, a random forest estimator that uses 10 trees and one that uses 20 trees are two different models. Using a few algorithms and important tuning paramters (hyperparamters) we will try many combination and select rank the models on some metric like AUC, mean residual deviance, RSME as approriate for the analysis. _The machine learning algorithms_We will use the following algorithms as our base:* Deep Learning (Neural Networks) * Generalized Linear Model (GLM) * Extreme Random Forest (XRT) * Distributed Random Forest (DRF) * Gradient Boosting Machine (GBM) * XGBoost _Deep Learning (Neural Networks)_ The are simple Multiclass perceptrons (MLPs) as discussed in the first notebook. _Generalized Linear Model (GLM)_ The generalized linear model (GLM) is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value.In our case, we will assume that the the distribution of errors is normal and that the link function is the identity, which means the will will be performing simple linear regression. Linear regression predicts the response variable $y$ assuming it has a linear relationship with predictor variable(s) $x$ or $x_1, x_2, ,,, x_n$.$$y = \beta_0 + \beta_1 x + \varepsilon .$$_Distributed Random Forest (DRF)_ A Distributed Random Forest (DRF) is a powerful low-bias classification and regression tool that can fit highly non-linear data. To prevent overfitting a DRF generates a forest of classification or regression trees, rather than a single classification or regression tree through a process called bagging. The variance of estimates can be adjusted by the number of trees used. _Extreme Random Forest (XRT)_Extreme random forests are nearly identical to standard random forests except that the splits, both attribute and cut-point, are chosen totally or partially at random. Bias/varianceanalysis has shown that XRTs work by decreasing variance while at the same time increasing bias. Once the randomization level is properly adjusted, the variance almost vanishes while bias only slightly increases with respect to standard trees. _Gradient Boosting Machine (GBM)_ Gradient Boosting Machine (for Regression and Classification) is a forward learning ensemble method. The guiding heuristic is that good predictive results can be obtained through increasingly refined approximations. 
Boosting can create more accurate models than bagging but doesn’t help to avoid overfitting as much as bagging does.Unlike a DRF which uses bagging to prevent overfitting a GBM uses boosting to sequentially refine a regression or classification tree. However as each tree is built in parallel it allows for multi-threading (asynchronous) training large data sets.As with all tree based methods it creates decision trees and is highly interpretable._XGBoost_XGBoost is a supervised learning algorithm that implements a process called boosting to yield accurate models. Boosting refers to the ensemble learning technique of building many models sequentially, with each new model attempting to correct for the deficiencies in the previous model. Both XGBoost and GBM follows the principle of gradient boosting. However, XGBoost has a more regularized model formalization to control overfitting. Boosting does not prevent overfitting the way bagging does, but typically gives better accuracy. XGBoost corrects for the deficiencies of boosting by ensembling regularized trees.Like a GBM, each tree is built in parallel it allows for multi-threading (asynchronous) training large data sets.As with all tree based methods it creates decision trees and is highly interpretable. H2O.ai AutomlH2O’s AutoML can be used for automating the machine learning workflow, which includes automatic training and tuning of many models within a user-specified time-limit. Stacked Ensembles – one based on all previously trained models, another one on the best model of each family – will be automatically trained on collections of individual models to produce highly predictive ensemble models which, in most cases, will be the top performing models in the AutoML Leaderboard.You will need to install H2O.ai Automl for python to run this notebook. ```bashpip install requestspip install tabulatepip install "colorama>=0.3.8"pip install futurepip uninstall h2opip install -f http://h2o-release.s3.amazonaws.com/h2o/latest_stable_Py.html h2o```Note: When installing H2O from pip in OS X El Capitan, users must include the --user flag.```bashpip install -f http://h2o-release.s3.amazonaws.com/h2o/latest_stable_Py.html h2o --user```See Downloading & Installing H2O [http://docs.h2o.ai/h2o/latest-stable/h2o-docs/downloading.html](http://docs.h2o.ai/h2o/latest-stable/h2o-docs/downloading.html)
###Code
# Import libraries
# Use pip install or conda install if missing a library
import h2o
from h2o.automl import H2OAutoML
import random, os, sys
from datetime import datetime
import pandas as pd
import logging
import csv
import optparse
import time
import json
from distutils.util import strtobool
import psutil
import numpy as np
import matplotlib.pyplot as plt
# Set a minimum memory size and a run time in seconds
min_mem_size=6
run_time=300
# Use 50% of available resources
pct_memory=0.5
virtual_memory=psutil.virtual_memory()
min_mem_size=int(round(int(pct_memory*virtual_memory.available)/1073741824,0))
print(min_mem_size)
# 65535 Highest port no
# Start the H2O server on a random port
port_no=random.randint(5555,55555)
# h2o.init(strict_version_check=False,min_mem_size_GB=min_mem_size,port=port_no) # start h2o
try:
h2o.init(strict_version_check=False,min_mem_size_GB=min_mem_size,port=port_no) # start h2o
except:
logging.critical('h2o.init')
h2o.download_all_logs(dirname=logs_path, filename=logfile)
h2o.cluster().shutdown()
sys.exit(2)
###Output
Checking whether there is an H2O instance running at http://localhost:42708 ..... not found.
Attempting to start a local H2O server...
Java Version: java version "1.8.0_311"; Java(TM) SE Runtime Environment (build 1.8.0_311-b11); Java HotSpot(TM) 64-Bit Server VM (build 25.311-b11, mixed mode)
Starting server from /opt/anaconda3/lib/python3.9/site-packages/h2o/backend/bin/h2o.jar
Ice root: /var/folders/5q/w_y2sfjj2bv130sh86slry7r0000gq/T/tmp__1_yumb
JVM stdout: /var/folders/5q/w_y2sfjj2bv130sh86slry7r0000gq/T/tmp__1_yumb/h2o_work_started_from_python.out
JVM stderr: /var/folders/5q/w_y2sfjj2bv130sh86slry7r0000gq/T/tmp__1_yumb/h2o_work_started_from_python.err
Server is running at http://127.0.0.1:42708
Connecting to H2O server at http://127.0.0.1:42708 ... successful.
###Markdown
Import data and Manage Data TypesThis exploration of H2O will use a version of
###Code
# Import the processed data from notebook One
url = "https://raw.githubusercontent.com/sanapsanket/Bank-Loan-Status-Predictive-Analysis/main/KaggleDataset/BankLoanStatusDataset/credit_train.csv"
df = h2o.import_file(path = url)
df.head(1)
df = df.na_omit()
#best.varimp(use_pandas=True)
#df.describe()
#df=df.drop('Loan ID')
#df=df.drop('Customer ID')
#df=df.drop('Tax Liens')
#df=df.drop('Bankruptcies')
#df=df.drop('Number of Credit Problems')
#df=df.drop('Term')
#df=df.drop('Home Ownership')
#df=df.drop('Maximum Open Credit')
#df=df.drop('Purpose')
#df=df.drop('Years in current job')
# Create a 80/20 train/test splie
pct_rows=0.80
df_train, df_test = df.split_frame([pct_rows])
print(df_train.shape)
print(df_test.shape)
df_train.head(1)
###Output
_____no_output_____
###Markdown
Train Models Using H2O's AutoML
###Code
# Set the features and target
X=df.columns
# Set target and predictor variables
target ='Loan Status'
X.remove(target)
df_train[target]=df_train[target].asfactor()
df_test[target]=df_test[target].asfactor()
print(X)
###Output
['Loan ID', 'Customer ID', 'Current Loan Amount', 'Term', 'Credit Score', 'Annual Income', 'Years in current job', 'Home Ownership', 'Purpose', 'Monthly Debt', 'Years of Credit History', 'Months since last delinquent', 'Number of Open Accounts', 'Number of Credit Problems', 'Current Credit Balance', 'Maximum Open Credit', 'Bankruptcies', 'Tax Liens']
###Markdown
Regression. H2O AutoML will automatically perform regression or classification depending on the target data type.
###Code
# Set up AutoML
aml = H2OAutoML(seed=1,max_runtime_secs=run_time,verbosity="info")
aml.train(x=X,y=target,training_frame=df_train)
###Output
AutoML progress: |
15:42:20.811: Project: AutoML_6_20220211_154220
15:42:20.812: 5-fold cross-validation will be used.
15:42:20.816: Setting stopping tolerance adaptively based on the training frame: 0.0035241429027194466
15:42:20.816: Build control seed: 1
15:42:20.818: training frame: Frame key: AutoML_6_20220211_154220_training_py_27_sid_9826 cols: 19 rows: 80518 chunks: 32 size: 5237594 checksum: -4210142017087432970
15:42:20.818: validation frame: NULL
15:42:20.818: leaderboard frame: NULL
15:42:20.818: blending frame: NULL
15:42:20.818: response column: Loan Status
15:42:20.818: fold column: null
15:42:20.818: weights column: null
15:42:20.821: Loading execution steps: [{XGBoost : [def_2 (1g, 10w), def_1 (2g, 10w), def_3 (3g, 10w), grid_1 (4g, 90w), lr_search (6g, 30w)]}, {GLM : [def_1 (1g, 10w)]}, {DRF : [def_1 (2g, 10w), XRT (3g, 10w)]}, {GBM : [def_5 (1g, 10w), def_2 (2g, 10w), def_3 (2g, 10w), def_4 (2g, 10w), def_1 (3g, 10w), grid_1 (4g, 60w), lr_annealing (6g, 10w)]}, {DeepLearning : [def_1 (3g, 10w), grid_1 (4g, 30w), grid_2 (5g, 30w), grid_3 (5g, 30w)]}, {completion : [resume_best_grids (10g, 60w)]}, {StackedEnsemble : [best_of_family_1 (1g, 5w), best_of_family_2 (2g, 5w), best_of_family_3 (3g, 5w), best_of_family_4 (4g, 5w), best_of_family_5 (5g, 5w), all_2 (2g, 10w), all_3 (3g, 10w), all_4 (4g, 10w), all_5 (5g, 10w), monotonic (6g, 10w), best_of_family_xgboost (6g, 10w), best_of_family_gbm (6g, 10w), all_xgboost (7g, 10w), all_gbm (7g, 10w), best_of_family_xglm (8g, 10w), all_xglm (8g, 10w), best_of_family (10g, 10w), best_N (10g, 10w)]}]
15:42:20.828: AutoML job created: 2022.02.11 15:42:20.805
15:42:20.831: AutoML build started: 2022.02.11 15:42:20.830
15:42:20.834: AutoML: starting XGBoost_1_AutoML_6_20220211_154220 model training
15:42:20.850: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
15:42:21.896: XGBoost_1_AutoML_6_20220211_154220 [XGBoost def_2] failed: water.exceptions.H2OModelBuilderIllegalArgumentException: Illegal argument(s) for XGBoost model: XGBoost_1_AutoML_6_20220211_154220_cv_1. Details: ERRR on field: _response_column: Response contains missing values (NAs) - not supported by XGBoost.
15:42:21.906: AutoML: starting GLM_1_AutoML_6_20220211_154220 model training
15:42:21.915: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
███████████████
15:43:29.736: New leader: GLM_1_AutoML_6_20220211_154220, auc: 0.7433923073938888
15:43:29.740: AutoML: starting GBM_1_AutoML_6_20220211_154220 model training
15:43:29.769: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
████████
15:44:07.986: New leader: GBM_1_AutoML_6_20220211_154220, auc: 0.7653204355283812
15:44:07.991: AutoML: starting StackedEnsemble_BestOfFamily_1_AutoML_6_20220211_154220 model training
15:44:07.997: _train param, Dropping unused columns: [Customer ID, Loan ID]
█
15:44:14.84: New leader: StackedEnsemble_BestOfFamily_1_AutoML_6_20220211_154220, auc: 0.7664410867802438
15:44:14.89: AutoML: starting XGBoost_2_AutoML_6_20220211_154220 model training
15:44:14.100: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
15:44:15.135: XGBoost_2_AutoML_6_20220211_154220 [XGBoost def_1] failed: water.exceptions.H2OModelBuilderIllegalArgumentException: Illegal argument(s) for XGBoost model: XGBoost_2_AutoML_6_20220211_154220_cv_1. Details: ERRR on field: _response_column: Response contains missing values (NAs) - not supported by XGBoost.
15:44:15.143: AutoML: starting DRF_1_AutoML_6_20220211_154220 model training
15:44:15.152: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
████████
15:44:51.397: New leader: DRF_1_AutoML_6_20220211_154220, auc: 0.7763699069566031
15:44:51.403: AutoML: starting GBM_2_AutoML_6_20220211_154220 model training
15:44:51.410: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
██████
15:45:19.600: AutoML: starting GBM_3_AutoML_6_20220211_154220 model training
15:45:19.614: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
██████
15:45:48.822: AutoML: starting GBM_4_AutoML_6_20220211_154220 model training
15:45:48.828: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
██████
15:46:20.13: AutoML: starting StackedEnsemble_BestOfFamily_2_AutoML_6_20220211_154220 model training
15:46:20.18: _train param, Dropping unused columns: [Customer ID, Loan ID]
█
15:46:27.67: New leader: StackedEnsemble_BestOfFamily_2_AutoML_6_20220211_154220, auc: 0.779193217843913
15:46:27.71: AutoML: starting StackedEnsemble_AllModels_1_AutoML_6_20220211_154220 model training
15:46:27.74: _train param, Dropping unused columns: [Customer ID, Loan ID]
██
15:46:34.142: New leader: StackedEnsemble_AllModels_1_AutoML_6_20220211_154220, auc: 0.7797212673547402
15:46:34.145: AutoML: starting XGBoost_3_AutoML_6_20220211_154220 model training
15:46:34.154: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
15:46:35.185: XGBoost_3_AutoML_6_20220211_154220 [XGBoost def_3] failed: water.exceptions.H2OModelBuilderIllegalArgumentException: Illegal argument(s) for XGBoost model: XGBoost_3_AutoML_6_20220211_154220_cv_1. Details: ERRR on field: _response_column: Response contains missing values (NAs) - not supported by XGBoost.
15:46:35.188: AutoML: starting XRT_1_AutoML_6_20220211_154220 model training
15:46:35.193: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
███
15:46:47.294: AutoML: starting GBM_5_AutoML_6_20220211_154220 model training
15:46:47.299: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
██
15:46:58.405: AutoML: starting DeepLearning_1_AutoML_6_20220211_154220 model training
15:46:58.410: _train param, Dropping bad and constant columns: [Customer ID, Loan ID]
██
15:47:07.475: AutoML: starting StackedEnsemble_BestOfFamily_3_AutoML_6_20220211_154220 model training
15:47:07.477: _train param, Dropping unused columns: [Customer ID, Loan ID]
█
15:47:14.535: New leader: StackedEnsemble_BestOfFamily_3_AutoML_6_20220211_154220, auc: 0.7801806905064458
15:47:14.539: AutoML: starting StackedEnsemble_AllModels_2_AutoML_6_20220211_154220 model training
15:47:14.543: _train param, Dropping unused columns: [Customer ID, Loan ID]
██| (done) 100%
15:47:22.598: New leader: StackedEnsemble_AllModels_2_AutoML_6_20220211_154220, auc: 0.7806707220739665
15:47:22.612: Actual modeling steps: [{GLM : [def_1 (1g, 10w)]}, {GBM : [def_5 (1g, 10w)]}, {StackedEnsemble : [best_of_family_1 (1g, 5w)]}, {DRF : [def_1 (2g, 10w)]}, {GBM : [def_2 (2g, 10w), def_3 (2g, 10w), def_4 (2g, 10w)]}, {StackedEnsemble : [best_of_family_2 (2g, 5w), all_2 (2g, 10w)]}, {DRF : [XRT (3g, 10w)]}, {GBM : [def_1 (3g, 10w)]}, {DeepLearning : [def_1 (3g, 10w)]}, {StackedEnsemble : [best_of_family_3 (3g, 5w), all_3 (3g, 10w)]}]
15:47:22.613: AutoML build stopped: 2022.02.11 15:47:22.612
15:47:22.613: AutoML build done: built 9 models
15:47:22.614: AutoML duration: 5 min 1.782 sec
Model Details
=============
H2OStackedEnsembleEstimator : Stacked Ensemble
Model Key: StackedEnsemble_AllModels_2_AutoML_6_20220211_154220
No model summary for this model
ModelMetricsBinomialGLM: stackedensemble
** Reported on train data. **
MSE: 0.08727230038534711
RMSE: 0.29541885583920857
LogLoss: 0.2925481911749993
Null degrees of freedom: 9868
Residual degrees of freedom: 9861
Null deviance: 10527.37156155715
Residual deviance: 5774.316197412137
AIC: 5790.316197412137
AUC: 0.9665535992594944
AUCPR: 0.9896455490053936
Gini: 0.9331071985189887
Confusion Matrix (Act/Pred) for max f1 @ threshold = 0.6964519784347294:
###Markdown
Classifying Loan Status
###Code
print(aml.leaderboard)
df=df.drop('Loan ID')
df=df.drop('Customer ID')
df = df.na_omit()
# Create a 80/20 train/test splie
pct_rows=0.80
df_train, df_test = df.split_frame([pct_rows])
# Set the features and target
X=df.columns
# Set target and predictor variables
target ='Loan Status'
X.remove(target)
df_train[target]=df_train[target].asfactor()
df_test[target]=df_test[target].asfactor()
print(X)
# Set up AutoML
aml2 = H2OAutoML(seed=1,max_runtime_secs=run_time, exclude_algos=["StackedEnsemble"],verbosity="info")
aml2.train(x=X,y=target,training_frame=df_train)
print(aml2.leaderboard)
aml.corr
best=aml.get_best_model()
best.accuracy?
best.varimp(use_pandas=True)
best.accuracy()
best.algo
import pandas as pd
k =pd.read_csv("https://raw.githubusercontent.com/sanapsanket/Bank-Loan-Status-Predictive-Analysis/main/KaggleDataset/BankLoanStatusDataset/credit_train.csv")
k.corr()
###Output
_____no_output_____
###Markdown
RMSE comparison and understanding the leaderboard. The best models after running for a little under four minutes come in around 0.005, about half of the 0.010 RMSE that we got from our simple MLP in notebook one and a quarter of the 0.017 RMSE that we got with a simple MLP using the same independent variables. When we run for a short time, under 10 minutes, our leaderboard will be biased towards tree-based methods, as the deep learners take much more time to converge. It is rare to see deep learners in the top 500 models when we run for less than 5 minutes. We should still plot the results, but before we do that let's discuss a big advantage of these models: model interpretability.
###Code
model_index=0
glm_index=0
glm_model=''
aml_leaderboard_df=aml.leaderboard.as_data_frame()
models_dict={}
for m in aml_leaderboard_df['model_id']:
models_dict[m]=model_index
if 'StackedEnsemble' not in m:
break
model_index=model_index+1
for m in aml_leaderboard_df['model_id']:
if 'GLM' in m:
models_dict[m]=glm_index
break
glm_index=glm_index+1
models_dict
###Output
_____no_output_____
###Markdown
Examine the Best Model
###Code
print(0)
StackedEnsemble = h2o.get_model(aml.leaderboard[0,'model_id'])
DRF = h2o.get_model(aml.leaderboard[6,'model_id'])
DRF.algo
perf = aml.leader.model_performance(df_test)
perf.auc()
StackedEnsemble.algo
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
import matplotlib.cbook
warnings.filterwarnings("ignore", category = matplotlib.cbook.mplDeprecation)
###Output
_____no_output_____
###Markdown
Variable importance plot. Variable importance plots in tree-based methods provide a list of the most significant variables in descending order by a measure of the information in each variable. Remember that a tree calculates the information content of each variable. A variable importance plot is just a bar chart of each variable's information content in decreasing order. It can show actual information estimates or standardized plots like the one below. In a standardized plot the most important variable is always given a value of 1.0. The other variables' scores represent their percentage of information relative to the most important variable. Notice that some variables have almost no information content. Knowing this allows for feature selection by removing unimportant variables. This makes a model more efficient to run and helps prevent overfitting, as the unimportant variables can fit noise and, as we saw in notebook one, make strange predictions.
###Code
if DRF.algo in ['gbm','drf','xrt','xgboost']:
DRF.varimp_plot()
if glm_index != 0:
print(glm_index)
glm_model=h2o.get_model(aml.leaderboard[glm_index,'model_id'])
print(glm_model.algo)
glm_model.std_coef_plot()
print("RMSE:")
print("StackedEnsemble 0: ",StackedEnsemble.rmse(train = True))
print("DRF: ",DRF.rmse(train = True))
def model_performance_stats(perf):
d={}
try:
d['mse']=perf.mse()
except:
pass
try:
d['rmse']=perf.rmse()
except:
pass
try:
d['null_degrees_of_freedom']=perf.null_degrees_of_freedom()
except:
pass
try:
d['residual_degrees_of_freedom']=perf.residual_degrees_of_freedom()
except:
pass
try:
d['residual_deviance']=perf.residual_deviance()
except:
pass
try:
d['null_deviance']=perf.null_deviance()
except:
pass
try:
d['aic']=perf.aic()
except:
pass
try:
d['logloss']=perf.logloss()
except:
pass
try:
d['auc']=perf.auc()
except:
pass
try:
d['gini']=perf.gini()
except:
pass
return d
mod_perf=StackedEnsemble.model_performance(df_test)
stats_test={}
stats_test=model_performance_stats(mod_perf)
stats_test
predictions = DRF.predict(df_test)
y_pred=h2o.as_list(predictions)
y_pred[:5]
###Output
_____no_output_____
###Markdown
Partial Dependence Plots. Partial dependence plots (PDPs) show the dependence between the target response and a set of features, marginalizing over the values of all other features. Intuitively, we can interpret the partial dependence as the expected target response as a function of the feature. The partial dependence plot gives a graphical depiction of the marginal effect of a variable on the response. The effect of a variable is measured as the change in the mean response. This helps one answer the question of how changing a variable's values would change the outcome. Partial dependence plots show only the impact of a single variable when the others are kept constant, but in many cases there is interaction between variables. Nevertheless, they are very useful in estimating whether, for example, doubling some predictor variable will double the response or whether that predictor variable is already saturated.
###Code
print(X)
best.partial_plot(df, cols=['Credit Score'])  # 'best' is the AutoML leader retrieved above
best.partial_plot(df, cols=['Annual Income', 'Monthly Debt', 'Years of Credit History', 'Current Credit Balance'])  # partial dependence for a few predictors in this dataset
###Output
PartialDependencePlot progress: |████████████████████████████████████████████████| (done) 100%
PartialDependence: Partial Dependence Plot of model XGBoost_grid_1_AutoML_2_20220105_04416_model_38 on column 'total_night_minutes'.
###Markdown
Shutdown H2O Cluster
###Code
h2o.cluster().shutdown()
###Output
H2O session _sid_840b closed.
###Markdown
Appendix - Generalized Linear Model (GLM) The generalized linear model (GLM) is a flexible generalization of ordinary linear regression that allows for response variables that have error distribution models other than a normal distribution. The GLM generalizes linear regression by allowing the linear model to be related to the response variable via a link function and by allowing the magnitude of the variance of each measurement to be a function of its predicted value.Generalized Linear Models (GLM) estimate regression models for various probability distributions. In addition to the Gaussian (i.e. normal) distribution, these include Poisson, binomial, and gamma distributions. This can be used either for prediction or classification.Overfitting is prevented using L1, L2 and elastic-net regularization.While not easily parallelable, maximum likelihood estimates of coefficients is very efficient.GLMs are highly interpretable as the coefficients (i.e. the slopes) as they directly related the degree the dependent variable changes in response to a change in each independent variable.For these data and the regression case, we can think of this as linear regression. to In linear regression, the use of the least-squares estimator is justified by the Gauss--Markov theorem, which does not assume that the distribution is normal. From the perspective of generalized linear models, however, it is useful to suppose that the distribution function is the normal distribution with constant variance and the link function is the identity, which isthe canonical link if the variance is known.In our case, we will assume that the the distribution of errors is normal and that the link function is the identity, which means the will will be performing simple linear regression. Linear regression predicts the response variable $y$ assuming it has a linear relationship with predictor variable(s) $x$ or $x_1, x_2, ,,, x_n$.$$y = \beta_0 + \beta_1 x + \varepsilon .$$*Simple* regression use only one predictor variable $x$. *Mulitple* regression uses a set of predictor variables $x_1, x_2, ,,, x_n$.The *response variable* $y$ is also called the regressand, forecast, dependent or explained variable. The *predictor variable* $x$ is also called the regressor, independent or explanatory variable.The parameters $\beta_0$ and $\beta_1$ determine the intercept and the slope of the line respectively. The intercept $\beta_0$ represents the predicted value of $y$ when $x=0$. The slope $\beta_1$ represents the predicted increase in $Y$ resulting from a one unit increase in $x$.Note that the regression equation is just our famliar equation for a line with an error term.The equation for a line: $$ Y = bX + a $$$$y = \beta_0 + \beta_1 x $$The equation for a line with an error term: $$ Y = bX + a + \varepsilon $$$$y = \beta_0 + \beta_1 x + \varepsilon .$$- $b$ = $\beta_1$ = slope- $a$ = $\beta_0$ = $Y$ intercept- $\varepsilon$ = error termWe can think of each observation $y_i$ consisting of the systematic or explained part of the model, $\beta_0+\beta_1x_i$, and the random *error*, $\varepsilon_i$._Zero Slope_Note that when $\beta_1 = 0$ then response does not change as the predictor changes.For multiple regression $x$ is a $X$ to produce a system of equations: $$ Y = \beta_0 + \beta_1 X + \varepsilon $$_The error $\varepsilon_i$_The error term is a catch-all for anything that may affect $y_i$ other than $x_i$. 
We assume that these errors:* have mean zero; otherwise the forecasts will be systematically biased.* statistical independence of the errors (in particular, no correlation between consecutive errors in the case of time series data).* homoscedasticity (constant variance) of the errors.* normality of the error distribution.If any of these assumptions is violated then the robustness of the model to be taken with a grain of salt._Least squares estimation_In a linear model, the values of $\beta_0$ and $\beta_1$. These need to be estimated from the data. We call this *fitting a model*.The least squares method iis the most common way of estimating $\beta_0$ and $\beta_1$ by minimizing the sum of the squared errors. The values of $\beta_0$ and $\beta_1$ are chosen so that that minimize$$\sum_{i=1}^N \varepsilon_i^2 = \sum_{i=1}^N (y_i - \beta_0 - \beta_1x_i)^2. $$Using mathematical calculus, it can be shown that the resulting **least squares estimators** are$$\hat{\beta}_1=\frac{ \sum_{i=1}^{N}(y_i-\bar{y})(x_i-\bar{x})}{\sum_{i=1}^{N}(x_i-\bar{x})^2} $$ and$$\hat{\beta}_0=\bar{y}-\hat{\beta}_1\bar{x}, $$where $\bar{x}$ is the average of the $x$ observations and $\bar{y}$ is the average of the $y$ observations. The estimated line is known as the *regression line*.To solve least squares with gradient descent or stochastic gradient descent (SGD) or losed Form (set derivatives equal to zero and solve for parameters)._Fitted values and residuals_The response values of $y$ obtained from the observed $x$ values arecalled *fitted values*: $\hat{y}_i=\hat{\beta}_0+\hat{\beta}_1x_i$, for$i=1,\dots,N$. Each $\hat{y}_i$ is the point on the regressionline corresponding to $x_i$.The difference between the observed $y$ values and the corresponding fitted values are the *residuals*:$$e_i = y_i - \hat{y}_i = y_i -\hat{\beta}_0-\hat{\beta}_1x_i. $$The residuals have some useful properties including the following two:$$\sum_{i=1}^{N}{e_i}=0 \quad\text{and}\quad \sum_{i=1}^{N}{x_ie_i}=0. $$Residuals are the errors that we cannot predict.Residuals are highly useful for studying whether a given regression model is an appropriate statistical technique for analyzing the relationship. Appendix - _Decision-tree based methods (DRF, XRT, GBM, and XGBoost)_**What is a decision-tree?**What is a tree? In mathematics, and more specifically in graph theory, a [tree](https://en.wikipedia.org/wiki/Tree_(graph_theory)) is a directed or an undirected graph in which any two vertices are connected by exactly one path. In other words, any acyclic connected graph is a tree.A tree is an undirected graph G that satisfies any of the following equivalent conditions: * G is connected and has no cycles. * G is acyclic, and a simple cycle is formed if any edge is added to G. * G is connected, but is not connected if any single edge is removed from G. A rooted tree is a tree in which one vertex/node has been designated the root. The edges of a rooted tree can be assigned a natural orientation, either away from or towards the root, in which case the structure becomes a directed rooted tree. A vertex/node that does not split is called Leaf or Terminal node. A sub section of entire tree is called branch or sub-tree. A vertex/node, which is divided into sub-nodes is called parent node of sub-nodes where as sub-nodes are the child of parent node. 
A [decision tree](https://en.wikipedia.org/wiki/Decision_tree) is a [supervised learning](https://en.wikipedia.org/wiki/Supervised_learning) algorithm that uses a tree-like graph or model of decisions and their outcomes. The decision tree can be linearized into decision rules, where the outcome is the contents of the leaf node, and the conditions along the path form a conjunction in the if clause. In general, the rules have the form:$if \quad condition1 \quad and \quad condition2 \quad and \quad condition3 \quad then \quad outcome$Each node in the tree is a decisions/tests. Each path from the tree root to a leaf corresponds to a conjunction of attribute decisions/tests. The tree itself corresponds to a disjunction of these conjunctions.**The 20 Questions of machine learning** In the traditional [20 Questions](https://en.wikipedia.org/wiki/Twenty_Questions) game, one player is chosen to be the answerer. That person chooses a subject (object) but does not reveal this to the others. All other players are questioners. They each take turns asking a question which can be answered with a simple "Yes" or "No." The questioners try to guess the answerers subject (object).The Two Rules Rule 1: Questioners ask Yes-or-No questions Rule 2: Answerer responds with a Yes or a No Traditionally,first question is something like the following: * "Is it animal?" * "Is it vegetable?" * "Is it mineral?" Suppose the answer is "Justin Bieber?"Which would be a better first question?"Is it Taylor Swift?" or "Is it animal?" **Estimating the information in a data split?**Like 20 questions we want to split data in such a way as to maximize the information generated from the split.To calculate entropy, we can calculate the information difference, $-p_1 \log p_1 - p_2 \log p_2$. Generalizing this to n events, we get:$$entropy(p_1, p_2, ... p_n) = -p_1 \log p_1 - p_2 \log p_2 ... - p_n \log p_n $$which is just the Shannon entropy$$H_1 (X) = - \sum_{i=1}^n p_i \log p_i. $$For example, if entropy = $-1.0 \log (1.0) - 0.0 \log (0.0) = 0$ then this provides no information. If entropy = $-0.5 \log (0.5) - 0.5 \log (0.5) = 1.0$ then this provides one “bit” of information. Note that when $P(X)$ is 0.5 one is most uncertain and the Shannon entropy is highest (i.e. 1). When $P(X)$ is either 0.0 or 1.0 one is most certain and the Shannon entropy is lowest (i.e. 0)_Shannon entropy_ The notion of using entropy as a measure of change in system state and dynamics comes both from [statistical physics](https://en.wikipedia.org/wiki/Entropy) and from [information theory](https://en.wikipedia.org/wiki/Entropy_(information_theory)). In statistical physics, entropy is a measure of disorder and uncertainty in a random variable; the higher the entropy, the greater the disorder. In the statistical physics context, the term usually refers to [Gibbs entropy](https://en.wikipedia.org/wiki/Entropy_(statistical_thermodynamics)), which measures the macroscopic state of the system as defined by a distribution of atoms and molecules in a thermodynamic system. Gibbs entropy is a measure of the disorder in the arrangements of its particles. As the position of a particle becomes less predictable, the entropy increases. 
For a classical system (i.e., a collection of classical particles) with a discrete set of microstates, if $E_i$ is the energy of microstate $i$, and $p_i$ is the probability that it occurs during the system's fluctuations, then the entropy of the system is$$S = -k_\text{B}\,\sum_i p_i \ln \,p_i$$The quantity $k_\text{B}$ is a physical constant known as [Boltzmann's constant](https://en.wikipedia.org/wiki/Boltzmann_constant), which, like the entropy, has units of heat capacity. The logarithm is dimensionless.In information theory, entropy is also a measure of the uncertainty in a random variable. In this context, however, the term usually refers to the [Shannon entropy](https://en.wikipedia.org/wiki/Entropy_(information_theory)), which quantifies the expected value of the information contained in a message (or the expected value of the information of the probability distribution). The concept was introduced by [Claude E. Shannon](https://en.wikipedia.org/wiki/Claude_Shannon) in his 1948 paper "A Mathematical Theory of Communication." Shannon entropy establishes the limits to possible data compression and channel capacity. That is, the entropy gives a lower bound for the efficiency of an encoding scheme (in other words, a lower bound on the possible compression of a data stream). Typically this is expressed in the number of ‘bits’ or ‘nats’ that are required to encode a given message. Given the probability of each of n events, the information required to predict an event is the distribution’s entropy. Low entropy means the system is very ordered, that is, very predictable. High entropy means the system is mixed, that is, very unpredictable; a lot of information is needed for prediction. The Shannon entropy can explicitly be written as$$E(X) = \sum_{i} {\mathrm{P}(x_i)\,\mathrm{I}(x_i)} = -\sum_{i} {\mathrm{P}(x_i) \log_b \mathrm{P}(x_i)},$$where b is the base of the logarithm used. Common values of b are 2, Euler's number $e$, and 10, and the unit of entropy is shannon for b = 2, nat for b = e, and hartley for b = 10.When b = 2, the units of entropy are also commonly referred to as bits.The Shannon entropy is by far the most common information-theoretic measure there are others. Other information-theoretic measures include: plog,Rényi entropy, Hartley entropy, collision entropy, min-entropy, Kullback-Leibler divergence and the information dimension.The Shannon entropy is the Rényi entropy with an alpha of one (see appendix). The Shannon entropy is a simple estimate of the expected value of the information contained in a message. It assumes independence and identically distributed random variables, which is a simplification when applied to word counts. In this sense it is analogous to naïve Bayes, in that it is very commonly used and thought to work well in spite of violating some assumptions upon which it is based.The limiting value of $H_\alpha as \alpha \rightarrow 1$ is the Shannon entropy:$$H_1(X) = - \sum_{i=1}^n p_i \log p_i. $$**Classification vs Regression Trees** Types of decision tree is based on the type of target variable we have. It can be of two types:Classification Tree (Categorical Response Variable Decision Tree): Decision Tree which separates the dataset into classes belonging to the categorical target variable. Usually the response variable has two classes: Yes or No (1 or 0). Regression trees (Continuous Response Variable Decision Tree): If a decision Tree has continuous target variable it is applicable for prediction type of problems as opposed to classification.
###Code
def shannon_entropy(p):
    # binary Shannon entropy (in bits) of an event with probability p
    return (-p * np.log2(p) - (1 - p) * np.log2(1 - p))
base = 0.0000000001  # small offset to avoid evaluating log(0) at p = 0 and p = 1
x = np.arange(base, 1.0 - base, 0.01)
plt.figure(1)
plt.plot(x, shannon_entropy(x), 'go', x, shannon_entropy(x), 'k')
plt.ylabel('Shannon entropy(X)')
plt.xlabel('X')
plt.show()
###Output
_____no_output_____ |
other/analysisDataScientist/countingFeaturesInSatelliteImagesUsingScikitImage.ipynb | ###Markdown
The example below uses scikit-image library to detect circular features in farms using center pivot irrigation in Saudi Arabia. It then counts and reports the number of farms. This is one of the ways in which libraries from the scientific Python ecosystem can be integrated with the ArcGIS platform.It uses the Multispectral Landsat imagery available at ArcGIS Online.Note: to run this sample, you need a few extra libraries in your conda environment. If you don't have the libraries, install them by running the following commands from cmd.exe or your shell- conda install scipy- conda install matplotlib- conda install scikit-image
###Code
!pip install arcgis
from arcgis.gis import GIS
agol = GIS()
l8 = agol.content.search('"Multispectral Landsat"', 'Imagery Layer')[0]
l8
l8lyr = l8.layers[0]
l8lyr.extent = {'spatialReference': {'latestWkid': 3857, 'wkid': 102100},
'type': 'extent',
'xmax': 4296559.143733407,
'xmin': 4219969.241391764,
'ymax': 3522726.823081019,
'ymin': 3492152.0117669892}
l8lyr
###Output
_____no_output_____
###Markdown
We can preprocess the imagery using raster functions. The code below uses the ndvi raster function to identify areas that have healthy vegetation. This preprocessing step makes the scikit-image blob detection algorithm work better.
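For reference, NDVI (the normalized difference vegetation index) is computed from the red and near-infrared bands as $NDVI = \frac{NIR - Red}{NIR + Red}$. Healthy vegetation reflects strongly in the near infrared, pushing the index toward 1, which is what makes the actively irrigated circular fields stand out for blob detection.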
###Code
from arcgis.raster.functions import ndvi, stretch
stretch(ndvi(l8lyr), stretch_type='PercentClip', min_percent=30, max_percent=70, dra=True)
img = stretch(ndvi(l8lyr), stretch_type='PercentClip', min_percent=30, max_percent=70, dra=True).export_image(bbox=l8lyr.extent, bbox_sr=102100, size=[1200, 450],
export_format='jpeg', save_folder='.', save_file='centerpivotfarms.jpg', f='image')
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
img = mpimg.imread('centerpivotfarms.jpg')
# what does it look like?
plt.imshow(img)
plt.show()
from skimage import feature, color
import matplotlib.pyplot as plt
bw = img.mean(axis=2)
fig = plt.figure(figsize = (15,15))
ax = fig.add_subplot(1,1,1)
blobs_dog = [(x[0],x[1],x[2]) for x in feature.blob_dog(-bw,
min_sigma=4,
max_sigma=8,
threshold=0.1,
overlap=0.6)]
#remove duplicates
blobs_dog = set(blobs_dog)
img_blobs = color.gray2rgb(img)
for blob in blobs_dog:
y, x, r = blob
c = plt.Circle((x, y), r+1, color='red', linewidth=2, fill=False)
ax.add_patch(c)
plt.imshow(img_blobs)
plt.title('Center Pivot Farms')
plt.show()
print('Number of center pivot farms detected: ' + str(len(blobs_dog)))
###Output
/usr/local/lib/python3.6/dist-packages/skimage/feature/blob.py:125: RuntimeWarning: invalid value encountered in double_scalars
r1 = blob1[-1] / blob2[-1]
/usr/local/lib/python3.6/dist-packages/skimage/feature/blob.py:126: RuntimeWarning: divide by zero encountered in true_divide
pos1 = blob1[:ndim] / (max_sigma * root_ndim)
/usr/local/lib/python3.6/dist-packages/skimage/feature/blob.py:127: RuntimeWarning: divide by zero encountered in true_divide
pos2 = blob2[:ndim] / (max_sigma * root_ndim)
/usr/local/lib/python3.6/dist-packages/skimage/feature/blob.py:129: RuntimeWarning: invalid value encountered in subtract
d = np.sqrt(np.sum((pos2 - pos1)**2))
|
torch/ReLU Layers.ipynb | ###Markdown
ReLU LayersWe can write a ReLU layer $z = \max(Wx+b, 0)$ as theconvex optimization problem\begin{equation}\begin{array}{ll}\mbox{minimize} & \|z-\tilde Wx - b\|_2^2 \\[.2cm]\mbox{subject to} & z \geq 0, \\& \tilde W = W,\end{array}\label{eq:prob}\end{equation}with variables $z$ and $\tilde W$,and parameters $W$, $b$, and $x$.(Note that we have added an extra variable $\tilde W$ sothat the problem is DPP.)We can embed this problem into a PyTorch `Module` and use itas a layer in a sequential neural network.We note that this example is purely illustrative;one can implement a ReLU layer much more efficientlyby directly performing the matrix multiplication, vector addition,and then taking the positive part.
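For comparison, here is a minimal sketch of the direct computation mentioned above, which skips the optimization problem entirely (this is just an illustrative snippet, not part of the layer defined below):

```python
import torch

# direct ReLU layer: linear map, add the bias, then take the positive part
x = torch.randn(20)
W = torch.randn(20, 20)
b = torch.randn(20)
z_direct = torch.clamp(W @ x + b, min=0)  # equivalent to torch.relu(W @ x + b)
```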
###Code
from cvxpylayers.torch import CvxpyLayer
import torch
import cvxpy as cp
class ReluLayer(torch.nn.Module):
def __init__(self, D_in, D_out):
super(ReluLayer, self).__init__()
self.W = torch.nn.Parameter(1e-3*torch.randn(D_out, D_in))
self.b = torch.nn.Parameter(1e-3*torch.randn(D_out))
z = cp.Variable(D_out)
Wtilde = cp.Variable((D_out, D_in))
W = cp.Parameter((D_out, D_in))
b = cp.Parameter(D_out)
x = cp.Parameter(D_in)
prob = cp.Problem(cp.Minimize(cp.sum_squares(z-Wtilde@x-b)), [z >= 0, Wtilde==W])
self.layer = CvxpyLayer(prob, [W, b, x], [z])
def forward(self, x):
# when x is batched, repeat W and b
if x.ndim == 2:
batch_size = x.shape[0]
return self.layer(self.W.repeat(batch_size, 1, 1), self.b.repeat(batch_size, 1), x)[0]
else:
return self.layer(self.W, self.b, x)[0]
###Output
_____no_output_____
###Markdown
We generate synthetic data and create a network of two `ReluLayer`s followed by a linear layer.
###Code
torch.manual_seed(0)
net = torch.nn.Sequential(
ReluLayer(20, 20),
ReluLayer(20, 20),
torch.nn.Linear(20, 1)
)
X = torch.randn(300, 20)
Y = torch.randn(300, 1)
###Output
_____no_output_____
###Markdown
Now we can optimize the parameters inside the network using, for example, the ADAM optimizer.The code below solves 15000 convex optimization problems and calls backward 15000 times.
###Code
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for _ in range(25):
opt.zero_grad()
l = torch.nn.MSELoss()(net(X), Y)
print (l.item())
l.backward()
opt.step()
###Output
1.0796713829040527
1.0764707326889038
1.0727819204330444
1.067252516746521
1.0606187582015991
1.051621913909912
1.0402582883834839
1.0264172554016113
1.0121591091156006
0.9986547231674194
0.9878703951835632
0.9796753525733948
0.9698525667190552
0.9556602239608765
0.939254105091095
0.9228951930999756
0.906936764717102
0.8898395299911499
0.8709890246391296
0.8507254123687744
0.8293333053588867
0.8077667951583862
0.7869061231613159
0.7656839489936829
0.742659330368042
|
ML_com_classificacao_parte_3.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
uri = "https://gist.githubusercontent.com/guilhermesilveira/1b7d5475863c15f484ac495bd70975cf/raw/16aff7a0aee67e7c100a2a48b676a2d2d142f646/projects.csv"
dados = pd.read_csv(uri)
dados.head()
a_renomear = {
'expected_hours' : 'horas_esperadas',
'price' : 'preco',
'unfinished' : 'nao_finalizado'
}
dados = dados.rename(columns = a_renomear)
dados.head()
trocar = {0:1, 1:0}
dados['finalizado'] = dados.nao_finalizado.map(trocar)
dados.head()
import seaborn as sns
sns.scatterplot(x='horas_esperadas', y='preco', data=dados)
sns.scatterplot(x='horas_esperadas', y='preco', hue='finalizado', data=dados)
sns.relplot(x='horas_esperadas', y='preco', col='finalizado', hue='finalizado', data=dados)
x = dados[['horas_esperadas', 'preco']]
y = dados['finalizado']
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score
SEED = 20
treino_x, teste_x, treino_y, teste_y = train_test_split(x, y,
random_state = SEED, test_size = 0.25,
stratify = y)
print("Treinaremos com %d elementos e testaremos com %d elementos" % (len(treino_x), len(teste_x)))
modelo = LinearSVC()
modelo.fit(treino_x, treino_y)
previsoes = modelo.predict(teste_x)
acuracia = accuracy_score(teste_y, previsoes) * 100
print("A acurácia foi %.2f%%" % acuracia)
sns.scatterplot(x='horas_esperadas', y='preco', hue=teste_y, data=teste_x)
x_min = teste_x.horas_esperadas.min()
x_max = teste_x.horas_esperadas.max()
y_min = teste_x.preco.min()
y_max = teste_x.preco.max()
pixels = 100
eixo_x = np.arange(x_min, x_max, (x_max-x_min)/pixels)
eixo_y = np.arange(y_min, y_max, (y_max-y_min)/pixels)
xx, yy = np.meshgrid(eixo_x, eixo_y)
pontos = np.c_[xx.ravel(), yy.ravel()]
z = modelo.predict(pontos)
z = z.reshape(xx.shape)
z
import matplotlib.pyplot as plt
plt.contourf(xx, yy, z, alpha=0.3)
plt.scatter(teste_x.horas_esperadas, teste_x.preco, c=teste_y, s=1)
###Output
_____no_output_____
###Markdown
Using SVC
###Code
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
SEED = 5
np.random.seed(SEED)
treino_x, teste_x, treino_y, teste_y = train_test_split(x, y, test_size = 0.25,
stratify = y)
print("Treinaremos com %d elementos e testaremos com %d elementos" % (len(treino_x), len(teste_x)))
modelo = SVC()
modelo.fit(treino_x, treino_y)
previsoes = modelo.predict(teste_x)
acuracia = accuracy_score(teste_y, previsoes) * 100
print("A acurácia foi %.2f%%" % acuracia)
x_min = teste_x.horas_esperadas.min()
x_max = teste_x.horas_esperadas.max()
y_min = teste_x.preco.min()
y_max = teste_x.preco.max()
pixels = 100
eixo_x = np.arange(x_min, x_max, (x_max - x_min) / pixels)
eixo_y = np.arange(y_min, y_max, (y_max - y_min) / pixels)
xx, yy = np.meshgrid(eixo_x, eixo_y)
pontos = np.c_[xx.ravel(), yy.ravel()]
Z = modelo.predict(pontos)
Z = Z.reshape(xx.shape)
import matplotlib.pyplot as plt
plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(teste_x.horas_esperadas, teste_x.preco, c=teste_y, s=1)
# DECISION BOUNDARY
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
SEED = 5
np.random.seed(SEED)
raw_treino_x, raw_teste_x, treino_y, teste_y = train_test_split(x, y, test_size = 0.25,
stratify = y)
print("Treinaremos com %d elementos e testaremos com %d elementos" % (len(treino_x), len(teste_x)))
scaler = StandardScaler()
scaler.fit(raw_treino_x)
treino_x = scaler.transform(raw_treino_x)
teste_x = scaler.transform(raw_teste_x)
modelo = SVC()
modelo.fit(treino_x, treino_y)
previsoes = modelo.predict(teste_x)
acuracia = accuracy_score(teste_y, previsoes) * 100
print("A acurácia foi %.2f%%" % acuracia)
data_x = teste_x[:,0]
data_y = teste_x[:,1]
x_min = data_x.min()
x_max = data_x.max()
y_min = data_y.min()
y_max = data_y.max()
pixels = 100
eixo_x = np.arange(x_min, x_max, (x_max - x_min) / pixels)
eixo_y = np.arange(y_min, y_max, (y_max - y_min) / pixels)
xx, yy = np.meshgrid(eixo_x, eixo_y)
pontos = np.c_[xx.ravel(), yy.ravel()]
Z = modelo.predict(pontos)
Z = Z.reshape(xx.shape)
import matplotlib.pyplot as plt
plt.contourf(xx, yy, Z, alpha=0.3)
plt.scatter(data_x, data_y, c=teste_y, s=1)
# DECISION BOUNDARY
###Output
_____no_output_____ |
notebooks/RecSys-Collaborative-Filtering-Movies.ipynb | ###Markdown
Table of ContentsAcquiring the DataPreprocessingCollaborative FilteringAdd movieId to input userCollect the users who have seen the same moviesSimilarity of users to input userThe top x similar users to input userRating of all movies by selected usersAdvantages and Disadvantages of Collaborative Filtering COLLABORATIVE FILTERING Recommendation systems are a collection of algorithms used to recommend items to users based on information taken from the user. These systems have become ubiquitous and can commonly be seen in online stores, movie databases and job finders. In this notebook, we will explore recommendation systems based on Collaborative Filtering and implement a simple version of one using Python and the Pandas library. Acquiring the Data To acquire and extract the data, simply run the following Bash scripts: Dataset acquired from [GroupLens](http://grouplens.org/datasets/movielens/). Let's download the dataset. To download the data, we will use **`!wget`** to download it from IBM Object Storage. __Did you know?__ When it comes to Machine Learning, you will likely be working with large datasets. As a business, where can you host your data? IBM is offering a unique opportunity for businesses, with 10 Tb of IBM Cloud Object Storage: [Sign up now for free](http://cocl.us/ML0101EN-IBM-Offer-CC)

```bash
!wget -O moviedataset.zip https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/ML0101ENv3/labs/moviedataset.zip
!unzip -o -j ./data/moviedataset.zip ./data/
```

Now you're ready to start working with the data! Preprocessing First, let's get all of the imports out of the way:
###Code
import pandas as pd
from math import sqrt
import numpy as np
import matplotlib.pyplot as plt
pd.set_option("precision", 3)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Now let's read each file into their Dataframes:
###Code
PATH = './Data/moviedataset/'
# Storing the movie information into a pandas dataframe
movies_df = pd.read_csv(PATH+'movies.csv')
# Storing the user information into a pandas dataframe
ratings_df = pd.read_csv(PATH+'ratings.csv')
###Output
_____no_output_____
###Markdown
Let's also take a peek at how each of them are organized:
###Code
movies_df.head(10)
###Output
_____no_output_____
###Markdown
So each movie has a unique ID, a title with its release year along with it (Which may contain unicode characters) and several different genres in the same field. Let's remove the year from the title column and place it into its own one by using the handy [extract](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.extract.htmlpandas.Series.str.extract) function that Pandas has. Let's remove the year from the __title__ column by using pandas' replace function and store in a new __year__ column.
###Code
# Using regular expressions to find a year stored between parentheses
# We specify the parantheses so we don't conflict with movies that have years in their titles
movies_df['year'] = movies_df.title.str.extract('(\(\d\d\d\d\))', expand=False)
# Removing the parentheses
movies_df['year'] = movies_df.year.str.extract('(\d\d\d\d)', expand=False)
# Removing the years from the 'title' column
movies_df['title'] = movies_df.title.str.replace('(\(\d\d\d\d\))', '')
# Applying the strip function to get rid of any ending whitespace characters that may have appeared
movies_df['title'] = movies_df['title'].apply(lambda x: x.strip())
###Output
_____no_output_____
###Markdown
Let's look at the result!
###Code
movies_df.head()
###Output
_____no_output_____
###Markdown
With that, let's also drop the genres column since we won't need it for this particular recommendation system.
###Code
# Dropping the genres column
movies_df = movies_df.drop('genres', 1)
###Output
_____no_output_____
###Markdown
Here's the final movies dataframe:
###Code
movies_df.head()
###Output
_____no_output_____
###Markdown
Next, let's look at the ratings dataframe.
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Every row in the ratings dataframe has a user id associated with at least one movie, a rating and a timestamp showing when they reviewed it. We won't be needing the timestamp column, so let's drop it to save on memory.
###Code
# Drop removes a specified row or column from a dataframe
ratings_df = ratings_df.drop('timestamp', 1)
###Output
_____no_output_____
###Markdown
Here's how the final ratings Dataframe looks like:
###Code
ratings_df.head()
###Output
_____no_output_____
###Markdown
Collaborative Filtering The technique we're going to take a look at is called __Collaborative Filtering__, which is also known as __User-User Filtering__. As hinted by its alternate name, this technique uses other users to recommend items to the input user. It attempts to find users that have similar preferences and opinions as the input and then recommends items that they have liked to the input. There are several methods of finding similar users (even some making use of machine learning), and the one we will be using here is going to be based on the __Pearson Correlation Function__. The process for creating a user-based recommendation system is as follows:- Select a user and the movies the user has watched- Based on their ratings of movies, find the top X neighbours- Get the watched movie record of the user for each neighbour- Calculate a similarity score using some formula- Recommend the items with the highest score. Let's begin by creating an input user to recommend movies to. Notice: To add more movies, simply increase the number of elements in the userInput. Feel free to add more in! Just be sure to write it with capital letters and, if a movie starts with a "The", like "The Matrix", then write it in like this: 'Matrix, The'.
###Code
userInput = [
{'title':'Breakfast Club, The', 'rating':5},
{'title':'Toy Story', 'rating':3.5},
{'title':'Jumanji', 'rating':2},
{'title':"Pulp Fiction", 'rating':5},
{'title':'Akira', 'rating':4.5}
]
inputMovies = pd.DataFrame(userInput)
inputMovies
###Output
_____no_output_____
###Markdown
Add movieId to input user With the input complete, let's extract the input movies' IDs from the movies dataframe and add them into it. We can achieve this by first filtering out the rows that contain the input movies' titles and then merging this subset with the input dataframe. We also drop unnecessary columns for the input to save memory space.
###Code
# Filtering out the movies by title
inputId = movies_df[movies_df['title'].isin(inputMovies['title'].tolist())]
inputId
# Then merging it so we can get the movieId. It's implicitly merging it by title.
inputMovies = pd.merge(inputId, inputMovies)
inputMovies
# Dropping information we won't use from the input dataframe
inputMovies = inputMovies.drop('year', 1)
# Final input dataframe
# If a movie you added in above isn't here, then it might not be in the original
# dataframe or it might spelled differently, please check capitalisation.
inputMovies
###Output
_____no_output_____
###Markdown
Collect the users who have seen the same movies Now, with the movie IDs in our input, we can get the subset of users that have watched and reviewed the movies in our input.
###Code
# Filtering out users that have watched movies that the input has watched and storing it
userSubset = ratings_df[ratings_df['movieId'].isin(inputMovies['movieId'].tolist())]
userSubset.head()
userSubset.shape
###Output
_____no_output_____
###Markdown
We now group up the rows by user ID.
###Code
# Groupby creates several sub dataframes where they all have the same value in the column specified as the parameter
userSubsetGroup = userSubset.groupby(['userId'])
###Output
_____no_output_____
###Markdown
Let's look at two of the users, e.g. one with **userID=4** and another with **userID=1130**
###Code
userSubsetGroup.get_group(4)
userSubsetGroup.get_group(1130)
[print(row) for (i, row) in zip(range(5), userSubsetGroup)]
###Output
(4, userId movieId rating
19 4 296 4.0)
(12, userId movieId rating
441 12 1968 3.0)
(13, userId movieId rating
479 13 2 2.0
531 13 1274 5.0)
(14, userId movieId rating
681 14 296 2.0)
(15, userId movieId rating
749 15 1 4.0
776 15 296 3.0
911 15 1968 3.0)
###Markdown
Let's also sort these groups so the users that share the most movies in common with the input have higher priority. This provides a richer recommendation since we won't go through every single user.
###Code
# Sorting it so users with most movies in common with input user have higher priority
# userSubsetGroup = sorted(userSubsetGroup, key=lambda x: len(x[1]), reverse=True)
userSubsetGroup = sorted(userSubsetGroup, key=lambda x: x[1].shape[0], reverse=True)
###Output
_____no_output_____
###Markdown
Now lets look at the first user
###Code
userSubsetGroup[0:3]
###Output
_____no_output_____
###Markdown
Similarity of users to input user Next, we are going to compare all users (not really all !!!) to our specified user and find the one that is most similar. we're going to find out how similar each user is to the input through the __Pearson Correlation Coefficient__. It is used to measure the strength of a linear association between two variables. The formula for finding this coefficient between sets X and Y with N values can be seen in the image below. **Why Pearson Correlation?**>Pearson correlation is invariant to scaling, i.e. multiplying all elements by a nonzero constant or adding any constant to all elements. For example, if you have two vectors X and Y,then, pearson(X, Y) == pearson(X, 2 * Y + 3). This is a pretty important property in recommendation systems because for example two users might rate two series of items totally different in terms of absolute rates, but they would be similar users (i.e. with similar ideas) with similar rates in various scales .The values given by the formula vary from r = -1 to r = 1, where 1 forms a direct correlation between the two entities (it means a perfect positive correlation) and -1 forms a perfect negative correlation. In our case, a 1 means that the two users have similar tastes while a -1 means the opposite. We will select a subset of users to iterate through. This limit is imposed because we don't want to waste too much time going through every single user.
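For reference, the Pearson correlation coefficient between sets $X$ and $Y$ with $N$ values is

$$r = \frac{\sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{N}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{N}(y_i - \bar{y})^2}}$$

where $\bar{x}$ and $\bar{y}$ are the means of the two rating series.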
###Code
userSubsetGroup = userSubsetGroup[0:100]
###Output
_____no_output_____
###Markdown
Now, we calculate the Pearson Correlation between input user and subset group, and store it in a dictionary, where the key is the user Id and the value is the coefficient
###Code
# Store the Pearson Correlation in a dictionary, where the key is the user Id and the value is the coefficient
pc_dict = {}
# For every user group in our subset
for name, group in userSubsetGroup:
# Let's start by sorting the input and current user group so the values aren't mixed up later on
group = group.sort_values(by='movieId')
inputMovies = inputMovies.sort_values(by='movieId')
# Get the review scores for the movies that they both have in common
temp_df = inputMovies[inputMovies['movieId'].isin(group['movieId'].tolist())].rating.values
# Let's also put the current user group reviews in a list format
tempGroupList = group.rating.values
pc_dict[name] = np.corrcoef(temp_df, tempGroupList)[1,0]
corr_df = pd.DataFrame.from_dict(pc_dict, orient='index')
corr_df.columns = ['similarityIndex']
corr_df.tail()
corr_df['userId'] = corr_df.index
corr_df.reset_index(inplace=True, drop=True)
corr_df.tail()
###Output
_____no_output_____
###Markdown
The top x similar users to input user Now let's get the top 50 users that are most similar to the input.
###Code
topUsers=corr_df.sort_values(by='similarityIndex', ascending=False)[0:50]
topUsers.head()
###Output
_____no_output_____
###Markdown
Now, let's start recommending movies to the input user. Rating of all movies by selected usersWe're going to do this by taking the weighted average of the ratings of the movies using the Pearson Correlation as the weight. But to do this, we first need to get the movies watched by the users in our __corr_df__ from the ratings dataframe and then store their correlation in a new column called "similarityIndex". This is achieved below by merging these two tables.
###Code
topUsersRating=topUsers.merge(ratings_df, left_on='userId', right_on='userId', how='inner')
topUsersRating.head()
###Output
_____no_output_____
###Markdown
Now all we need to do is multiply each movie rating by its weight (the similarity index), then sum up the weighted ratings and divide by the sum of the weights. We can easily do this by multiplying two columns, then grouping the dataframe by movieId and dividing two columns. This aggregates the opinions of all the similar users on the candidate movies for the input user.
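Written out explicitly (with $s_u$ for user $u$'s similarity index and $r_{u,m}$ for that user's rating of movie $m$ — symbols introduced here just for illustration), the recommendation score of a movie $m$ is the similarity-weighted average

$$score(m) = \frac{\sum_{u} s_u \, r_{u,m}}{\sum_{u} s_u}$$

The cells below compute exactly this: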
###Code
# Multiplies the similarity by the user's ratings
topUsersRating['weightedRating'] = topUsersRating['similarityIndex']*topUsersRating['rating']
topUsersRating.head()
topUsersRating.groupby('movieId').sum()
# Applies a sum to the topUsers after grouping it up by userId
tempTopUsersRating = topUsersRating.groupby('movieId').sum()[['similarityIndex','weightedRating']]
tempTopUsersRating.columns = ['sum_similarityIndex','sum_weightedRating']
tempTopUsersRating.head()
# Creates an empty dataframe
recommendation_df = pd.DataFrame()
# Now we take the weighted average
recommendation_df['weighted average recommendation score'] = tempTopUsersRating['sum_weightedRating']/tempTopUsersRating['sum_similarityIndex']
recommendation_df['movieId'] = tempTopUsersRating.index
recommendation_df.head()
###Output
_____no_output_____
###Markdown
Now let's sort it and see the top 10 movies that the algorithm recommended!
###Code
recommendation_df = recommendation_df.sort_values(by='weighted average recommendation score', ascending=False)
recommendation_df.head(10)
movies_df.loc[movies_df['movieId'].isin(recommendation_df.head(10)['movieId'].tolist())]
###Output
_____no_output_____ |
codigo/part3test.ipynb | ###Markdown
Part 3 - IC - IBRA Importing basic libraries
###Code
from google.colab import drive
drive.mount('/content/drive')
!pip install xlsxwriter
!pip install sentencepiece
!pip install transformers
!pip install fairseq fastBPE
import pandas as pd, numpy as np
import tensorflow as tf
import tensorflow.keras.backend as K
from sklearn.model_selection import StratifiedKFold
import tokenizers
from transformers import RobertaConfig, TFRobertaModel
print('TF version',tf.__version__)
np.random.seed(seed=42)
tf.keras.utils.set_random_seed(42)
from types import SimpleNamespace
from fairseq.data.encoders.fastbpe import fastBPE
from fairseq.data import Dictionary
from sklearn.utils import shuffle
import json
import tensorflow as tf
import csv
import random
import numpy as np
import sklearn
import pandas as pd
import seaborn as sb
import matplotlib.pyplot as plt
import random
!wget https://public.vinai.io/BERTweet_base_transformers.tar.gz
!tar -xzvf BERTweet_base_transformers.tar.gz
class BERTweetTokenizer():
def __init__(self,pretrained_path = '/content/BERTweet_base_transformers/'):
self.bpe = fastBPE(SimpleNamespace(bpe_codes= pretrained_path + "bpe.codes"))
self.vocab = Dictionary()
self.vocab.add_from_file(pretrained_path + "dict.txt")
self.cls_token_id = 0
self.pad_token_id = 1
self.sep_token_id = 2
self.pad_token = '<pad>'
self.cls_token = '<s>'
self.sep_token = '</s>'
def bpe_encode(self,text):
return self.bpe.encode(text) # bpe.encode(line)
def encode(self,text,add_special_tokens=False):
subwords = self.bpe.encode(text)
input_ids = self.vocab.encode_line(subwords, append_eos=False, add_if_not_exist=False).long().tolist() ## Map subword tokens to corresponding indices in the dictionary
return input_ids
def tokenize(self,text):
return self.bpe_encode(text).split()
def convert_tokens_to_ids(self,tokens):
input_ids = self.vocab.encode_line(' '.join(tokens), append_eos=False, add_if_not_exist=False).long().tolist()
return input_ids
#from: https://www.kaggle.com/nandhuelan/bertweet-first-look
def decode_id(self,id):
return self.vocab.string(id, bpe_symbol = '@@')
def decode_id_nospace(self,id):
return self.vocab.string(id, bpe_symbol = '@@ ')
def bert_encode(self, texts, max_len=512):
all_tokens = []
all_masks = []
all_segments = []
for text in texts:
text = self.bpe.encode(text)
input_sequence = '<s> ' + text + ' </s>'
enc = self.vocab.encode_line(input_sequence, append_eos=False, add_if_not_exist=False).long().tolist()
enc = enc[:max_len-2]
pad_len = max_len - len(enc)
tokens = enc + [1] * pad_len #input_ids
pad_masks = [1] * len(enc) + [0] * pad_len #attention_mask
segment_ids = [0] * max_len #token_type_ids
all_tokens.append(tokens)
all_masks.append(pad_masks)
all_segments.append(segment_ids)
return np.array(all_tokens), np.array(all_masks), np.array(all_segments)
def build_model(max_len=512):
PATH = '/content/BERTweet_base_transformers/'
input_word_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_word_ids")
input_mask = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_mask")
segment_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="segment_ids")
config = RobertaConfig.from_pretrained(PATH+'config.json')
bert_model = TFRobertaModel.from_pretrained(PATH+'model.bin',config=config,from_pt=True)
x = bert_model(input_word_ids,attention_mask=input_mask,token_type_ids=segment_ids)
#pooled_output, sequence_output = bert_layer([input_word_ids, input_mask, segment_ids])
#clf_output = sequence_output[:, 0, :]
net = tf.keras.layers.Dense(64, activation='relu')(x[0])
net = tf.keras.layers.Dropout(0.2)(net)
net = tf.keras.layers.Dense(32, activation='relu')(net)
net = tf.keras.layers.Dropout(0.2)(net)
net = tf.keras.layers.Flatten()(net)
out = tf.keras.layers.Dense(1, activation='sigmoid')(net)
model = tf.keras.models.Model(inputs=[input_word_ids, input_mask, segment_ids], outputs=out)
model.compile(tf.keras.optimizers.Adam(lr=1e-5), loss='binary_crossentropy', metrics=['accuracy'])
#print(out.shape)
return model
df = pd.read_csv('/content/drive/Shareddrives/Projeto IBRA USP/Coleta de Dados/Datasets - IBRA/E1 - Hate Speech and Offensive Language/labeled_data.csv', dtype={'Class': int, 'Tweet': str})
df.head()
###Output
_____no_output_____
###Markdown
Pre-processing
###Code
# Check whether any tweets are null
null_tweets = df[df['tweet'].isna()]
null_tweets
# Analyze tweet lengths
df['tweet_len'] = df.tweet.apply(lambda x: len(x))
df.hist('tweet_len', bins=400)
#df_without_outliers = df[(df.tweet_len > df.tweet_len.quantile(5/1000))]
df_small_outliers = df[df.tweet_len < df.tweet_len.quantile(5/1000)]
df_big_outliers = df[df.tweet_len > df.tweet_len.quantile(995/1000)]
df_small_outliers.sort_values(by='tweet_len')
df_big_outliers.sort_values(by='tweet_len')
df = df.drop(['tweet_len'], axis=1)
###Output
_____no_output_____
###Markdown
Class Pre-processing Creating the necessary columns to analyze the model. Binary Class to Hate speechA class will be created that will have a value of 1 if it is offensive language or hate speech, and zero if it is not.
###Code
def binaryClassHateSpeech(dataframe, mod=1):
  if(mod == 1):  # put hate speech and offensive language together
    dataframe['hate_ofencive_speech'] = dataframe['class'].apply(lambda x: 1 if x!=2 else 0)
  if(mod == 2):  # keep only hate speech
    dataframe['hate_ofencive_speech'] = dataframe['class'].apply(lambda x: 1 if x==0 else 0)
  return dataframe
df = binaryClassHateSpeech(df)
df.head()
###Output
_____no_output_____
###Markdown
Creating columns with artificial subclassification
###Code
def creat_subclass(df, column='hate_ofencive_speech', number_subclasses=3, percent=0.7, seed=10):
random.seed(seed)
for i in range(number_subclasses):
df['subclass' + str(i)] = df['hate_ofencive_speech'].apply(lambda x: 1 if (x==1 and random.random()>percent) else 0)
return df
df = creat_subclass(df)
df.head()
###Output
_____no_output_____
###Markdown
Making samples for the model
###Code
# Separate dataset in train and test
def separate_train_and_test(df, class_column, sub_classes_toTakeOff=[], sub_classes_toKeep=[], seed=42, percent_sample=0.1, sample_index=1):
  train_samples = []
  test_samples = []  # the test sample will be the whole dataset minus the elements used for training
  if sample_index*percent_sample > 1:
    print("ERROR: invalid sample index")
    return [], []
  df_without_subclasses = df
  # Cut off the subclasses we don't need (filter cumulatively so multiple subclasses can be handled)
  for subclass in sub_classes_toTakeOff:
    df_without_subclasses = df_without_subclasses[df_without_subclasses[subclass] != 1]
  for subclass in sub_classes_toKeep:
    df_without_subclasses = df_without_subclasses[df_without_subclasses[subclass] == 1]
  df_without_subclasses = shuffle(df_without_subclasses, random_state=seed)
  tam_new_df = df_without_subclasses.shape[0]
  # Getting the samples, doing manual stratification
  df2 = df_without_subclasses[df_without_subclasses[class_column] == 1]
  tam_df2 = df2.shape[0]
  df_train2 = df2[int(percent_sample*tam_df2*(sample_index-1)):int(percent_sample*tam_df2*(sample_index))]
  df_test2 = df.loc[df[class_column] == 1].drop(df_train2.index)
  df3 = df_without_subclasses[df_without_subclasses[class_column] == 0]
  tam_df3 = df3.shape[0]
  df_train3 = df3[int(percent_sample*tam_df3*(sample_index-1)):int(percent_sample*tam_df3*(sample_index))]
  df_test3 = df.loc[df[class_column] == 0].drop(df_train3.index)
  # Concatenate the stratified pieces
  df_train = pd.concat([df_train3, df_train2])
  df_test = pd.concat([df_test3, df_test2])
  # Shuffle
  df_train = shuffle(df_train, random_state=seed)
  df_test = shuffle(df_test, random_state=seed)
  return df_train, df_test
df_train1, df_test = separate_train_and_test(df, 'hate_ofencive_speech',sub_classes_toKeep=['subclass0'])
df_train1
df_train2, df_test = separate_train_and_test(df, 'hate_ofencive_speech',sub_classes_toTakeOff=['subclass0'])
df_train2
df.shape
df_train2.shape
df_test.shape
###Output
_____no_output_____
###Markdown
Separate the validation dataset
###Code
def separate_train_validation(df_train, class_column, percent=0.7, seed=12):
from sklearn.model_selection import train_test_split
X_t, X_val, y_t, y_val = train_test_split(df_train, df_train[class_column], train_size=percent, random_state=seed)
return X_t, X_val
###Output
_____no_output_____
###Markdown
Running the BERTweet model
###Code
df_train, df_test = separate_train_and_test(df, 'hate_ofencive_speech',sub_classes_toTakeOff=['subclass0'])
df_train, df_val = separate_train_validation(df_train, 'hate_ofencive_speech')
#Transform to Numpy
def transform_to_numpy(df, tweet_column, class_column):
X = df[tweet_column].to_numpy()
Y = df[class_column].to_numpy()
return X, Y
X_train, Y_train = transform_to_numpy(df_train, 'tweet', 'hate_ofencive_speech')
X_test, Y_test = transform_to_numpy(df_test, 'tweet', 'hate_ofencive_speech')
X_val, Y_val = transform_to_numpy(df_val, 'tweet', 'hate_ofencive_speech')
def class_size_graph(Y_train,Y_test, Y_val):
labels = ["%s"%i for i in range(3)]
unique, counts = np.unique(Y_train, return_counts=True)
uniquet, countst = np.unique(Y_test, return_counts=True)
uniquev, countsv = np.unique(Y_val, return_counts=True)
fig, ax = plt.subplots()
rects3 = ax.bar(uniquev - 0.5, countsv, 0.25, label='Validation')
rects1 = ax.bar(unique - 0.2, counts, 0.25, label='Train')
rects2 = ax.bar(unique + 0.1, countst, 0.25, label='Test')
ax.legend()
ax.set_xticks(unique)
ax.set_xticklabels(labels)
plt.title('Hate Speech classes')
plt.xlabel('Class')
plt.ylabel('Frequency')
plt.show()
class_size_graph(Y_train,Y_test, Y_val)
# Tokenization
max_len = 32
tokenizer = BERTweetTokenizer()
# encode the train and validation sets before fitting (the test set is encoded further below)
X_train = tokenizer.bert_encode(X_train, max_len=max_len)
X_val = tokenizer.bert_encode(X_val, max_len=max_len)
# fit
model = build_model(max_len=max_len)
model.summary()
train_history = model.fit(
X_train, Y_train,
validation_data=(X_val, Y_val),
epochs=3,
batch_size=16,
verbose=1
)
#model.save_weights('savefile')
# Running the model to the Test dataframe
X_test = tokenizer.bert_encode(X_test, max_len=max_len)
P_hat = model.predict(X_test)
P_hat[:20]
###Output
_____no_output_____
###Markdown
Analyzing the result
###Code
def plot_confusion_matrix(y, y_pred, beta = 2):
"""
It receives an array with the ground-truth (y)
and another with the prediction (y_pred), both with binary labels
  (positive=1 and negative=0), and plots the confusion
matrix.
It uses P (positive class id) and N (negative class id)
which are "global" variables ...
"""
TP = np.sum((y_pred == 1) * (y == 1))
TN = np.sum((y_pred == 0) * (y == 0))
FP = np.sum((y_pred == 1) * (y == 0))
FN = np.sum((y_pred == 0) * (y == 1))
total = TP+FP+TN+FN
accuracy = (TP+TN)/total
recall = (TP)/(TP+FN)
precision = (TP)/(TP+FP)
Fbeta = (precision*recall)*(1+beta**2)/(beta**2*precision + recall)
print("TP = %4d FP = %4d\nFN = %4d TN = %4d\n"%(TP,FP,FN,TN))
print("Accuracy = %d / %d (%f)" %((TP+TN),total, (TP+TN)/total))
print("Recall = %d / %d (%f)" %((TP),(TP+FN), (TP)/(TP+FN)))
print("Precision = %d / %d (%f)" %((TP),(TP+FP), (TP)/(TP+FP)))
print("Fbeta Score = %f" %(Fbeta))
confusion = [
[TP/(TP+FN), FP/(TN+FP)],
[FN/(TP+FN), TN/(TN+FP)]
]
P = 1
N = 0
df_cm = pd.DataFrame(confusion, \
['$\hat{y} = %d$'%P, '$\hat{y} = %d$'%N],\
['$y = %d$'%P, '$y = %d$'%N])
plt.figure(figsize = (8,4))
sb.set(font_scale=1.4)
sb.heatmap(df_cm, annot=True) #, annot_kws={"size": 16}, cmap = 'coolwarm')
plt.show()
threshold = 0.55
y_hat = np.where(P_hat > threshold, 1, 0)
y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(y_hat)
plot_confusion_matrix(y_test, y_hat)
def recall(y, y_pred):
TP = np.sum((y_pred == 1) * (y == 1))
TN = np.sum((y_pred == 0) * (y == 0))
FP = np.sum((y_pred == 1) * (y == 0))
FN = np.sum((y_pred == 0) * (y == 1))
total = TP+FP+TN+FN
recall = (TP)/(TP+FN)
return recall
def plot_threshold_recall(Y_test, P_hat, step_size=0.05):
recalls = []
thresholds = []
i = 0.2
Y_test = Y_test.reshape([Y_test.shape[0], 1])
while i < 0.95:
threshold = i
Y_hat = np.where(P_hat > threshold, 1, 0)
recalls.append(recall(Y_test, Y_hat))
thresholds.append(threshold)
i += step_size
plt.plot(thresholds, recalls)
plot_threshold_recall(Y_test, P_hat, step_size=0.05)
###Output
_____no_output_____
###Markdown
Putting the whole process together
###Code
def train_model(df_train, df_test, df_val, xColumn, yColumn):
# Pass to numpy array
X_train, Y_train = transform_to_numpy(df_train, xColumn, yColumn)
X_test, Y_test = transform_to_numpy(df_test, xColumn, yColumn)
X_val, Y_val = transform_to_numpy(df_val, xColumn, yColumn)
  # Tokenization
max_len = 32
tokenizer = BERTweetTokenizer()
X_train = tokenizer.bert_encode(X_train, max_len=max_len)
X_val = tokenizer.bert_encode(X_val, max_len=max_len)
# Train the model
model = build_model(max_len=max_len)
model.summary()
train_history = model.fit(
X_train, Y_train,
validation_data=(X_val, Y_val),
epochs=3,
batch_size=16,
verbose=1
)
model.save_weights('savefile')
# Running the model to the Test dataframe
X_test = tokenizer.bert_encode(X_test, max_len=max_len)
P_hat = model.predict(X_test)
return P_hat
df = pd.read_csv('/content/drive/MyDrive/IME/IC/datasets/D1-original.csv', dtype={'Class': int, 'Tweet': str})
df = binaryClassHateSpeech(df, mod=2) #let's try just with hate speech
df = creat_subclass(df)
# Separate train, test and validation
df_train, df_test = separate_train_and_test(df, 'hate_ofencive_speech',sub_classes_toTakeOff=['subclass0'])
df_train, df_val = separate_train_validation(df_train, 'hate_ofencive_speech')
# Convert to numpy and plot the class distribution
X_train, Y_train = transform_to_numpy(df_train, 'tweet', 'hate_ofencive_speech')
X_test, Y_test = transform_to_numpy(df_test, 'tweet', 'hate_ofencive_speech')
X_val, Y_val = transform_to_numpy(df_val, 'tweet', 'hate_ofencive_speech')
class_size_graph(Y_train, Y_test, Y_val)
P_hat = train_model(df_train, df_test, df_val, 'tweet', 'hate_ofencive_speech')
# Print the confusion matrix
threshold = 0.55
Y_hat = np.where(P_hat > threshold, 1, 0)
Y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(Y_hat)
plot_confusion_matrix(Y_test, Y_hat)
###Output
TP = 0 FP = 0
FN = 1329 TN = 21018
Accuracy = 21018 / 22347 (0.940529)
Recall = 0 / 1329 (0.000000)
###Markdown
Analyzing a new dataset (**E9**) Pre-processing
###Code
test_df = pd.read_csv('/content/drive/Shareddrives/Projeto IBRA USP/Coleta de Dados/Datasets - IBRA/collected_tweets/NAACL_SRW_2016.csv')#, encoding = 'latin-1')
test_label_df = pd.read_csv('/content/drive/Shareddrives/Projeto IBRA USP/Coleta de Dados/Datasets - IBRA/collected_tweets/NAACL_SRW_2016Labels.csv', header = None)
df = pd.concat([test_df,test_label_df], axis = 1)
df.columns = ['tweet','class']
df = df.dropna()
df.head()
pd.unique(df['class'])
df['hate_speech'] = df['class'].apply(lambda x : 0 if x=='none' else 1)
df['racism'] = df['class'].apply(lambda x : 1 if x=='racism' else 0)
df['sexism'] = df['class'].apply(lambda x : 1 if x=='sexism' else 0)
df.head()
plt.hist(df['class'])
plt.show()
len(df[df['class'] == 'racism'])
len(df[df['class'] == 'sexism'])
len(df[df['class'] == 'none'])
for i in range(10):
print(df['tweet'][i] + '\n')
###Output
These girls are the equivalent of the irritating Asian girls a couple years ago. Well done, 7. #MKR
Drasko they didn't cook half a bird you idiot #mkr
Hopefully someone cooks Drasko in the next ep of #MKR
of course you were born in serbia...you're as fucked as A Serbian Film #MKR
So Drasko just said he was impressed the girls cooked half a chicken.. They cooked a whole one #MKR
I've had better looking shits than these two! #MKR2015 #MKR #killerblondes
The face of very ugly promo girls ! Faces like cats arsehole #mkr excited to see them@go down tonight...literally http://t.co/HgoJrfoIeO
@mykitchenrules Elegant and beautiful?Cheap and trashy!Nothing more unattractive than girls banging on about how hot hey are. #mkr #notsassy
"He can't be a server at our restaurant, that beard makes him look like a terrorist." Everyone laughs. #fuckthanksgiving
Stop saying dumb blondes with pretty faces as you need a pretty face to pull that off!!!! #mkr
###Markdown
Running the model
###Code
"""
Training with a sample of sexism
"""
# Separate train, test and validation
#def separate_train_and_test(df, class_column ,sub_classes_toTakeOff=[], sub_classes_toKeep=[], seed=42, percent_sample=0.1, sample_index=1):
df_train, df_test = separate_train_and_test(df, 'hate_speech',sub_classes_toTakeOff=['racism'])
df_train, df_val = separate_train_validation(df_train, 'hate_speech')
# Convert to numpy and plot the class distribution
X_train, Y_train = transform_to_numpy(df_train, 'tweet', 'hate_speech')
X_test, Y_test = transform_to_numpy(df_test, 'tweet', 'hate_speech')
X_val, Y_val = transform_to_numpy(df_val, 'tweet', 'hate_speech')
class_size_graph(Y_train, Y_test, Y_val)
print(len(df_train), len(df_test), len(df_val))
print(len(df_train[df_train['hate_speech']== 1]), len(df_test[df_test['hate_speech'] == 1]), len(df_val[df_val['hate_speech']==1]))
P_hat, model = train_model2(df_train, df_test, df_val, 'tweet', 'hate_speech')
# Print the confusion matrix
threshold = 0.55
Y_hat = np.where(P_hat > threshold, 1, 0)
Y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(Y_hat)
plot_confusion_matrix(Y_test, Y_hat)
###Output
TP = 110 FP = 3
FN = 2342 TN = 5641
Accuracy = 5751 / 8096 (0.710351)
Recall = 110 / 2452 (0.044861)
Precision = 110 / 113 (0.973451)
Fbeta Score = 0.055438
###Markdown
Increasing the size of the sample
###Code
def train_model2(df_train, df_test, df_val, xColumn, yColumn):
# Pass to numpy array
X_train, Y_train = transform_to_numpy(df_train, xColumn, yColumn)
X_test, Y_test = transform_to_numpy(df_test, xColumn, yColumn)
X_val, Y_val = transform_to_numpy(df_val, xColumn, yColumn)
  # Tokenization
max_len = 32
tokenizer = BERTweetTokenizer()
X_train = tokenizer.bert_encode(X_train, max_len=max_len)
X_val = tokenizer.bert_encode(X_val, max_len=max_len)
# Train the model
model = build_model(max_len=max_len)
model.summary()
train_history = model.fit(
X_train, Y_train,
validation_data=(X_val, Y_val),
epochs=3,
batch_size=16,
verbose=1
)
model.save_weights('savefile')
# Running the model to the Test dataframe
X_test = tokenizer.bert_encode(X_test, max_len=max_len)
P_hat = model.predict(X_test)
return P_hat, model
"""
Training with a sample of sexism
"""
# Separate train, test and validation
#def separate_train_and_test(df, class_column ,sub_classes_toTakeOff=[], sub_classes_toKeep=[], seed=42, percent_sample=0.1, sample_index=1):
df_train, df_test = separate_train_and_test(df, 'hate_speech',sub_classes_toTakeOff=['racism'],percent_sample=0.5)
df_train, df_val = separate_train_validation(df_train, 'hate_speech')
# Convert to numpy and plot the class distribution
X_train, Y_train = transform_to_numpy(df_train, 'tweet', 'hate_speech')
X_test, Y_test = transform_to_numpy(df_test, 'tweet', 'hate_speech')
X_val, Y_val = transform_to_numpy(df_val, 'tweet', 'hate_speech')
class_size_graph(Y_train, Y_test, Y_val)
print(len(df_train), len(df_test), len(df_val))
print(len(df_train[df_train['hate_speech']== 1]), len(df_test[df_test['hate_speech'] == 1]), len(df_val[df_val['hate_speech']==1]))
P_hat, model2 = train_model2(df_train, df_test, df_val, 'tweet', 'hate_speech')
# Print the confusion matrix
threshold = 0.55
Y_hat = np.where(P_hat > threshold, 1, 0)
Y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(Y_hat)
plot_confusion_matrix(Y_test, Y_hat)
###Output
TP = 936 FP = 274
FN = 432 TN = 2862
Accuracy = 3798 / 4504 (0.843250)
Recall = 936 / 1368 (0.684211)
Precision = 936 / 1210 (0.773554)
Fbeta Score = 0.700389
###Markdown
Testing the model with another dataset
###Code
df_E1 = pd.read_csv('/content/drive/MyDrive/IME/IC/datasets/D1-original.csv', dtype={'Class': int, 'Tweet': str})
df_E1 = binaryClassHateSpeech(df_E1, mod=1)  # mod=1 combines hate speech and offensive language
X_test, Y_test = transform_to_numpy(df_E1, 'tweet', 'hate_ofencive_speech')
plt.hist(Y_test)
plt.show()
###Output
_____no_output_____
###Markdown
Sample with 10% of the sexism cases
###Code
# Tokenization
max_len = 32
tokenizer = BERTweetTokenizer()
X_test = tokenizer.bert_encode(X_test, max_len=max_len)
P_hat = model.predict(X_test)
threshold = 0.50
Y_hat = np.where(P_hat > threshold, 1, 0)
Y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(Y_hat)
plot_confusion_matrix(Y_test, Y_hat)
###Output
TP = 184 FP = 55
FN = 20436 TN = 4108
Accuracy = 4292 / 24783 (0.173183)
Recall = 184 / 20620 (0.008923)
Precision = 184 / 239 (0.769874)
Fbeta Score = 0.011122
###Markdown
Sample with 50% of the sexism cases
###Code
# X_test was already tokenized above; just predict with the 50% model
P_hat = model2.predict(X_test)
# Print the confusion matrix
threshold = 0.50
Y_hat = np.where(P_hat > threshold, 1, 0)
Y_test = Y_test.reshape([Y_test.shape[0], 1])
total = len(Y_hat)
plot_confusion_matrix(Y_test, Y_hat)
###Output
TP = 10397 FP = 935
FN = 10223 TN = 3228
Accuracy = 13625 / 24783 (0.549772)
Recall = 10397 / 20620 (0.504219)
Precision = 10397 / 11332 (0.917490)
Fbeta Score = 0.554140
|
Pet_Stromatolite/Facies_Classification_Draft2.ipynb | ###Markdown
SEG Machine Learning (Well Log Facies Prediction) Contest Entry by Justin Gosses of team Pet_Stromatolite This is an "open science" contest designed to introduce people to machine learning with well logs and brainstorm different methods through collaboration with others, so this notebook is based heavily on the introductory notebook with my own modifications. More information at https://github.com/seg/2016-ml-contest and even more information at http://library.seg.org/doi/abs/10.1190/tle35100906.1 This data is from the Council Grove gas reservoir in Southwest Kansas. The Panoma Council Grove Field is predominantly a carbonate gas reservoir encompassing 2700 square miles in Southwestern Kansas. This dataset is from nine wells (with 4149 examples), consisting of a set of seven predictor variables and a rock facies (class) for each example vector and validation (test) data (830 examples from two wells) having the same seven predictor variables in the feature vector. Facies are based on examination of cores from nine wells taken vertically at half-foot intervals. Predictor variables include five from wireline log measurements and two geologic constraining variables that are derived from geologic knowledge. These are essentially continuous variables sampled at a half-foot sample rate. The seven predictor variables are:
* Five wire line log curves include [gamma ray](http://petrowiki.org/Gamma_ray_logs) (GR), [resistivity logging](http://petrowiki.org/Resistivity_and_spontaneous_%28SP%29_logging) (ILD_log10), [photoelectric effect](http://www.glossary.oilfield.slb.com/en/Terms/p/photoelectric_effect.aspx) (PE), [neutron-density porosity difference and average neutron-density porosity](http://petrowiki.org/Neutron_porosity_logs) (DeltaPHI and PHIND). Note, some wells do not have PE.
* Two geologic constraining variables: nonmarine-marine indicator (NM_M) and relative position (RELPOS)

The nine discrete facies (classes of rocks) are:
1. Nonmarine sandstone
2. Nonmarine coarse siltstone
3. Nonmarine fine siltstone
4. Marine siltstone and shale
5. Mudstone (limestone)
6. Wackestone (limestone)
7. Dolomite
8. Packstone-grainstone (limestone)
9. Phylloid-algal bafflestone (limestone)

These facies aren't discrete, and gradually blend into one another. Some have neighboring facies that are rather close. Mislabeling within these neighboring facies can be expected to occur. The following table lists the facies, their abbreviated labels and their approximate neighbors.

Facies | Label | Adjacent Facies
:---: | :---: | :--:
1 | SS | 2
2 | CSiS | 1,3
3 | FSiS | 2
4 | SiSh | 5
5 | MS | 4,6
6 | WS | 5,7
7 | D | 6,8
8 | PS | 6,7,9
9 | BS | 7,8

Let's clean up this dataset. The 'Well Name' and 'Formation' columns can be turned into a categorical data type. 
================================================================================================================= Notes: Early Ideas for feature engineering
- take out any points in individual wells where not all the logs are present
- test whether error increases around the depths where PE is absent?
- test whether using formation, depth, or depth & formation as variables impacts prediction
- examine well logs & facies logs (including prediction wells) to see if there aren't trends that might be dealt with by increasing the population of certain wells over others in the training set?
- explore effect size of using/not using marine or non-marine flags
- explore making 'likely to predict wrong' flags based on first-pass results with thin facies surrounded by thicker facies, such that you might expand a 'blended' response due to the measured response of the tool being thicker than the predicted facies
- explore doing the same as above but before prediction, using the range of thickness in predicted facies flags vs. the range of thickness in known facies flags
- explore using multiple prediction loops, in other words, predict errors not just facies
- explore error distribution: adjacent vs. non-adjacent facies, by thickness, marine vs. non-marine, by formation, and possible human judgement patterns that influence interpreted facies
###Code
### loading
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.colors as colors
from mpl_toolkits.axes_grid1 import make_axes_locatable
### setting up options in pandas
from pandas import set_option
set_option("display.max_rows", 10)
pd.options.mode.chained_assignment = None
### taking a look at the training dataset
filename = 'training_data.csv'
training_data = pd.read_csv(filename)
training_data
### Checking out Well Names
training_data['Well Name'] = training_data['Well Name'].astype('category')
training_data['Well Name'].unique()
training_data['Well Name']
well_name_list = training_data['Well Name'].unique()
well_name_list
### Checking out Formation Names
training_data['Formation'] = training_data['Formation'].astype('category')
training_data['Formation'].unique()
training_data.describe()
facies_1 = training_data.loc[training_data['Facies'] == 1]
facies_2 = training_data.loc[training_data['Facies'] == 2]
facies_3 = training_data.loc[training_data['Facies'] == 3]
facies_4 = training_data.loc[training_data['Facies'] == 4]
facies_5 = training_data.loc[training_data['Facies'] == 5]
facies_6 = training_data.loc[training_data['Facies'] == 6]
facies_7 = training_data.loc[training_data['Facies'] == 7]
facies_8 = training_data.loc[training_data['Facies'] == 8]
facies_9 = training_data.loc[training_data['Facies'] == 9]
#showing description for just facies 1, Sandstone
facies_1.describe()
#showing description for just facies 9, Phylloid-algal bafflestone (limestone)
facies_9.describe()
#showing description for just facies 8, Packstone-grainstone (limestone)
facies_8.describe()
###Output
_____no_output_____
###Markdown
This is a quick view of the statistical distribution of the input variables. Looking at the count values, there are 3232 feature vectors in the training set.Remove a single well to use as a blind test later.
###Code
blind = training_data[training_data['Well Name'] == 'SHANKLE']
training_data = training_data[training_data['Well Name'] != 'SHANKLE']
###Output
_____no_output_____
###Markdown
Before we plot the well data, let's define a color map so the facies are represented by consistent color in all the plots in this tutorial. We also create the abbreviated facies labels, and add those to the facies_vectors dataframe.
###Code
# 1=sandstone 2=c_siltstone 3=f_siltstone
# 4=marine_silt_shale 5=mudstone 6=wackestone 7=dolomite
# 8=packstone 9=bafflestone
facies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00',
'#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']
facies_labels = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS',
'WS', 'D','PS', 'BS']
#facies_color_map is a dictionary that maps facies labels
#to their respective colors
facies_color_map = {}
for ind, label in enumerate(facies_labels):
facies_color_map[label] = facies_colors[ind]
def label_facies(row, labels):
return labels[ row['Facies'] -1]
training_data.loc[:,'FaciesLabels'] = training_data.apply(lambda row: label_facies(row, facies_labels), axis=1)
###Output
_____no_output_____
###Markdown
Let's take a look at the data from individual wells in a more familiar log plot form. We will create plots for the five well log variables, as well as a log for facies labels. The plots are based on the those described in Alessandro Amato del Monte's excellent tutorial.
###Code
def make_facies_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[5].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[5])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
make_facies_log_plot(
training_data[training_data['Well Name'] == 'SHRIMPLIN'],
facies_colors)
###Output
_____no_output_____
###Markdown
editing the well viewer code in an attempt to understand it and potentially not show everything
###Code
def make_faciesOnly_log_plot(logs, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster=np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=2, figsize=(3, 9))
# f, ax = plt.subplots(nrows=1, ncols=6, figsize=(8, 12))
# ax[0].plot(logs.GR, logs.Depth, '-g')
ax[0].plot(logs.ILD_log10, logs.Depth, '-')
# ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
# ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
# ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im=ax[1].imshow(cluster, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[1])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-1):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
# ax[0].set_xlabel("GR")
# ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[0].set_xlabel("ILD_log10")
ax[0].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
# ax[2].set_xlabel("DeltaPHI")
# ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
# ax[3].set_xlabel("PHIND")
# ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
# ax[4].set_xlabel("PE")
# ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[1].set_xlabel('Facies')
# ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
# ax[4].set_yticklabels([]);
ax[1].set_yticklabels([])
ax[1].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
###Output
_____no_output_____
###Markdown
looking at several wells at once
###Code
# make_faciesOnly_log_plot(
# training_data[training_data['Well Name'] == 'SHRIMPLIN'],
# facies_colors)
for i in range(len(well_name_list)-1):
# well_name_list[i]
make_faciesOnly_log_plot(
training_data[training_data['Well Name'] == well_name_list[i]],
facies_colors)
###Output
/Users/justingosses/anaconda/lib/python3.5/site-packages/matplotlib/axes/_base.py:3045: UserWarning: Attempting to set identical bottom==top results
in singular transformations; automatically expanding.
bottom=-0.5, top=-0.5
'bottom=%s, top=%s') % (bottom, top))
###Markdown
In addition to individual wells, we can look at how the various facies are represented across the entire training set. Let's plot a histogram of the number of training examples for each facies class, which shows the distribution of examples by facies in the training set. Dolomite (facies 7) has the fewest, with 81 examples. Depending on the performance of the classifier we are going to train, we may consider getting more examples of these facies. Crossplots are a familiar tool in the geosciences to visualize how two properties vary with rock type. This dataset contains 5 log variables, and a scatter matrix can help to quickly visualize the variation among all the variables in the dataset. We can employ the very useful Seaborn library to quickly create a nice-looking scatter matrix. Each pane in the plot shows the relationship between two of the variables on the x and y axes, with each point colored according to its facies. The same colormap is used to represent the 9 facies.
###Code
#save plot display settings to change back to when done plotting with seaborn
inline_rc = dict(mpl.rcParams)
import seaborn as sns
sns.set()
sns.pairplot(training_data.drop(['Well Name','Facies','Formation','Depth','NM_M','RELPOS'],axis=1),
hue='FaciesLabels', palette=facies_color_map,
hue_order=list(reversed(facies_labels)))
#switch back to default matplotlib plot style
mpl.rcParams.update(inline_rc)
# note: in newer pandas versions this import is `from pandas.plotting import radviz`
from pandas.tools.plotting import radviz
# drop the non-numeric columns (axis=1) so radviz can plot the remaining log values
radviz(training_data.drop(['Well Name','Formation','Depth','NM_M','RELPOS','FaciesLabels'], axis=1), "Facies")
###Output
_____no_output_____
###Markdown
Conditioning the data setNow we extract just the feature variables we need to perform the classification. The predictor variables are the five wireline values and two geologic constraining variables. We also get a vector of the facies labels that correspond to each feature vector.
###Code
correct_facies_labels = training_data['Facies'].values
# dropping certain labels and only keeping the geophysical log values to train on
feature_vectors = training_data.drop(['Formation', 'Well Name', 'Depth','Facies','FaciesLabels'], axis=1)
feature_vectors.describe()
###Output
_____no_output_____
###Markdown
Scikit-learn includes a [preprocessing](http://scikit-learn.org/stable/modules/preprocessing.html) module that can 'standardize' the data (giving each variable zero mean and unit variance, also called *whitening*). Many machine learning algorithms assume features will be standard normally distributed (i.e. Gaussian with zero mean and unit variance). The factors used to standardize the training set must be applied to any subsequent feature set that will be input to the classifier. The `StandardScaler` class can be fit to the training set, and later used to standardize any subsequent data (such as the test set or new wells).
###Code
from sklearn import preprocessing
scaler = preprocessing.StandardScaler().fit(feature_vectors)
scaled_features = scaler.transform(feature_vectors)
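# Quick sanity check (a sketch): after standardization each feature column
# should have approximately zero mean and unit variance.
print('column means  :', scaled_features.mean(axis=0).round(3))
print('column stddevs:', scaled_features.std(axis=0).round(3))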
feature_vectors
feature_vectors.describe()
###Output
_____no_output_____
###Markdown
Scikit-learn also includes a handy function to randomly split the training data into training and test sets. The test set contains a small subset of feature vectors that are not used to train the classifier. Because we know the true facies labels for these examples, we can compare the results of the classifier to the actual facies and determine the accuracy of the model. Let's use 10% of the data for the test set (the `test_size=0.1` argument below).
###Code
# note: in scikit-learn >= 0.20 this import is `from sklearn.model_selection import train_test_split`
from sklearn.cross_validation import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
scaled_features, correct_facies_labels, test_size=0.1, random_state=42)
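# Quick check (a sketch): confirm the 90/10 split of the feature vectors
print('training samples:', X_train.shape[0])
print('test samples    :', X_test.shape[0])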
###Output
_____no_output_____
###Markdown
Training the SVM classifierNow we use the cleaned and conditioned training set to create a facies classifier. As mentioned above, we will use a type of machine learning model known as a [support vector machine](https://en.wikipedia.org/wiki/Support_vector_machine). The SVM is a map of the feature vectors as points in a multi-dimensional space, mapped so that examples from different facies are divided by a clear gap that is as wide as possible. The SVM implementation in [scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) takes a number of important parameters. First we create a classifier using the default settings.
###Code
from sklearn import svm
clf = svm.SVC()
###Output
_____no_output_____
###Markdown
Now we can train the classifier using the training set we created above.
###Code
clf.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
Now that the model has been trained on our data, we can use it to predict the facies of the feature vectors in the test set. Because we know the true facies labels of the vectors in the test set, we can use the results to evaluate the accuracy of the classifier.
###Code
predicted_labels = clf.predict(X_test)
###Output
_____no_output_____
###Markdown
We need some metrics to evaluate how well our classifier is doing. A [confusion matrix](http://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/) is a table that can be used to describe the performance of a classification model. [Scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html) allows us to easily create a confusion matrix by supplying the actual and predicted facies labels. The confusion matrix is simply a 2D array. The entries of the confusion matrix `C[i][j]` are equal to the number of observations predicted to have facies `j`, but known to have facies `i`. To simplify reading the confusion matrix, a function has been written to display the matrix along with facies labels and various error metrics. See the file `classification_utilities.py` in this repo for the `display_cm()` function.
###Code
from sklearn.metrics import confusion_matrix
from classification_utilities import display_cm, display_adj_cm
conf = confusion_matrix(y_test, predicted_labels)
display_cm(conf, facies_labels, hide_zeros=True)
def accuracy(conf):
total_correct = 0.
nb_classes = conf.shape[0]
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
acc = total_correct/sum(sum(conf))
return acc
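# A minimal sketch (not part of display_cm) of per-class precision and recall
# computed directly from the confusion matrix: with C[i][j] counting samples of
# true facies i predicted as facies j, precision is diagonal/column-sum and
# recall is diagonal/row-sum.
def precision_recall(conf):
    precision = np.diag(conf) / conf.sum(axis=0).astype(float)
    recall = np.diag(conf) / conf.sum(axis=1).astype(float)
    return precision, recall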
###Output
_____no_output_____
###Markdown
As noted above, the boundaries between the facies classes are not all sharp, and some of them blend into one another. The error within these 'adjacent facies' can also be calculated. We define an array to represent the facies adjacent to each other. For facies label `i`, `adjacent_facies[i]` is an array of the adjacent facies labels.
###Code
adjacent_facies = np.array([[1], [0,2], [1], [4], [3,5], [4,6,7], [5,7], [5,6,8], [6,7]])
def accuracy_adjacent(conf, adjacent_facies):
nb_classes = conf.shape[0]
total_correct = 0.
for i in np.arange(0,nb_classes):
total_correct += conf[i][i]
for j in adjacent_facies[i]:
total_correct += conf[i][j]
return total_correct / sum(sum(conf))
print('Facies classification accuracy = %f' % accuracy(conf))
print('Adjacent facies classification accuracy = %f' % accuracy_adjacent(conf, adjacent_facies))
###Output
_____no_output_____
###Markdown
Model parameter selectionThe classifier so far has been built with the default parameters. However, we may be able to get improved classification results with optimal parameter choices.We will consider two parameters. The parameter `C` is a regularization factor, and tells the classifier how much we want to avoid misclassifying training examples. A large value of C will try to correctly classify more examples from the training set, but if `C` is too large it may 'overfit' the data and fail to generalize when classifying new data. If `C` is too small then the model will not be good at fitting outliers and will have a large error on the training set.The SVM learning algorithm uses a kernel function to compute the distance between feature vectors. Many kernel functions exist, but in this case we are using the radial basis function `rbf` kernel (the default). The `gamma` parameter describes the size of the radial basis functions, which is how far away two vectors in the feature space need to be to be considered close.We will train a series of classifiers with different values for `C` and `gamma`. Two nested loops are used to train a classifier for every possible combination of values in the ranges specified. The classification accuracy is recorded for each combination of parameter values. The results are shown in a series of plots, so the parameter values that give the best classification accuracy on the test set can be selected.This process is also known as 'cross validation'. Often a separate 'cross validation' dataset will be created in addition to the training and test sets to do model selection. For this tutorial we will just use the test set to choose model parameters.
###Code
#model selection takes a few minutes, change this variable
#to true to run the parameter loop
do_model_selection = True
if do_model_selection:
C_range = np.array([.01, 1, 5, 10, 20, 50, 100, 1000, 5000, 10000])
gamma_range = np.array([0.0001, 0.001, 0.01, 0.1, 1, 10])
fig, axes = plt.subplots(3, 2,
sharex='col', sharey='row',figsize=(10,10))
plot_number = 0
for outer_ind, gamma_value in enumerate(gamma_range):
row = int(plot_number / 2)
column = int(plot_number % 2)
cv_errors = np.zeros(C_range.shape)
train_errors = np.zeros(C_range.shape)
for index, c_value in enumerate(C_range):
clf = svm.SVC(C=c_value, gamma=gamma_value)
clf.fit(X_train,y_train)
train_conf = confusion_matrix(y_train, clf.predict(X_train))
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
cv_errors[index] = accuracy(cv_conf)
train_errors[index] = accuracy(train_conf)
ax = axes[row, column]
ax.set_title('Gamma = %g'%gamma_value)
ax.semilogx(C_range, cv_errors, label='CV error')
ax.semilogx(C_range, train_errors, label='Train error')
plot_number += 1
ax.set_ylim([0.2,1])
ax.legend(bbox_to_anchor=(1.05, 0), loc='lower left', borderaxespad=0.)
fig.text(0.5, 0.03, 'C value', ha='center',
fontsize=14)
fig.text(0.04, 0.5, 'Classification Accuracy', va='center',
rotation='vertical', fontsize=14)
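# The same kind of parameter search can be written with scikit-learn's
# GridSearchCV (a sketch; assumes a scikit-learn version that provides
# sklearn.model_selection, and uses a smaller illustrative grid rather than
# the full ranges above).
run_grid_search = False
if run_grid_search:
    from sklearn.model_selection import GridSearchCV
    param_grid = {'C': [1, 10, 100, 1000], 'gamma': [0.01, 0.1, 1, 10]}
    grid = GridSearchCV(svm.SVC(), param_grid, cv=5)
    grid.fit(X_train, y_train)
    print('Best parameters :', grid.best_params_)
    print('Best CV accuracy: %.3f' % grid.best_score_)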
###Output
_____no_output_____
###Markdown
The best accuracy on the cross validation error curve was achieved for gamma = 1, and C = 10. We can now create and train an optimized classifier based on these parameters:
###Code
clf = svm.SVC(C=10, gamma=1)
clf.fit(X_train, y_train)
cv_conf = confusion_matrix(y_test, clf.predict(X_test))
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
###Output
_____no_output_____
###Markdown
[Precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall) are metrics that give more insight into how the classifier performs for individual facies. Precision is the probability that, given a classification result for a sample, the sample actually belongs to that class. Recall is the probability that a sample will be correctly classified for a given class. Precision and recall can be computed easily using the confusion matrix. The code to do so has been added to the `display_cm()` function:
###Code
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
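# An alternative summary (a sketch): scikit-learn's classification_report
# prints per-class precision, recall and F1 in one call.
from sklearn.metrics import classification_report
print(classification_report(y_test, clf.predict(X_test)))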
###Output
_____no_output_____
###Markdown
To interpret these results, consider facies `SS`. In our test set, if a sample was labeled `SS` the probability the sample was correct is 0.8 (precision). If we know a sample has facies `SS`, then the probability it will be correctly labeled by the classifier is 0.78 (recall). It is desirable to have high values for both precision and recall, but often when an algorithm is tuned to increase one, the other decreases. The [F1 score](https://en.wikipedia.org/wiki/Precision_and_recall#F-measure) combines both to give a single measure of relevancy of the classifier results. These results can help guide intuition for how to improve the classifier results. For example, for a sample with facies `MS` or mudstone, it is only classified correctly 57% of the time (recall). Perhaps this could be improved by introducing more training samples. Sample quality could also play a role. Facies `BS` or bafflestone has the best `F1` score and relatively few training examples. But this data was handpicked from other wells to provide training examples to identify this facies. We can also consider the classification metrics when we consider misclassifying an adjacent facies as correct:
###Code
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
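# The F1 score discussed above is the harmonic mean of precision and recall.
# A small worked example using the SS values quoted in the text (0.80 and 0.78):
p_SS, r_SS = 0.80, 0.78
print('F1 for SS: %.2f' % (2 * p_SS * r_SS / (p_SS + r_SS)))   # about 0.79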
###Output
_____no_output_____
###Markdown
Considering adjacent facies, the `F1` scores for all facies types are above 0.9, except when classifying `SiSh` or marine siltstone and shale. The classifier often misclassifies this facies (recall of 0.66), most often as wackestone. These results are comparable to those reported in Dubois et al. (2007). Applying the classification model to the blind dataWe held a well back from the training, and stored it in a dataframe called `blind`:
###Code
blind
###Output
_____no_output_____
###Markdown
The label vector is just the `Facies` column:
###Code
y_blind = blind['Facies'].values
###Output
_____no_output_____
###Markdown
We can form the feature matrix by dropping some of the columns and making a new dataframe:
###Code
well_features = blind.drop(['Facies', 'Formation', 'Well Name', 'Depth'], axis=1)
###Output
_____no_output_____
###Markdown
Now we can transform this with the scaler we made before:
###Code
X_blind = scaler.transform(well_features)
###Output
_____no_output_____
###Markdown
Now it's a simple matter of making a prediction and storing it back in the dataframe:
###Code
y_pred = clf.predict(X_blind)
blind['Prediction'] = y_pred
###Output
_____no_output_____
###Markdown
Let's see how we did with the confusion matrix:
###Code
cv_conf = confusion_matrix(y_blind, y_pred)
print('Optimized facies classification accuracy = %.2f' % accuracy(cv_conf))
print('Optimized adjacent facies classification accuracy = %.2f' % accuracy_adjacent(cv_conf, adjacent_facies))
###Output
_____no_output_____
###Markdown
We managed an accuracy of 0.71 using the test data, but that test set was drawn from the same wells as the training data. This more realistic blind-well test does not perform as well...
###Code
display_cm(cv_conf, facies_labels,
display_metrics=True, hide_zeros=True)
###Output
_____no_output_____
###Markdown
...but does remarkably well on the adjacent facies predictions.
###Code
display_adj_cm(cv_conf, facies_labels, adjacent_facies,
display_metrics=True, hide_zeros=True)
def compare_facies_plot(logs, compadre, facies_colors):
#make sure logs are sorted by depth
logs = logs.sort_values(by='Depth')
cmap_facies = colors.ListedColormap(
facies_colors[0:len(facies_colors)], 'indexed')
ztop=logs.Depth.min(); zbot=logs.Depth.max()
cluster1 = np.repeat(np.expand_dims(logs['Facies'].values,1), 100, 1)
cluster2 = np.repeat(np.expand_dims(logs[compadre].values,1), 100, 1)
f, ax = plt.subplots(nrows=1, ncols=7, figsize=(9, 12))
ax[0].plot(logs.GR, logs.Depth, '-g')
ax[1].plot(logs.ILD_log10, logs.Depth, '-')
ax[2].plot(logs.DeltaPHI, logs.Depth, '-', color='0.5')
ax[3].plot(logs.PHIND, logs.Depth, '-', color='r')
ax[4].plot(logs.PE, logs.Depth, '-', color='black')
im1 = ax[5].imshow(cluster1, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
im2 = ax[6].imshow(cluster2, interpolation='none', aspect='auto',
cmap=cmap_facies,vmin=1,vmax=9)
divider = make_axes_locatable(ax[6])
cax = divider.append_axes("right", size="20%", pad=0.05)
cbar=plt.colorbar(im2, cax=cax)
cbar.set_label((17*' ').join([' SS ', 'CSiS', 'FSiS',
'SiSh', ' MS ', ' WS ', ' D ',
' PS ', ' BS ']))
cbar.set_ticks(range(0,1)); cbar.set_ticklabels('')
for i in range(len(ax)-2):
ax[i].set_ylim(ztop,zbot)
ax[i].invert_yaxis()
ax[i].grid()
ax[i].locator_params(axis='x', nbins=3)
ax[0].set_xlabel("GR")
ax[0].set_xlim(logs.GR.min(),logs.GR.max())
ax[1].set_xlabel("ILD_log10")
ax[1].set_xlim(logs.ILD_log10.min(),logs.ILD_log10.max())
ax[2].set_xlabel("DeltaPHI")
ax[2].set_xlim(logs.DeltaPHI.min(),logs.DeltaPHI.max())
ax[3].set_xlabel("PHIND")
ax[3].set_xlim(logs.PHIND.min(),logs.PHIND.max())
ax[4].set_xlabel("PE")
ax[4].set_xlim(logs.PE.min(),logs.PE.max())
ax[5].set_xlabel('Facies')
ax[6].set_xlabel(compadre)
ax[1].set_yticklabels([]); ax[2].set_yticklabels([]); ax[3].set_yticklabels([])
ax[4].set_yticklabels([]); ax[5].set_yticklabels([])
ax[5].set_xticklabels([])
ax[6].set_xticklabels([])
f.suptitle('Well: %s'%logs.iloc[0]['Well Name'], fontsize=14,y=0.94)
compare_facies_plot(blind, 'Prediction', facies_colors)
###Output
_____no_output_____
###Markdown
Applying the classification model to new dataNow that we have a trained facies classification model we can use it to identify facies in wells that do not have core data. In this case, we will apply the classifier to two wells, but we could use it on any number of wells for which we have the same set of well logs for input. This dataset is similar to the training data except it does not have facies labels. It is loaded into a dataframe called `well_data`.
###Code
well_data = pd.read_csv('validation_data_nofacies.csv')
well_data['Well Name'] = well_data['Well Name'].astype('category')
well_features = well_data.drop(['Formation', 'Well Name', 'Depth'], axis=1)
###Output
_____no_output_____
###Markdown
The data needs to be scaled using the same constants we used for the training data.
###Code
X_unknown = scaler.transform(well_features)
###Output
_____no_output_____
###Markdown
Finally we predict facies labels for the unknown data, and store the results in a `Facies` column of the `well_data` dataframe.
###Code
#predict facies of unclassified data
y_unknown = clf.predict(X_unknown)
well_data['Facies'] = y_unknown
well_data
well_data['Well Name'].unique()
###Output
_____no_output_____
###Markdown
We can use the well log plot to view the classification results along with the well logs.
###Code
make_facies_log_plot(
well_data[well_data['Well Name'] == 'STUART'],
facies_colors=facies_colors)
make_facies_log_plot(
well_data[well_data['Well Name'] == 'CRAWFORD'],
facies_colors=facies_colors)
well_data.to_csv('well_data_with_facies.csv')
###Output
_____no_output_____ |
unfair-dice-game/ScienceWorld_Apr08/ScienceWorld_DataLit.ipynb | ###Markdown
Data Literacy in the ClassroomLaura Gutierrez Funderburk (Cybera) Michael Lamoureux (U. Calgary)  What will we cover in this workshop:- Introductions- Data literacy: an example using global warming data- Jupyter Notebooks and the Callysto project- Hands on activities: the Unfair Dice problem- Math behind the game- Computational thinking exercise - Final remarks- Accessing Callysto resources for your lesson plan What is data literacy?- Using data to inform decisions- Reading, working with, analyzing, and arguing with data- How to find good data, how to extract useful information from data Example -- Global warming?- Where do we get data? (e.g. Vancouver temp data)- How do we analyze it? (e.g. Spread sheet. Plot it)- Deeper analysis? (e.g. linear trend in temperature)- Go further (Other cities? Other sources of data?) Vancouver data source: - https://vancouver.weatherstats.ca/charts/temperature-wyearly.html- 1935-2020 Trend line $y = 0.00123 x + 6.778$- x is measured in months, so temperature is rising at .00123 degrees per month- $ .00123 * 12 * 100 = 1.5 $ degrees per century. Looks like a significant rise over a long period SummaryData Literacy- We used available data to look for trends- We used computational tools to put a number to it- We can argue, with data, as to the significance of that number Going further- Can we access more data? (Other cities? Satellites? etc?)- Do we have better tools? Better toolsJupyter NotebooksCallysto ProjectNotebook RepositoriesOpen Data Repositories Jupyter notebooksThis slideshow is actually a Jupyter notebook.A web browser-based document, combining text, computer code, and graphics.It is live, you can edit and use it as you present your stuff.
###Code
# This is live code. You can change it!
import matplotlib.pyplot as plt
plt.plot([0,1,2,4,8,16,32]);
# A simulation of temperatures
import numpy as np
x = np.linspace(0,12*10,num=12*10)
y = 0.00123*x+6.778 + 10*np.sin(2*np.pi*x/12)
plt.plot(x,y);
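# Quick check of the "degrees per century" arithmetic quoted in the slide above
slope_per_month = 0.00123        # degrees per month, from the fitted trend line
print('warming per century: %.1f degrees' % (slope_per_month * 12 * 100))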
###Output
_____no_output_____
###Markdown
Callysto projectPartnership of **Cybera Inc** (Alta) and the **Pacific Institute for the Mathematical Sciences**Funded by **CanCode** (Canadian Gov't)Resources for teachers, students to learn coding, data science, computational thinking.Web: callysto.ca Sample notebooks at Callysto Social Justice and Computational ThinkingLaura Gutierrez Funderburk (SFU)Richard Hoshino, (Quest)Michael Lamoureux, (U Calgary) MotivationHow do small advantages, repeated over time, contribute to the success of an individual?Does this make a difference in what we see as fairness, or justice, in real life?Can we make a simulation of this, and explore any remediations? Example: Socio-economic statusSuppose Alice and Bob are both going to the same university.Alice has money, lives close to campus, has a car, doesn't need a job for support.Bob has no money, lives far from campus, takes transit, has a part-time job for tuition.Does Alice end up more successful than Bob? MotivationHow do relatively small advantages play a role with respect to the success of individuals over time? How can we create a scenario involving two or more individuals, each with differing degrees of advantage, and observe how slight advantages (or disadvantages) affect their trajectories over time? We will explore the concept of fairness as well as the concepts of expected and experimental probability via a probability game between two players in which one player possesses a fixed slight advantage over the other. We will solve the “Unfair Dice Problem” via an exploratory approach, and using Jupyter notebooks and the Python programming language, we will develop an interactive application that allows us to simulate how each player fares over time. This work is part of the Callysto Project, a federally-funded program to bring computational thinking and mathematical problem-solving skills into Grade 5-12 classrooms.The Unfair Dice Problem has relevance and application in multiple areas of life, including but not limited to the role small advantages play in the context of employment, housing, access to education and support, mortgages and loan pre-approval, and social advancement and recognition. Although our application is based on a game involving dice, it is an illustration of the effect that seemingly small advantages can play over a long period of time in the outcome that individuals experience. Real Life Application ExampleOne example is the role socio-economic status play in academic success. Let's take the case of two students going into university. Student A owns a car, lives near school and does not need to take out student loans. Student B on the other hand, has taken out a loan to pay for classes, works part time to decrease the amount of money they need to borrow, and rents a room that is far from school but cheap, they use public transit every day. Over time, Student A experiences less stress and is able to devote more time to studying which results in higher marks, being able to access higher-level classes and graduate school. Because student A is a top performer, it is easy for them to get internships and valuable work experience and as a result, Student A has higher probability of getting a high-pay job. Student B on the other hand is constantly stressed over debt, cannot spend as much time on studying as they work part time and as a result does not excel in school. 
This decreases their opportunities to be accepted into higher-level education and graduate school, as well as getting an intership - both of which would support student B in getting a high-paying job.Through computational thinking we can model problems like the above to simulate people's outcome overtime. We can use this information to modify the conditions of the model to balance unfairness - for instance by providing scholarships based on need, by modifying entry-level requirements to include work experience in addition to grades, and to support creating affordable living options for students. Another solution might be running a survey of socio-economic status and available resources, and adjust tuition accordingly, instead of charging everyone the same amount. Let's replace with a game of dice.Alice and Bob each roll a die. The one with a bigger number wins. Ties go to Alice. Starting with ten dollars each, at each roll the winner takes a dollar from the loser. Let's play. Play until someone runs out of money. Setting up the gameWe will use two dice in this game, and assume the dice are fair, i.e. there is the same probability of getting one of the six faces. Let's suppose two people, Alice and Bob, decide to play with this setup and start with the same amount of money, $10 each. One die will be designated as "Alice's die" while the other one will be designated as "Bob's die". Dice are rolled and the outcome obtained from Bob's outcome is subtracted from the Alice's outcome. If Alice's outcome is greater than or equal to Bob's, Alice takes $\$1$ from Bob. Otherwise, Bob takes $\$1$ from Alice. In class: Let's PlayBreak up the students into groups of two (Alice and Bob).Give each student ten candies, and a die. Have them roll the dice, and trade candy until someone runs out.Count up how many Alices won all the candy? How many Bobs won them all?Is this fair? Let's PlayForming groups of two's, decide which of you plays "Alice" and "Bob". Each player will be given 10 toothpicks - each representing $\$1$ dollar. Roll the two dice. If Alice's outcome is greater than or equal to Bob's outcome, player Alice gets one toothpick from Bob. Otherwise player Bob gets 1 toothpick from Alice. Roll 10 times and note how the number of toothpicks changes for each player. Who has more toothpicks at the end of 10 rolls? Let's Try Again....Let's repeat this game, but this time, if Bob's outcome is less than Alice's outcome, Bob gets 2 toothpicks instead of one. Who has more toothpicks at the end of 10 rolls? What makes this game unfair?Below is the sample space of our game. Bob's die outcome (black column) is subtracted from Alice's die outcome (dark red row). Alice wins whenever the result is 0 or more. Bob wins whenever the result is negative. The probability Alice will win is $$P(A) = \frac{21}{36}$$while the probability Bob will win is $$P(B) = \frac{15}{36}$$
###Code
%%html
<table style='margin: 0 auto;font-size: 25px'>
<tr style="width:100%;text-align:center;background-color:#990000;color:white">
<th style="background-color:white;color:white"> </th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
</tr>
<tr>
<td style="background-color:black;color:white"><strong>1</strong></td>
<td style="background-color:#ffcccc">0</td>
<td style="background-color:#ffcccc">1</td>
<td style="background-color:#ffcccc">2</td>
<td style="background-color:#ffcccc">3</td>
<td style="background-color:#ffcccc">4</td>
<td style="background-color:#ffcccc">5</td>
</tr>
<tr >
<td style="background-color:black;color:white"><strong>2</strong></td>
<td>-1</td>
<td style="background-color:#ffcccc">0</td>
<td style="background-color:#ffcccc">1</td>
<td style="background-color:#ffcccc">2</td>
<td style="background-color:#ffcccc">3</td>
<td style="background-color:#ffcccc">4</td>
</tr>
<tr >
<td style="background-color:black;color:white"><strong>3</strong></td>
<td>-2</td>
<td>-1</td>
<td style="background-color:#ffcccc">0</td>
<td style="background-color:#ffcccc">1</td>
<td style="background-color:#ffcccc">2</td>
<td style="background-color:#ffcccc">3</td>
</tr>
<tr >
<td style="background-color:black;color:white"><strong>4</strong></td>
<td>-3</td>
<td>-2</td>
<td>-1</td>
<td style="background-color:#ffcccc">0</td>
<td style="background-color:#ffcccc">1</td>
<td style="background-color:#ffcccc">2</td>
</tr>
<tr >
<td style="background-color:black;color:white"><strong>5</strong></td>
<td>-4</td>
<td>-3</td>
<td>-2</td>
<td>-1</td>
<td style="background-color:#ffcccc">0</td>
<td style="background-color:#ffcccc">1</td>
</tr>
<tr >
<td style="background-color:black;color:white"><strong>6</strong></td>
<td>-5</td>
<td>-4</td>
<td>-3</td>
<td>-2</td>
<td>-1</td>
<td style="background-color:#ffcccc">0</td>
</tr>
</table>
###Output
_____no_output_____
###Markdown
In the first game, the expected per-round payoff for Alice is $$\big( \frac{21}{36}\big) *1 + \big( \frac{15}{36} \big)*(-1) = \frac{1}{6}.$$ So after about 60 rounds of play, Alice would be "expected" to have all of the toothpicks. Conversely, in the second game, the expected per-round payoff for Alice is $$\big( \frac{21}{36}\big) *1 + \big( \frac{15}{36} \big)*(-2) = \frac{-1}{4}.$$ So after about 40 rounds of play, Bob would be "expected" to have all of the toothpicks. Simulating the GameLet's use some code to simulate many trials of this unfair game.Try making it more fair. Change the starting points, or the payoffs.
###Code
import random
import matplotlib.patches as mpatches
from ipywidgets import interact, interact_manual, widgets, Layout, VBox, HBox, Button
from IPython.display import display, Javascript, Markdown, HTML, clear_output
import matplotlib.pyplot as plt
#def runN_cell( b ):
# display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+1,IPython.notebook.get_selected_index()+2)'))
#def rerun_cell( b ):
# display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index(),IPython.notebook.get_selected_index()+1)'))
### This cell contains code to simulate our game
def roll_dice():
"""This function simulates rolling two dice
and substracting the minor die outcome from the major die outcome"""
major_die = random.choice([1,2,3,4,5,6])
minor_die = random.choice([1,2,3,4,5,6])
if major_die >= minor_die:
return True
else:
return False
def play_game(StartValue_A, StartValue_B, p, q):
"""This function implements two players engaging in the game"""
# Initialize variables
# Set value A to its starting value, and likewise value B
value_A = StartValue_A
value_B = StartValue_B
turn_number = 0
# Store points on each turn
CurrentValue_A = []
CurrentValue_B = []
# Initialize winners
winner_A = 0
winner_B = 0
# We want to continue playing as long as both players have at least one more point
while value_A > 0 and value_B > 0:
# Increase turn
turn_number += 1
# If major die >= minor die
if roll_dice():
# Update and save current values for A and B
CurrentValue_A.append(value_A)
CurrentValue_B.append(value_B)
# Give A one more (set of) point(s)
value_A = value_A + p
# Remove the same quantity from B
value_B = value_B - p
# Otherwise, we have major die < minor die
else:
# Update and save current values for A and B
CurrentValue_A.append(value_A)
CurrentValue_B.append(value_B)
# Give B one more (set of) points
value_B = value_B + q
# Remove the same quantity from A
value_A = value_A - q
# Get winners
# If A has zero or less points, B is the winner
if value_A <= 0: winner_B = 1
# Otherwise, A is the winner
if value_B <= 0: winner_A = 1
return [turn_number, winner_A, winner_B,CurrentValue_A,CurrentValue_B]
def plot_game(StartValue_A, StartValue_B, p, q):
"""This function simulates the game for a given 1000 trials and prints
the average number of times A and B win"""
# Suppose we set 1000 trials
n = 1000
# Initialize variables
wins_for_A = 0
wins_for_B = 0
total_moves = 0
# Iterate over the total number of trials, and repeat game
for i in range(n):
[turn_number, winner_A, winner_B,CurrentValue_A,CurrentValue_B] = play_game(StartValue_A, StartValue_B, p, q)
# Add number of turns
total_moves += turn_number
# Add total number of times A won
wins_for_A += winner_A
# Add total number of times B won
wins_for_B += winner_B
print("The average number of rounds is", total_moves/n)
print("Alice wins", round(100*wins_for_A/n,2), "% of the time")
print("Bob wins", round(100*wins_for_B/n,2), "% of the time")
# Plot results
# Set x axis values
x_co = [i for i in range(len(CurrentValue_A))]
# Initialize figure and set x, y limits
fig,ax = plt.subplots(figsize=(10,5))
ax.set_xlim([0,len(x_co) + 1])
ax.set_ylim([0,StartValue_A+StartValue_B])
ax.grid(True)
# Plot points for A and B at each turn
ax.plot(x_co,CurrentValue_A,label="Alice",c='r')
ax.plot(x_co,CurrentValue_B,label="Bob",c='black')
# Add labels, title and legend to improve readability
ax.set_ylabel("Number of points",fontsize=25)
ax.set_xlabel("Number of turns",fontsize=25)
ax.set_title("A Typical Game",fontsize=25)
plt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=1.)
plt.show()
# Create interactive menu with parameters
style = {'description_width': 'initial'}
all_the_widgets = [widgets.BoundedIntText(
value=10,
min=1,
max=1000,
description='Alice: Initial Points:',
disabled=False,style =style), widgets.BoundedIntText(
value=10,
min=1,
max=1000,
description='Bob:Initial Points:',
disabled=False,style =style), widgets.BoundedFloatText(
value=1,
min=0,
max=1000,
step=0.1,
description='# points for Alice win',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='f',
style =style),widgets.BoundedFloatText(
value=1,
min=0,
max=1000,
step=0.1,
description='# points for Bob win',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='f',
style =style)]
# Button widget
CD_button = widgets.Button(
button_style='success',
description="Run Simulations",
layout=Layout(width='15%', height='30px'),
style=style
)
def draw_results(b):
StartValue_A = all_the_widgets[0].value
StartValue_B = all_the_widgets[1].value
p = all_the_widgets[2].value
q = all_the_widgets[3].value
clear_output()
display(tab) ## Have to redraw the widgets
plot_game(StartValue_A, StartValue_B, p, q)
# Connect widget to function - run subsequent cells
#CD_button.on_click( runN_cell )
CD_button.on_click( draw_results )
# user menu using categories found above
tab3 = VBox(children=[HBox(children=all_the_widgets[0:2]),HBox(children=all_the_widgets[2:4]),
CD_button])
tab = widgets.Tab(children=[tab3])
tab.set_title(0, 'Choose Parameters')
# display(tab) ## We will display in the next cell. So SlideShow works.
display(tab)
StartValue_A = all_the_widgets[0].value
StartValue_B = all_the_widgets[1].value
p = all_the_widgets[2].value
q = all_the_widgets[3].value
plot_game(StartValue_A, StartValue_B, p, q)
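# A brute-force check (a sketch) of the expected per-round payoffs discussed in
# the markdown above: enumerate all 36 equally likely (Alice, Bob) die outcomes.
from fractions import Fraction
payoff_one = sum(Fraction(1) if a >= b else Fraction(-1)
                 for a in range(1, 7) for b in range(1, 7)) / 36
payoff_two = sum(Fraction(1) if a >= b else Fraction(-2)
                 for a in range(1, 7) for b in range(1, 7)) / 36
print('Expected payoff for Alice when Bob wins $1 per loss:', payoff_one)   # 1/6
print('Expected payoff for Alice when Bob wins $2 per loss:', payoff_two)   # -1/4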
###Output
The average number of rounds is 58.886
Alice wins 96.6 % of the time
Bob wins 3.4 % of the time
|
code/Decaps_LSST_illustrate_single-epoch.ipynb | ###Markdown
Executive Summary: DECam image comparison, illustrating catastrophic mismatches between the decaps single-epoch catalog and the LSST stack `src` catalog based on InstCal DECam images.
###Code
# Necessary imports ..
import os
import urllib.request
from itertools import product

import numpy as np
import matplotlib.pyplot as plt
from astropy.io import fits
from astropy.table import Table, hstack, vstack
from astropy.stats import sigma_clipped_stats
from astropy.visualization import SqrtStretch
from astropy.visualization.mpl_normalize import ImageNormalize
from astropy.coordinates import SkyCoord
from astropy import units as u
from scipy.stats import binned_statistic as bs
image_database = '../data_products/decaps_catalogs/imdb.fits'
imdb_hdu = fits.open(image_database)
imdb = Table(imdb_hdu[1].data)
#mask_recno = imdb['sb_recno'] == 611980
#m = imdb_short['sb_name'] == 'c4d_170122_053255_ooi_g_v1.fits
# this can be used to find the name of single-band catalog for
# the given visit to make it more automatic ...
visit = 611980
cat_name = imdb[imdb['expnum'] == visit]['catfname'].data[0]
print('The single-band catalog name corresponding to visit %d is %s' %(visit, cat_name))
# the cat name is based on the image name ...
#image_name = 'c4d_170122_055542_ooi_g'
#cat_name = image_name + '_v1.cat.fits'
singleDir = '../data_products/decaps_catalogs/single_epoch/'
file_name = singleDir + cat_name
# check if the catalog already exists
if cat_name not in os.listdir(singleDir) :
print('Downloading the catalog...')# if not, download it ...
url = 'https://faun.rc.fas.harvard.edu/decaps/release/cat/' + cat_name
urllib.request.urlretrieve(url, file_name)
decaps_hdu = fits.open(file_name)
#http://www.astropy.org/astropy-tutorials/FITS-tables.html
# hdu.info() would display all available tables -
# there is a single catalog per CCD,
# called 'S21_CAT', etc, based on CCD name.
# save the zero point for this catalog
decaps_zeropoint = decaps_hdu[0].header['MAGZERO']
# one per entire image composed of multiple CCDs
print('The decaps single-epoch catalog zeropoint is %f'%decaps_zeropoint)
# Read the LSST zeropoint - it's the same
# for all ccd's across the mosaic ...
outDir = '../data_products/LSST_Stack/DECam/'+str(visit)+'/'
# first check calexp for zero point magnitude
# it is exactly the same for all CCDs in a mosaic
calexp_files = os.listdir(outDir+'calexp/')
calexp_hdu = fits.open(outDir+'calexp/' + calexp_files[1])
lsst_zeropoint = calexp_hdu[0].header['MAGZERO']
print('The lsst measured zeropoint for decam is %f'%lsst_zeropoint)
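# For reference, the flux-to-magnitude conversion used repeatedly below, written
# as a small helper (a sketch; the cells below keep the inline form):
def flux_to_mag(flux, zeropoint):
    return -2.5 * np.log10(flux) + zeropoint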
# Make only once : the translation of ccdnum to ccdname, to position in the
# decals hdu
ccd_name_dict = {}
catalog_decaps_pos = {}
# Make sure that all cards got downloaded -
# the single-epoch catalog is a FITS file
# where each card corresponds to a CCD
assert len(decaps_hdu[:]) == 181
for i in range(1,180) :
if 'IMAGE' in decaps_hdu[i].header['XTENSION'] :
ccdnum = decaps_hdu[i].header['CCDNUM']
detpos = decaps_hdu[i].header['DETPOS']
ccd_name_dict[ccdnum] = detpos
catalog_decaps_pos[ccdnum] = int(i+2)
# The coveted dictionary of ccdnum vs ccdname !
print(np.ravel(ccd_name_dict))
# and the translation of ccdnum to hduposition of the catalog ....
print(np.ravel(catalog_decaps_pos))
# Combine all lsst sources from all ccd's ,
# and all decals sources from all ccd's, to improve statistics ...
#def process_src_ccd(outDir, visit, i):
# Initialize storage AstroPy tables :
arr = {'lsst_mag':[], 'coord_ra':[],'coord_dec':[]}
ccd_lsst_stack = Table(arr, names=('lsst_mag', 'coord_ra', 'coord_dec'),
dtype=('f8', 'f8', 'f8'))
arr = {'decaps_mag':[],'ra':[],'dec':[]}
ccd_decaps_stack = Table(arr, names = ('decaps_mag', 'ra', 'dec'),
dtype = ('f8', 'f8', 'f8'))
# loop over all ccds adding to stacks...
#
src_files = os.listdir(outDir+'src/')
start = len('src-0'+str(visit)+'_')
stop = len('.fits')
for i in range(len(src_files)):
ccdnum = src_files[i][start:-stop] # string
ccd_number = float(ccdnum)
fname = 'src-0'+str(visit)+'_'+ccdnum+'.fits'
hdu = fits.open(outDir +'src/'+ fname)
print(fname)
# convert to an AstroPy table
ccd_data = Table(hdu[1].data)
# only consider positive fluxes...
mask_neg_fluxes = ccd_data['base_PsfFlux_flux'].data > 0
# just select rows that don't have negative fluxes...
ccd_data_good = ccd_data[mask_neg_fluxes]
ccd_data_good['lsst_mag'] = -2.5* np.log10(ccd_data_good['base_PsfFlux_flux']) +\
lsst_zeropoint
# keep only most relevant info...
ccd_lsst = ccd_data_good[['lsst_mag', 'coord_ra', 'coord_dec']]
# add to the stack
ccd_lsst_stack = vstack([ccd_lsst_stack ,ccd_lsst] )
# Display mapping information
print(' * ccd number %d ' % ccd_number)
print(' * ccd name is %s'% ccd_name_dict[ccd_number])
print(' * position in decaps hdu is %d'%catalog_decaps_pos[ccd_number])
# read in decaps single-epoch catalog for that ccd...
ccd_decaps_cat = Table(decaps_hdu[catalog_decaps_pos[ccd_number]].data)
# convert the fluxes to magnitudes
ccd_decaps_cat['decaps_mag'] = -2.5 * np.log10(ccd_decaps_cat['flux'].data) +\
decaps_zeropoint
# keep only the relevant info
ccd_decaps = ccd_decaps_cat[['decaps_mag','ra','dec']]
ccd_decaps_stack = vstack([ccd_decaps_stack, ccd_decaps])
print('Done')
# Match sources from decaps to lsst per ccd
# decam coordinates
decam_coord = SkyCoord(ra = ccd_decaps_stack['ra']*u.degree,
dec = ccd_decaps_stack['dec']*u.degree)
# lsst coordinates : in radians !
lsst_coord = SkyCoord(ra = ccd_lsst_stack['coord_ra']*u.radian,
dec= ccd_lsst_stack['coord_dec']*u.radian)
# indices are into lsst ccd catalog
# match decaps into lsst ...
idx, d2d, d3d = decam_coord.match_to_catalog_sky(lsst_coord)
# stack the two catalogs
decam_to_lsst = hstack([ccd_decaps_stack ,ccd_lsst_stack[idx]],
table_names=['decam','lsst'] )
print('There are %d decaps sources and %d lsst sources.'%(len(decam_coord),
len(lsst_coord))
)
# matches within 0.5 arcsec...
cut_arcsec = 0.5
mask_arcsec = d2d.arcsec < cut_arcsec
# matches within 0.5 mag from one another ...
decam_to_lsst['dmag'] = decam_to_lsst['lsst_mag'] - decam_to_lsst['decaps_mag']
cut_mag = 0.5
mask_mag = abs(decam_to_lsst['dmag'].data) < cut_mag
mask_comb = mask_arcsec * mask_mag
print(' %d decaps srcs have an lsst match within %.1f arcsec'%(
np.sum(mask_arcsec), cut_arcsec)
)
print(' %d decaps srcs have an lsst match within %.1f mag'%(
np.sum(mask_mag), cut_mag)
)
print(' %d decaps srcs have an lsst match fulfilling both criteria'%np.sum(mask_comb)
)
# Using the original table, make a column to flag which
# decaps srcs have a good lsst match
# Initialize with zeros
decam_to_lsst['lsst_match'] = 0
# Set to 1 only where the match is good, i.e. fulfills the combined selection masks
decam_to_lsst['lsst_match'][mask_comb] = 1
ccdnum
# Read in the LSST Stack processed version
# take just one CCD ( eg. ccd10)
ccdnum = 10
fname = 'calexp-0'+str(visit)+'_'+str(ccdnum)+'.fits'
print(fname)
hdu = fits.open(outDir + 'calexp/'+fname)
g_zeropoint = hdu[0].header['MAGZERO']
lsst_image_data = hdu[1].data
# Also read the original DECam image
# it contains all ccds
from photutils import CircularAperture
image_name = imdb[imdb['expnum'] == visit]['pldsid'][0]+'.fits.fz'
print('Seeking image %s'%image_name)
image_dir = '../raw_data/DECam/'
if image_name not in os.listdir(image_dir):
print('Need to download %s'%image_name)
image_hdu = fits.open(image_dir +image_name)
# position of any ccd in the image_hdu
# is the ccdnum-1 :
assert image_hdu[ccdnum-1].header['CCDNUM'] == ccdnum
# need to choose only one to display...
#decam_image_data = image_hdu[ccdnum-1].data
# Translate ra,dec to pixel x,y, coordinates ..
#
x_px_scale = image_hdu[0].header['PIXSCAL1'] # arcsec / pixel
y_px_scale = image_hdu[0].header['PIXSCAL2']
print(x_px_scale)
print(y_px_scale)
from astropy import wcs
def add_ccd_xycoords(table, image_hdu, ccdnum=10):
# Parse the WCS keywords in the primary HDU
w = wcs.WCS(image_hdu[ccdnum-1].header)
# Example: pixel to world
#pixcrd = np.array([[0, 0], [24, 38], [45, 98]], np.float_)
#world = w.wcs_pix2world(pixcrd, 1)
# Example : world to pixel
#pixcrd2 = w.wcs_world2pix(world, 1)
# Check that it works ...
#assert np.max(np.abs(pixcrd - pixcrd2)) < 1e-6
# first need to express the ra,dec in terms of pixel location on the image ...
radec_decaps = np.column_stack((table['ra'],
table['dec'])
)
radec_lsst = np.column_stack((360*table['coord_ra'] / (2*np.pi),
360*table['coord_dec']/(2*np.pi))
)
# DECAPS coords: ra, dec , LSST coords: coord_ra , coord_dec
pixcrd_decaps = w.wcs_world2pix(radec_decaps,1)
pixcrd_lsst = w.wcs_world2pix(radec_lsst,1)
# plot histogram to show
# that since these are sources from the
# entire catalog, their x,y position on the chip would be
# beyond the chip limits,
# which are (0,2048) in x,
# and (0,4096) in y
#plt.hist(pixcrd_decaps)
# make columns for decaps ra,dec based x,y coords
xcords = np.ravel(np.hsplit(pixcrd_decaps,2)[0])
ycords = np.ravel(np.hsplit(pixcrd_decaps,2)[1])
colname_x = 'ccd'+str(ccdnum)+'_x_decaps'
colname_y = 'ccd'+str(ccdnum)+'_y_decaps'
table[colname_x] = xcords
table[colname_y] = ycords
print('We added two columns translating decaps ra,dec to x,y coords on ccd %d'%ccdnum)
print('They are called: \n %s, and %s'%(colname_x, colname_y))
# make columns for lsst ra,dec based x,y coords
xcords = np.ravel(np.hsplit(pixcrd_lsst,2)[0])
ycords = np.ravel(np.hsplit(pixcrd_lsst,2)[1])
colname_x = 'ccd'+str(ccdnum)+'_x_lsst'
colname_y = 'ccd'+str(ccdnum)+'_y_lsst'
table[colname_x] = xcords
table[colname_y] = ycords
print('We added two columns translating lsst ra,dec to x,y coords on ccd %d'%ccdnum)
print('They are called: \n %s, and %s'%(colname_x, colname_y))
return table
decam_to_lsst = add_ccd_xycoords(decam_to_lsst, image_hdu)
#lsst_to_decam = add_ccd_xycoords(lsst_to_decam, image_hdu)
%matplotlib inline
from matplotlib.patches import Circle
# Plot the very bright mismatches ....
# grab the size of the image in pixels
# from the image header ....
xaxis_length = image_hdu[ccdnum-1].header['NAXIS1']
yaxis_length = image_hdu[ccdnum-1].header['NAXIS2']
xcords = decam_to_lsst['ccd10_x_decaps']
ycords = decam_to_lsst['ccd10_y_decaps']
# select a subregion within a chosen part of the image ....
xmin,xmax = 1100,1350 # xaxis_length
ymin,ymax = 25,200 # yaxis_length
mx = (xmin < xcords) * (xcords < xmax)
my = (ymin < ycords) * (ycords < ymax)
print('Total sources within this cutout : %d'%np.sum(mx*my))
# select a subset of the original table within
# this cutout ... ( based on decaps sources in that cutout)
cutout_table = decam_to_lsst[mx*my]
# First select the bright mismatches : only in decam
# but not in LSST ....
# (at least not within 0.5 arcsec and 0.5 mag ... )
#xmin,xmax = min(decam_to_lsst[m]['ccd10_x_decaps']), \
#max(decam_to_lsst[m]['ccd10_x_decaps'])
#ymin,ymax = min(decam_to_lsst[m]['ccd10_y_decaps']), \
#max(decam_to_lsst[m]['ccd10_y_decaps'])
# plot the decam- lsst mismatches ....
fig,ax = plt.subplots(1,1, figsize=(8,16))
# print the image ....
ccdnum = 10
fname = 'calexp-0'+str(visit)+'_'+str(ccdnum)+'.fits'
print('Using %s'%fname)
hdu = fits.open(outDir + 'calexp/'+fname)
lsst_image_data = hdu[1].data
mean, median, std = sigma_clipped_stats(lsst_image_data[ymin:ymax,xmin:xmax],
sigma=3.0, iters=5)
norm = ImageNormalize(stretch=SqrtStretch())
ax.imshow(lsst_image_data[ymin:ymax,xmin:xmax],
cmap='Greys', origin='lower', norm=norm,
vmax = 200, vmin = 5)
# translate x ticks
new_labels = [str(item +xmin) for item in ax.get_xticks()]
ax.set_xticklabels(new_labels)
# translate y ticks
new_labels = [str(item +ymin) for item in ax.get_yticks()]
ax.set_yticklabels(new_labels)
# plot apertures for bright decaps sources, that
# did not have an lsst match within 0.5 arcsec and 0.5 mag
m1 = cutout_table['lsst_match'] == 0
m2 = np.bitwise_not(np.isnan(cutout_table['decaps_mag']))
m3 = cutout_table['decaps_mag'] < 16
m = m1*m2*m3
print('Sources with no lsst match, with decaps mag < 16 : %d'%np.sum(m))
print(cutout_table[m][['decaps_mag', 'lsst_mag']])
for xx,yy in zip(cutout_table['ccd10_x_decaps'][m]-xmin,
cutout_table['ccd10_y_decaps'][m]-ymin):
circ = Circle((xx,yy),50, alpha=0.8, fill=False,ls='--',ec='green')
ax.add_patch(circ)
# Here we just plot all LSST detections in that region ...
xcords = cutout_table['ccd10_x_lsst']
ycords = cutout_table['ccd10_y_lsst']
for xx,yy in zip(xcords-xmin,ycords-ymin):
# https://matplotlib.org/devdocs/api/_as_gen/matplotlib.patches.Circle.html
circ = Circle((xx,yy), radius=8,alpha=0.8, fill=False, ec='orange')
ax.add_patch(circ)
# Finally, also plot all DECaPS sources in that region
xcords = cutout_table['ccd10_x_decaps']
ycords = cutout_table['ccd10_y_decaps']
for xx,yy in zip(xcords-xmin,ycords-ymin):
# https://matplotlib.org/devdocs/api/_as_gen/matplotlib.patches.Circle.html
circ = Circle((xx,yy), radius=11, alpha=0.8, fill=False, ec='red')
ax.add_patch(circ)
plt.savefig('../data_products/decaps_lsst_compare/'+str(visit)+\
'_ccd-'+str(ccdnum)+'_faint_feat.png', bbox_inches='tight')
# An example where the LSST vs
###Output
_____no_output_____
###Markdown
Investigate other bright mismatches - those above are an example of LSST reporting a smaller magnitude than decaps. How about even brighter sources ?
###Code
m1= decam_to_lsst['lsst_match'] == 0
m2= decam_to_lsst['decaps_mag'] < 15
m3 = decam_to_lsst['decaps_mag'] > 10
m4 = decam_to_lsst['lsst_mag'] > 15
# within ccd10 ...
xcords = decam_to_lsst['ccd10_x_decaps']
ycords = decam_to_lsst['ccd10_y_decaps']
# select a subregion within a chosen part of the image ....
xmin,xmax = 0, xaxis_length
ymin,ymax = 0, yaxis_length
mx = (xmin < xcords) * (xcords < xmax)
my = (ymin < ycords) * (ycords < ymax)
m = m1*m2*m3*m4
plt.scatter(decam_to_lsst['decaps_mag'][m], decam_to_lsst['lsst_mag'][m])
###Output
/Users/chris/anaconda3/envs/py36/lib/python3.6/site-packages/ipykernel/__main__.py:2: RuntimeWarning: invalid value encountered in less
from ipykernel import kernelapp as app
/Users/chris/anaconda3/envs/py36/lib/python3.6/site-packages/ipykernel/__main__.py:3: RuntimeWarning: invalid value encountered in greater
app.launch_new_instance()
###Markdown
I want to be able to quickly show a small selection of a ccd where these objects lie: essentially, a postage-stamp comparison between the decaps and lsst detections (a possible helper is sketched in the code below).
###Code
decam_to_lsst[m]
decam_to_lsst = add_ccd_xycoords(decam_to_lsst, image_hdu, ccdnum=27)
x = decam_to_lsst[m]['ccd27_x_lsst']
y = decam_to_lsst[m]['ccd27_y_lsst']
mx = (0<x) * (x < xaxis_length)
my = (0<y) * (y < yaxis_length)
mxy = mx*my
decam_to_lsst[m][mxy]
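# A possible postage-stamp helper (a sketch, not the author's code): cut a small
# box around a given (x, y) pixel position with astropy's Cutout2D, so a decaps
# source and its LSST counterpart can be inspected side by side.
from astropy.nddata import Cutout2D
def postage_stamp(image_data, x, y, size=100):
    """Return a size x size pixel cutout centred on pixel position (x, y)."""
    return Cutout2D(image_data, position=(x, y), size=size).data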
# These are not in ccd10 : go to another ccd
# (here ccd 41; this assumes add_ccd_xycoords(decam_to_lsst, image_hdu, ccdnum=41)
#  has been run so that the ccd41_* columns exist)
ccdnum = 41
fname = 'calexp-0'+str(visit)+'_'+str(ccdnum)+'.fits'
print(fname)
hdu = fits.open(outDir + 'calexp/'+fname)
g_zeropoint = hdu[0].header['MAGZERO']
lsst_image_data = hdu[1].data
decam_image_data = image_hdu[ccdnum-1].data
# choose which image to plot...
image_data = lsst_image_data # decam_image_data # or lsst_image_data
xcords = decam_to_lsst['ccd'+str(ccdnum)+'_x_decaps']
ycords = decam_to_lsst['ccd'+str(ccdnum)+'_y_decaps']
# select a subregion within a chosen part of the image ....
xmin,xmax = 700,xaxis_length # xaxis_length
ymin,ymax = 2000,2750# yaxis_length
mx = (xmin < xcords) * (xcords < xmax)
my = (ymin < ycords) * (ycords < ymax)
print('Total sources within this cutout : %d'%np.sum(mx*my))
# select a subset of the original table within
# this cutout ... ( based on decaps sources in that cutout)
cutout_table = decam_to_lsst[mx*my]
# First select the bright mismatches : only in decam
# but not in LSST ....
# (at least not within 0.5 arcsec and 0.5 mag ... )
#xmin,xmax = min(decam_to_lsst[m]['ccd10_x_decaps']), \
#max(decam_to_lsst[m]['ccd10_x_decaps'])
#ymin,ymax = min(decam_to_lsst[m]['ccd10_y_decaps']), \
#max(decam_to_lsst[m]['ccd10_y_decaps'])
# plot the decam- lsst mismatches ....
fig,ax = plt.subplots(1,1, figsize=(8,16))
# print the image ....
mean, median, std = sigma_clipped_stats(image_data[ymin:ymax,xmin:xmax],
sigma=3.0, iters=5)
norm = ImageNormalize(stretch=SqrtStretch())
ax.imshow(image_data[ymin:ymax,xmin:xmax],
cmap='Greys', origin='lower', norm=norm,
vmax = 5000, vmin = 5)
# translate x ticks
new_labels = [str(item +xmin) for item in ax.get_xticks()]
ax.set_xticklabels(new_labels)
# translate y ticks
new_labels = [str(item +ymin) for item in ax.get_yticks()]
ax.set_yticklabels(new_labels)
# plot apertures for bright decaps sources, that
# did not have an lsst match within 0.5 arcsec and 0.5 mag
select_bleeding_sources = False
if select_bleeding_sources :
m1 = cutout_table['lsst_match'] == 0
m2= cutout_table['decaps_mag'] < 11
m3 = cutout_table['decaps_mag'] > 10
m4 = cutout_table['lsst_mag'] < 13
m = m1*m2*m3*m4
if select_bleeding_sources :
print(cutout_table[m][['decaps_mag', 'lsst_mag','ccd'+str(ccdnum)+'_y_decaps',
'ccd'+str(ccdnum)+'_x_decaps',
'ccd'+str(ccdnum)+'_y_lsst',
'ccd'+str(ccdnum)+'_x_lsst']])
for xx,yy,radius in zip(cutout_table['ccd'+str(ccdnum)+'_x_decaps'][m]-xmin,
cutout_table['ccd'+str(ccdnum)+'_y_decaps'][m]-ymin,
cutout_table[m]['decaps_mag']*5):
circ = Circle((xx,yy),radius = radius,
alpha=0.8, fill=False, ec = 'red',lw=3)
ax.add_patch(circ)
# Here we just plot all LSST detections in that region ...
plot_lsst = False
if plot_lsst :
xcords = cutout_table['ccd'+str(ccdnum)+'_x_lsst']
ycords = cutout_table['ccd'+str(ccdnum)+'_y_lsst']
for xx,yy in zip(xcords-xmin,ycords-ymin):
# https://matplotlib.org/devdocs/api/_as_gen/matplotlib.patches.Circle.html
circ = Circle((xx,yy), radius=8,alpha=0.8, fill=False, ec='orange')
ax.add_patch(circ)
# Finally, also plot all DECaPS sources in that region
plot_decaps = False
if plot_decaps :
xcords = cutout_table['ccd'+str(ccdnum)+'_x_decaps']
ycords = cutout_table['ccd'+str(ccdnum)+'_y_decaps']
for xx,yy in zip(xcords-xmin,ycords-ymin):
# https://matplotlib.org/devdocs/api/_as_gen/matplotlib.patches.Circle.html
circ = Circle((xx,yy), radius=11, alpha=0.8, fill=False, ec='red')
ax.add_patch(circ)
plt.savefig('../data_products/decaps_lsst_compare/'+str(visit)+\
'_ccd-'+str(ccdnum)+'_1', bbox_inches='tight')
###Output
Total sources within this cutout : 712
decaps_mag lsst_mag ccd41_y_decaps ... ccd41_y_lsst ccd41_x_lsst
------------- ------------- -------------- ... ------------- -------------
10.1396274567 12.574231905 2142.34166161 ... 2144.69675921 877.002803676
10.6832122803 12.5509064268 2234.56096841 ... 2234.96283608 1880.96032051
10.3209266663 12.5469616542 2621.8386592 ... 2622.4884557 1575.05614874
###Markdown
Save the illustration code from Decaps_LSST_pipeline to put all illustration data here ... Show LSST calexp processed image of one CCD
###Code
# read in the FITS file corresponding to the raw image,
# and the processed image...
# Read in the LSST Stack processed version
# take just one CCD ( eg. ccd10)
ccdnum = 10
fname = 'calexp-0'+str(visit)+'_'+str(ccdnum)+'.fits'
print(fname)
hdu = fits.open(outDir + 'calexp/'+fname)
g_zeropoint = hdu[0].header['MAGZERO']
lsst_image_data = hdu[1].data
mean, median, std = sigma_clipped_stats(lsst_image_data, sigma=3.0, iters=5)
threshold = 5 * std
norm = ImageNormalize(stretch=SqrtStretch())
#fig,ax = plt.subplots(1,1,figsize = (8,16))
#ax.imshow(lsst_image_data, cmap='Greys', origin='lower', norm=norm,
# vmax = 7000,
# vmin = threshold)
###Output
_____no_output_____
###Markdown
Show the DECam single-epoch image for the same CCD
###Code
# Also read the original DECam image
# and display the same ccd ....
from photutils import CircularAperture
# find the archive image filename
image_name = imdb[imdb['expnum'] == visit]['pldsid'][0]+'.fits.fz'
print('Seeking image %s'%image_name)
image_dir = '../raw_data/DECam/'
if image_name not in os.listdir(image_dir):
print('Need to download %s'%image_name)
# plot the DECam - processed image ...
image_hdu = fits.open(image_dir +image_name)
#image_hdu[0].header
fwhm = image_hdu[1].header['FWHM'] # Median FWHM in pixels
print('The median FWHM is %f pixels'%fwhm)
#ccd_name_dict[ccdnum]
#np.ravel(ccd_name_dict)
# position of any ccd in the image_hdu
# is the ccdnum-1 :
assert image_hdu[ccdnum-1].header['CCDNUM'] == ccdnum
# select the instcal image data :
decam_image_data = image_hdu[ccdnum-1].data
print('Min:', np.min(decam_image_data))
print('Max:', np.max(decam_image_data))
print('Mean:', np.mean(decam_image_data))
print('Stdev:', np.std(decam_image_data))
# I fear this works, but it is just one frame...
mean, median, std = sigma_clipped_stats(decam_image_data, sigma=3.0, iters=5)
print('Sigma clipped mean: %f'%mean)
print('Sigma clipped median: %f'%median)
print('Sigma clipped stdev: %f'%std)
# set the detection threshold at 5 sigma
threshold = 5 * std
print('We set the threshold to 5 times the standard deviation \
of pixel value, i.e. %f '%threshold)
norm = ImageNormalize(stretch=SqrtStretch())
fig,ax = plt.subplots(1,1,figsize = (8,16))
ax.imshow(decam_image_data, cmap='Greys', origin='lower', norm=norm,
vmax = 7000,     # display ceiling, matching the calexp display above
vmin = threshold)
#apertures.plot(color='blue', lw=1.5, alpha=0.5)
###Output
_____no_output_____
###Markdown
Investigate bright decaps sources without lsst match...
###Code
ccdnum = 10
# read in the lsst calexp image...
image_name = imdb[imdb['expnum'] == visit]['pldsid'][0]+'.fits.fz'
print('Seeking image %s'%image_name)
image_dir = '../raw_data/DECam/'
if image_name not in os.listdir(image_dir):
print('Need to download %s'%image_name)
# plot the DECam - processed image ...
image_hdu = fits.open(image_dir +image_name)
%matplotlib inline
from matplotlib.patches import Circle
# Plot the very bright mismatches ....
# grab the size of the image in pixels
# from the image header ....
xaxis_length = image_hdu[ccdnum-1].header['NAXIS1']
yaxis_length = image_hdu[ccdnum-1].header['NAXIS2']
xcords = decam_to_lsst['ccd10_x_decaps']
ycords = decam_to_lsst['ccd10_y_decaps']
# select a subregion within a chosen part of the image ....
xmin,xmax = 1000,1500 # xaxis_length
ymin,ymax = 0,500 # yaxis_length
mx = (xmin < xcords) * (xcords < xmax)
my = (ymin < ycords) * (ycords < ymax)
print('Total sources within this cutout : %d'%np.sum(mx*my))
# select a subset of the original table within
# this cutout ... ( based on decaps sources in that cutout)
cutout_table = decam_to_lsst[mx*my]
# First select the bright mismatches : only in decam
# but not in LSST ....
# (at least not within 0.5 arcsec and 0.5 mag ... )
#xmin,xmax = min(decam_to_lsst[m]['ccd10_x_decaps']), \
#max(decam_to_lsst[m]['ccd10_x_decaps'])
#ymin,ymax = min(decam_to_lsst[m]['ccd10_y_decaps']), \
#max(decam_to_lsst[m]['ccd10_y_decaps'])
# plot the decam- lsst mismatches ....
fig,ax = plt.subplots(1,1, figsize=(8,16))
# print the image ....
mean, median, std = sigma_clipped_stats(lsst_image_data[ymin:ymax,xmin:xmax],
sigma=3.0, iters=5)
norm = ImageNormalize(stretch=SqrtStretch())
ax.imshow(lsst_image_data[ymin:ymax,xmin:xmax],
cmap='Greys', origin='lower', norm=norm,
vmax = 500, vmin = 5)
# plot apertures for bright decaps sources, that
# did not have an lsst match within 0.5 arcsec and 0.5 mag
m1 = cutout_table['lsst_match'] == 0
m2 = np.bitwise_not(np.isnan(cutout_table['decaps_mag']))
m3 = cutout_table['decaps_mag'] < 16
m = m1*m2*m3
print('Sources with no lsst match, with decaps mag < 16 : %d'%np.sum(m))
print(cutout_table[m][['decaps_mag', 'lsst_mag']])
for xx,yy in zip(cutout_table['ccd10_x_decaps'][m]-xmin,
cutout_table['ccd10_y_decaps'][m]-ymin):
circ = Circle((xx,yy),50, alpha=0.8, fill=False)
ax.add_patch(circ)
# Here we just plot all LSST detections in that region ...
xcords = cutout_table['ccd10_x_lsst']
ycords = cutout_table['ccd10_y_lsst']
for xx,yy in zip(xcords-xmin,ycords-ymin):
# https://matplotlib.org/devdocs/api/_as_gen/matplotlib.patches.Circle.html
circ = Circle((xx,yy), radius=8,alpha=0.8, fill=False, ec='orange')
ax.add_patch(circ)
# Finally, also plot all DECaPS sources in that region
xcords = cutout_table['ccd10_x_decaps']
ycords = cutout_table['ccd10_y_decaps']
for xx,yy in zip(xcords-xmin,ycords-ymin):
# https://matplotlib.org/devdocs/api/_as_gen/matplotlib.patches.Circle.html
circ = Circle((xx,yy), radius=11, alpha=0.8, fill=False, ec='red')
ax.add_patch(circ)
###Output
_____no_output_____
###Markdown
On this image we marked with big black circles the two bright decaps sources that did not have an lsst match within 0.5 arcsec AND 0.5 mag. We overplotted in red all decaps detections, and in orange all lsst detections. We find that there are indeed lsst detections at those locations, but the lsst magnitudes are 12.5410783705 and 12.5401236168 compared to the decaps values 11.9704704285 and 11.9622058868, i.e. the difference is bigger than 0.5 mag. As an aside, the next cell gives a minimal, self-contained sketch of how such a positional-plus-magnitude match flag can be constructed.
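###Code
# Minimal, self-contained sketch of the "match within 0.5 arcsec AND 0.5 mag" criterion,
# using synthetic coordinates and magnitudes. The real flag used in this notebook,
# decam_to_lsst['lsst_match'], was computed earlier in the pipeline; this cell is illustrative only.
from astropy.coordinates import SkyCoord
from astropy import units as u
import numpy as np
demo_decaps = SkyCoord(ra=[150.0010, 150.0100]*u.degree, dec=[2.0010, 2.0100]*u.degree)
demo_lsst = SkyCoord(ra=[150.0011, 150.0200]*u.degree, dec=[2.0010, 2.0200]*u.degree)
demo_decaps_mag = np.array([15.0, 16.0])
demo_lsst_mag = np.array([15.1, 18.0])
idx, d2d, _ = demo_decaps.match_to_catalog_sky(demo_lsst)
dmag = np.abs(demo_decaps_mag - demo_lsst_mag[idx])
match_flag = (d2d.arcsec < 0.5) & (dmag < 0.5)
print(match_flag)
###Output
_____no_output_____
###Markdown
We check whether this is the case for all sources brighter than 18: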
###Code
decam_to_lsst = add_ccd_xycoords(decam_to_lsst, image_hdu, ccdnum=44)
# select 'mismatches' -decaps sources
# that do not have an lsst source within 0.5 arcsec and 0.5 mag
m1 = decam_to_lsst['lsst_match'] == 0
# select only bright ones....
m2 = decam_to_lsst['decaps_mag'] < 8
m = m1*m2
print(decam_to_lsst[m]['decaps_mag','lsst_mag', 'ccd44_x_decaps', 'ccd44_y_decaps'])
###Output
_____no_output_____
###Markdown
Wow, I didn't expect that! Why would there be super-bright sources that have very faint LSST counterparts?
###Code
#These are not in ccd10 : go to another..
# It is ccd 44 !
ccdnum = 44
fname = 'calexp-0'+str(visit)+'_'+str(ccdnum)+'.fits'
print(fname)
hdu = fits.open(outDir + 'calexp/'+fname)
g_zeropoint = hdu[0].header['MAGZERO']
lsst_image_data = hdu[1].data
xcords = decam_to_lsst['ccd'+str(ccdnum)+'_x_decaps']
ycords = decam_to_lsst['ccd'+str(ccdnum)+'_y_decaps']
# select a subregion within a chosen part of the image ....
xmin,xmax = 0,xaxis_length # xaxis_length
ymin,ymax = 0,yaxis_length # yaxis_length
mx = (xmin < xcords) * (xcords < xmax)
my = (ymin < ycords) * (ycords < ymax)
print('Total sources within this cutout : %d'%np.sum(mx*my))
# select a subset of the original table within
# this cutout ... ( based on decaps sources in that cutout)
cutout_table = decam_to_lsst[mx*my]
# First select the bright mismatches : only in decam
# but not in LSST ....
# (at least not within 0.5 arcsec and 0.5 mag ... )
#xmin,xmax = min(decam_to_lsst[m]['ccd10_x_decaps']), \
#max(decam_to_lsst[m]['ccd10_x_decaps'])
#ymin,ymax = min(decam_to_lsst[m]['ccd10_y_decaps']), \
#max(decam_to_lsst[m]['ccd10_y_decaps'])
# plot the decam- lsst mismatches ....
fig,ax = plt.subplots(1,1, figsize=(8,16))
# print the image ....
mean, median, std = sigma_clipped_stats(lsst_image_data[ymin:ymax,xmin:xmax],
sigma=3.0, iters=5)
norm = ImageNormalize(stretch=SqrtStretch())
ax.imshow(lsst_image_data[ymin:ymax,xmin:xmax],
cmap='Greys', origin='lower', norm=norm,
vmax = 500, vmin = 5)
# plot apertures for bright decaps sources, that
# did not have an lsst match within 0.5 arcsec and 0.5 mag
m1 = cutout_table['lsst_match'] == 0
m2 = np.bitwise_not(np.isnan(cutout_table['decaps_mag']))
bright_mag = 8
m3 = cutout_table['decaps_mag'] < bright_mag
m = m1*m2*m3
print('Sources with no lsst match, with decaps mag < %d : %d'%(bright_mag,np.sum(m)))
print(cutout_table[m][['decaps_mag', 'lsst_mag','ccd'+str(ccdnum)+'_y_decaps',
'ccd'+str(ccdnum)+'_x_decaps',
'ccd'+str(ccdnum)+'_y_lsst',
'ccd'+str(ccdnum)+'_x_lsst']])
for xx,yy,radius in zip(cutout_table['ccd'+str(ccdnum)+'_x_decaps'][m]-xmin,
cutout_table['ccd'+str(ccdnum)+'_y_decaps'][m]-ymin,
cutout_table[m]['decaps_mag']*5):
circ = Circle((xx,yy),radius = radius,
alpha=0.8, fill=False, ec = 'red',lw=3)
ax.add_patch(circ)
# simply plot the magnitudes : decaps and lsst ....
# I expect that the lsst mags will be smaller than decaps...
plt.scatter(decam_to_lsst[m]['decaps_mag'],
decam_to_lsst[m]['lsst_mag'])
# translate x ticks to real positions
new_labels = [str(item+xmin) for item in ax.get_xticks()]
ax.set_xticklabels(new_labels)
# translate y ticks to real positions
new_labels = [str(item+ymin) for item in ax.get_yticks()]
ax.set_yticklabels(new_labels)
ax.set_title('mismatches: only in decaps single-epoch ', fontsize=14)
#ax[1.set_title("matches : within 0.5 '' and 0.5 mag ", fontsize=14)
ax.set_ylabel('y arcseconds ', fontsize=15)
#fig.subplots_adjust(wspace=0.1)
ax.tick_params(axis='both', which='major', labelsize=13)
#a#x[1].tick_params(axis='both', which='major', labelsize=13)
#fig.text(0.5,0.1, 'x arcseconds ', fontsize=15)
plt.savefig('../data_products/decaps_lsst_compare/'+str(visit)+'_ccd-'+str(ccdnum)+\
'_decam_to_lsst_small22.png', bbox_inches='tight', dpi=my_dpi)
###Output
_____no_output_____
###Markdown
Implement the $\Delta m$ vs. $d$ test. We match the catalog to itself, and plot the separation to the nearest neighbour as a function of the magnitude difference.
###Code
# Let's use decaps coordinates, and decaps magnitudes ....
# compare those objects that have an lsst match,
# as opposed to those which do not ...
len(decam_to_lsst)
# select only few cols...
decam = decam_to_lsst[['decaps_mag','ra','dec', 'dmag', 'lsst_match' ]]
# match only those decaps sources to self that have LSST match ...
decam_dict = {}
for match in [0,1]:
m = decam['lsst_match'] == match
# Match sources from decaps to lsst per ccd
# decam coordinates
decam_coord = SkyCoord(ra = decam['ra'][m]*u.degree,
dec = decam['dec'][m]*u.degree)
# indices are into decam catalog itself
idx, d2d, d3d = decam_coord.match_to_catalog_sky(decam_coord, nthneighbor=2)
# stack the two catalogs
decam_self = hstack([decam[m] ,decam[m][idx]],
table_names=['one','two'] )
decam_self['d2d'] = d2d.arcsec
decam_self['decaps_dmag'] = decam_self['decaps_mag_one'] - decam_self['decaps_mag_two']
decam_dict[match] = decam_self
from scipy.stats import binned_statistic_2d
fig,ax = plt.subplots(1,2,figsize=(13,6))
xmin, xmax = -15,15
ymin, ymax = 0,40
colors = {0:'orange', 1:'teal'}
titles = {0:"no LSST match within 0.5 mag, 0.5''",
1:"with LSST match within 0.5 mag, 0.5''"}
cm = plt.cm.get_cmap('viridis')
for match in [0,1]:
x = decam_dict[match]['decaps_dmag']
y = decam_dict[match]['d2d']
mx = (xmin < x )*(x < xmax)
my = (ymin < y) * (y<ymax)
x = x[mx*my]
y = y[mx*my]
stats = binned_statistic_2d(x, y, values = x, statistic='count', bins=70)
z_sigma, x_edges, y_edges = stats[0], stats[1], stats[2]
# replace all nan's by 0 ...
z_sigma[np.isnan(z_sigma)] =0
z_reduce = z_sigma # [:-1, :-1] no need to reduce here because x_edges are already given with the right size
z_min, z_max = z_reduce.min(), np.abs(z_reduce).max()
z_rot = np.rot90(z_reduce) # rotate and flip to properly display...
z_rot_flip = np.flipud(z_rot)
z_masked = np.ma.masked_where(z_rot_flip == 0 , z_rot_flip) # mask out zeros...
# Plot 2D histogram using pcolor
image = ax[match].pcolormesh(x_edges,y_edges,np.log10(z_masked), cmap=cm)
# np.log10(z_masked) gives log counts
#ax[match].scatter(,, s=0.001,
# label=str(match), alpha=0.5, color=colors[match])
#ax[match].legend(loc='upper left', frameon=False,
# bbox_to_anchor=(0.71, 0.9), fontsize=15,markerscale=300)
ax[match].set_xlabel(r'$\Delta mag$', fontsize=15)
ax[match].set_ylabel('d [arcsec]', fontsize=15)
ax[match].set_ylim(ymin,ymax)
ax[match].set_xlim(xmin,xmax)
ax[match].set_title(titles[match], fontsize=15)
ax[match].tick_params(axis='both', which='major', labelsize=14)
colorbar_ax = fig.add_axes([0.91, 0.13, 0.02, 0.75]) # (x0 ,y0 , dx, dy )
colorbar = fig.colorbar(image, cax = colorbar_ax, orientation='vertical')
colorbar.set_label(r'$\log_{10}{(count)}$', fontsize=20)
plt.savefig('../data_products/decaps_lsst_compare/611980/distance_dmag_test.png',
bbox_inches='tight')
plt.scatter(decam_self['decaps_dmag'], decam_self['d2d'], s=0.001)
%matplotlib inline
m = decam_self['lsst_match_one'] == 0
plt.scatter(decam_self['decaps_dmag'][m], decam_self['d2d'][m], s=0.001)
m = decam_self['lsst_match_one'] == 1
plt.scatter(decam_self['decaps_dmag'][m], decam_self['d2d'][m], s=0.0001)
m = decam_self['lsst_match_one'] == 0
###Output
_____no_output_____ |
ds/notebooks/dev/03_jmg_hebci.ipynb | ###Markdown
HE-BCI

Here we collect data from the Higher-Education Business Community Interaction Survey available from the HESA website ([link](https://www.hesa.ac.uk/data-and-analysis/business-community)).

The structure is very similar to other HESA data we collected in `01_jmg`, so eventually we might want to merge both notebooks. I will definitely be reusing a lot of the code here.

In terms of indicators, we would like to create the following:

* Graduate start-ups rate (HE-BCI)
* Research resource (income) per spin-out (HE-BCI)
* Average external investment per formal spin-out (HE-BCI)
* Licensing and other IP income as proportion of research income (HE-BCI)
* Contract research income with businesses (HE-BCI)
* Consultancy income with businesses (HE-BCI)
* Contract research income with the public and third sector (HE-BCI)
* Consultancy income with the public and third sector (HE-BCI)

Preamble
###Code
%run ../notebook_preamble.ipy
import csv
import zipfile
import io
from ast import literal_eval
import seaborn as sn
from nuts_finder import NutsFinder
today_str = str(datetime.datetime.today()).split(' ')[0]
###Output
_____no_output_____
###Markdown
Functions Simple utilities
###Code
def tidy_cols(my_csv):
'''
Tidies column names ie lower and replace spaces with underscores
'''
return([re.sub(' ','_',col.lower()) for col in my_csv.columns])
def filter_data(data,var_val_pairs):
'''
We use this to filter the data more easily than using pandas subsetting
Args:
data (df) is a dataframe
var_val pairs (dict) is a dictionary where the keys are variables and the value are values
'''
d = data.copy()
for k,v in var_val_pairs.items():
d = d.loc[d[k]==v]
return(d.reset_index(drop=True))
def check_categories(data,columns):
'''
This counts frequencies of categorical variables. We use it to decide what variables to choose, and to avoid double counting
Args:
Data (df) is the data
Columns (list) are the categorical variables we want to check
'''
print('FREQUENCIES')
print('===========')
print('\n')
#We check frequencies
for var in columns:
print(var)
print('=====')
print(data[var].value_counts())
print('\n')
print('CROSSTABS')
print('===========')
#We check combinations
combs = list(combinations(columns,2))
for comb in combs:
print(comb[0]+' x '+comb[1])
print('=====')
print(pd.crosstab(data[comb[0]],data[comb[1]]))
print('\n')
###Output
_____no_output_____
###Markdown
Data collection
###Code
def hesa_parser(url,out_name,skip=16,encoding='utf-8'):
'''
Function to obtain and parse data from the HESA website
Args:
url (str) is the location of the csv file
out_name (str) is the saved name of the file
skip is the number of rows to skip (we could automate this by looking for rows at the top with lots of nans)
'''
#Request and parse
rs = requests.get(url)
#Parse the file
parsed = rs.content.decode(encoding)
#Save it
with open(f'../../data/raw/hesa/{out_name}.txt','w') as outfile:
outfile.write(parsed)
#Read it.
my_csv = pd.read_csv(f'../../data/raw/hesa/{out_name}.txt',skiprows=skip)
#Clean column names
my_csv.columns = tidy_cols(my_csv)
return(my_csv)
###Output
_____no_output_____
###Markdown
Data processing
###Code
def gimme_nuts(lat,lon,level=2):
'''
Function to extract nuts information from a pair of coordinates
Args:
lat (float) is the latitude
lon (float) is the longitude
level (int) is the NUTS level we want
'''
info = nf.find(lat=lat,lon=lon)
try:
nuts_id = [x['NUTS_ID'] for x in info if x['LEVL_CODE']==level][0]
nuts_name = [x['NUTS_NAME'] for x in info if x['LEVL_CODE']==level][0]
#print(info)
#nuts_id = info[level]['NUTS_ID']
#nuts_name = info[level]['NUTS_NAME']
except:
print(f'failed with {np.round(lat,2)},{np.round(lon,2)}')
nuts_id = np.nan
nuts_name=np.nan
return([nuts_id,nuts_name])
def compare_data(df_1,df_2,id_1,id_2,name_1,name_2):
'''
We use this function to check if the ids in two datasets we are merging are consistent.
Args:
dfs are the dfs we want to compare
ids are the ids we want to check
names are the names we want to use to explore the data
'''
print('In 1 but not in 2')
print('==================')
d1_miss = set(df_1[id_1].dropna())-set(df_2[id_2])
print(set(df_1.loc[[x in d1_miss for x in df_1[id_1]]][name_1]))
print('\n')
print('In 2 but not in 1')
print('==================')
d2_miss = set(df_2[id_2].dropna())-set(df_1[id_1])
print(set(df_2.loc[[x in d2_miss for x in df_2[id_2]]][name_2]))
###Output
_____no_output_____
###Markdown
Create NUTS aggregations
###Code
def make_nuts_estimate(data,nuts_lookup,counter,name,year_var='academic_year'):
'''
This function takes hesa data and creates a nuts estimate
Args:
data (df) where we have already selected variables of interest eg mode of employment
nuts (dict) is the ukprn - nuts name and code lookup
counter (str) is the variable with counts that we are interested in
year_var (str) is the variable containing the years we want to group by. If None, then we are not grouping by year
'''
d = data.copy()
#Add the nuts names and codes
d['nuts_name'],d['nuts_code'] = [[nuts_lookup[ukprn][var] if ukprn in nuts_lookup.keys() else np.nan for ukprn in data['ukprn']] for
var in ['nuts_name','nuts_code']]
#We are focusing on numbers
d[counter] = d[counter].astype(float)
#Group results by year?
if year_var == None:
out = d.groupby(['nuts_name','nuts_code'])[counter].sum()
else:
out = d.groupby(['nuts_name','nuts_code',year_var])[counter].sum()
out.name = name
return(out)
def multiple_nuts_estimates(data,nuts_lookup,variables,select_var,value,year_var='academic_year'):
'''
Creates NUTS estimates for multiple variables.
Args:
data (df) is the filtered dataframe
select_var (str) is the variable we want to use to select values
nuts_lookup (dict) is the lookup between universities and nuts
variables (list) is the list of variables for which we want to generate the analysis
value (str) is the field that contains the numerical value we want to aggregate in the dataframe
year_var (str) is the year_variable. If none, then we are not interested in years
'''
if year_var==None:
concat = pd.concat([make_nuts_estimate(data.loc[data[select_var]==m],nuts_lookup,value,m) for m in
variables],axis=1)
#If we want to do this by year then we will create aggregates by nuts name and code and year and then concatenate over columns
else:
year_store = []
for m in variables:
y = make_nuts_estimate(data.loc[data[select_var]==m],nuts_lookup,value,m,year_var='academic_year')
year_store.append(y)
concat = pd.concat(year_store,axis=1)
return(concat)
def convert_academic_year(df,year_var = 'academic_year',position=0):
'''
This function converts an academic year variable from HESA into a year (int)
Args:
df (df) with the academic year we want to convert
year_var (str) is the name of the year variable
position (int) is the position of the year. We default to 0 (first year)
'''
#Make copy
df_2 = df.copy()
#Reset index so we can work with it easily
df_2 = df_2.reset_index(level=2)
#Create the new year variable by splitting the academic year variable on /
df_2[year_var] = [int(x.split('/')[position]) if position==0 else int('20'+x.split('/')[position]) for x in df_2[year_var]]
#Reappend the year index
df_2.set_index(year_var,append=True,inplace=True)
#df_2.rename(columns={year_var:'year'},inplace=True)
return(df_2)
def make_indicator(table,target_path,var_lookup,year_var,nuts_var='nuts_code',nuts_spec=2018,decimals=3):
'''
We use this function to create and save indicators using our standardised format.
Args:
table (df) is a df with relevant information
target_path (str) is the location of the directory where we want to save the data (includes interim and processed)
var_lookup (dict) is a lookup to rename the variable into our standardised name
year (str) is the name of the year variable
nuts_var (str) is the name of the NUTS code variable. We assume it is nuts_code
nuts_spec (y) is the value of the NUTS specification. We assume we are working with 2018 NUTS
'''
#Copy
t = table.reset_index(drop=False)
#Reset index (we assume that the index is the nuts code, var name and year - this might need to be changed)
#Process the interim data into an indicator
#This is the variable name and code
var_name = list(var_lookup.keys())[0]
var_code = list(var_lookup.values())[0]
#Focus on those
t = t[[year_var,nuts_var,var_name]]
#Add the nuts specification
t['nuts_year_spec'] = nuts_spec
#Rename variables
t.rename(columns={var_name:var_code,year_var:'year',nuts_var:'nuts_id'},inplace=True)
#Round variables
t[var_code] = [np.round(x,decimals) if decimals>0 else int(x) for x in t[var_code]]
#Reorder variables
t = t[['year','nuts_id','nuts_year_spec',var_code]]
print(t.head())
#Save in the processed folder
t.to_csv(f'../../data/processed/{target_path}/{var_code}.csv',index=False)
###Output
_____no_output_____
###Markdown
Directories etc
###Code
# Create a hesa directory in raw and processed
if 'hebci' not in os.listdir('../../data/raw'):
os.mkdir('../../data/raw/hebci')
if 'hebci' not in os.listdir('../../data/interim'):
os.mkdir('../../data/interim/hebci')
if 'hebci' not in os.listdir('../../data/processed'):
os.mkdir('../../data/processed/hebci')
###Output
_____no_output_____
###Markdown
1. Collect data. University metadata. The [learning providers website](http://learning-provider.data.ac.uk/) contains information about universities. We have geocoded them in `0-jmg-university...`
###Code
with open('../../data/metadata/uni_nuts.txt','r') as infile:
uni_nuts = literal_eval(infile.read())
###Output
_____no_output_____
###Markdown
Spin-out activity
###Code
url_1 = 'https://www.hesa.ac.uk/data-and-analysis/providers/business-community/table-4e.csv'
spin = hesa_parser(url_1,'spin',skip=11)
spin.head()
###Output
_____no_output_____
###Markdown
Licensing income
###Code
url_2 = 'https://www.hesa.ac.uk/data-and-analysis/providers/business-community/table-4d.csv'
ip = hesa_parser(url_2,'ip',skip=11)
ip.head()
###Output
_____no_output_____
###Markdown
Services income
###Code
url_3 = 'https://www.hesa.ac.uk/data-and-analysis/providers/business-community/table-2a.csv'
services = hesa_parser(url_3,'services',skip=11)
services.head()
###Output
_____no_output_____
###Markdown
Collaborative research involving public funding
###Code
url_4 = 'https://www.hesa.ac.uk/data-and-analysis/providers/business-community/table-1.csv'
collab = hesa_parser(url_4,'collab',skip=11)
collab.head()
###Output
_____no_output_____
###Markdown
2. Create indicators
###Code
def calculate_perf(table,perf,norm=False,sp_def='all',value='currency'):
'''
Function that calculates performance (employment, turnover, investment, active firms...)
Args:
table (df) long table with the performance and spinoff category information
perf (str) measure of performance
sp_def (str) definition of spinoff
norm (str) if we want to normalise by the number of entities in the category
value (str) if currency multiply by 1000 to extract gpbs
Returns a clean indicator
'''
t = table.copy()
#First get the financials
#Create a dict to filter the data
p_filter = {'metric':perf}
#Extract the estimates
t_filt= multiple_nuts_estimates(filter_data(t,p_filter),uni_nuts,set(spin['category_marker']),'category_marker','value')
#Are we subsetting by a category?
if sp_def == 'all':
t_filt = t_filt.sum(axis=1)
else:
t_filt = t_filt[sp_def]
#Tidy columns
t_filt.name = sp_def
#Scale if the value is a currency
if value=='currency':
t_filt = t_filt*1000
t_filt.name = 'gbp_'+t_filt.name
#Do the same with the totals
if norm == True:
unit_filter = {'metric':'Number of active firms'}
u_filt= multiple_nuts_estimates(filter_data(t,unit_filter),uni_nuts,set(spin['category_marker']),'category_marker','value')
#Are we subsetting by a category?
if sp_def == 'all':
u_filt = u_filt.sum(axis=1)
else:
u_filt = u_filt[sp_def]
#Tidy columns
u_filt.name = 'all_comps'
comb = pd.concat([t_filt,u_filt],axis=1)
comb[f'{t_filt.name}_by_company']= comb[t_filt.name]/comb['all_comps']
#Zeroes are nans (this is to avoid division by zero)
comb.fillna(0,inplace=True)
return(comb)
else:
return(t_filt)
from beis_indicators.utils.nuts_utils import auto_nuts2_uk
###Output
_____no_output_____
###Markdown
a. Spinout related. Here we will focus on the number of spinouts in different categories and the levels of external investment that they have received. This includes issues `77`, `78`, `79`.
###Code
interesting_columns_spin = ['country_of_he_provider','region_of_he_provider','academic_year','metric','category_marker']
#check_categories(spin,interesting_columns_spin)
spin['metric'].value_counts()
spin['category_marker'].value_counts()
###Output
_____no_output_____
###Markdown
Graduate startup rate (item 77)
###Code
startup_rate = calculate_perf(spin,'Number of active firms',sp_def='Graduate start-ups',value='count')
make_indicator(convert_academic_year(startup_rate),'hebci',{'Graduate start-ups':'total_active_graduate_startups'},
year_var='academic_year',decimals=0)
###Output
_____no_output_____
###Markdown
Turnover per spinout (item 78)
###Code
turn_per_startup = calculate_perf(spin,'Estimated current turnover of all active firms (£ thousands)',norm=True,
sp_def='all',value='currency')
make_indicator(convert_academic_year(turn_per_startup),'hebci',{'gbp_all_by_company':'gbp_turnover_per_active_spinoff'},year_var='academic_year',decimals=0)
###Output
_____no_output_____
###Markdown
Average external investment per 'formal' (?) spinout (item 79). This is the same as above but with investment instead of turnover. We will focus on all companies because we have found some mistakes in the data - for example, Cranfield University has £500K of investment recorded against formal spin-offs, but no active companies.
###Code
# test_2 = spin.loc[
# (spin['category_marker']=='Formal spin-offs, not HEP owned')&(spin['academic_year']=='2014/15')].groupby(
# ['he_provider','metric'])['value'].sum().reset_index(drop=False).pivot(index='he_provider',columns='metric',values='value')
# test_3 = test_2.loc[test_2['Number of active firms']==0]
# test_3.sort_values('Estimated external investment received (£ thousands)')
# #test_3.loc[test_3['Estimated current turnover of all active firms (£ thousands)']>0]
inv_per_formal = calculate_perf(spin,'Estimated external investment received (£ thousands)',norm=True,
sp_def='all',value='currency')
make_indicator(convert_academic_year(inv_per_formal),'hebci',{'gbp_all_by_company':'gbp_investment_per_active_spinoff'},year_var='academic_year',decimals=3)
###Output
_____no_output_____
###Markdown
b. Licensing income related. We will extract total IP revenues.
###Code
interesting_columns_income = ['country_of_he_provider','region_of_he_provider','academic_year','category_marker','unit']
#check_categories(income,interesting_columns_income)
ip.head()
ip_filter = {'category_marker':'Total IP revenues'}
#Note that we are multiplying by 1000 to convert into GBP
income_nuts = 1000*make_nuts_estimate(filter_data(ip,ip_filter),uni_nuts,'value','total_ip_revenues',year_var='academic_year')
make_indicator(convert_academic_year(income_nuts),'hebci',{'total_ip_revenues':'gbp_ip_revenues'},year_var='academic_year',decimals=0)
###Output
_____no_output_____
###Markdown
Note that they want this normalised by research income. We have already produced that indicator before; we would need to pull the HESA research income data in to compute the ratio. The next cell gives a hypothetical sketch of that normalisation (with synthetic numbers, since the research income table is not loaded in this notebook).
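###Code
# Hypothetical sketch of the normalisation, using synthetic data: the IP revenue indicator
# divided by a research income indicator produced in the earlier HESA notebook.
# The column names below are assumptions; the real tables may differ.
demo_ip = pd.DataFrame({'year': [2015, 2015], 'nuts_id': ['UKC1', 'UKC2'],
                        'gbp_ip_revenues': [1.2e6, 3.4e6]})
demo_res = pd.DataFrame({'year': [2015, 2015], 'nuts_id': ['UKC1', 'UKC2'],
                         'gbp_research_income': [5.0e7, 8.0e7]})
demo = demo_ip.merge(demo_res, on=['year', 'nuts_id'], how='left')
demo['ip_income_share_of_research_income'] = demo['gbp_ip_revenues'] / demo['gbp_research_income']
demo
###Output
_____no_output_____
###Markdown
c. Services related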
###Code
services.head()
# interesting_columns = ['type_of_service','type_of_organisation','number/value_marker']
# check_categories(services,interesting_columns)
services_filter = {'type_of_service':'Consultancy','number/value_marker':'Value'}
#Note that, as before, I am multiplying by 1000 as I am dealing with businesses
services_nuts = 1000*multiple_nuts_estimates(filter_data(
services,services_filter),uni_nuts,set(services['type_of_organisation']),'type_of_organisation','number/value','academic_year')
services_nuts.columns = tidy_cols(services_nuts)
services_nuts.head()
###Output
_____no_output_____
###Markdown
consultancy with business (82)
###Code
services_nuts['business_consultancy'] = services_nuts.iloc[:,0]+services_nuts.iloc[:,2]
make_indicator(convert_academic_year(services_nuts),'hebci',{'business_consultancy':'gbp_business_consulting'},year_var='academic_year',decimals=0)
###Output
_____no_output_____
###Markdown
consultancy with public sector organisations (85)
###Code
make_indicator(convert_academic_year(services_nuts),'hebci',{'non-commercial_organisations':'gbp_non_business_consulting'},
year_var='academic_year',decimals=0)
###Output
_____no_output_____
###Markdown
Contract research with business (81)
###Code
contract_res_filter = {'type_of_service':'Contract research','number/value_marker':'Value'}
#Note that, as before, I am multiplying by 1000 as I am dealing with businesses
res_nuts = 1000*multiple_nuts_estimates(filter_data(
services,contract_res_filter),uni_nuts,set(services['type_of_organisation']),'type_of_organisation','number/value','academic_year')
res_nuts.columns = tidy_cols(res_nuts)
res_nuts.head()
#Add SME and non-SME contract research
res_nuts['business_contract_research'] = res_nuts.iloc[:,0]+res_nuts.iloc[:,2]
make_indicator(convert_academic_year(res_nuts),'hebci',{'business_contract_research':'gbp_business_contract_research'},year_var='academic_year',decimals=0)
###Output
_____no_output_____
###Markdown
Coda: Apply autonuts to all indicators. This is a bit clunky. It would be better to apply it to the indicators as they are produced, but this requires some fiddling with the code. **TOFIX**
###Code
def autonuts_folder(path):
'''
Applies autonuts to all the files in a folder
'''
csvs = [x for x in os.listdir(path) if '.csv' in x]
for x in csvs:
print(x)
table = pd.read_csv(os.path.join(path,x))
an = auto_nuts2_uk(table)
an.to_csv(os.path.join(path,x),index=False)
autonuts_folder('../../data/processed/hebci/')
###Output
_____no_output_____ |
courses/modsim2018/talitapinheiro/Talita_Task17.ipynb | ###Markdown
Lucas_task 15 - Motor Control. Introduction to modeling and simulation of human movement: https://github.com/BMClab/bmc/blob/master/courses/ModSim2018.md. Implement a simulation of the ankle joint model using the parameters from Thelen (2003) and Elias (2014).
###Code
import numpy as np
import pandas as pd
%matplotlib notebook
import matplotlib.pyplot as plt
import math
from Muscle import Muscle
Lslack = 2.4*0.09 # tendon slack length
Lce_o = 0.09 # optimal muscle fiber length
Fmax = 1400 #maximal isometric DF force
alpha = 7*math.pi/180 # DF muscle fiber pennation angle
dt = 0.0001
dorsiflexor = Muscle(Lce_o=Lce_o, Fmax=Fmax, Lslack=Lslack, alpha=alpha, dt = dt)
soleus = Muscle(Lce_o=0.049, Fmax=8050, Lslack=0.289, alpha=25*np.pi/180, dt = dt)
soleus.Fmax
###Output
_____no_output_____
###Markdown
Muscle properties Parameters from Nigg & Herzog (2006).
###Code
Umax = 0.04 # SEE strain at Fmax
width = 0.63 # Max relative length change of CE
###Output
_____no_output_____
###Markdown
Activation dynamics parameters
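The activation state $a$ is assumed to follow first-order activation dynamics driven by the neural excitation $u$, along the lines of Thelen (2003): $\dot{a} = (u - a)/\tau_a$, with different time constants for activation and deactivation. The `Muscle` class imported above is assumed to integrate this internally, so here we only set the initial values of $a$ and $u$.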
###Code
a = 1
u = 1 #Initial conditional for Brain's activation
#b = .25*10#*Lce_o
###Output
_____no_output_____
###Markdown
Subject's anthropometrics. Parameters obtained experimentally or from Winter's book.
###Code
M = 75 #total body mass (kg)
Lseg = 0.26 #segment length (m)
m = 1*M #foot mass (kg)
g = 9.81 #acceleration of gravity (m/s2)
Rcm = Lseg*0.5 #distance from ankle joint to center of mass (m)
Hcm = 0.85
I = 4/3*m*Hcm**2 #moment of inertia
legAng = math.pi/2 #angle of the leg with horizontal (90 deg)
As_TA = np.array([30.6, -7.44e-2, -1.41e-4, 2.42e-6, 1.5e-8]) / 100 # at [m] instead of [cm]
# Coefs for moment arm for ankle angle
Bs_TA = np.array([4.3, 1.66e-2, -3.89e-4, -4.45e-6, -4.34e-8]) / 100 # at [m] instead of [cm]
As_SOL = np.array([32.3, 7.22e-2, -2.24e-4, -3.15e-6, 9.27e-9]) / 100 # at [m] instead of [cm]
Bs_SOL = np.array([-4.1, 2.57e-2, 5.45e-4, -2.22e-6, -5.5e-9]) / 100 # at [m] instead of [cm]
###Output
_____no_output_____
###Markdown
Initial conditions
###Code
phi = 5*np.pi/180
phid = 0 #zero velocity
Lm0 = 0.31 #initial total lenght of the muscle
dorsiflexor.Lnorm_ce = 1 #norm
soleus.Lnorm_ce = 1 #norm
t0 = 0 #Initial time
tf = 60 #Final Time
t = np.arange(t0,tf,dt) # time array
# preallocating
F = np.empty((t.shape[0],2))
phivec = np.empty(t.shape)
Fkpe = np.empty(t.shape)
FiberLen = np.empty(t.shape)
TendonLen = np.empty(t.shape)
a_dynamics = np.empty(t.shape)
Moment = np.empty(t.shape)
###Output
_____no_output_____
###Markdown
Simulation - Series
###Code
def momentArmDF(phi):
'''
Calculate the tibialis anterior moment arm according to Elias et al (2014)
Input:
phi: Ankle joint angle in radians
Output:
Rarm: TA moment arm
'''
# Consider neutral ankle position as zero degrees
phi = phi*180/np.pi # converting to degrees
Rf = 4.3 + 1.66E-2*phi + -3.89E-4*phi**2 + -4.45E-6*phi**3 + -4.34E-8*phi**4
Rf = Rf/100 # converting to meters
return Rf
def ComputeTotalLengthSizeTA(phi):
'''
Calculate TA MTU length size according to Elias et al (2014)
Input:
phi: ankle angle
'''
phi = phi*180/math.pi # converting to degrees
Lm = 30.6 + -7.44E-2*phi + -1.41E-4*phi**2 + 2.42E-6*phi**3 + 1.5E-8*phi**4
Lm = Lm/100
return Lm
def ComputeMomentJoint(Rf_TA, Fnorm_tendon_TA, Fmax_TA, Rf_SOL, Fnorm_tendon_SOL, Fmax_SOL, m, g, phi):
'''
Inputs:
RF = Moment arm
Fnorm_tendon = Normalized tendon force
m = Segment Mass
g = Acelleration of gravity
Fmax= maximal isometric force
Output:
M = Total moment with respect to joint
'''
M = Rf_TA*Fnorm_tendon_TA*Fmax_TA + Rf_SOL*Fnorm_tendon_SOL*Fmax_SOL + m*g*Hcm*np.sin(phi)
return M
def ComputeAngularAcelerationJoint(M, I):
'''
Inputs:
M = Total moment with respect to joint
I = Moment of Inertia
Output:
phidd= angular aceleration of the joint
'''
phidd = M/I
return phidd
def computeMomentArmJoint(theta, Bs):
# theta - joint angle (degrees)
# Bs - coeficients for the polinomio
auxBmultp = np.empty(Bs.shape);
for i in range (len(Bs)):
auxBmultp[i] = Bs[i] * (theta**i)
Rf = sum(auxBmultp)
return Rf
def ComputeTotalLenghtSize(theta, As):
# theta = joint angle(degrees)
# As - coeficients for the polinomio
auxAmultp = np.empty(As.shape);
for i in range (len(As)):
auxAmultp[i] = As[i] * (theta**i)
Lm = sum(auxAmultp)
return Lm
noise = 1000*np.random.randn(len(t))
phiRef = 5*np.pi/180
Kp = 119
Kd = 20
for i in range (len(t)):
Lm_TA = ComputeTotalLenghtSize(phi*180/np.pi, As_TA)
Rf_TA = computeMomentArmJoint(phi*180/np.pi, Bs_TA)
Lm_SOL = ComputeTotalLenghtSize(phi*180/np.pi, As_SOL)
Rf_SOL = computeMomentArmJoint(phi*180/np.pi, Bs_SOL)
##############################################################
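# PD controller with reciprocal activation: if the error (phiRef - phi) is positive, the TA excitation follows the PD law and the soleus keeps a 0.01 baseline excitation; if the error is negative, the roles are swapped.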
e = phiRef - phi
if e > 0:
u_TA = min(1,Kp*e - Kd*phid)
u_SOL = 0.01
else:
u_TA = 0.01
u_SOL = min(1,-Kp*e + Kd*phid)
#############################################################
dorsiflexor.updateMuscle(Lm=Lm_TA, u=u_TA)
soleus.updateMuscle(Lm=Lm_SOL, u=u_SOL)
#####################################################################
#Compute MomentJoint
M = ComputeMomentJoint(Rf_TA,dorsiflexor.Fnorm_tendon,
dorsiflexor.Fmax,
Rf_SOL, soleus.Fnorm_tendon,
soleus.Fmax,
m,g,phi)
#Compute Angular Aceleration Joint
torqueWithNoise = M + noise[i]
phidd = ComputeAngularAcelerationJoint (torqueWithNoise,I)
# Euler integration steps
phid= phid + dt*phidd
phi = phi + dt*phid
phideg= (phi*180)/math.pi #convert joint angle from radians to degree
# Store variables in vectors
F[i,0] = dorsiflexor.Fnorm_tendon*dorsiflexor.Fmax
F[i,1] = soleus.Fnorm_tendon*soleus.Fmax
Fkpe[i] = dorsiflexor.Fnorm_kpe*dorsiflexor.Fmax
FiberLen[i] = dorsiflexor.Lnorm_ce*dorsiflexor.Lce_o
TendonLen[i] = dorsiflexor.Lnorm_see*dorsiflexor.Lce_o
a_dynamics[i] = dorsiflexor.a
phivec[i] = phideg
Moment[i] = M
###Output
_____no_output_____
###Markdown
Plots
###Code
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t,a_dynamics,c='magenta')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Activation dynamics')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t, Moment)
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('joint moment')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t, F[:,1], c='red')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Force (N)')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t,phivec,c='red')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Joint angle (deg)')
plt.show()
fig, ax = plt.subplots(1, 1, figsize=(6,4))
ax.plot(t,FiberLen, label = 'fiber')
ax.plot(t,TendonLen, label = 'tendon')
plt.grid()
plt.xlabel('time (s)')
plt.ylabel('Length (m)')
ax.legend(loc='best')
fig, ax = plt.subplots(1, 3, figsize=(9,4), sharex=True, sharey=True)
ax[0].plot(t,FiberLen, label = 'fiber')
ax[1].plot(t,TendonLen, label = 'tendon')
ax[2].plot(t,FiberLen + TendonLen, label = 'muscle (tendon + fiber)')
ax[1].set_xlabel('time (s)')
ax[0].set_ylabel('Length (m)')
ax[0].legend(loc='best')
ax[1].legend(loc='best')
ax[2].legend(loc='best')
plt.show()
###Output
_____no_output_____ |
005_calc/functional_analysis_critical_points.ipynb | ###Markdown
Plot
###Code
def plot(func, interval, x_points=None):
xs = np.linspace(interval[0], interval[1], 100)
ys = [func(x) for x in xs]
fig, ax = plt.subplots(1,1, sharex='col', figsize = (10, 5))
ax.axhline(0, color='g', linestyle='--')
if x_points is not None:
for p in x_points:
ax.axvline(p, c='gray', alpha=0.7, linestyle='--')
ax.plot(xs, ys, color='blue')
###Output
_____no_output_____
###Markdown
Critical points - first derivative $f'=0$ or undefined

$h(x) = x^5 + x^4$

${h}'(x) = 5x^4 + 4x^3$

${h}'(x) = x^3(5x + 4)$

---

$5x + 4 = 0$

$5x = -4$

$x = -\frac{4}{5}$

---

Critical points (zero): $x = -\frac{4}{5}$ and $x = 0$

Critical points (undefined): none within the domain
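A quick symbolic check of this derivation (optional, assuming `sympy` is available in the environment):
###Code
# Symbolic check with sympy (not used elsewhere in this notebook).
import sympy as sp
xs = sp.symbols('x')
hprime = sp.diff(xs**5 + xs**4, xs)
print(sp.factor(hprime))    # x**3*(5*x + 4)
print(sp.solve(hprime, xs))  # [-4/5, 0]
###Output
_____no_output_____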
###Code
def h(x): return x**5 + x**4
def hp(x): return 5*x**4 + 4*x**3
interval=[-1.5, 1]
plot(h, interval=interval)
plot(hp, interval=interval)
###Output
_____no_output_____
###Markdown
* * *

$f = \sin(2x), \left[ -\frac{\pi}{2}, \frac{\pi}{2} \right]$

$f' = 2\cos(2x)$

---

$2\cos(2x) = 0, \left[ -\frac{\pi}{2}, \frac{\pi}{2} \right]$

$\arccos(\cos(2x)) = \arccos(0)$

$2x = \pm \frac{\pi}{2}$

$x = \pm \frac{\pi}{4}$

---

Critical points (zero): $x = -\frac{\pi}{4}$ and $x = \frac{\pi}{4}$

Critical points (undefined): none within the domain
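Again, a quick symbolic check (assuming `sympy` is available):
###Code
# Symbolic check with sympy: critical points of sin(2x) on [-pi/2, pi/2].
import sympy as sp
xs = sp.symbols('x')
fprime = sp.diff(sp.sin(2*xs), xs)
print(fprime)  # 2*cos(2*x)
print(sp.solveset(fprime, xs, sp.Interval(-sp.pi/2, sp.pi/2)))  # {-pi/4, pi/4}
###Output
_____no_output_____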
###Code
def f(x): return math.sin(2 * x)
def fp(x): return 2*math.cos(2 * x)
interval=[-math.pi/2, math.pi/2]
plot(f, interval=interval)
plot(fp, interval=interval)
###Output
_____no_output_____ |
datacamp_projects/linux_evolution.ipynb | ###Markdown
1. Introduction. Version control repositories like CVS, Subversion or Git can be a real gold mine for software developers. They contain every change to the source code including the date (the "when"), the responsible developer (the "who"), as well as a little message that describes the intention (the "what") of a change. In this notebook, we will analyze the evolution of a very famous open-source project – the Linux kernel. The Linux kernel is the heart of some Linux distributions like Debian, Ubuntu or CentOS. Our dataset at hand contains the history of kernel development of almost 13 years (early 2005 - late 2017). We get some insights into the work of the development efforts by identifying the TOP 10 contributors and visualizing the commits over the years.
###Code
# Printing the content of git_log_excerpt.csv
with open('datasets/git_log_excerpt.csv') as file:
print(file.readlines())
###Output
['1502382966#Linus Torvalds\n', '1501368308#Max Gurtovoy\n', '1501625560#James Smart\n', '1501625559#James Smart\n', '1500568442#Martin Wilck\n', '1502273719#Xin Long\n', '1502278684#Nikolay Borisov\n', '1502238384#Girish Moodalbail\n', '1502228709#Florian Fainelli\n', '1502223836#Jon Paul Maloy']
###Markdown
2. Reading in the dataset. The dataset was created by using the command git log --encoding=latin-1 --pretty="%at#%aN" in late 2017. The latin-1 encoded text output was saved in a header-less CSV file. In this file, each row is a commit entry with the following information:

* timestamp: the time of the commit as a UNIX timestamp in seconds since 1970-01-01 00:00:00 (Git log placeholder "%at")
* author: the name of the author that performed the commit (Git log placeholder "%aN")

The columns are separated by the number sign (#). The complete dataset is in the datasets/ directory. It is a gz-compressed csv file named git_log.gz.
###Code
# Loading in the pandas module as 'pd'
import pandas as pd
# Reading in the log file
git_log = pd.read_csv('datasets/git_log.gz', sep='#', names=['timestamp', 'author'],
encoding='ISO-8859-1')
# Printing out the first 5 rows
git_log.head()
###Output
_____no_output_____
###Markdown
3. Getting an overview. The dataset contains the information about every single code contribution (a "commit") to the Linux kernel over the last 13 years. We'll first take a look at the number of authors and their commits to the repository.
###Code
# calculating number of commits
number_of_commits = git_log.shape[0]
# calculating number of authors
number_of_authors = len(git_log[git_log['author'].notnull()]['author'].unique())
# print(git_log.author.value_counts())
# printing out the results
print("%s authors committed %s code changes." % (number_of_authors, number_of_commits))
###Output
17385 authors committed 699071 code changes.
###Markdown
4. Finding the TOP 10 contributors. There are some very important people that changed the Linux kernel very often. To see if there are any bottlenecks, we take a look at the TOP 10 authors with the most commits.
###Code
# # Identifying the top 10 authors
top_10_authors = git_log.author.value_counts()[:10]
# # Listing contents of 'top_10_authors'
top_10_authors
# git_log.author.value_counts()[:10]
###Output
_____no_output_____
###Markdown
5. Wrangling the data. For our analysis, we want to visualize the contributions over time. For this, we use the information in the timestamp column to create a time series-based column.
###Code
# converting the timestamp column
git_log.timestamp = pd.to_datetime(git_log['timestamp'], unit='s')
# summarizing the converted timestamp column
git_log.timestamp
###Output
_____no_output_____
###Markdown
6. Treating wrong timestamps. As we can see from the results above, some contributors had their operating system's time incorrectly set when they committed to the repository. We'll clean up the timestamp column by dropping the rows with the incorrect timestamps.
###Code
# determining the first real commit timestamp
sorted_ = git_log.sort_values(by='timestamp')
first_commit_timestamp = sorted_[sorted_.author == 'Linus Torvalds'].iloc[0].timestamp
# # determining the last sensible commit timestamp
last_commit_timestamp = sorted_[sorted_.timestamp.dt.year < 2018].iloc[-1].timestamp
print(first_commit_timestamp)
print(last_commit_timestamp)
# # filtering out wrong timestamps
corrected_log = git_log[(git_log.timestamp >= first_commit_timestamp) &
(git_log.timestamp <= last_commit_timestamp) ]
# git_log[git_log.timestamp > first_commit_timestamp]
# # summarizing the corrected timestamp column
corrected_log.describe()
# print(git_log.sort_values(by='timestamp')[-20:])
###Output
2005-04-16 22:20:36
2017-10-03 12:57:00
###Markdown
7. Grouping commits per year. To find out how the development activity has increased over time, we'll group the commits by year and count them up.
###Code
# Counting the no. commits per year
commits_per_year = corrected_log.groupby(pd.Grouper(key='timestamp', freq='AS')).count()
# Listing the first rows
commits_per_year.head()
###Output
_____no_output_____
###Markdown
8. Visualizing the history of Linux. Finally, we'll make a plot out of these counts to better see how the development effort on Linux has increased over the last few years.
###Code
# Setting up plotting in Jupyter notebooks
import matplotlib.pyplot as plt
%matplotlib inline
# plot the data
commits_per_year.plot(kind='bar', title="Commits per year (Linux kernel)", legend=False)
plt.show()
###Output
DEBUG:matplotlib.pyplot:Loaded backend module://ipykernel.pylab.backend_inline version unknown.
###Markdown
9. Conclusion. Thanks to the solid foundation and caretaking of Linus Torvalds, many other developers are now able to contribute to the Linux kernel as well. There is no decrease of development activity in sight!
###Code
# calculating the year with the most commits to Linux (2016)
year_with_most_commits = commits_per_year['author'].idxmax().year
year_with_most_commits
###Output
_____no_output_____ |
Python for DS/lecture1.ipynb | ###Markdown
Exploratory data analysis in Python, by Tanu N Prabhu, 2019. Let us understand how to explore the data in python.  Image Credits: Morioh Introduction **What is Exploratory Data Analysis ?**Exploratory Data Analysis or (EDA) is understanding the data sets by summarizing their main characteristics often plotting them visually. This step is very important especially when we arrive at modeling the data in order to apply Machine learning. Plotting in EDA consists of Histograms, Box plot, Scatter plot and many more. It often takes much time to explore the data. Through the process of EDA, we can ask to define the problem statement or definition on our data set which is very important. **How to perform Exploratory Data Analysis ?**This is one such question that everyone is keen on knowing the answer. Well, the answer is it depends on the data set that you are working. There is no one method or common methods in order to perform EDA, whereas in this tutorial you can understand some common methods and plots that would be used in the EDA process. **What data are we exploring today ?**Since I am a huge fan of cars, I got a very beautiful data-set of cars from Kaggle. The data-set can be downloaded from [here](https://www.kaggle.com/CooperUnion/cardataset). To give a piece of brief information about the data set this data contains more of 10, 000 rows and more than 10 columns which contains features of the car such as Engine Fuel Type, Engine HP, Transmission Type, highway MPG, city MPG and many more. So in this tutorial, we will explore the data and make it ready for modeling. --- 1. Importing the required libraries for EDA Below are the libraries that are used in order to perform EDA (Exploratory data analysis) in this tutorial.
###Code
import pandas as pd
import numpy as np
import seaborn as sns #visualisation
import matplotlib.pyplot as plt #visualisation
%matplotlib inline
sns.set(color_codes=True)
###Output
_____no_output_____
###Markdown
--- 2. Loading the data into the data frame. Loading the data into the pandas data frame is certainly one of the most important steps in EDA, as we can see that the value from the data set is comma-separated. So all we have to do is to just read the CSV into a data frame and pandas data frame does the job for us. To get or load the dataset into the notebook, all I did was one trivial step. In Google Colab at the left-hand side of the notebook, you will find a > (greater than symbol). When you click that you will find a tab with three options, you just have to select Files. Then you can easily upload your file with the help of the Upload option. No need to mount to the google drive or use any specific libraries just upload the data set and your job is done. One thing to remember in this step is that uploaded files will get deleted when this runtime is recycled. This is how I got the data set into the notebook.
###Code
df = pd.read_csv("data.csv")
# To display the top 5 rows
df.head(5)
df.tail(5) # To display the botton 5 rows
###Output
_____no_output_____
###Markdown
--- 3. Checking the types of data. Here we check the datatypes because sometimes the MSRP or the price of the car might be stored as a string. If that were the case, we would have to convert the string to an integer; only then could we plot the data in a graph. Here, in this case, the data is already in integer format, so there is nothing to worry about.
###Code
df.dtypes
###Output
_____no_output_____
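###Markdown
As an illustrative aside: if the price column had been stored as a string (e.g. "$46,135"), it would have to be converted to a number before plotting. The cell below is a hypothetical, self-contained example of such a conversion; it is not needed for this data set, where MSRP is already numeric.
###Code
# Hypothetical example only: converting string prices such as "$46,135" to numbers.
price_str = pd.Series(["$46,135", "$40,650"])
price_num = pd.to_numeric(price_str.str.replace("[$,]", "", regex=True))
print(price_num)
###Output
_____no_output_____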
###Markdown
--- 4. Dropping irrelevant columns This step is certainly needed in every EDA because sometimes there would be many columns that we never use in such cases dropping is the only solution. In this case, the columns such as Engine Fuel Type, Market Category, Vehicle style, Popularity, Number of doors, Vehicle Size doesn't make any sense to me so I just dropped for this instance.
###Code
df = df.drop(['Engine Fuel Type', 'Market Category', 'Vehicle Style', 'Popularity', 'Number of Doors', 'Vehicle Size'], axis=1)
df.head(5)
###Output
_____no_output_____
###Markdown
--- 5. Renaming the columns In this instance, most of the column names are very confusing to read, so I just tweaked their column names. This is a good approach it improves the readability of the data set.
###Code
df = df.rename(columns={"Engine HP": "HP", "Engine Cylinders": "Cylinders", "Transmission Type": "Transmission", "Driven_Wheels": "Drive Mode","highway MPG": "MPG-H", "city mpg": "MPG-C", "MSRP": "Price" })
df.head(5)
###Output
_____no_output_____
###Markdown
--- 6. Dropping the duplicate rows. This is often a handy thing to do because a huge data set, as in this case with more than 10,000 rows, often has some duplicate data which might be disturbing; so here I remove all the duplicate values from the data-set. For example, prior to removing them I had 11914 rows of data, but after removing the duplicates there were 10925 rows, meaning that there were 989 rows of duplicate data.
###Code
df.shape
duplicate_rows_df = df[df.duplicated()]
print("number of duplicate rows: ", duplicate_rows_df.shape)
###Output
number of duplicate rows: (989, 10)
###Markdown
Now let us remove the duplicate data because it's ok to remove them.
###Code
df.count() # Used to count the number of rows
###Output
_____no_output_____
###Markdown
As seen above, there are 11914 rows, and we are removing 989 rows of duplicate data.
###Code
df = df.drop_duplicates()
df.head(5)
df.count()
###Output
_____no_output_____
###Markdown
--- 7. Dropping the missing or null values. This is mostly similar to the previous step, but here all the missing values are detected and then dropped. This is not always a good approach, because many people just replace the missing values with the mean or the average of that column; in this case, however, I just dropped the missing values. This is because there are nearly 100 missing values compared to 10,000 values; that is a small, negligible number, so I simply dropped those rows.
###Code
print(df.isnull().sum())
###Output
Make 0
Model 0
Year 0
HP 69
Cylinders 30
Transmission 0
Drive Mode 0
MPG-H 0
MPG-C 0
Price 0
dtype: int64
###Markdown
This is why, in the counts above, Horsepower (HP) shows 10856 values and Cylinders shows 10895 values out of 10925 rows.
###Code
df = df.dropna() # Dropping the missing values.
df.count()
###Output
_____no_output_____
###Markdown
Now we have removed all the rows which contain the Null or N/A values (Cylinders and Horsepower (HP)).
###Code
print(df.isnull().sum()) # After dropping the values
###Output
Make 0
Model 0
Year 0
HP 0
Cylinders 0
Transmission 0
Drive Mode 0
MPG-H 0
MPG-C 0
Price 0
dtype: int64
###Markdown
--- 8. Detecting Outliers An outlier is a point or set of points that are different from other points. Sometimes they can be very high or very low. It's often a good idea to detect and remove the outliers. Because outliers are one of the primary reasons for resulting in a less accurate model. Hence it's a good idea to remove them. The outlier detection and removing that I am going to perform is called IQR score technique. Often outliers can be seen with visualizations using a box plot. Shown below are the box plot of MSRP, Cylinders, Horsepower and EngineSize. Herein all the plots, you can find some points are outside the box they are none other than outliers. The technique of finding and removing outlier that I am performing in this assignment is taken help of a tutorial from[ towards data science](https://towardsdatascience.com/ways-to-detect-and-remove-the-outliers-404d16608dba).
###Code
sns.boxplot(x=df['Price'])
sns.boxplot(x=df['HP'])
sns.boxplot(x=df['Cylinders'])
Q1 = df.quantile(0.25)
Q3 = df.quantile(0.75)
IQR = Q3 - Q1
print(IQR)
###Output
Year 9.0
HP 130.0
Cylinders 2.0
MPG-H 8.0
MPG-C 6.0
Price 21327.5
dtype: float64
###Markdown
Don't worry about the above values: it's not important to know each and every one of them; it's just important to know how to use this technique in order to remove the outliers.
###Code
df = df[~((df < (Q1 - 1.5 * IQR)) |(df > (Q3 + 1.5 * IQR))).any(axis=1)]
df.shape
###Output
_____no_output_____
###Markdown
As seen above there were around 1600 rows were outliers. But you cannot completely remove the outliers because even after you use the above technique there maybe 1–2 outlier unremoved but that ok because there were more than 100 outliers. Something is better than nothing. --- 9. Plot different features against one another (scatter), against frequency (histogram) HistogramHistogram refers to the frequency of occurrence of variables in an interval. In this case, there are mainly 10 different types of car manufacturing companies, but it is often important to know who has the most number of cars. To do this histogram is one of the trivial solutions which lets us know the total number of car manufactured by a different company.
###Code
df.Make.value_counts().nlargest(40).plot(kind='bar', figsize=(10,5))
plt.title("Number of cars by make")
plt.ylabel('Number of cars')
plt.xlabel('Make');
###Output
_____no_output_____
###Markdown
Heat maps. A heat map is a type of plot that is useful when we need to find dependent variables. One of the best ways to find the relationship between features is a heat map. In the heat map below we can see that the price feature depends mainly on the Engine Size, Horsepower, and Cylinders.
###Code
plt.figure(figsize=(10,5))
c= df.corr()
sns.heatmap(c,cmap="BrBG",annot=True)
c
###Output
_____no_output_____
###Markdown
Scatterplot. We generally use scatter plots to find the correlation between two variables. Here the scatter plot is drawn between Horsepower and Price, as shown below. With the plot given below, we can easily draw a trend line. These features provide a good scattering of points.
###Code
fig, ax = plt.subplots(figsize=(10,6))
ax.scatter(df['HP'], df['Price'])
ax.set_xlabel('HP')
ax.set_ylabel('Price')
plt.show()
###Output
_____no_output_____ |
notebooks/FOX Correlations and Statistics.ipynb | ###Markdown
FOX Correlations and Statistics
###Code
import pandas as pd
import numpy as np
import math
from matplotlib import pyplot as plt
import seaborn as sns
fox_df = pd.read_excel('../data/interim/fox_ready_to_code.xlsx')
fox_df.head()
fox_df = fox_df.drop(columns=['Unnamed: 0', 'Unnamed: 0.1'])
fox_df = fox_df.fillna(0)
for col in fox_df.columns:
print(col)
def plot_correlation_matrix(df, fig_name, title):
fig = plt.figure(figsize=(20,10))
_ = plt.title(title)
c= df.corr()
sns.heatmap(c,cmap='BrBG',annot=True)
fig.savefig('../reports/figures/fox_' + fig_name + '.png')
plt.show()
# I'm curious to see what correlation there is between my engineered features
corr_df = fox_df[['isad', 'ad_cluster', 'news_cluster', 'snip_ad', 'has_prev_back' , 'has_next_back',
'has_next_welcome', 'has_prev_miss', 'has_next_good morning', 'has_prev_next',
'has_next_talk', 'has_prev_appreciate', 'has_next_appreciate', 'has_prev_ahead',
'has_prev_return', 'has_prev_after this', 'has_next_good evening',
'has_prev_applause', 'has_next_applause', 'has_prev_tuned']]
plot_correlation_matrix(corr_df, 'eng_corr', 'FOX Engineered Features Correlation Matrix')
topics_0_14_df = fox_df[['isad', 'topic_0', 'topic_1', 'topic_2', 'topic_3', 'topic_4', 'topic_5',
'topic_6', 'topic_7', 'topic_8', 'topic_9', 'topic_10', 'topic_11', 'topic_12',
'topic_13', 'topic_14']]
plot_correlation_matrix(topics_0_14_df, 'topic_0_14_corr', 'FOX Topics 0 to 14 Correlation Matrix')
column_list = ['topic_' + str(i) for i in range (15, 30)]
column_list = ['isad'] + column_list
topics_15_29_df = fox_df[column_list]
plot_correlation_matrix(topics_15_29_df, 'topic_15_29_corr', 'FOX Topics 15 to 29 Correlation Matrix')
column_list = ['topic_' + str(i) for i in range (30, 45)]
column_list = ['isad'] + column_list
topics_df = fox_df[column_list]
plot_correlation_matrix(topics_df, 'topic_30_44_corr', 'FOX Topics 30 to 44 Correlation Matrix')
column_list = ['topic_' + str(i) for i in range (45, 60)]
column_list = ['isad'] + column_list
topics_df = fox_df[column_list]
plot_correlation_matrix(topics_df, 'topic_45_60_corr', 'FOX Topics 45 to 60 Correlation Matrix')
column_list = ['topic_' + str(i) for i in range (60, 75)]
column_list = ['isad'] + column_list
topics_df = fox_df[column_list]
plot_correlation_matrix(topics_df, 'topic_60_to_74_corr', 'FOX Topics 60 to 74 Correlation Matrix')
###Output
_____no_output_____ |
SGD_with_momentum_from_scratch.ipynb | ###Markdown
###Code
%matplotlib inline
from fastai.basics import *
from fastai.vision import *
with gzip.open('/content/mnist.pkl.gz', 'rb') as f:
((x_train, y_train), (x_valid, y_valid), _) = pickle.load(f, encoding='latin-1')
plt.imshow(x_train[7].reshape(28,28),cmap='gray')
x_train=torch.from_numpy(x_train)
x_valid=torch.from_numpy(x_valid)
y_train=torch.from_numpy(y_train)
y_valid=torch.from_numpy(y_valid)
train_ds=TensorDataset(x_train,y_train)
valid_ds=TensorDataset(x_valid,y_valid)
data=DataBunch.create(train_ds,valid_ds,bs=64)
x,y=next(iter(data.train_dl))
x.shape,y.shape
class mnist_nn(nn.Module):
def __init__(self):
super().__init__()
self.lin1=nn.Linear(784,60)
self.lin2=nn.Linear(60,10)
def forward(self,minibatch):
x=self.lin1(minibatch)
F.relu_(x)
x=self.lin2(x)
# torch.sigmoid_(x)
return x
model=mnist_nn().cuda()
loss_function=nn.CrossEntropyLoss()
def update(x,y,lr):
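# Note: creating the optimizer inside update() gives it a fresh (empty) state on every call,
# so its momentum buffer is reset each minibatch and momentum does not accumulate across batches.
# For momentum to take full effect, create the optimizer once outside this function.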
opt=optim.SGD(model.parameters(),lr,weight_decay=1e-5,momentum=0.9)
y_pred=model(x)
loss=loss_function(y_pred,y)
loss.backward()
opt.step()
opt.zero_grad()
return loss.item()
loss_sgd=[update(x,y,4e-2) for x,y in data.train_dl]
plt.figure(figsize=(8,8))
plt.plot(loss_sgd);
min(loss_sgd)
previous=[]
for p in model.parameters():
temp_tensor=torch.cuda.FloatTensor(p.shape).fill_(0)
previous.append(temp_tensor)
model=mnist_nn().cuda()
def update_scratch(lr):
loss_list=[]
for a,b in data.train_dl:
x=a
y=b
wd=1e-5
y_pred=model(x)
w2=0
for p in model.parameters():w2+=(p**2).sum()
loss=loss_function(y_pred,y)+w2*wd
loss.backward()
with torch.no_grad():
i=0
for p in model.parameters():
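# Momentum update: step = 0.9*previous_step + 0.1*lr*grad; the parameter is moved by this step
# and the step is stored for the next minibatch. (The 0.1 factor acts as dampening on the gradient
# term, so this is not numerically identical to optim.SGD with momentum=0.9 and dampening=0.)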
p-=((lr*p.grad)*0.1)+(previous[i]*0.9)
previous_tmp=((lr*p.grad)*0.1)+(previous[i]*0.9)
previous[i]=previous_tmp
i+=1
p.grad.zero_()
loss_list.append(loss.item())
return loss_list
tmp=update_scratch(4e-2)
plt.figure(figsize=(8,8))
plt.plot(tmp)
min(tmp)
###Output
_____no_output_____ |
docs/examples/Thorlabs_K10CR1.ipynb | ###Markdown
Qcodes example with Thorlabs K10CR1. Initialization. Create an instance of `Thorlabs_APT`, which is a wrapper for the APT.dll of the APT server, which is part of the Thorlabs drivers.
###Code
from qcodes_contrib_drivers.drivers.Thorlabs.APT import Thorlabs_APT
apt = Thorlabs_APT()
###Output
_____no_output_____
###Markdown
Create an instance of `Thorlabs_K10CR1`, the actual driver class.
###Code
from qcodes_contrib_drivers.drivers.Thorlabs.K10CR1 import Thorlabs_K10CR1
inst = Thorlabs_K10CR1("K10CR1", 0, apt)
###Output
Connected to: Thorlabs K10CR1 (serial:55125694, firmware:SW Version 1.0.3) in 0.01s
###Markdown
Moving the rotator. Moving home: move the rotator to its home position (zero) and recalibrate it.
###Code
# Move to zero and recalibrate
inst.move_home()
# Read position
print("Position:", inst.position())
###Output
Position: 0.0
###Markdown
Moving to a certain position: move to 120° at 10°/s.
###Code
# Set target velocity to 10 deg/s
inst.velocity_max(10)
# Move to 120 and wait until it's finished
inst.position(120)
# Read position
print("Position:", inst.position())
###Output
Position: 120.0
###Markdown
Moving to a certain position (asynchronously): the following commands start a rotation towards 240°. This happens asynchronously, so the current position can be read out in the meantime. Once 180° has been passed, the motor is stopped.
###Code
import time
# Move towards 240 without blocking
inst.position_async(240)
last_position = 120
# Print current position every 250 ms, until 180 is passed
while last_position < 180:
last_position = inst.position()
print("Position:", last_position)
time.sleep(0.25)
# Stop at around 180 (before 240 is reached)
inst.stop()
# Read position
print("Position:", inst.position())
###Output
Position: 120.0
Position: 120.33045196533203
Position: 121.30647277832031
Position: 122.93938446044922
Position: 125.22875213623047
Position: 127.80081939697266
Position: 130.36468505859375
Position: 132.91712951660156
Position: 135.5030059814453
Position: 138.07122802734375
Position: 140.61135864257812
Position: 143.18075561523438
Position: 145.73727416992188
Position: 148.30560302734375
Position: 150.8717498779297
Position: 153.4274444580078
Position: 155.98837280273438
Position: 158.54783630371094
Position: 161.1175994873047
Position: 163.6906280517578
Position: 166.25445556640625
Position: 168.7959442138672
Position: 171.37112426757812
Position: 173.93038940429688
Position: 176.48873901367188
Position: 179.0663604736328
Position: 181.61782836914062
Position: 184.19651794433594
###Markdown
Clean up resources
###Code
inst.close()
apt.apt_clean_up()
###Output
_____no_output_____ |
Pandas - Crash Course/JP Nan/PandasL5.ipynb | ###Markdown
Titanic Dataset Questions
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Find the children who died in the Titanic incident
###Code
df = pd.read_csv('../Datasets/titanicKaggle.csv')
df.head()
df.tail()
df[(df['Age'] <=12) & (df['Survived'] == 0)][['Name','Age']]
df[(df['Age'] <=12) & (df['Survived'] == 0)]['Name'].count()
###Output
_____no_output_____
###Markdown
Find the elderly passengers who survived
###Code
df[(df.Age >= 50) &(df.Survived == 1)][['Name','Age']]
df[(df.Age >= 50) &(df.Survived == 1)]['Name'].count()
###Output
_____no_output_____
###Markdown
Find the Min and Max Fares
###Code
df['Fare'].min()
df['Fare'].max()
df['Fare'].mean()
###Output
_____no_output_____
###Markdown
How many male passengers were in class 1?
###Code
df[(df.Sex == 'male') & (df.Pclass ==1)][['Name','Age']]
df[(df.Sex == 'male') & (df.Pclass ==1)]['Name'].count()
###Output
_____no_output_____
###Markdown
Find the survival ratio between male and female passengers
###Code
df[(df.Survived ==1)& (df.Sex == 'male')].Name.count()
df[(df.Survived ==1)& (df.Sex == 'female')].Name.count()
df[(df.Survived ==1)]['Sex'].value_counts(normalize=True)
df[(df.Survived ==1)]['Sex'].value_counts()
df[(df.Survived ==0)& (df.Sex == 'male')].Name.count()
df[(df.Survived ==0)& (df.Sex == 'female')].Name.count()
df[(df.Survived ==0)]['Sex'].value_counts(normalize=True)
df[(df.Survived ==0)]['Sex'].value_counts()
df.Sex.value_counts().plot(kind='bar',color='r')
###Output
_____no_output_____
###Markdown
Doing the same with Group by Gender
###Code
dfg = df.groupby('Sex')
dfg.Survived.value_counts()
dfg.Survived.value_counts(normalize=True)
###Output
_____no_output_____
###Markdown
Doing the same with Group by Survived
###Code
dfg2 = df.groupby('Survived')
dfg2.Sex.value_counts(normalize=True)
dfg2.Sex.value_counts()
###Output
_____no_output_____ |
Task1-PredictionUsingSupervisedML/PredictionUsingSupervisedML.ipynb | ###Markdown
**TASK 1 - Prediction Using Supervised ML*** To predict the percentage score of a student based on the number of hours they study. Author - Manish Bhardwaj---
###Code
import numpy as np
import pandas as pd
import seaborn as sns
from sklearn import metrics
import matplotlib.pyplot as plt
from sklearn.metrics import mean_absolute_error
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
%matplotlib inline
print("Required libraries imported")
###Output
Required libraries imported
###Markdown
Importing the dataset
###Code
data = pd.read_csv('http://bit.ly/w-data')
print("Data successfully imported")
data.tail(2)
data.describe()
data.isnull().sum()
###Output
_____no_output_____
###Markdown
**No null values found in dataset, so let us visualize our data.** Data Visualisation
###Code
data.plot(x='Hours', y='Scores', style='x')
plt.title('Marks vs Study Hours',size=18)
plt.xlabel('Hours Studied', size=12)
plt.ylabel('Marks / Percentage', size=12)
plt.show()
###Output
_____no_output_____
###Markdown
**The above scatter plot suggests a positive linear relationship between 'Hours Studied' and 'Marks / Percentage'. Plotting a regression line will confirm the correlation.**
###Code
sns.regplot(x= data['Hours'], y= data['Scores'])
plt.title('Regression Plot',size=18)
plt.ylabel('Marks / Percentage', size=12)
plt.xlabel('Hours Studied', size=12)
plt.show()
print(data.corr())
###Output
_____no_output_____
###Markdown
**This confirms that the variables are positively correlated.** Preparing data and splitting it into train and test sets.
###Code
X = data.iloc[:, :-1].values
y = data.iloc[:, 1].values
train_X, val_X, train_y, val_y = train_test_split(X, y, random_state = 0)
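# Note: with no test_size given, train_test_split holds out 25% of the rows for validation by default.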
###Output
_____no_output_____
###Markdown
Training the model using linear regression.
###Code
regression = LinearRegression()
regression.fit(train_X, train_y)
print("Model trained")
###Output
Model trained
###Markdown
Predicting Marks.
###Code
pred_y = regression.predict(val_X)
prediction = pd.DataFrame({'Hours': [i[0] for i in val_X], 'Predicted Marks': [k for k in pred_y]})
prediction.tail(2)
###Output
_____no_output_____
###Markdown
Comparing Predicted Marks with Actual Marks.
###Code
CompareScore = pd.DataFrame({'Actual Marks': val_y, 'Predicted Marks': pred_y})
CompareScore.tail(2)
plt.scatter(x=val_X, y=val_y, color='red')
plt.plot(val_X, pred_y, color='blue')
plt.title('Actual vs Predicted', size=18)
plt.ylabel('Marks / Percentage', size=12)
plt.xlabel('Hours Studied', size=12)
plt.show()
###Output
_____no_output_____
###Markdown
Accuracy of the Model.
###Code
metrics.r2_score(val_y,pred_y)
###Output
_____no_output_____
###Markdown
**An R² score above 0.93 indicates that the fitted model explains the data well.** Evaluating the Model.
###Code
print("Mean Squared Error > ",metrics.mean_squared_error(val_y,pred_y))
print("Mean Absolute Error > ",mean_absolute_error(val_y,pred_y))
###Output
Mean Squared Error > 20.33292367497996
Mean Absolute Error > 4.130879918502482
###Markdown
**A small mean absolute error (about 4.13 marks here, with RMSE ≈ √20.33 ≈ 4.5) means the model's predictions deviate only slightly from the actual scores.** Predicted marks for a student who studies 9.25 hrs/day?
###Code
hrs = [9.25]
answer = regression.predict([hrs])
print("Marks = {}".format(round(answer[0],8)))
###Output
Marks = 93.89272889
|
AV-Intel-Scene-Classification-Challenge/scene-classification-cnn-data-aug-inception-resnet.ipynb | ###Markdown
{'buildings' -> 0, 'forest' -> 1,'glacier' -> 2,'mountain' -> 3,'sea' -> 4,'street' -> 5 }
###Code
labels = ['buildings', 'forest', 'glacier', 'mountain', 'sea', 'street']
from google.colab import drive
drive.mount('/content/gdrive')
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import json
import os
from tqdm import tqdm, tqdm_notebook
import tensorflow as tf
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import *
from tensorflow.keras.optimizers import *
from tensorflow.keras.applications import *
from tensorflow.keras.callbacks import *
from tensorflow.keras.initializers import *
from tensorflow.keras.preprocessing.image import ImageDataGenerator
home_dir = '/content/gdrive/My Drive/data-science-my-projects/AV-Intel-Scene-Classification-Challenge'
print(os.listdir(home_dir))
###Output
['dataset', 'submissions', 'scene-classification-cnn-data-aug-vgg16.ipynb', 'models', 'scene-classification-cnn-vgg16-hybrid1365.ipynb', 'scene-classification-cnn-vgg16-places365.ipynb', 'scene-classification-cnn-data-aug-resnet50.ipynb', 'scene-classification-cnn-data-aug-inception-resnet.ipynb']
###Markdown
Read and set up data
###Code
# Read data
dataset_dir = os.path.join(home_dir, "dataset")
train_dir = os.path.join(dataset_dir, "train")
train_df = pd.read_csv(dataset_dir + '/train.csv')
train_df.head()
# Read and display an image
image = plt.imread(os.path.join(train_dir, os.listdir(train_dir)[150]))
print("Image shape =", image.shape)
train_input_shape = image.shape
plt.imshow(image)
plt.show()
# Number of unique classes
n_classes = len(train_df.label.unique())
print("Number of unique classes =", n_classes)
###Output
Number of unique classes = 6
###Markdown
Image Augmentation
###Code
# Augment data
batch_size = 16
#train_input_shape = (75, 75, 3)
train_datagen = ImageDataGenerator(validation_split=0.2,
rescale=1./255.,
rotation_range=45,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.7,
zoom_range=0.7,
horizontal_flip=True,
#vertical_flip=True,
)
train_generator = train_datagen.flow_from_dataframe(dataframe=train_df,
directory=train_dir,
x_col="image_name",
y_col="label",
class_mode="other",
subset="training",
target_size=train_input_shape[0:2],
shuffle=True,
batch_size=batch_size)
valid_generator = train_datagen.flow_from_dataframe(dataframe=train_df,
directory=train_dir,
x_col="image_name",
y_col="label",
class_mode="other",
subset="validation",
target_size=train_input_shape[0:2],
shuffle=True,
batch_size=batch_size)
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = valid_generator.n//valid_generator.batch_size
print("Total number of batches =", STEP_SIZE_TRAIN, "and", STEP_SIZE_VALID)
###Output
Found 13628 images.
Found 3406 images.
Total number of batches = 851 and 212
###Markdown
Build model
###Code
# Build the InceptionResNetV2 base model (weights=None here, so it is trained from scratch rather than loaded pre-trained)
base_model = InceptionResNetV2(weights=None, include_top=False, input_shape=train_input_shape)
#for layer in resnet50.layers:
# layer.trainable = False
#resnet50.summary()
# Add layers at the end
X = base_model.output
X = Flatten()(X)
X = Dense(16, kernel_initializer='he_uniform')(X)
X = Dropout(0.5)(X)
X = BatchNormalization()(X)
X = Activation('relu')(X)
output = Dense(n_classes, activation='softmax')(X)
model = Model(inputs=base_model.input, outputs=output)
#model.summary()
model_dir = os.path.join(home_dir, "models")
MY_MODEL = os.path.join(model_dir, "InceptionResNetV2_model.h5")
MY_MODEL_WEIGHTS = os.path.join(model_dir, "InceptionResNetV2_weights.h5")
from keras.models import load_model
#model = load_model(MY_MODEL)
model.load_weights(MY_MODEL_WEIGHTS)
#model.summary()
#optimizer = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
optimizer=Adam()
model.compile(loss='sparse_categorical_crossentropy',
optimizer=optimizer,
metrics=['accuracy'])
early_stop = EarlyStopping(monitor='val_acc', patience=20, verbose=1,
mode='auto', restore_best_weights=True)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.3, patience=5,
verbose=1, mode='auto')
%%time
n_epoch = 25
history = model.fit_generator(generator=train_generator, steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=valid_generator, validation_steps=STEP_SIZE_VALID,
epochs=n_epoch,
shuffle=True,
verbose=1,
callbacks=[reduce_lr],
#use_multiprocessing=True,
#workers=6
)
model_dir = os.path.join(home_dir, "models")
#model.save(model_dir + '/InceptionResNetV2_model1.h5')
model.save_weights(model_dir + '/InceptionResNetV2_weights2.h5')
# Plot the training graph
def plot_training(history):
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs = range(len(acc))
fig, axes = plt.subplots(1, 2, figsize=(15,5))
axes[0].plot(epochs, acc, 'r-', label='Training Accuracy')
axes[0].plot(epochs, val_acc, 'b--', label='Validation Accuracy')
axes[0].set_title('Training and Validation Accuracy')
axes[0].legend(loc='best')
axes[1].plot(epochs, loss, 'r-', label='Training Loss')
axes[1].plot(epochs, val_loss, 'b--', label='Validation Loss')
axes[1].set_title('Training and Validation Loss')
axes[1].legend(loc='best')
plt.show()
plot_training(history)
# Evaluate on validation set
result = model.evaluate_generator(generator=valid_generator, verbose=1)
result
# Classification report and confusion matrix
from sklearn.metrics import *
import seaborn as sns
def showClassficationReport_Generator(model, valid_generator, STEP_SIZE_VALID):
# Loop on each generator batch and predict
y_pred, y_true = [], []
for i in range(STEP_SIZE_VALID):
(X,y) = next(valid_generator)
y_pred.append(model.predict(X))
y_true.append(y)
# Create a flat list for y_true and y_pred
y_pred = [subresult for result in y_pred for subresult in result]
y_true = [subresult for result in y_true for subresult in result]
y_true = np.asarray(y_true).ravel()
# Update Prediction vector based on argmax
#y_pred = np.asarray(y_pred).astype('float32').ravel()
#y_pred = y_pred >= 0.5
#y_pred = y_pred.astype('int').ravel()
y_pred = np.argmax(y_pred, axis=1)
y_pred = np.asarray(y_pred).ravel()
# Confusion Matrix
conf_matrix = confusion_matrix(y_true, y_pred, labels=[0,1,2,3,4,5])
sns.heatmap(conf_matrix, annot=True, fmt="d", square=True, cbar=False,
cmap=plt.cm.gray, xticklabels=labels, yticklabels=labels)
plt.ylabel('Actual')
plt.xlabel('Predicted')
plt.title('Confusion Matrix')
plt.show()
print(classification_report(y_true, y_pred))
#print("\nAUC: ", roc_auc_score(y_true, y_pred, average='micro'))
showClassficationReport_Generator(model, valid_generator, STEP_SIZE_VALID)
###Output
_____no_output_____
###Markdown
Prepare data for prediction on test set
###Code
test_df = pd.read_csv(dataset_dir + "/test_WyRytb0.csv")
test_df.shape
test_datagen = ImageDataGenerator(rescale=1./255.)
test_generator = test_datagen.flow_from_dataframe(dataframe=test_df,
directory=train_dir,
x_col="image_name",
#y_col="label",
class_mode=None,
target_size=train_input_shape[0:2],
batch_size=1,
shuffle=False
)
###Output
_____no_output_____
###Markdown
Predict and Submit
###Code
# Predict on test data
test_generator.reset()
predictions = model.predict_generator(test_generator,verbose=1)
predictions = np.argmax(predictions, axis=1)
#predictions = predictions.astype('int').ravel()
predictions.shape
# Retrieve filenames
import re
#test_img_ids = [re.split("/", val)[1] for val in test_generator.filenames]
test_img_ids = test_generator.filenames
len(test_img_ids)
# Create dataframe for submission
submission_df = pd.DataFrame({'image_name' : test_img_ids,
'label' : predictions })
submission_df.head()
# Create submission file
submission_dir = os.path.join(home_dir, "submissions")
submission_df.to_csv(submission_dir + '/submission_InceptionResnetV2_3.csv', index=False)
###Output
_____no_output_____ |
.sandbox/examples/friction_contact_tutorials/demo_hdf5.ipynb | ###Markdown
An example of using HDF5 in Python. Jupyter notebooks: a "notebook" is an interactive worksheet in which you can execute commands, written in "cells". Several environments are available (kernel menu above --> change kernel); here we work with Python 3. In our case, each cell may contain either Python code or comments and markdown text. Summary of the main shortcuts: * Edit a cell: Enter * Run the contents of a cell: Shift + Enter * Run all cells: kernel menu (top of the page) --> Run all * Delete a cell: DD * Add a cell: Ctrl-mb * Show shortcuts: Ctrl-m h * List of Python "magic commands": run %lsmagic in a cell. More info: https://jupyter-notebook-beginner-guide.readthedocs.io/en/latest/what_is_jupyter.html Note: each cell can be executed independently of the others, but the execution results persist. To start over from scratch, either call "%reset" or use Restart in the kernel menu. Importing the hdf5 package: to use HDF5 in Python we need the h5py package. We also need numpy, the standard scientific computing package in Python (http://www.numpy.org); we will use it to manipulate matrices and vectors.
###Code
import h5py
import numpy as np
###Output
_____no_output_____
###Markdown
To get information about a package or a function, just use "?" and the documentation appears at the bottom of the browser. The goal of this demo/tutorial is to illustrate the basic concepts of HDF5 through a simple example. We will save scalar fields representing physical quantities on a 3D grid (as numpy arrays) into an HDF5 file, then visualize them, read them back, etc. Defining the variables: to start, we create a 3D array of size Nx x Ny x Nz, filled randomly, using numpy (np).
###Code
# Field resolution
Nx = Ny = Nz = 256
resolution = (Nx, Ny, Nz)
# Two scalar fields, randomly initialized
vx = np.random.random_sample(resolution)
temperature = np.random.random_sample(resolution)
###Output
_____no_output_____
###Markdown
In numpy, array values are accessed as follows (for more details see one of the many tutorials available online, for example https://docs.scipy.org/doc/numpy-dev/user/quickstart.html)
###Code
# an example of array manipulation ...
small_tab = np.random.random((4,6))
# a single element:
print(small_tab[3, 3])
# a "row"
print(small_tab[2, :])
# a sub-block of the array:
print(small_tab[2:4, 2:4])
###Output
0.6716746851776874
[0.71640469 0.50838259 0.81869901 0.86250276 0.58697174 0.25576814]
[[0.81869901 0.86250276]
[0.02303176 0.67167469]]
###Markdown
1 - Le "fichier" hdf5Le "fichier" hdf5 est l'objet principal qui permettra de stocker vos données et leurs attributs. On parlera à la foisde "fichier" pour le fichier sur le disque (extension .h5, .hdf5 ou .he5) et pour l'objet manipulé dans le code.Il s'agit d'une sorte de container de **datasets** (les structures de données, voir plus bas) qui peut également être organisé en **groupes** et sous-groupes. **TP** - *Créez un "fichier" hdf5 en mode 'écriture'. Il faudra pour cela faire appel à la fonctionh5py.File*Rappel : pour accèder à la doc, il suffit de taper?h5py.NOM_FONCTION.
###Code
# Display the documentation of the function
?h5py.File
filename = 'demo_v0.h5'
# Create/open in 'write' mode
mode = 'w'
hdf_file = h5py.File(filename, mode)
###Output
_____no_output_____
###Markdown
Once all the data has been saved, the file will have to be closed to commit the write to disk, via the close() function. *Check that the file has indeed been created. Note in passing that in an IPython notebook you have access to some terminal commands.*
###Code
ls -altr
###Output
total 120
-rw-r--r--@ 1 Franck staff 6148 5 fév 2018 .DS_Store
-rw-r--r-- 1 Franck staff 2882 5 fév 2018 demo_hdf5.py
-rw-r--r-- 1 Franck staff 4404 5 fév 2018 xdmf.py
-rw-r--r-- 1 Franck staff 847 6 fév 2018 demo_io.cxx
-rw-r--r-- 1 Franck staff 720 6 fév 2018 demo_io.py
-rw-r--r-- 1 Franck staff 18241 11 fév 17:08 demo_hdf5.ipynb
-rw-r--r-- 1 Franck staff 5384 11 fév 17:08 tp_part1.ipynb
drwxr-xr-x 22 Franck staff 704 20 fév 15:41 [34m..[m[m/
drwxr-xr-x 3 Franck staff 96 15 mar 14:58 [34m.ipynb_checkpoints[m[m/
drwxr-xr-x 11 Franck staff 352 28 mar 10:25 [34m.[m[m/
-rw-r--r-- 1 Franck staff 96 28 mar 10:25 demo_v0.h5
###Markdown
Since h5py.File is a class, we can access its attributes and methods. In the notebook, just use tab completion to get the full list of attributes: class_name. + TAB. *Display the name of the file on disk and the name of the file object:*
###Code
print(hdf_file.name)
print(hdf_file.filename)
###Output
/
demo_v0.h5
###Markdown
Creating datasets: arrays inside the HDF5 file. In an HDF file, the "data" is stored as datasets. A dataset is a multi-dimensional array containing data of a single type. **TP** *Create two datasets in the HDF5 file to store the two scalar fields defined above:* *an empty dataset 'data_velo' with the same resolution as vx,* *a dataset 'data_tp' containing a copy of temperature.*
###Code
# Dataset creation:
?h5py.Dataset
# Parameters: name, shape, type
data_velo = hdf_file.create_dataset('velocity', resolution, dtype=np.float64)
# Parameters: a numpy array
data_tp = hdf_file.create_dataset('temperature', data=temperature)
###Output
_____no_output_____
###Markdown
Manipulating datasets: datasets can be manipulated like numpy arrays:
###Code
print(data_tp)
print(data_velo)
print(data_tp.shape)
# At this point, data_tp contains the same values as temperature while all the elements of data_velo are zero.
print(np.allclose(data_tp, temperature))
print(temperature[1,5,3])
print(data_velo[1:10, 1,3])
###Output
<HDF5 dataset "temperature": shape (256, 256, 256), type "<f8">
<HDF5 dataset "velocity": shape (256, 256, 256), type "<f8">
(256, 256, 256)
True
0.5643377532174662
[0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
or through the HDF5 file, via their name:
###Code
print(hdf_file['velocity'])
###Output
<HDF5 dataset "velocity": shape (256, 256, 256), type "<f8">
###Markdown
Modifying the contents of each dataset works just like for a numpy array. We will now fill data_velo by computing the cosine of vx:
###Code
data_velo[...] = np.cos(vx)
###Output
_____no_output_____
###Markdown
Groups: the file can be organized into groups and subgroups containing datasets, via the create_group function. **TP** *Create a group 'champs' and a group 'infos' containing a subgroup 'diverses'.* Note: the HDF5 file object has a tree structure, like a classic file system. We saw above that the name of the hdf_file object is '/'. This also shows in the way groups are named: the group 'diverses' will appear as /infos/diverses.
###Code
?h5py.File.create_group
# Create a group 'champs'
g1 = hdf_file.create_group('champs')
# Then a group infos/diverses
hdf_file.create_group('/infos/diverses/')
###Output
_____no_output_____
###Markdown
Data and attributes are accessed in the usual way:
###Code
print(hdf_file['champs'])
print(hdf_file['infos'])
###Output
<HDF5 group "/champs" (0 members)>
<HDF5 group "/infos" (1 members)>
###Markdown
We can now create a dataset in the 'champs' group
###Code
g1.create_dataset('density', resolution, dtype=np.float64)
###Output
_____no_output_____
###Markdown
We can iterate over all the elements of a group
###Code
for it in hdf_file.items():
print(it)
for it in hdf_file['champs'].items():
print("groupe champs ...")
print(it)
###Output
('champs', <HDF5 group "/champs" (1 members)>)
('infos', <HDF5 group "/infos" (1 members)>)
('temperature', <HDF5 dataset "temperature": shape (256, 256, 256), type "<f8">)
('velocity', <HDF5 dataset "velocity": shape (256, 256, 256), type "<f8">)
groupe champs ...
('density', <HDF5 dataset "density": shape (256, 256, 256), type "<f8">)
###Markdown
Or delete a group with the Python del statement
###Code
del hdf_file['/infos/diverses']
print(g1['density'])
###Output
<HDF5 dataset "density": shape (256, 256, 256), type "<f8">
###Markdown
Attributes: another benefit of the HDF5 format is the ability to attach metadata to datasets and groups, i.e. information in the form of attributes. Here are a few examples:
###Code
hdf_file['velocity'].attrs['année'] = 2015
hdf_file['velocity'].attrs['commentaires'] = 'Valeurs experimentales du champs de vitesse'
g1['density'].attrs['description'] = u"une description du champs"
###Output
_____no_output_____
###Markdown
Then display the characteristics of a dataset:
###Code
for it in hdf_file['velocity'].attrs.values():
print(it)
###Output
2015
Valeurs experimentales du champs de vitesse
###Markdown
Writing and closing the file: we can now close the file. The data can then be visualized in several ways: * h5dump * hdfview * any software able to read HDF5 (VisIt, ...). **TD** *Visualize the contents of the file with hdfview and h5dump, in your terminal.*
###Code
hdf_file.close()
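# To inspect the file from a terminal (a sketch; assumes the HDF5 command-line tools are installed):
#   h5dump -H demo_v0.h5     # header/structure only
#   h5dump -n demo_v0.h5     # list the objects in the file
# or open demo_v0.h5 graphically with hdfview.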
###Output
_____no_output_____
###Markdown
We will now start from scratch and load data from an HDF5 file. Reading an HDF5 file: **TP** *Create an array 'new_field' from the temperature field of the HDF5 file 'demo_v0.h5':* * open the file demo_v0.h5 * create the numpy array from the temperature dataset. Notes: * reading is done simply by creating a file object in 'read' mode * you will need the np.asarray function to convert the dataset into a numpy array: tab_numpy = np.asarray(dataset)
###Code
# Reset the environment ...
%reset
import h5py
import numpy as np
# Read the HDF5 file
filename = 'demo_v0.h5'
in_file = h5py.File(filename, 'r')
###Output
_____no_output_____
###Markdown
Displaying the contents of the file... Note in passing the advantage of the HDF5 format: the file is self-describing: all the information needed to understand its contents is available (variable names, array dimensions, ...)
###Code
# Iterate over the whole contents of the file (datasets and groups)
for keys in in_file:
print(keys, in_file[keys])
    # In each case, display the list of attributes
for it in in_file[keys].attrs.items():
print('-->', it)
# Create a new array
new_field = np.asarray(in_file['velocity'])
print(new_field[1:10, 1:10, 3])
print(new_field.shape)
print(new_field.dtype)
# Don't forget to close the file!
in_file.close()
import matplotlib.pyplot as plt
%matplotlib inline
plt.plot(new_field[1:100, 1, 1])
###Output
_____no_output_____
###Markdown
An example of using XDMF and HDF5: ParaView is (unfortunately) not able to read HDF5 directly; it has to be converted to XDMF. Here we show an example with a Python function able to perform this conversion.
###Code
# Get the function that writes the XDMF header
from xdmf import XDMFWriter
###Output
_____no_output_____
###Markdown
This function is an example of generating the 'ASCII' part of the XDMF file, based on the geometry of the domain and of the grid.
###Code
help(XDMFWriter)
# Description of the domain and of the grid
origin = [0.,] * 3
space_step = [0.1,] * 3
resolution = new_field.shape
filename = 'demo_v0.h5'
wr = XDMFWriter(filename, 3, resolution, origin, space_step, ['velocity', 'temperature'], 1, 0.0)
###Output
_____no_output_____ |
Data Science and Machine Learning/Machine-Learning-In-Python-THOROUGH/MACHINE_LEARNING/04_LINEAR_RIDGE_LASSO_ELASTICNET_POLYNOMINAL_REGRESSIONS.ipynb | ###Markdown
LASSO RIDGE PRACTICE 1. Linear Regression 2. Ridge Regression 3. Lasso Regression 4. Elastic Net 5. Polynomial Regression
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split, GridSearchCV
import warnings
warnings.filterwarnings("ignore")
df=pd.read_csv("Advertising.csv")
df.head()
df.info()
sns.heatmap(df.corr(), annot = True)
X = df.drop(["sales"], axis =1)
y = df["sales"]
X.head()
def eval_metrics(actual, pred):
rmse = np.sqrt(mean_squared_error(actual, pred))
mae = mean_absolute_error(actual, pred)
mse = mean_squared_error(actual, pred)
score = r2_score(actual, pred)
return print("r2_score:", score, "\n","mae:", mae, "\n","mse:",mse, "\n","rmse:",rmse)
###Output
_____no_output_____
###Markdown
**1.Linear Regression**
###Code
from sklearn.linear_model import LinearRegression
lm = LinearRegression()
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test=train_test_split(X, y, test_size = 0.2, random_state = 0)
lm.fit(X_train, y_train)
dir(lm)
lm.coef_
lm.intercept_
coeff_parameter = pd.DataFrame(lm.coef_, X.columns, columns=['Coefficient'])
coeff_parameter
y_pred = lm.predict(X_test)
eval_metrics(y_test, y_pred)
lm.score(X_test, y_test)
r2_score(y_test, y_pred)
my_dict={"Actual":y_test, "Pred":y_pred}
compare=pd.DataFrame(my_dict)
compare.sample(10)
from yellowbrick.regressor import PredictionError
# Instantiate the linear model and visualizer
visualizer = PredictionError(lm)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Finalize and render the figure
plt.scatter(y_test, y_pred)
from yellowbrick.regressor import ResidualsPlot
# Instantiate the linear model and visualizer
visualizer = ResidualsPlot(lm)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Finalize and render the figure
###Output
_____no_output_____
###Markdown
**Model Score with cv**
###Code
from sklearn.model_selection import cross_val_score
accuraries = cross_val_score(estimator=lm, X=X_train, y=y_train, cv=10)
accuraries.mean()
accuraries
from sklearn.model_selection import cross_val_score
accuraries = cross_val_score(estimator=lm, X=X_train, y=y_train, scoring = "neg_mean_squared_error", cv=10)
accuraries.mean()
from sklearn.model_selection import cross_val_score
accuraries = cross_val_score(estimator=lm, X=X_train, y=y_train, scoring = "neg_mean_squared_error", cv=10)
-accuraries.mean()
accuraries
###Output
_____no_output_____
###Markdown
**2.Ridge Regression**
###Code
from sklearn.linear_model import Ridge
from sklearn.linear_model import RidgeCV
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
scaler.fit(X_train)
X_train.head()
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
a = pd.DataFrame(X_train, columns = X.columns)
a.head()
# scaled_X_train = scaler.fit_transform(X_train)  # fit and transform can be done in one call
# do not fit the scaler on X_test; only transform it.
ridge_model = Ridge()
ridge_model.fit(X_train, y_train)
y_pred = ridge_model.predict(X_test)
eval_metrics(y_test, y_pred)
accuraries = cross_val_score(estimator=ridge_model, X=X_train, y=y_train, cv=10)
accuraries.mean()
dir(ridge_model)
ridge_model.alpha
ridge_model = Ridge(3).fit(X_train, y_train)
y_pred = ridge_model.predict(X_test)
eval_metrics(y_test, y_pred)
alpha_space = np.linspace(0.1, 20, 100)
alpha_space
help(RidgeCV)
#searching for best alpha
ridgecv = RidgeCV(alphas=alpha_space, cv=10)
ridgecv.fit(X_train, y_train)
# best alpha
ridgecv.alpha_
#let's find the same alpha with yellowbrick
from yellowbrick.regressor import ManualAlphaSelection
# Create a list of alphas to cross-validate against
alpha_space = np.linspace(0.1, 20, 100)
# Instantiate the visualizer
visualizer = ManualAlphaSelection(
Ridge(),
alphas=alpha_space,
cv=10
)
visualizer.fit(X_train, y_train)
visualizer.show()
# train the ridge model again with best alpha
ridge_model = Ridge(3.7).fit(X_train, y_train)
accuraries = cross_val_score(estimator=ridge_model, X=X_train, y=y_train, cv=10)
accuraries.mean()
accuraries
from yellowbrick.model_selection import FeatureImportances
# Load the regression dataset
# Title case the feature for better display and create the visualizer
labels = list(map(lambda s: s.title(), X.columns))
viz = FeatureImportances(ridge_model, labels=labels, relative=False)
# Fit and show the feature importances
viz.fit(X_train, y_train)
viz.show()
ridge_model.coef_
lm.coef_
###Output
_____no_output_____
###Markdown
**3.Lasso Regression**
###Code
from sklearn.linear_model import Lasso
from sklearn.linear_model import LassoCV
lasso_model = Lasso()
lasso_model.fit(X_train, y_train)
y_pred = lasso_model.predict(X_test)
eval_metrics(y_test, y_pred)
accuraries = cross_val_score(estimator=lasso_model, X=X_train, y=y_train, cv=10)
accuraries.mean()
lasso_model.alpha
alpha_space = np.linspace(0.1, 20, 100)
lasso_cv_model = LassoCV(alphas = alpha_space, cv = 10).fit(X_train, y_train)
lasso_cv_model.alpha_
from sklearn.linear_model import LassoCV
from yellowbrick.regressor import AlphaSelection
# Create a list of alphas to cross-validate against
alpha_space = np.linspace(0.1, 20, 100)
# Instantiate the linear model and visualizer
model = LassoCV(alphas=alpha_space)
visualizer = AlphaSelection(model)
visualizer.fit(X_train, y_train)
visualizer.show()
lasso_model = Lasso(0.1).fit(X_train, y_train)
y_pred = lasso_model.predict(X_test)
eval_metrics(y_test, y_pred)
lasso_model = Lasso(0.01).fit(X_train, y_train)
y_pred = lasso_model.predict(X_test)
eval_metrics(y_test, y_pred)
# cv score when alpha is 0.01
from sklearn.model_selection import cross_val_score
accuraries = cross_val_score(estimator=lasso_model, X=X_train, y=y_train, cv=10)
accuraries.mean()
lasso_model = Lasso(3).fit(X_train, y_train)
from sklearn.linear_model import Lasso
from yellowbrick.datasets import load_concrete
from yellowbrick.model_selection import FeatureImportances
# Load the regression dataset
# Title case the feature for better display and create the visualizer
labels = list(map(lambda s: s.title(), X.columns))
viz = FeatureImportances(lasso_model, labels=labels, relative=False)
# Fit and show the feature importances
viz.fit(X_train, y_train)
viz.show()
###Output
_____no_output_____
###Markdown
**4.Elastic Net**
###Code
from sklearn.linear_model import ElasticNetCV
elastic_model = ElasticNetCV(alphas=alpha_space, l1_ratio=[.1, .5, .7,.9, .95, .99, 1])
elastic_model.fit(X_train,y_train)
elastic_model.l1_ratio_
elastic_model.alpha_
y_pred = elastic_model.predict(X_test)
eval_metrics(y_test,y_pred)
from sklearn.model_selection import cross_val_score
accuraries = cross_val_score(estimator=elastic_model, X=X_train, y=y_train, cv=10)
accuraries.mean()
###Output
_____no_output_____
###Markdown
**5.Polynomial Regression**
###Code
# we will use the unscaled X (the original X)
from sklearn.preprocessing import PolynomialFeatures
polynomial_converter = PolynomialFeatures(degree=2)
poly_features = polynomial_converter.fit_transform(X)
poly_features.shape
X.shape
X.iloc[0]
poly_features[0]
X_train, X_test, y_train, y_test = train_test_split(poly_features, y, test_size=0.3, random_state=101)
model = LinearRegression()
model.fit(X_train,y_train)
y_pred=model.predict(X_test)
eval_metrics(y_test, y_pred)
accuraries = cross_val_score(estimator=model, X=X_train, y=y_train, cv=10)
accuraries.mean()
accuraries
#y_pred_train=model.predict(X_train)
#eval_metrics(y_train, y_pred_train)
visualizer = PredictionError(model)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show()
visualizer = ResidualsPlot(model)
visualizer.fit(X_train, y_train) # Fit the training data to the visualizer
visualizer.score(X_test, y_test) # Evaluate the model on the test data
visualizer.show() # Finalize and render the figure
from sklearn.preprocessing import PolynomialFeatures
polynomial_converter = PolynomialFeatures(degree=5)
poly_features = polynomial_converter.fit_transform(X)
poly_features.shape
X_train, X_test, y_train, y_test = train_test_split(poly_features, y, test_size=0.3, random_state=101)
model = LinearRegression()
model.fit(X_train,y_train)
y_pred=model.predict(X_test)
eval_metrics(y_test, y_pred)
y_pred=model.predict(X_train)
eval_metrics(y_train, y_pred)
accuraries = cross_val_score(estimator=model, X=X_train, y=y_train, cv=10)
accuraries.mean()
###Output
_____no_output_____ |
notebooks/D6_L2_Filtering/05_image_faces.ipynb | ###Markdown
Detecting faces in an image with OpenCV
###Code
import io
import zipfile
import requests
import numpy as np
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
img = cv2.imread('data/people.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
path = 'data/haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(path)
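# detectMultiScale(gray, 1.3) returns candidate face boxes as (x, y, w, h) tuples;
# 1.3 is the scaleFactor and minNeighbors keeps its default value here.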
for x, y, w, h in face_cascade.detectMultiScale(
gray, 1.3):
cv2.rectangle(
gray, (x, y), (x + w, y + h), (255, 0, 0), 2)
fig, ax = plt.subplots(1, 1, figsize=(8, 6))
ax.imshow(gray, cmap=plt.cm.gray)
ax.set_axis_off()
###Output
_____no_output_____ |
Examples/RedditPlace/RedditPlace_AnomalousSubredditsDetection.ipynb | ###Markdown
Results
###Code
results[[col for col in results.columns if 'ranking' in col and 'weighted_sum' not in col and 'median' not in col]][-3:].reset_index(drop=True)
###Output
_____no_output_____ |
GC Submission XGB File.ipynb | ###Markdown
Compiling Meter predictions for a Building
###Code
#### Chunk of code for saving the 3-meter XGB predictions (run this only after running all meters for a building)
#Input variable
index_reader = 1
main_meter = pd.read_csv('csv_files/prediction_files/XGB/building_'+str(index_reader)+'_'+'main_meter'+'_xgb_predictions.csv')
final_index = main_meter['timestamp']
main_meter = main_meter.drop(['timestamp'],axis=1)
sub_meter_1 = pd.read_csv('csv_files/prediction_files/XGB/building_'+str(index_reader)+'_'+'sub_meter_1'+'_xgb_predictions.csv')
sub_meter_1 = sub_meter_1.drop(['timestamp'],axis=1)
sub_meter_2 = pd.read_csv('csv_files/prediction_files/XGB/building_'+str(index_reader)+'_'+'sub_meter_2'+'_xgb_predictions.csv')
sub_meter_2 = sub_meter_2.drop(['timestamp'],axis=1)
all_meter = pd.concat([main_meter,sub_meter_1,sub_meter_2],axis=1)
all_meter.index = final_index
#Saving Dataframe into csv
all_meter.to_csv('csv_files/prediction_files/XGB/building_'+str(index_reader)+'_3meter_xgb_predictions.csv')
all_meter
###Output
_____no_output_____ |
los-check.ipynb | ###Markdown
Is Kobe's night view really worth a million dollars? Checking with satellite data. This code determines the line of sight from an arbitrary observation point to the buildings within a specified area. For details, see https://sorabatake.jp/15363 . How to run: JupyterLab is an interactive code execution environment; run each cell in order with "Shift + Enter". Setting the required information: replace `YOUR-TOKEN` with the token of your own Tellus development environment. Reference: [How to obtain an API key](https://sorabatake.jp/5048)
###Code
#Set the token of your own Tellus development environment
TOKEN = 'YOUR-TOKEN'
#Earth radius (km)
EARTH_RADIUS = 8494.666; #4/3 of the Earth's radius, to account for the refraction of light (the refractive index of light is treated as the same as for radio waves); 6371 km * 4/3 ≈ 8494.67 km (the actual radius is 6371 km)
#Coordinates of observation point 1 for the line-of-sight check
OBSERVER = [34.751888,135.237224] #Mt. Rokko observation deck
#Area containing the buildings to check, taken from OpenStreetMap data.
TARGET_AREA = "example.pbf"
#Set the zoom level of the tile coordinates to 12
ZOOM_LEVEL = 12
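# (Added note) at zoom level 12 the web-mercator world is split into 2**12 x 2**12 = 4096 x 4096 tiles,
# i.e. roughly 40075 km / 4096 ≈ 9.8 km per tile edge at the equator.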
###Output
_____no_output_____
###Markdown
Get the latitude/longitude of the buildings: obtain latitude and longitude for the buildings inside the area of the OSM file (.pbf) specified by `TARGET_AREA`.
###Code
import osmium
from tqdm import tqdm_notebook as tqdm
import time
# Define a class that extracts the node IDs of ways carrying a `building` tag in the specified area.
class GetBuildings(osmium.SimpleHandler):
def __init__(self):
osmium.SimpleHandler.__init__(self)
self.num_nodes = 0
self.num_ways = 0
self.num_building = 0
self.nodes = []
self.nodeId_of_buildings = []
def count_building(self, w):
if 'building' in w.tags:
            # Extract one node ID of the way (an element made up of several nodes)
self.nodeId_of_buildings.append(w.nodes[0].ref)
self.num_building += 1
def node(self, n):
self.num_nodes += 1
self.nodes.append([n.id,n.location])
def way(self, w):
self.num_ways += 1
self.count_building(w)
# Get the latitude/longitude from the building node IDs
GB = GetBuildings()
GB.apply_file(TARGET_AREA)
"Number of nodes: %d" % GB.num_nodes
"Number of ways: %d" % GB.num_ways
"Number of building: %d" % GB.num_building
building = []
for node in tqdm(GB.nodes):
if node[0] in GB.nodeId_of_buildings:
building.append(node)
###Output
_____no_output_____
###Markdown
Define the class needed for the line-of-sight calculation. Reference: [How far away can Mt. Fuji be seen from? Analysing elevation data! (a Tellus hands-on article)](https://sorabatake.jp/12087/)
###Code
import requests, json
import numpy
from numpy import arctanh
from skimage import io
from io import BytesIO
import math
from math import *
import matplotlib.pyplot as plt
%matplotlib inline
import collections as cl
class isInterVisible():
def __init__(self,z):
self.z = z
def export_json(self,latlon_list,output_file):
ys = cl.OrderedDict()
ys["type"] = "FeatureCollection"
ys["features"] = []
if'true' in output_file:
LOS = "LOS:True"
else:
LOS = "LOS:False"
for i in range(len(latlon_list)):
data = cl.OrderedDict()
data["properties"] = {"Name":LOS}
data["type"] = "Feature"
data["geometry"] = {"type":"Point","coordinates":latlon_list[i]}
ys["features"].append(data)
fw = open(output_file,'w')
json.dump(ys,fw,indent=4)
def vincenty_inverse(self, lat1, lon1, lat2, lon2, ellipsoid=None):
"""
        Return the distance between two points given by latitude and longitude
"""
# 楕円体
ELLIPSOID_GRS80 = 1 # GRS80
ELLIPSOID_WGS84 = 2 # WGS84
# 楕円体ごとの長軸半径と扁平率
GEODETIC_DATUM = {
ELLIPSOID_GRS80: [
6378137.0, # [GRS80]長軸半径
1 / 298.257222101, # [GRS80]扁平率
],
ELLIPSOID_WGS84: [
6378137.0, # [WGS84]長軸半径
1 / 298.257223563, # [WGS84]扁平率
],
}
# 反復計算の上限回数
ITERATION_LIMIT = 1000
# 差異が無ければ0.0を返す
if isclose(lat1, lat2) and isclose(lon1, lon2):
return 0.0
# 計算時に必要な長軸半径(a)と扁平率(ƒ)を定数から取得し、短軸半径(b)を算出する
# 楕円体が未指定の場合はGRS80の値を用いる
a, ƒ = GEODETIC_DATUM.get(ellipsoid, GEODETIC_DATUM.get(ELLIPSOID_GRS80))
b = (1 - ƒ) * a
φ1 = radians(lat1)
φ2 = radians(lat2)
λ1 = radians(lon1)
λ2 = radians(lon2)
# 更成緯度(補助球上の緯度)
U1 = atan((1 - ƒ) * tan(φ1))
U2 = atan((1 - ƒ) * tan(φ2))
sinU1 = sin(U1)
sinU2 = sin(U2)
cosU1 = cos(U1)
cosU2 = cos(U2)
# 2点間の経度差
L = λ2 - λ1
# λをLで初期化
λ = L
# 以下の計算をλが収束するまで反復する
# 地点によっては収束しないことがあり得るため、反復回数に上限を設ける
for i in range(ITERATION_LIMIT):
sinλ = sin(λ)
cosλ = cos(λ)
sinσ = sqrt((cosU2 * sinλ) ** 2 + (cosU1 * sinU2 - sinU1 * cosU2 * cosλ) ** 2)
cosσ = sinU1 * sinU2 + cosU1 * cosU2 * cosλ
σ = atan2(sinσ, cosσ)
sinα = cosU1 * cosU2 * sinλ / sinσ
cos2α = 1 - sinα ** 2
cos2σm = cosσ - 2 * sinU1 * sinU2 / cos2α
C = ƒ / 16 * cos2α * (4 + ƒ * (4 - 3 * cos2α))
λʹ = λ
λ = L + (1 - C) * ƒ * sinα * (σ + C * sinσ * (cos2σm + C * cosσ * (-1 + 2 * cos2σm ** 2)))
# 偏差が.000000000001以下ならbreak
if abs(λ - λʹ) <= 1e-12:
break
else:
# 計算が収束しなかった場合はNoneを返す
return None
# λが所望の精度まで収束したら以下の計算を行う
u2 = cos2α * (a ** 2 - b ** 2) / (b ** 2)
A = 1 + u2 / 16384 * (4096 + u2 * (-768 + u2 * (320 - 175 * u2)))
B = u2 / 1024 * (256 + u2 * (-128 + u2 * (74 - 47 * u2)))
Δσ = B * sinσ * (cos2σm + B / 4 * (cosσ * (-1 + 2 * cos2σm ** 2) - B / 6 * cos2σm * (-3 + 4 * sinσ ** 2) * (-3 + 4 * cos2σm ** 2)))
# 2点間の楕円体上の距離
s = b * A * (σ - Δσ)
# 各点における方位角
α1 = atan2(cosU2 * sinλ, cosU1 * sinU2 - sinU1 * cosU2 * cosλ)
α2 = atan2(cosU1 * sinλ, -sinU1 * cosU2 + cosU1 * sinU2 * cosλ) + pi
if α1 < 0:
α1 = α1 + pi * 2
return s / 1000 #距離(km)
def get_combined_image(self, topleft_x, topleft_y, size=1):
"""
        Fetch the tiles covering the specified range and stitch them into one image
"""
rows = []
blank = numpy.zeros((256, 256, 4), dtype=numpy.uint8)
for y in range(size):
row = []
for x in range(size):
try:
img = self.get_astergdem2_dsm(self.z, topleft_x + x, topleft_y + y)
except Exception as e:
img = blank
row.append(img)
rows.append(numpy.hstack(row))
return numpy.vstack(rows)
    # Fetch an elevation tile (ASTER GDEM 2.0)
def get_astergdem2_dsm(self, zoom, xtile, ytile):
"""
        Fetch an ASTER GDEM2 elevation tile
"""
url = " https://gisapi.tellusxdp.com/astergdem2/dsm/{}/{}/{}.png".format(zoom, xtile, ytile)
headers = {
"Authorization": "Bearer " + TOKEN
}
r = requests.get(url, headers=headers)
return io.imread(BytesIO(r.content))
def latlon2tile(self, lon, lat, z):
"""
        Get world pixel coordinates from the given latitude/longitude
https://qiita.com/kobakou/items/4a5840542e74860a6b1b
"""
L = 85.05112878
x = int((lon/180 + 1) * 2**(z+7))
y = int( (2**(z+7) / pi * ( -arctanh(sin(pi*lat/180)) + arctanh(sin(pi*L/180)) ) ))
return [y, x]
def num2deg(self, xtile, ytile, zoom):
"""
        Given (x, y, z) as arguments, this function returns the latitude/longitude of the top-left corner of the tile.
        Passing x and/or y incremented by +1 gives the coordinates of another corner,
        and +0.5 gives the coordinates of the tile center.
https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#Python
"""
n = 2.0 ** zoom
lon_deg = xtile / n * 360.0 - 180.0
lat_rad = atan(sinh(pi * (1 - 2 * ytile / n)))
lat_deg = degrees(lat_rad)
return (lon_deg, lat_deg)
def get_radian_btw_p1p2(self, y1, x1, y2, x2):
"""
        Get the slope (y = ax) of the line between two points
"""
        # dividing by 0 would raise a division-by-zero error
if (x2 - x1) == 0:
return "vertical"
elif (y2 - y1) == 0:
return "horizontal"
else:
return (y2 - y1) / (x2 - x1)
def get_heights_btw_p1p2(self, tile, y1, x1, y2, x2, radian):
"""
        Get the elevation data along the straight line joining the start point and the end point
        Linear function: y = ax + b
        Parameters
        ----------
        tile : ndarray
            elevation tile
        y1 : number
            y coordinate of point A
        x1 : number
            x coordinate of point A
        radian : number
            slope of the line
        Returns
        -------
        actual_heights : ndarray
            elevation data along the line of slope `radian` passing through (x1, y1)
"""
if radian == "vertical":
return tile[:,y1]
elif radian == "horizontal":
return tile[x1,:]
        # Set the origin
        # Use the point with the smaller x coordinate as the origin
if x1 <= x2:
reverseFlag = True
origin_y = y1
origin_x = x1
point_y = y2
point_x = x2
else:
reverseFlag = False
origin_y = y2
origin_x = x2
point_y = y1
point_x = x1
max_index = len(tile) -1
radian_tile = []
while (round(origin_y) <= max_index and round(origin_x) <= max_index) and (round(origin_y) != point_y or round(origin_x) != point_x):
            # snap to the nearest pixel with round()
radian_tile.append(tile[round(origin_y),round(origin_x)])
origin_y += radian
origin_x += 1
if reverseFlag:
radian_tile.reverse()
return numpy.array(radian_tile)
def calc_height_chiriin_style(self, R, G, B, u=1):
"""
        Compute the elevation from the RGB values of the elevation tile
"""
hyoko = int(R*256*256 + G * 256 + B)
if hyoko == 8388608:
raise ValueError('N/A')
if hyoko > 8388608:
hyoko = (hyoko - 16777216)/u
if hyoko < 8388608:
hyoko = hyoko/u
return hyoko
def plot_height(self, actual_heights, u=1):
"""
        Plot the elevation profile
"""
heights = []
heights_err = []
for k in range(len(actual_heights)):
R = actual_heights[k,0]
G = actual_heights[k,1]
B = actual_heights[k,2]
try:
heights.append(self.calc_height_chiriin_style(R, G, B, u))
except ValueError as e:
heights.append(0)
heights_err.append((i,j))
fig, ax= plt.subplots(figsize=(8, 4))
plt.plot(heights)
ax.text(0.01, 0.95, 'Errors: {}'.format(len(heights_err)), transform=ax.transAxes)
plt.show()
def worldpx2tilepx(self, worldpx,left_top_bbx_px):
"""
        Convert world pixel coordinates to in-tile pixel coordinates
"""
y1 = worldpx[0] - left_top_bbx_px[0] -1
x1 = worldpx[1] - left_top_bbx_px[1] -1
return [y1,x1]
def calc_max_theta(self, distance, earth_r):
"""
        Return the angle (in radians) between the observation point and the target point
"""
return distance/(2 *earth_r)
def getHorizonDistance_km(self, h0_km,earth_r):
return sqrt(pow(h0_km, 2) + 2 * earth_r * h0_km);
def getTargetHiddenHeight_km(self, d2_km, earth_r):
if d2_km < 0:
return 0;
return sqrt(pow(d2_km, 2) + pow(earth_r, 2)) - earth_r;
def calculate_shizumikomi(self, h0,d0,earth_r):
"""
        h0 : elevation of the observation point (m)
        d0 : distance to the target point (km)
"""
h0_km = h0 * 0.001
d0_km = d0
d1_km = self.getHorizonDistance_km(h0_km,earth_r);
h1_m = self.getTargetHiddenHeight_km(d0_km - d1_km,earth_r) * 1000;
return h1_m
def km_per_tile(self, d0_km,tileNum):
"""
        Get the length of one tile edge (km)
"""
return d0_km/tileNum
def get_tile_num(self, lat, lon, zoom):
"""
        Get tile coordinates from latitude and longitude
"""
# https://wiki.openstreetmap.org/wiki/Slippy_map_tilenames#Python
lat_rad = math.radians(lat)
n = 2.0 ** zoom
xtile = int((lon + 180.0) / 360.0 * n)
ytile = int((1.0 - math.log(math.tan(lat_rad) + (1 / math.cos(lat_rad))) / math.pi) / 2.0 * n)
return (xtile, ytile)
def get_smaller_tile_axis(self, observer_tile,target_tile,axis):
if axis == "x":
index = 0
elif axis == "y":
index = 1
if observer_tile[index] > target_tile[index]:
smaller = target_tile[index]
elif target_tile[index] > observer_tile[index]:
smaller = observer_tile[index]
else:
smaller = observer_tile[index]
return smaller
def get_origin_tile(self, observer_tile,target_tile):
x_smaller = self.get_smaller_tile_axis(observer_tile,target_tile,"x")
y_smaller = self.get_smaller_tile_axis(observer_tile,target_tile,"y")
return [x_smaller,y_smaller]
def get_bigger_diff(self, observer_tile,target_tile):
x_diff = abs(observer_tile[0] - target_tile[0])
y_diff = abs(observer_tile[1] - target_tile[1])
if x_diff > y_diff:
return x_diff
elif y_diff > x_diff:
return y_diff
else:
return x_diff
def calc_intervisibility(self, target, observer, losTrueArray, losFalseArray ):
# 緯度と経度に分割
lon1=target[1]
lat1=target[0]
lon2=observer[1]
lat2=observer[0]
#タイル座標取得
target_tile = self.get_tile_num(lat1,lon1,self.z)
observer_tile = self.get_tile_num(lat2,lon2,self.z)
#始点を取得:観測点と対象点を角とする四角形の左上の座標
origin_tile = self.get_origin_tile(observer_tile,target_tile)
#上の四角形で長辺のタイル数を取得
diff = self.get_bigger_diff(observer_tile,target_tile)
##タイルを取得
combined_img = self.get_combined_image(origin_tile[0], origin_tile[1], diff+1)
#地球の曲率を含めた2点間の距離
distance_km = self.vincenty_inverse(lat1, lon1, lat2, lon2, 1)
##世界ピクセル座標を取得
left_top_bbx = self.num2deg(origin_tile[0], origin_tile[1], self.z)
left_top_bbx_px = self.latlon2tile(left_top_bbx[0], left_top_bbx[1], self.z) #始点
target_wpx = self.latlon2tile(lon1, lat1, self.z) #対象点
observer_wpx = self.latlon2tile(lon2, lat2, self.z) #観測点
##タイル内ピクセル座標を取得
target_tpx = self.worldpx2tilepx(target_wpx,left_top_bbx_px)#対象点
target_y = target_tpx[0]
target_x = target_tpx[1]
observer_tpx = self.worldpx2tilepx(observer_wpx,left_top_bbx_px) #観測点
observer_y = observer_tpx[0]
observer_x = observer_tpx[1]
##2点間のかたむきを取得
radian = self.get_radian_btw_p1p2(target_y, target_x, observer_y, observer_x)
##2点間の標高データを取得(単位:m)
actual_heights = self.get_heights_btw_p1p2(combined_img, target_y, target_x, observer_y, observer_x, radian)
#2点のカラーコードを取得
target_color = combined_img[target_y][target_x]
observer_color = combined_img[observer_y][observer_x]
#2点の標高を取得(単位:m)
u=1
target_height = round(self.calc_height_chiriin_style(target_color[0],target_color[1],target_color[2],u),2) # 対象点
observer_height = round(self.calc_height_chiriin_style(observer_color[0],observer_color[1],observer_color[2],u),2) #観測点
#沈み込み量を取得(単位:m)
shizumikomi = self.calculate_shizumikomi(observer_height,distance_km,EARTH_RADIUS)
#対象点の見た目の標高を取得(単位:m)
h1_mitame = target_height - shizumikomi
#タイル当たりの辺の長さを取得(km)
tile_km = self.km_per_tile(distance_km,len(actual_heights))
#標高で取得したタイル数分ループを回す
for indexNum in range(len(actual_heights)):
#中間の標高を取得
hyoko = round(self.calc_height_chiriin_style( actual_heights[indexNum][0], actual_heights[indexNum][1], actual_heights[indexNum][2],u),2)
            # Get the amount of "dip" below the horizon at this intermediate point
            d0 = indexNum*tile_km
shizumikomi = self.calculate_shizumikomi(observer_height,d0,EARTH_RADIUS)
#中間の見た目の標高を取得
h2_mitame = hyoko - shizumikomi
#見た目の高さで比較
distance_km = round(distance_km,3)
if h1_mitame < 0:
message="見通し:不可(地平線)"
result=False
break;
elif h1_mitame < h2_mitame:
message='''
見通し:不可(障害物有り)
障害物までの距離:{}(km)'''.format(d0)
result=False
break;
else:
message="見通し:可"
result=True
        # Store the coordinates in the arrays passed in by the caller, and report the result
        if result:
            losTrueArray.append([lon1,lat1])
        else:
            losFalseArray.append([lon1,lat1])
        return result
###Output
_____no_output_____
###Markdown
Run the line-of-sight determination for each building
###Code
from tqdm import tqdm_notebook as tqdm
import time
IV = isInterVisible(ZOOM_LEVEL)
index=0;
los_true_bulding=0;
los_true = []
los_false = []
base=10000000
for node in tqdm(building):
target = [int(node[1].y)/base, int(node[1].x)/base]
result = IV.calc_intervisibility(target,OBSERVER,los_true,los_false)
index+=1
if(result):
los_true_bulding += 1
IV.export_json(los_true,"los_true.geojson")
IV.export_json(los_false,"los_false.geojson")
print("見通し可能な建物の数: %d" % len(los_true))
print("見通し不可能な建物の数: %d" % len(los_false))
###Output
_____no_output_____ |
02_convnet_image_classification.ipynb | ###Markdown
Image classification with a convolutional neural network. **Author:** [PaddlePaddle](https://github.com/PaddlePaddle) **Date:** 2021.01 **Abstract:** This tutorial demonstrates how to use PaddlePaddle's convolutional neural networks to complete an image classification task. It is a fairly simple example that uses a network made of three convolutional layers to classify the [cifar10](https://www.cs.toronto.edu/~kriz/cifar.html) dataset. 1. Environment setup: this tutorial is written against Paddle 2.0; if your environment is not this version, please first follow the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) for Paddle 2.0.
###Code
import paddle
import paddle.nn.functional as F
from paddle.vision.transforms import ToTensor
import numpy as np
import matplotlib.pyplot as plt
print(paddle.__version__)
###Output
2.0.1
###Markdown
2. Loading the dataset: we use the APIs provided by PaddlePaddle to download the dataset and prepare the data iterators for the subsequent training task. The cifar10 dataset consists of 60,000 color images of size 32 x 32: 50,000 of them form the training set and the other 10,000 form the test set. The images fall into 10 classes, and our task is to train a model that classifies them correctly.
###Code
transform = ToTensor()
cifar10_train = paddle.vision.datasets.Cifar10(mode='train',
transform=transform)
cifar10_test = paddle.vision.datasets.Cifar10(mode='test',
transform=transform)
###Output
/Users/zhangjun25/Desktop/virtualenvs/venv-paddle-develop/lib/python3.7/site-packages/ipykernel/ipkernel.py:287: DeprecationWarning: `should_run_async` will not call `transform_cell` automatically in the future. Please pass the result to `transformed_cell` argument and any exception that happen during thetransform in `preprocessing_exc_tuple` in IPython 7.17 and above.
and should_run_async(code)
###Markdown
3. Building the network: next we use PaddlePaddle to define a classification network made of three 2D convolutions ( ``Conv2D`` ), each followed by a ``relu`` activation, two 2D pooling layers ( ``MaxPool2D`` ), and two linear layers, which maps an image of shape (32, 32, 3) to 10 outputs corresponding to the 10 classes.
###Code
class MyNet(paddle.nn.Layer):
def __init__(self, num_classes=1):
super(MyNet, self).__init__()
self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=32, kernel_size=(3, 3))
self.pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
self.conv2 = paddle.nn.Conv2D(in_channels=32, out_channels=64, kernel_size=(3,3))
self.pool2 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)
self.conv3 = paddle.nn.Conv2D(in_channels=64, out_channels=64, kernel_size=(3,3))
self.flatten = paddle.nn.Flatten()
self.linear1 = paddle.nn.Linear(in_features=1024, out_features=64)
self.linear2 = paddle.nn.Linear(in_features=64, out_features=num_classes)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.pool1(x)
x = self.conv2(x)
x = F.relu(x)
x = self.pool2(x)
x = self.conv3(x)
x = F.relu(x)
x = self.flatten(x)
x = self.linear1(x)
x = F.relu(x)
x = self.linear2(x)
return x
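# Shape check (a sketch): with 32x32x3 inputs, 3x3 convolutions (no padding) and 2x2/stride-2 pooling:
# 32 -> conv -> 30 -> pool -> 15 -> conv -> 13 -> pool -> 6 -> conv -> 4
# so Flatten() produces 64 * 4 * 4 = 1024 features, which is why in_features=1024 above.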
###Output
_____no_output_____
###Markdown
4. Model training & prediction: next we train the model in a loop, in which we: - use the ``paddle.optimizer.Adam`` optimizer for optimization; - use ``F.cross_entropy`` to compute the loss; - use ``paddle.io.DataLoader`` to load the data and assemble batches.
###Code
epoch_num = 10
batch_size = 32
learning_rate = 0.001
val_acc_history = []
val_loss_history = []
def train(model):
print('start training ... ')
# turn into training mode
model.train()
opt = paddle.optimizer.Adam(learning_rate=learning_rate,
parameters=model.parameters())
train_loader = paddle.io.DataLoader(cifar10_train,
shuffle=True,
batch_size=batch_size)
valid_loader = paddle.io.DataLoader(cifar10_test, batch_size=batch_size)
for epoch in range(epoch_num):
for batch_id, data in enumerate(train_loader()):
x_data = data[0]
y_data = paddle.to_tensor(data[1])
y_data = paddle.unsqueeze(y_data, 1)
logits = model(x_data)
loss = F.cross_entropy(logits, y_data)
if batch_id % 1000 == 0:
print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, loss.numpy()))
loss.backward()
opt.step()
opt.clear_grad()
# evaluate model after one epoch
model.eval()
accuracies = []
losses = []
for batch_id, data in enumerate(valid_loader()):
x_data = data[0]
y_data = paddle.to_tensor(data[1])
y_data = paddle.unsqueeze(y_data, 1)
logits = model(x_data)
loss = F.cross_entropy(logits, y_data)
acc = paddle.metric.accuracy(logits, y_data)
accuracies.append(acc.numpy())
losses.append(loss.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("[validation] accuracy/loss: {}/{}".format(avg_acc, avg_loss))
val_acc_history.append(avg_acc)
val_loss_history.append(avg_loss)
model.train()
model = MyNet(num_classes=10)
train(model)
plt.plot(val_acc_history, label = 'validation accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 0.8])
plt.legend(loc='lower right')
###Output
_____no_output_____ |
src/notebooks/learning/common_learning.ipynb | ###Markdown
Learning settings ***
###Code
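# Imports assumed by the cells below (a sketch; the project-specific helpers such as
# get_dataframe, process_train_df, model_cross_validation and model_final_validation
# are expected to come from a local module of this repository and are not shown here).
from catboost import CatBoostClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression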
DATA_PATH = '..\\..\\scripts\\_features_all'
DATA_TYPE = "bt"
WINDOW_TYPE = "rolling"
WINDOW_SIZE = "60s"
DATA_TYPES = ['wifi', 'bt', 'location']
WINDOW_TYPES = ['rolling', 'sampling']
WINDOWS = ['5s', '10s', '30s', '60s', '90s', '120s', '240s', '600s']
catboost_params = {
'iterations': 100,
'depth': 6,
'loss_function': 'Logloss',
'l2_leaf_reg': 1,
'leaf_estimation_iterations': 5,
'logging_level': 'Silent'
}
randomforest_params = {
'n_estimators': 100,
'criterion': 'gini',
'max_depth': None,
'min_samples_split': 2,
'min_samples_leaf': 1,
'max_features': 'auto',
'n_jobs': -1,
'class_weight': 'balanced',
}
svc_params = {
'C': 1,
'kernel': 'rbf',
'degree': 1,
'gamma': 5,
'probability': True
}
logreg_params = {
'penalty': 'l2',
'C': 0.01,
'solver': 'newton-cg',
'max_iter': 1000,
'n_jobs': -1
}
MODELS = [
(CatBoostClassifier(**catboost_params), "CatBoost"),
(RandomForestClassifier(**randomforest_params), "RandomForest"),
(SVC(**svc_params), "SVC"),
(LogisticRegression(**logreg_params), "LogReg")
]
###Output
_____no_output_____
###Markdown
*** Cross-validation
###Code
for data_type in DATA_TYPES:
for wnd_type in WINDOW_TYPES:
for wnd in WINDOWS:
df, RESULTS_FILE = get_dataframe(DATA_PATH, data_type, wnd_type, wnd)
features = df.columns.to_list()
df, _ = process_train_df(df, features)
df['labels'] = df['user']
for model, tag in MODELS:
print(data_type, wnd_type, wnd, tag)
model_cross_validation(RESULTS_FILE, model, df, tag, data_type, wnd_type, wnd, is_SVM=tag=='SVC')
###Output
_____no_output_____
###Markdown
Final Validation
###Code
for data_type in DATA_TYPES:
for wnd_type in WINDOW_TYPES:
for wnd in WINDOWS:
df, RESULTS_FILE = get_dataframe(DATA_PATH, data_type, wnd_type, wnd)
features = df.columns.to_list()
df, _ = process_train_df(df, features)
df['labels'] = df['user']
for model, tag in MODELS:
print(data_type, wnd_type, wnd, tag)
model_final_validation(RESULTS_FILE, model, df, tag, data_type, wnd_type, wnd, is_SVM=tag=='SVC')
###Output
_____no_output_____ |
CODE/CNN_scripts/process-video-JCL-short-with-plotting.ipynb | ###Markdown
This is where the actual image inference happens
###Code
import pickle
import os
max_batches = 10000
t = time.time()
with torch.no_grad():
for batch_num, image_batch in enumerate(data_loader):
if batch_num >= max_batches:
break
if batch_num % 250 == 0:
print('{} images processed'.format(batch_num * 2))
for i in range(len(image_batch)):
image_batch[i]['image'] = np.squeeze(image_batch[i]['image'])
image_batch[i]['image'] = image_batch[i]['image'].to(cuda0)
image_batch[i]['width'] = image_batch[i]['width'].to(cuda0).item()
image_batch[i]['height'] = image_batch[i]['height'].to(cuda0).item()
#print(image_batch['image'].shape)
#print(image_batch)
predictions = model(image_batch)
for preds, im_dict in zip(predictions, image_batch):
name = os.path.splitext(os.path.basename(im_dict['file_name'][0]))[0]
file = os.path.join(save_root, '{}-predictions.pkl'.format(name))
preds_instance = preds["instances"].to("cpu")
with open(file, 'wb') as out:
pickle.dump(preds_instance, out)
out.close()
print(time.time() - t)
os.path.join(images_folder, "*.tiff")
###Output
_____no_output_____
###Markdown
Below is some stuff to dig into the output
###Code
type(predictions[0]['instances'])
import os
name = image_batch[0]['file_name'][0].split('/')[-2]
image_batch[0]['file_name'][0].split('/')
name
# file = os.path.join(save_root, '{}-predictions.pkl'.format(name))
# preds = predictions[0]["instances"].to("cpu")
# with open(file, 'wb') as out:
# pickle.dump(preds, out)
# out.close()
np_detections_file = os.path.join(save_root, '{}_detections.npy'.format(name))
files = sorted(glob.glob(os.path.join(save_root, '*-predictions.pkl')))
all_detections = []
raw_instances = []
for file in files[:]:
with open(file, 'rb') as readfile:
detections=pickle.load(readfile)
#print( detections )
detection_dict = detections.get_fields()
detection_dict['pred_boxes'] = detection_dict['pred_boxes'].tensor.numpy()
detection_dict['scores'] = detection_dict['scores'].numpy()
detection_dict['pred_classes'] = detection_dict['pred_classes'].numpy()
detection_dict['image_name'] = os.path.basename(file).split('-')[0]
all_detections.append(detection_dict)
raw_instances.append(detections)
np_detections_file = os.path.join(save_root, '{}_detections.npy'.format(name))
np.save(np_detections_file, all_detections)
files
raw_instances[0]
os.getcwd()
import numpy as np
import glob
import matplotlib.pyplot as plt
import matplotlib.patches as patches
%matplotlib inline
files = [np_detections_file]
print( files )
width = 20
height = 20
fig = plt.figure( figsize = ( width, height ) )
for file in files[0:1]:
detections = np.load(file, allow_pickle=True)
counter = 0
for detection in detections:
print(detection['scores'].shape)
print(detection['image_name'])
img = plt.imread( root + 'DATA/images_to_process/' + detection['image_name'] + '.tiff' )
#img = cv2.cvtColor( img, cv2.COLOR_BGR2RGB )
plt.imshow( img )
# Get the current reference
ax = plt.gca()
ax.axes.xaxis.set_visible(False)
ax.axes.yaxis.set_visible(False)
for item in detection['pred_boxes']:
x1 = item[ 0 ]
x2 = item[ 2 ]
y1 = item[ 1 ]
y2 = item[ 3 ]
wid = x2 - x1
hei = y2 - y1
# Create a Rectangle patch
rect = patches.Rectangle( (x1, y1 ), wid, hei, linewidth=1, edgecolor='c', facecolor='none' )
# Add the patch to the Axes
ax.add_patch( rect )
plt.savefig( root + 'RESULTS/images_to_process/visualization/' + f'{counter:05}' , bbox_inches = 'tight' )
fig.clear()
counter += 1
item
thermal_file = 'X:/baboon/archive/rawdata/video/thermal/2019_summer/cliff_data/thermal/viewpoint_1/T1020/20190806/20190806_16_20_00-__TIME__.seq'
import fnv
import fnv.reduce
import fnv.file
import numpy as np
import seaborn as sns
import datetime as dt
import time
import pandas as pd
from tkinter import filedialog
from tkinter import *
import os
import matplotlib.pyplot as plt
import glob
import h5py
import gc
import shlex
import pipes
from subprocess import check_call
im = fnv.file.ImagerFile( thermal_file )
for item in detections[1]['pred_boxes']:
print(item)
print()
# image_files = glob.glob(images_folder + '/*.jpg')
detections
# detections
files
import matplotlib.pyplot as plt
im_ind = 0
images_folder
files = sorted(glob.glob(images_folder + '/*.tiff'))
im = plt.imread(files[0])
print(im.shape)
import matplotlib.pyplot as plt
make_video = True
draw_plots = False
max_frames = 5000
fps = 30
output_file = '/home/golden/Dropbox/locusts/test_video_full.mp4'
if make_video:
frames = 0
out = cv2.VideoWriter(output_file, cv2.VideoWriter_fourcc(*'mp4v'), fps, (3840, 2160))
print('here')
# for im_ind in np.linspace(0, len(raw_instances)-1, 20, dtype=int):
for im_ind in range(len(raw_instances)):
# for im_ind in range(60):
if im_ind >= max_frames:
break
if im_ind % 500 == 0:
print(im_ind)
# observation_name = raw_instances[im_ind].image_name.split('_')[0] + '_' + raw_instances[im_ind].image_name.split('_')[1]
# image_raw = plt.imread(os.path.join(os.path.dirname(images_folder), observation_name, raw_instances[im_ind].image_name + '.jpg'))
image_raw = plt.imread( files[im_ind] )
v = Visualizer(image_raw,
metadata=train_metadata,
scale=1.0,
)
v = v.draw_instance_predictions(raw_instances[im_ind])
if make_video:
out.write(v.get_image()[...,::-1])
frames += 1
if draw_plots:
plt.figure(figsize=(20,20))
plt.imshow(v.get_image())
if make_video:
out.release()
print('num frames {}'.format(frames))
os.path.exists(output_file)
v.get_image()[...,::-1].shape
plt.imshow(v.get_image()[...,::-1])
v.get_image().shape
test = np.load(os.path.join(save_root, '{}-predictions.npy'.format(name)), allow_pickle=True)
files = sorted(glob.glob(os.path.join(save_root, '*-predictions.pkl')))
file = files[0]
with open(file, 'rb') as readfile:
    detections = pickle.load(readfile)
from detectron2.engine import DefaultTrainer
from detectron2.config import get_cfg
import os
cfg = get_cfg()
cfg.merge_from_file(
"/home/golden/detectron2-master/configs/COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
)
cfg.DATASETS.TRAIN = ("salmon-train",)
cfg.DATASETS.TEST = ("salmon-val",)
cfg.DATALOADER.NUM_WORKERS = 6
cfg.DATALOADER.ASPECT_RATIO_GROUPING = False
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
"COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
)
cfg.SOLVER.IMS_PER_BATCH = 8
cfg.SOLVER.BASE_LR = 0.019
cfg.SOLVER.MAX_ITER = (2000)
cfg.SOLVER.WARMUP_ITERS = 100
cfg.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = (256)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 2
cfg.TEST.EVAL_PERIOD = 100
cfg.TEST.DETECTIONS_PER_IMAGE = 200
cfg.INPUT.MIN_SIZE_TEST = (0)
cfg.INPUT.MAX_SIZE_TEST = (4000)
# Check validation
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, "model_final.pth")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
cfg.DATASETS.TEST = ("salmon-val", )
predictor = DefaultPredictor(cfg)
for d in val_dicts:
im = cv2.imread(d["file_name"])
outputs = predictor(im)
v = Visualizer(im[:, :, ::-1],
metadata=train_metadata,
scale=1.0,
)
v = v.draw_instance_predictions(outputs["instances"].to("cpu"))
cv2_imshow(v.get_image()[:, :, ::-1])
###Output
_____no_output_____ |
08-Pandas-NumPy-ScikitLearn.ipynb | ###Markdown
Pandas and Scikit-Learn[Pandas](http://pandas.pydata.org) and [scikit-learn](http://scikit-learn.org/stable/) are two of the most popular scientific Python libraries. Pandas is commonly used to preprocess, reshape, and transform the data prior to handing it to scikit-learn to fit a model. Three-Minute Intro to Scikit-LearnIt's the go-to library for machine learning in Python. It uses a consistent API for specifying and fitting models. For *supervised* learning tasks, you have a *feature matrix* `X`, an `[N x P]` NumPy array, and a *target array* `y`, typically a 1-dimensional array of length `N`.
###Code
from sklearn.datasets import load_boston
boston = load_boston()
y = boston['target']
X = boston['data']
print(boston['feature_names'])
y[:5]
X[:5]
###Output
_____no_output_____
###Markdown
Scikit-learn cleanly separates the *model specification* from the *model fitting*.You specify your model by instantiating an *estimator*, for example `sklearn.linear_model.LinearRegression`.
###Code
from sklearn.linear_model import LinearRegression
model = LinearRegression(normalize=True)
###Output
_____no_output_____
###Markdown
You can set *hyperparameters* (parameters that are "outside", or not learned by the model) when you specify the model. `normalize=True` is a hyperparameter that tells scikit-learn to normalize the data before fitting. Then you fit the model by passing the data (feature matrix `X` and target array `y`) to the `.fit` method.At this point, the estimator *learns the parameters* that best fit the data.For a linear regression, that's the `.coef_` attribute, which stores the parameters of the linear model (one per feature, plus an intercept by default).
###Code
model.fit(X, y)
model.coef_
###Output
_____no_output_____
###Markdown
The Problem1. Different data models: - NumPy is homogeneous, n-dimensional arrays - Pandas is heterogeneous, 2-dimensional tables 2. Pandas has additional dtypes Pandas and scikit-learn have largely overlapping, but still different, data models. Scikit-learn uses NumPy arrays for most everything (the exception being SciPy sparse matrices for certain tasks, which we'll ignore). Pandas builds on top of NumPy, but has made several extensions to its type system, creating a slight rift between the two. Most notably, pandas supports heterogeneous data and has added several extension data-types on top of NumPy. 1. Homogeneity vs. HeterogeneityNumPy `ndarray`s (and so scikit-learn feature matrices) are *homogeneous*: they must have a single dtype, regardless of the number of dimensions. Pandas `DataFrame`s are *heterogeneous*, and can store columns of multiple dtypes within a single DataFrame.
###Code
%matplotlib inline
import seaborn as sns
import numpy as np
import pandas as pd
x = np.array([
[10, 1.0], # mix of integer and floats
[20, 2.0],
[30, 3.0],
])
x.dtype
df = pd.DataFrame([
[10, 1.0],
[20, 2.0],
[30, 3.0]
])
df.dtypes
###Output
_____no_output_____
###Markdown
2. Extension TypesPandas has implemented some *extension dtypes*: `Categoricals` and datetimes with timezones. These extension types cannot be expressed natively as NumPy arrays, *even if they are a single homogeneous dimension*, and must go through some kind of (potentially lossy) conversion process when converting to NumPy.
###Code
s = pd.Series(pd.Categorical(['a', 'b', 'c', 'a'],
categories=['d', 'a', 'b', 'c'],
ordered=True))
s
###Output
_____no_output_____
###Markdown
Casting this to a NumPy array loses the categories and ordered information.
###Code
np.asarray(s)
###Output
_____no_output_____
###Markdown
"Real-world" data is often complex and heterogeneous, making pandas the tool of choice.However, tools like scikit-learn, which do not depend on pandas, can't use itsricher data model.In my experience, most of the time the different data models aren't an issue.Recent versions of scikit-learn are much better about taking and returning DataFrames where possible (e.g. `train_test_split`).That said, there are a few rough edges that you can run into.In these cases, we need a way of bridging the gap between pandas' DataFrames and the NumPy arrays appropriate for scikit-learn.Fortunately the tools are all there to make this conversion smooth. The DataFor our example we'll work with a simple dataset on tips:
###Code
df = pd.read_csv("data/tips.csv")
df.head()
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 244 entries, 0 to 243
Data columns (total 7 columns):
total_bill 244 non-null float64
tip 244 non-null float64
sex 244 non-null object
smoker 244 non-null object
day 244 non-null object
time 244 non-null object
size 244 non-null int64
dtypes: float64(2), int64(1), object(4)
memory usage: 13.4+ KB
###Markdown
Exercise: Target, Feature arraysSplit the DataFrame `df` into a `Series` called `y` containing the `tip` amount, and a DataFrame `X` containing everything else.Our target variable is the tip amount. The remainder of the columns make up our features.
###Code
%load solutions/sklearn_pandas_split.py
y.head()
X.head()
###Output
_____no_output_____
###Markdown
Notice the feature matrix is a mixture of numeric and categorical variables. In statistics, a categorical variable is a variable that comes from a limited, fixed set of values. At the moment though, the actual data-type of those columns is just `object`, containing Python strings. We'll convert those to pandas `Categorical`s later. The StatsOur focus is on how to use pandas and scikit-learn together, not how to build the best tip-predicting model. To keep things simple, we'll fit a linear regression to predict `tip`, rather than some more complicated model.
###Code
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
When you fit a linear regression, you (or scikit-learn, rather) end up having to solve an equation to find the line that minimizes the mean squared error between the predictions and observations. The equation that gives the best-fit line is$$\hat{\boldsymbol{\beta}} = \left(\boldsymbol{X}^T\boldsymbol{X}\right)^{-1} \boldsymbol{X}^T \boldsymbol{y}$$where- $\hat{\boldsymbol{\beta}}$ is our estimate for the vector of coefficients describing the best-fit line (`LinearRegression.coef_`)- $\boldsymbol{X}$ is the feature matrix- $\boldsymbol{y}$ is the target array (tip amount)There's no need to worry about that equation; it likely won't make sense unless you've seen it before.The only point I want to emphasize is that finding the best-fit line requires doing some matrix multiplications.If we just tried to fit a regression on our raw data, we'd get an error:
###Code
%xmode Plain
lm = LinearRegression()
lm.fit(X, y)
###Output
Exception reporting mode: Plain
###Markdown
The message, "could not convert string to float" says it all.We (or our library) need to somehow convert our *categorical* data (`sex`, `smoker`, `day`, and `time`) into numeric data.The next two sections offer some possible ways of doing that conversion. Dummy Encoding Dummy encoding is one approach to converting categorical to numeric data.It expands each categorical column to *multiple* columns, one per distinct value.The values in these new dummy-encoded columns are either 1, indicating the presence of that value in that observation, or 0.Versions of this are implemented in both scikit-learn and pandas. I recommend the pandas version, `get_dummies`. It offers a few conveniences:- Operates on multiple columns at once- Passes through numeric columns unchanged- Preserves row and column labels- Provides a `drop_first` keyword for dropping a level per column. You might want this to avoid [perfect multicolinearity](https://en.wikipedia.org/wiki/Multicollinearity) if you have an intercept- Uses Categorical information (more on this later)
###Code
X_dummy = pd.get_dummies(X)
X_dummy.head()
lm = LinearRegression()
lm.fit(X_dummy, y)
pd.Series(lm.coef_, index=X_dummy.columns)
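# A quick illustration of the `drop_first` convenience mentioned above: dropping one level
# per column avoids perfectly collinear dummy columns when the model fits an intercept.
pd.get_dummies(X, drop_first=True).head()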
###Output
_____no_output_____
###Markdown
RefinementsOur last approach worked, but there's still room for improvement. 1. We can't easily go from dummies back to categoricals 2. Doesn't integrate with scikit-learn `Pipeline` objects. 3. If working with a larger dataset and `partial_fit`, codes could be missing from subsets of the data. 4. Memory inefficient if there are many records relative to distinct categories These items become more important when you go to "productionize" your model. But keep in mind that we've solved the basic problem of moving from pandas DataFrames to NumPy arrays for scikit-learn; now we're just making the bridge sturdier. To accomplish this we'll store additional information in the *type* of the column and write a [Transformer](http://scikit-learn.org/stable/modules/generated/sklearn.base.TransformerMixin.html) to handle the conversion to and from dummies. Aside: scikit-learn Pipelines Rarely when doing data analysis do we plug a raw dataset directly into a model. There's typically some preprocessing and feature engineering before the fitting stage. `scikit-learn` provides the `Pipeline` interface for chaining together a sequence of fit and transform steps. For example, suppose we wanted our pipeline to- standardize each column (subtract the mean, normalize the variance to 1)- compute all the [interaction terms](https://en.wikipedia.org/wiki/Interaction_(statistics))- fit a Lasso regressionWithout using a scikit-learn `Pipeline`, you need to assign the output of each step to a temporary variable and manually shuttle data through to the end:
###Code
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
from sklearn.linear_model import Lasso
X_scaled = StandardScaler().fit_transform(X_dummy, y)
X_poly = (PolynomialFeatures(interaction_only=True)
.fit_transform(X_scaled, y))
model = Lasso(alpha=.5)
model.fit(X_poly, y)
###Output
_____no_output_____
###Markdown
With pipelines, this becomes:
###Code
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(StandardScaler(),
PolynomialFeatures(interaction_only=True),
Lasso(alpha=.5))
pipe.fit(X_dummy, y)
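# Pipelines also compose with GridSearchCV for hyperparameter search: each CV split re-fits
# the preprocessing steps, so the held-out fold never leaks into training. A minimal sketch;
# the grid values here are illustrative, not tuned.
from sklearn.model_selection import GridSearchCV
param_grid = {'lasso__alpha': [0.1, 0.5, 1.0],
              'polynomialfeatures__interaction_only': [True, False]}
search = GridSearchCV(pipe, param_grid, cv=5)
search.fit(X_dummy, y)
search.best_params_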
###Output
_____no_output_____
###Markdown
I always recommend the pipeline version. For one thing, I prefer the aesthetics. Especially with longer chains of computations, pipelines remove the need for many temporary variables. They are also easier to save to disk with [joblib](https://pythonhosted.org/joblib/persistence.html). But the most important reason is the interaction of `Pipeline` and [`GridSearchCV`](http://scikit-learn.org/stable/modules/grid_search.html). When fitting a model you'll typically have a space of *hyperparameters* to search over. These are the parameters passed to each estimator's `__init__` method, so before the `.fit` step. In the pipeline above, some examples of hyperparameters are the `interaction_only` parameter of `PolynomialFeatures` and the `alpha` parameter of `Lasso`. A common mistake in machine learning is to let information from your test dataset leak into your training dataset by preprocessing *before* splitting. This means the score you get on the test set may not be an accurate representation of the score you'll get on new data. `scikit-learn` provides many tools for you to write custom transformers that work well in its `Pipeline`. When writing a custom transformer, you should:- inherit from `sklearn.base.TransformerMixin`- implement a `.fit` method that takes a feature matrix `X` and a target array `y`, returning `self`.- implement a `.transform` method that also takes an `X` and a `y`, returning the transformed feature matrixBelow, we'll write a couple custom transformers to make our last regression more robust. But before that, we need to examine one of pandas' extension dtypes. Pandas `Categorical` dtypeWe've already talked about Categoricals, but as a refresher:- There are a fixed set of possible values the variable can take- The categories can be ordered or unordered- The array of data is dictionary encoded, so the set of possible values is stored once, and the array of actual values is stored efficiently as an array of integers `Categorical`s can be constructed either with the `pd.Categorical` constructor, or using the `.astype` method on a `Series`. For example
###Code
day = df['day'].astype('category').head()
day.head()
###Output
_____no_output_____
###Markdown
With `.astype('category')` we're just using the defaults of- The set of categories is just the set present in the column- There is no orderingThe categorical-specific information of a `Series` is stored under the `.cat` accessor.
###Code
day.cat.categories
day.cat.ordered
###Output
_____no_output_____
###Markdown
The following class is a transformer that converts a set of columns in a DataFrame to the categorical dtype.
###Code
from sklearn.base import TransformerMixin
class CategoricalTransformer(TransformerMixin):
"Converts a set of columns in a DataFrame to categoricals"
def __init__(self, columns):
self.columns = columns
def fit(self, X, y=None):
'Records the categorical information'
self.cat_map_ = {col: X[col].astype('category').cat
for col in self.columns}
return self
def transform(self, X, y=None):
X = X.copy()
for col in self.columns:
X[col] = pd.Categorical(X[col],
categories=self.cat_map_[col].categories,
ordered=self.cat_map_[col].ordered)
return X
def inverse_transform(self, trn, y=None):
trn = trn.copy()
trn[self.columns] = trn[self.columns].apply(lambda x: x.astype(object))
return trn
###Output
_____no_output_____
###Markdown
The most important rule when writing custom objects to be used in a `Pipeline` is that the `transform` and `inverse_transform` steps shouldn't modify `self`. That should only occur in `fit`. Because we inherited from `TransformerMixin`, we get the `fit_transform` method.
###Code
ct = CategoricalTransformer(columns=['sex', 'smoker', 'day', 'time'])
X_cat = ct.fit_transform(X)
X_cat.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 244 entries, 0 to 243
Data columns (total 6 columns):
total_bill 244 non-null float64
sex 244 non-null category
smoker 244 non-null category
day 244 non-null category
time 244 non-null category
size 244 non-null int64
dtypes: category(4), float64(1), int64(1)
memory usage: 5.3 KB
###Markdown
DummyEncoderWe now have the pieces in place to solve all our issues.We'll write a class `DummyEncoder` for use in a scikit-learn `Pipeline`.The entirety is given in the next cell, but we'll break it apart piece by piece.
###Code
class DummyEncoder(TransformerMixin):
def fit(self, X, y=None):
self.columns_ = X.columns
self.cat_cols_ = X.select_dtypes(include=['category']).columns
self.non_cat_cols_ = X.columns.drop(self.cat_cols_)
self.cat_dtypes_ = {col: X[col].dtype for col in self.cat_cols_}
return self
def transform(self, X, y=None):
# Could do basic asserts here, like checking that
# the column names / dtypes / categories match
return np.asarray(pd.get_dummies(X))
self = DummyEncoder()
trn = self.fit_transform(X_cat)
trn
###Output
_____no_output_____
###Markdown
Using our pipeline
###Code
columns = ['sex', 'smoker', 'day', 'time']
pipe = make_pipeline(CategoricalTransformer(columns), DummyEncoder(), LinearRegression())
pipe.fit(X, y)
yhat = pipe.predict(X)
sns.jointplot(y, y-yhat)
from sklearn.decomposition import PCA
pipe = make_pipeline(CategoricalTransformer(columns), DummyEncoder(), PCA())
trn = pipe.fit_transform(X)
sns.jointplot(trn[:, 0], trn[:, 1]);
###Output
_____no_output_____ |
75_GPU_Workshop/06a_Train_Model_XLA_GPU.ipynb | ###Markdown
Train Model with XLA_GPU (and CPU*)Some operations do not have XLA_GPU equivalents, so we still need to use CPU. IMPORTANT: You Must STOP All Kernels and Terminal SessionThe GPU is wedged at this point. We need to set it free!!
###Code
import tensorflow as tf
from tensorflow.python.client import timeline
import pylab
import numpy as np
import os
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
tf.logging.set_verbosity(tf.logging.INFO)
###Output
_____no_output_____
###Markdown
Reset TensorFlow GraphUseful in Jupyter Notebooks
###Code
tf.reset_default_graph()
###Output
_____no_output_____
###Markdown
Create TensorFlow Session
###Code
config = tf.ConfigProto(
log_device_placement=True,
)
config.gpu_options.allow_growth=True
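# Setting global_jit_level to ON_1 enables XLA JIT compilation for the whole session,
# so supported ops are clustered and compiled rather than executed one kernel at a time.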
config.graph_options.optimizer_options.global_jit_level \
= tf.OptimizerOptions.ON_1
print(config)
sess = tf.Session(config=config)
print(sess)
###Output
_____no_output_____
###Markdown
Generate Model Version (current timestamp)
###Code
from datetime import datetime
version = int(datetime.now().strftime("%s"))
###Output
_____no_output_____
###Markdown
Load Model Training and Test/Validation Data
###Code
num_samples = 100000
import numpy as np
import pylab
x_train = np.random.rand(num_samples).astype(np.float32)
print(x_train)
noise = np.random.normal(scale=0.01, size=len(x_train))
y_train = x_train * 0.1 + 0.3 + noise
print(y_train)
pylab.plot(x_train, y_train, '.')
x_test = np.random.rand(len(x_train)).astype(np.float32)
print(x_test)
noise = np.random.normal(scale=.01, size=len(x_train))
y_test = x_test * 0.1 + 0.3 + noise
print(y_test)
pylab.plot(x_test, y_test, '.')
with tf.device("/cpu:0"):
W = tf.get_variable(shape=[], name='weights')
print(W)
b = tf.get_variable(shape=[], name='bias')
print(b)
with tf.device("/device:XLA_GPU:0"):
x_observed = tf.placeholder(shape=[None],
dtype=tf.float32,
name='x_observed')
print(x_observed)
y_pred = W * x_observed + b
print(y_pred)
learning_rate = 0.025
with tf.device("/device:XLA_GPU:0"):
y_observed = tf.placeholder(shape=[None], dtype=tf.float32, name='y_observed')
print(y_observed)
loss_op = tf.reduce_mean(tf.square(y_pred - y_observed))
optimizer_op = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer_op.minimize(loss_op)
print("Loss Scalar: ", loss_op)
print("Optimizer Op: ", optimizer_op)
print("Train Op: ", train_op)
###Output
_____no_output_____
###Markdown
Randomly Initialize Variables (Weights and Bias)The goal is to learn more accurate Weights and Bias during training.
###Code
with tf.device("/cpu:0"):
init_op = tf.global_variables_initializer()
print(init_op)
sess.run(init_op)
print("Initial random W: %f" % sess.run(W))
print("Initial random b: %f" % sess.run(b))
###Output
_____no_output_____
###Markdown
View Accuracy of Pre-Training, Initial Random VariablesWe want this to be close to 0, but it's relatively far away. This is why we train!
###Code
def test(x, y):
return sess.run(loss_op, feed_dict={x_observed: x, y_observed: y})
test(x_test, y_test)
###Output
_____no_output_____
###Markdown
Setup Loss Summary Operations for Tensorboard
###Code
loss_summary_scalar_op = tf.summary.scalar('loss', loss_op)
loss_summary_merge_all_op = tf.summary.merge_all()
train_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/xla_gpu/%s/train' % version,
graph=tf.get_default_graph())
test_summary_writer = tf.summary.FileWriter('/root/tensorboard/linear/xla_gpu/%s/test' % version,
graph=tf.get_default_graph())
###Output
_____no_output_____
###Markdown
Train Model
###Code
%%time
from tensorflow.python.client import timeline
with tf.device("/device:XLA_GPU:0"):
run_metadata = tf.RunMetadata()
max_steps = 401
for step in range(max_steps):
if (step < max_steps - 1):
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train})
else:
test_summary_log, _ = sess.run([loss_summary_merge_all_op, loss_op], feed_dict={x_observed: x_test, y_observed: y_test})
train_summary_log, _ = sess.run([loss_summary_merge_all_op, train_op], feed_dict={x_observed: x_train, y_observed: y_train},
options=tf.RunOptions(trace_level=tf.RunOptions.SOFTWARE_TRACE),
run_metadata=run_metadata)
trace = timeline.Timeline(step_stats=run_metadata.step_stats)
with open('timeline-xla-gpu.json', 'w') as trace_file:
trace_file.write(trace.generate_chrome_trace_format(show_memory=True))
if step % 10 == 0:
print(step, sess.run([W, b]))
train_summary_writer.add_summary(train_summary_log, step)
train_summary_writer.flush()
test_summary_writer.add_summary(test_summary_log, step)
test_summary_writer.flush()
pylab.plot(x_train, y_train, '.', label="target")
pylab.plot(x_train, sess.run(y_pred,
feed_dict={x_observed: x_train,
y_observed: y_train}),
".",
label="predicted")
pylab.legend()
pylab.ylim(0, 1.0)
###Output
_____no_output_____
###Markdown
View Loss Summaries in TensorboardNavigate to the `Scalars` and `Graphs` tab at this URL:http://[ip-address]:30002 Save Graph For OptimizationWe will use this later.
###Code
import os
optimize_me_parent_path = '/root/models/optimize_me/linear/xla_gpu'
saver = tf.train.Saver()
os.system('rm -rf %s' % optimize_me_parent_path)
os.makedirs(optimize_me_parent_path)
unoptimized_model_graph_path = '%s/unoptimized_xla_gpu.pb' % optimize_me_parent_path
tf.train.write_graph(sess.graph_def,
'.',
unoptimized_model_graph_path,
as_text=False)
print(unoptimized_model_graph_path)
model_checkpoint_path = '%s/model.ckpt' % optimize_me_parent_path
saver.save(sess,
save_path=model_checkpoint_path)
print(model_checkpoint_path)
print(optimize_me_parent_path)
os.listdir(optimize_me_parent_path)
sess.close()
###Output
_____no_output_____
###Markdown
Show Graph
###Code
%%bash
summarize_graph --in_graph=/root/models/optimize_me/linear/xla_gpu/unoptimized_xla_gpu.pb
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import re
from google.protobuf import text_format
from tensorflow.core.framework import graph_pb2
def convert_graph_to_dot(input_graph, output_dot, is_input_graph_binary):
graph = graph_pb2.GraphDef()
with open(input_graph, "rb") as fh:
if is_input_graph_binary:
graph.ParseFromString(fh.read())
else:
text_format.Merge(fh.read(), graph)
with open(output_dot, "wt") as fh:
print("digraph graphname {", file=fh)
for node in graph.node:
output_name = node.name
print(" \"" + output_name + "\" [label=\"" + node.op + "\"];", file=fh)
for input_full_name in node.input:
parts = input_full_name.split(":")
input_name = re.sub(r"^\^", "", parts[0])
print(" \"" + input_name + "\" -> \"" + output_name + "\";", file=fh)
print("}", file=fh)
print("Created dot file '%s' for graph '%s'." % (output_dot, input_graph))
input_graph='/root/models/optimize_me/linear/xla_gpu/unoptimized_xla_gpu.pb'
output_dot='./unoptimized_xla_gpu.dot'
convert_graph_to_dot(input_graph=input_graph, output_dot=output_dot, is_input_graph_binary=True)
%%bash
dot -T png ./unoptimized_xla_gpu.dot \
-o ./unoptimized_xla_gpu.png > /tmp/a.out
from IPython.display import Image
Image('./unoptimized_xla_gpu.png', width=1024, height=768)
###Output
_____no_output_____
###Markdown
XLA JIT Visualizations
###Code
%%bash
dot -T png /tmp/hlo_graph_1.*.dot -o ./hlo_graph_1.png &>/dev/null
dot -T png /tmp/hlo_graph_10.*.dot -o ./hlo_graph_10.png &>/dev/null
dot -T png /tmp/hlo_graph_50.*.dot -o ./hlo_graph_50.png &>/dev/null
dot -T png /tmp/hlo_graph_75.*.dot -o ./hlo_graph_75.png &>/dev/null
###Output
_____no_output_____ |
evaluate-predictions/which-systems-were-modified.ipynb | ###Markdown
Measure Predicted Changes in Phase DiagramsGiven a list of compounds that are predicted to be stable by Dipendra's DL model, measure changes in the phase diagrams.
###Code
%matplotlib inline
from pymatgen import Composition, Element
from pymatgen.phasediagram.maker import PhaseDiagram
from pymatgen.phasediagram.plotter import PDPlotter
from pymatgen.phasediagram.entries import PDEntry
from pymatgen.phasediagram.analyzer import PDAnalyzer
from sklearn.cluster import AgglomerativeClustering
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import silhouette_score
from pathos.multiprocessing import ProcessingPool as Pool
import itertools
import os
import re
import pandas as pd
import numpy as np
pool = Pool(processes=os.cpu_count())
###Output
_____no_output_____
###Markdown
Load in the OQMD data
###Code
oqmd_data = pd.read_csv('oqmd_all.txt', delim_whitespace=True)
print('Read in %d entries'%len(oqmd_data))
###Output
Read in 506114 entries
###Markdown
Rename `comp` to `composition` (you'll thank me later)
###Code
oqmd_data.rename(columns={'comp':'composition'}, inplace=True)
oqmd_data.head()
###Output
_____no_output_____
###Markdown
Eliminate entries with no `delta_e`
###Code
oqmd_data = oqmd_data[~ oqmd_data['delta_e'].isnull()]
print('%d entries with delta_e'%len(oqmd_data))
###Output
506114 entries with delta_e
###Markdown
Eliminate insanely low formation enthalpies
###Code
oqmd_data = oqmd_data.query('delta_e > -10')
oqmd_data.describe()
###Output
_____no_output_____
###Markdown
Convert this data to PDEntries. This is the input format that `pymatgen`'s PhaseDiagram expects
###Code
%%time
def get_pdentry(row, attribute=None):
comp = Composition(row['composition'])
return PDEntry(comp.fractional_composition, row['delta_e'], comp.reduced_formula, attribute)
oqmd_data['pdentry'] = oqmd_data.apply(lambda x: get_pdentry(x, 'oqmd'), axis=1)
###Output
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for Ne. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for He. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for Ar. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
###Markdown
List out the system for each entry
###Code
elem_re = re.compile('[A-Z][a-z]?')
def get_elems(s):
return ''.join(sorted(set(elem_re.findall(s))))
assert get_elems('AlFeFe2') == 'AlFe'
oqmd_data['system'] = oqmd_data['composition'].apply(get_elems)
###Output
_____no_output_____
###Markdown
Make a function to get data from a single system
###Code
def get_data_from_system(data, system):
"""Extract rows from a pandas array that are in a certain phase diagram
:param data: DataFrame, data from which to query. Must contain column "system"
:param system: list/set, list of elements to serve as input
:return: DataFrame, with only entries that exclusively contain these elements"""
# Get the systems that make up this phase diagram
constit_systems = set()
for sys in itertools.product(system, repeat=len(system)):
constit_systems.add(''.join(sorted(set(sys))))
# Get all points that are at any of those systems
query_str = ' or '.join(['system == "%s"'%s for s in constit_systems])
return data.query(query_str)
assert set(get_data_from_system(oqmd_data, ['Al','Ni','Zr'])['system']) == {'Al', 'Ni', 'Zr', 'AlNi', 'AlZr', 'NiZr', 'AlNiZr'}
###Output
_____no_output_____
###Markdown
Plot one of the ternary diagrams. This chart shows the compositions of stable phases in the Te-Ni-Hf system. Stable phases are those on the convex hull.
###Code
pdg = PhaseDiagram(get_data_from_system(oqmd_data, ['Te', 'Ni', 'Hf'])['pdentry'])
PDPlotter(pdg).show()
###Output
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/numpy/linalg/linalg.py:1821: RuntimeWarning: invalid value encountered in det
r = _umath_linalg.det(a, signature=signature)
###Markdown
Compute the number of stable entries in each phase diagramThis is our "baseline" measurement for what the diagrams look like before deep learning Get the lists of systems to searchCounting the number of stable phases in each system could take a very long time if we considered every possible system, so we restrict which elements and combinations we enumerate.
###Code
element_list = set()
oqmd_data['composition'].apply(lambda x: element_list.update(elem_re.findall(x)))
print('Number of elements:', len(element_list))
###Output
Number of elements: 89
###Markdown
Get certain groups of elements
###Code
noble_gases = ['He', 'Ne', 'Ar', 'Kr', 'Xe']
alkali_metals = ['Li', 'Na', 'K'] # , 'Rb', 'Cs'] - Only do the common ones
threed_tms = ['Sc', 'Ti', 'V', 'Cr', 'Mn', 'Fe', 'Co', 'Ni', 'Cu', 'Zn']
actinides = ['Ac', 'Th', 'Pa', 'U', 'Np', 'Pu'] # VASP only has these
lanthanides = [Element.from_Z(x).symbol for x in range(57, 72)]
tms = [Element.from_Z(x).symbol for x in range(1,102) if Element.from_Z(x).is_transition_metal > 0]
###Output
_____no_output_____
###Markdown
Remove noble gases, lanthanides, and actinides
###Code
element_list.difference_update(noble_gases)
element_list.difference_update(actinides)
element_list.difference_update(lanthanides)
print('Number of elements:', len(element_list))
###Output
Number of elements: 63
###Markdown
Get all of the binary systems
###Code
def assemble_list_of_systems(order):
"""Create a DataFrame of all possible systems with a certain number of elements"""
output = pd.DataFrame()
output['system'] = list(itertools.combinations(element_list, order))
return output
binary_systems = assemble_list_of_systems(2)
print('Generated %d binary systems'%len(binary_systems))
###Output
Generated 1953 binary systems
###Markdown
Get the ternary systems that contain a common alkali metal
###Code
ternary_systems = assemble_list_of_systems(3)
ternary_systems = ternary_systems[[any([x in s for x in alkali_metals]) for s in ternary_systems['system']]]
print('Generated %d ternary systems'%len(ternary_systems))
###Output
Generated 5491 ternary systems
###Markdown
Get quaternary systems that contain at least 3 3d TMsLW 2June17: I made these filters stringent to keep runtimes down.
###Code
quaternary_systems = assemble_list_of_systems(4)
quaternary_systems = quaternary_systems[[
sum([x in threed_tms for x in s]) >= 3 and all([x in tms for x in s]) for s in quaternary_systems['system']]]
print('Generated %d quaternary systems'%len(quaternary_systems))
###Output
Generated 2490 quaternary systems
###Markdown
Count the number of stable systemsUsing the already-computed hull distances, count how many stable phases there are
###Code
%%time
def find_number_of_stable_compounds(systems, data, colname):
"""Count the number of stable compounds in a list of systems
:param systems: DataFrame, list of systems to evaluate
:param data: DataFrame, stability data to use
:param colname: str, name of output column in `systems`"""
def count_stable(system):
pdf = PhaseDiagram(get_data_from_system(data, system)['pdentry'])
return len(pdf.stable_entries)
systems[colname] = systems['system'].apply(count_stable)
find_number_of_stable_compounds(binary_systems, oqmd_data, 'oqmd_stable')
###Output
CPU times: user 2min 59s, sys: 245 ms, total: 2min 59s
Wall time: 2min 59s
###Markdown
Repeat this for the binary and ternary cases
###Code
%%time
for systems in [ternary_systems, quaternary_systems]:
find_number_of_stable_compounds(systems, oqmd_data, 'oqmd_stable')
###Output
CPU times: user 37min 42s, sys: 4.08 s, total: 37min 46s
Wall time: 37min 42s
###Markdown
Which are the binary systems with the greatest number of stable phases?
###Code
binary_systems.sort_values('oqmd_stable', ascending=False).head()
binary_systems.to_csv('binary_systems.csv', index=False)
###Output
_____no_output_____
###Markdown
Assess the effect of adding DL predictionsLook at several things:1. In which systems did DL predict the most stable compositions2. In which systems did the convex hull change the most
###Code
%%time
def load_DL_predictions(path):
"""Loads in the predictions from Dipendra, and renames the `delta_e` column to match the `oqmd_data`
Also generates a `PDEntry` for each composition, and computes which system this entry is in
"""
output = pd.read_csv(path, sep=' ')
output.rename(columns={'delta_e_predicted': 'delta_e'}, inplace=True)
output['pdentry'] = output.apply(get_pdentry, axis=1)
output['system'] = output['composition'].apply(get_elems)
return output
dl_binary = load_DL_predictions(os.path.join('new-datasets', 'binary_stable-0.2.data.gz'))
###Output
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for Ne. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for He. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for Ar. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
###Markdown
Specific DiagramsThese generally are the systems with the greatest number of stable compounds, as identified in [where-are-stable-compounds.ipynb](where-are-stable-compounds.ipynb) Update data with both binary and ternary datasets
###Code
updated_oqmd_data = oqmd_data.append(dl_binary)
%%time
updated_oqmd_data = updated_oqmd_data.append(load_DL_predictions(os.path.join('new-datasets', 'ternary_stable-0.2.data.gz')))
###Output
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for He. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for Ar. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for Ne. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
###Markdown
Ternary systems to assess
###Code
ternary_choices = []
###Output
_____no_output_____
###Markdown
Binary SystemsAssess the changes to phase diagrams after adding new binary data points Update the OQMD data with these new values, and recompute the phase diagrams
###Code
%%time
find_number_of_stable_compounds(binary_systems, updated_oqmd_data, 'with_DL')
###Output
CPU times: user 4min 28s, sys: 272 ms, total: 4min 28s
Wall time: 4min 27s
###Markdown
Figure out how many diagrams changed
###Code
binary_systems['new_compounds'] = binary_systems['with_DL'] - binary_systems['oqmd_stable']
###Output
_____no_output_____
###Markdown
Which had the greatest number of new compounds
###Code
binary_systems.sort_values('new_compounds', ascending=False).head()
###Output
_____no_output_____
###Markdown
Plot one of them
###Code
pdg = PhaseDiagram(get_data_from_system(oqmd_data,
binary_systems.sort_values('new_compounds', ascending=False)['system'].tolist()[3])
['pdentry'])
PDPlotter(pdg).show()
pdg = PhaseDiagram(get_data_from_system(updated_oqmd_data,
binary_systems.sort_values('new_compounds', ascending=False)['system'].tolist()[3])
['pdentry'])
PDPlotter(pdg).show()
###Output
_____no_output_____
###Markdown
Ternary SystemsAdd in ternary data, reassess ternaries Update the ternary systems
###Code
%%time
find_number_of_stable_compounds(ternary_systems, updated_oqmd_data, 'with_DL')
ternary_systems['new_compounds'] = ternary_systems['with_DL'] - ternary_systems['oqmd_stable']
###Output
CPU times: user 28min 56s, sys: 2.58 s, total: 28min 58s
Wall time: 28min 55s
###Markdown
Plot the systems with the most new compounds
###Code
ternary_systems.sort_values('new_compounds', ascending=False, inplace=True)
ternary_systems.head()
pdg = PhaseDiagram(get_data_from_system(oqmd_data,
ternary_systems.sort_values('new_compounds', ascending=False)['system'].tolist()[0])
['pdentry'])
PDPlotter(pdg).show()
pdg = PhaseDiagram(get_data_from_system(updated_oqmd_data,
ternary_systems.sort_values('new_compounds', ascending=False)['system'].tolist()[0])
['pdentry'])
PDPlotter(pdg).show()
def get_clusters(system, data=updated_oqmd_data):
"""Get the distinct clusters of compounds within the predictions
:param pdg: PhaseDiagram, phase diagram to analyze
:return: list of compositions to pick"""
pdg = PhaseDiagram(get_data_from_system(data, system)['pdentry'])
# Get the DL predictions
stable_dl = [x for x in pdg.stable_entries if x.attribute is not 'oqmd']
# Convert compositions to vectors
comps = np.zeros((len(stable_dl), len(system)))
for j,e in enumerate(system):
for i,c in enumerate(stable_dl):
comps[i,j] = c.composition[e]
# Determine the "unclustered" score
score = 0
best_labels = [0,]*len(comps)
# Determine the optimal number of clusters
ac = AgglomerativeClustering()
best_score = -1
for size in range(2,5):
ac.set_params(n_clusters=size)
labels = ac.fit_predict(comps)
score = silhouette_score(comps, labels)
if score > best_score:
best_labels = labels
best_score = score
# Get the best labels for each cluster
output = []
for cluster in range(best_labels.max()+1):
# Get the points in the cluster
my_points = comps[best_labels==cluster]
# Get the point closest to the center
my_dists = np.power(my_points - my_points.mean(axis=0), 2).sum(axis=1)
output.append(Composition(dict((e,c) for e,c in zip(system,my_points[my_dists.argmin(),:]))).reduced_formula)
return output
get_clusters(['K','Si','F'])
ternary_systems.iloc[:10]
ternary_systems.to_csv('ternary_systems.csv', index=False)
best_ternary_systems = ternary_systems.iloc[:10].copy()
best_ternary_systems['choices'] = best_ternary_systems['system'].apply(get_clusters)
best_ternary_systems.to_csv('best_ternaries.csv', index=False)
###Output
_____no_output_____
###Markdown
Quaternary SystemsSame thing, but with quaternaries
###Code
%%time
updated_oqmd_data = updated_oqmd_data.append(load_DL_predictions(os.path.join('new-datasets', 'quaternary_stable-0.2.data.gz')))
###Output
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for Ne. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for Ar. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
/home/wardlt/software/miniconda3/lib/python3.6/site-packages/pymatgen/core/periodic_table.py:398: UserWarning: No electronegativity for He. Setting to infinity. This has no physical meaning, and is mainly done to avoid errors caused by the code expecting a float.
% self.symbol)
###Markdown
Update the counts of stable compounds
###Code
%%time
find_number_of_stable_compounds(quaternary_systems, updated_oqmd_data, 'with_DL')
quaternary_systems['new_compounds'] = quaternary_systems['with_DL'] - quaternary_systems['oqmd_stable']
quaternary_systems.sort_values('new_compounds', ascending=False).head()
quaternary_systems.to_csv('quaternary_systems.csv', index=False)
best_quaternary_systems = quaternary_systems.iloc[:10].copy()
best_quaternary_systems['choices'] = best_quaternary_systems['system'].apply(get_clusters)
best_quaternary_systems.to_csv('best_quaternaries.csv', index=False)
###Output
_____no_output_____ |
ml_feature/05_Prepare_Features/05_03/End/05_03.ipynb | ###Markdown
Prepare Features For Modeling: Write Out All Final Datasets Read In DataUsing the Titanic dataset from [this](https://www.kaggle.com/c/titanic/overview) Kaggle competition.This dataset contains information about 891 people who were on board the ship when it sank on April 15th, 1912. As noted in the description on Kaggle's website, some people aboard the ship were more likely to survive the wreck than others. There were not enough lifeboats for everybody so women, children, and the upper-class were prioritized. Using the information about these 891 passengers, the challenge is to build a model to predict which people would survive based on the following fields:- **Name** (str) - Name of the passenger- **Pclass** (int) - Ticket class (1st, 2nd, or 3rd)- **Sex** (str) - Gender of the passenger- **Age** (float) - Age in years- **SibSp** (int) - Number of siblings and spouses aboard- **Parch** (int) - Number of parents and children aboard- **Ticket** (str) - Ticket number- **Fare** (float) - Passenger fare- **Cabin** (str) - Cabin number- **Embarked** (str) - Port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)
###Code
# Read in data
import pandas as pd
titanic_train = pd.read_csv('../../../data/split_data/train_features.csv')
titanic_val = pd.read_csv('../../../data/split_data/val_features.csv')
titanic_test = pd.read_csv('../../../data/split_data/test_features.csv')
titanic_train.head()
# Define the list of features to be used for each dataset
raw_original_features = ['Pclass', 'Sex', 'Age_clean', 'SibSp', 'Parch', 'Fare',
'Cabin', 'Embarked']
# minimum cleaning
cleaned_original_features = ['Pclass', 'Sex', 'Age_clean', 'SibSp', 'Parch', 'Fare_clean',
'Cabin', 'Embarked_clean']
all_features = ['Pclass', 'Sex', 'Age_clean', 'SibSp', 'Parch', 'Fare_clean', 'Fare_clean_tr',
'Cabin', 'Cabin_ind', 'Embarked_clean', 'Title', 'Family_cnt']
# supposedly to be most useful
reduced_features = ['Pclass', 'Sex', 'Age_clean', 'Family_cnt', 'Fare_clean_tr',
'Cabin_ind', 'Title']
###Output
_____no_output_____
###Markdown
Write Out All Data
###Code
# Write out final data for each feature set
titanic_train[raw_original_features].to_csv('../../../data/final_data/train_features_raw.csv', index=False)
titanic_val[raw_original_features].to_csv('../../../data/final_data/val_features_raw.csv', index=False)
titanic_test[raw_original_features].to_csv('../../../data/final_data/test_features_raw.csv', index=False)
titanic_train[cleaned_original_features].to_csv('../../../data/final_data/train_features_original.csv', index=False)
titanic_val[cleaned_original_features].to_csv('../../../data/final_data/val_features_original.csv', index=False)
titanic_test[cleaned_original_features].to_csv('../../../data/final_data/test_features_original.csv', index=False)
titanic_train[all_features].to_csv('../../../data/final_data/train_features_all.csv', index=False)
titanic_val[all_features].to_csv('../../../data/final_data/val_features_all.csv', index=False)
titanic_test[all_features].to_csv('../../../data/final_data/test_features_all.csv', index=False)
titanic_train[reduced_features].to_csv('../../../data/final_data/train_features_reduced.csv', index=False)
titanic_val[reduced_features].to_csv('../../../data/final_data/val_features_reduced.csv', index=False)
titanic_test[reduced_features].to_csv('../../../data/final_data/test_features_reduced.csv', index=False)
###Output
_____no_output_____
###Markdown
Move Labels To Proper Directory
###Code
# Read in all labels
titanic_train_labels = pd.read_csv('../../../data/split_data/train_labels.csv')
titanic_val_labels = pd.read_csv('../../../data/split_data/val_labels.csv')
titanic_test_labels = pd.read_csv('../../../data/split_data/test_labels.csv')
# Double-check the labels
titanic_train_labels
# Write out labels to final directory
titanic_train_labels.to_csv('../../../data/final_data/train_labels.csv', index=False)
titanic_val_labels.to_csv('../../../data/final_data/val_labels.csv', index=False)
titanic_test_labels.to_csv('../../../data/final_data/test_labels.csv', index=False)
###Output
_____no_output_____ |
gru_baseline.ipynb | ###Markdown
Generate Train and Test Data
###Code
def simulate_noevent(baseline_period, baseline_amplitude, baseline_phase, noise, t_range=range(100)):
i_s0 = np.array([intensity_baseline(baseline_period, baseline_amplitude, baseline_phase, t) for t in t_range])
n = np.random.normal(scale=noise, size=len(t_range))
return i_s0 + n
from tqdm import tqdm
def generate_dataset(num_dataset, max_t=100, predict_window=30, produce_sim=True):
X_out = np.zeros((num_dataset, max_t, 1))
y_out = np.zeros((num_dataset, max_t))
for x in tqdm(range(num_dataset)):
seq=None
ys=None
baseline_period=np.random.uniform(low=2,high=5)
baseline_amplitude=np.random.uniform(low=0.00008, high=0.0002)
lens_min_impact=np.random.uniform(low=8,high=12)
lens_radius=np.random.uniform(low=5,high=max_t/20)
if np.random.uniform() > 0.5:
peak_t = int(np.random.uniform(low=predict_window+10,
high=max_t-predict_window-10))
seq = simulate_microlensing(baseline_period=baseline_period,
baseline_amplitude=baseline_amplitude,
baseline_phase=0,
lens_min_impact=lens_min_impact,
lens_shift=peak_t,
lens_radius=lens_radius,
noise=0.00003,
t_range=range(max_t))
seq_avg = np.mean(seq[:predict_window])
seq_std = np.std(seq[:predict_window])
seq = (seq - seq_avg) / seq_std
ys = np.zeros(max_t)
ys[peak_t-int(lens_radius*1.5):peak_t+int(lens_radius*1.5)] = 1
else:
seq = simulate_noevent(baseline_period=baseline_period,
baseline_amplitude=baseline_amplitude,
baseline_phase=0,
noise=0.00003,
t_range=range(max_t))
seq_avg = np.mean(seq[:predict_window])
seq_std = np.std(seq[:predict_window])
seq = (seq - seq_avg) / seq_std
ys = np.zeros(max_t)
X_out[x,:,0] = seq
y_out[x] = ys
return X_out, y_out
np.random.seed(420)
X_train, y_train = generate_dataset(500, max_t=1000, predict_window=200, produce_sim=False)
# Plot generated graphs.
plt.figure(figsize=(20,15))
for i in range(20):
plt.subplot(5,4,i+1)
plt.plot(X_train[i,:,0])
plt.plot(y_train[i])
plt.show()
np.random.seed(520)
X_test, y_test = generate_dataset(100, max_t=1000, predict_window=200, produce_sim=False)
np.random.seed(620)
X_dev, y_dev = generate_dataset(100, max_t=2000, predict_window=400, produce_sim=False)
###Output
100%|██████████| 100/100 [00:00<00:00, 111.29it/s]
###Markdown
Injecting the residuals into LSTM Building the LSTM model
###Code
from tensorflow.keras import backend as K
def recall_m(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
possible_positives = K.sum(K.round(K.clip(y_true, 0, 1)))
recall = true_positives / (possible_positives + K.epsilon())
return recall
def precision_m(y_true, y_pred):
true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
precision = true_positives / (predicted_positives + K.epsilon())
return precision
def f1_m(y_true, y_pred):
precision = precision_m(y_true, y_pred)
recall = recall_m(y_true, y_pred)
return 2*((precision*recall)/(precision+recall+K.epsilon()))
import tensorflow as tf
from tensorflow.keras import layers
# Initialising the RNN
rnn_lstm = tf.keras.Sequential()
# Adding the GRU layers and some Dropout regularisation
# Adding the first layer
rnn_lstm.add(layers.GRU(units=200, return_sequences=True, input_shape=(None, 1)))
rnn_lstm.add(layers.Dropout(0.2))
rnn_lstm.add(layers.GRU(units=100, return_sequences=True, input_shape=(None, 200)))
rnn_lstm.add(layers.Dropout(0.2))
# Output layer
rnn_lstm.add(layers.Dense(units=1, activation='sigmoid'))
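# Because the GRU layers return full sequences, the Dense layer is applied at every time
# step, giving a per-timestep probability that a microlensing event is in progress.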
# Compiling the RNN
rnn_lstm.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc', f1_m])
rnn_lstm.summary()
# Fitting the RNN to training set
rnn_lstm.fit(X_train, y_train, batch_size=32, epochs=50, validation_data=(X_test, y_test), validation_freq=10,verbose=1, workers=4)
rnn_lstm.evaluate(X_dev, y_dev)
y_pred=rnn_lstm.predict_proba(X_test)
plt.plot(y_pred[4][30:])
plt.plot(X_test[4,:,0][30:])
plt.plot(y_test[4][30:])
###Output
_____no_output_____
###Markdown
Evaluation
###Code
# Generate predictions.
y_pred=rnn_lstm.predict_proba(X_dev)
plt.figure(figsize=(20,15))
for i in range(20):
plt.subplot(5,4,i+1)
plt.plot(y_pred[i])
plt.plot(y_dev[i],'--')
plt.ylim(0,1)
plt.show()
###Output
_____no_output_____
###Markdown
Trying on MOA data
###Code
import pickle
from google.colab import drive
drive.mount('/content/drive')
gb18 = pickle.load(open('/content/drive/My Drive/pTSA_microlensing/gb18.pkl', 'rb'))
def normalize(seq, predict_window):
seq_avg = np.mean(seq[:predict_window])
seq_std = np.std(seq[:predict_window])
seq = (seq - seq_avg) / seq_std
return seq
moa1=np.array(gb18[1][1]['ab_mag'] * -1)
moa1=moa1[~np.isnan(moa1)]
moa1 = normalize(moa1, 100)
plt.plot(moa1)
X_moa = np.array([np.array(list(zip(moa1)))])
y_moa_pred = rnn_lstm.predict_proba(X_moa)
plt.plot(X_moa[0][:,0],'--')
plt.plot(y_moa_pred[0])
###Output
_____no_output_____
###Markdown
Save data
###Code
pickle.dump(X_train, open('/content/drive/My Drive/pTSA_microlensing/X_train.p', 'wb'))
pickle.dump(y_train, open('/content/drive/My Drive/pTSA_microlensing/y_train.p', 'wb'))
pickle.dump(X_test, open('/content/drive/My Drive/pTSA_microlensing/X_test.p', 'wb'))
pickle.dump(y_test, open('/content/drive/My Drive/pTSA_microlensing/y_test.p', 'wb'))
pickle.dump(X_dev, open('/content/drive/My Drive/pTSA_microlensing/X_dev.p', 'wb'))
pickle.dump(y_dev, open('/content/drive/My Drive/pTSA_microlensing/y_dev.p', 'wb'))
###Output
_____no_output_____ |
module2-intermediate-linear-algebra/Zhenya_Warshavsky_Intermediate_Linear_Algebra_Assignment.ipynb | ###Markdown
Statistics 1.1 Sales for the past week was the following amounts: [3505, 2400, 3027, 2798, 3700, 3250, 2689]. Without using library functions, what is the mean, variance, and standard deviation of of sales from last week? (for extra bonus points, write your own function that can calculate these two values for any sized list)
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
sales = [3505, 2400, 3027, 2798, 3700, 3250, 2689]
mean = sum(sales) / len(sales)
def var(n):
l = []
m = sum(n) / len(n)
for i in n:
l.append((i - m) ** 2)
return sum(l)/len(l)
def stdev(n):
l = []
m = sum(n) / len(n)
for i in n:
l.append((i - m) ** 2)
return (sum(l) / len(l)) ** .5
print("The mean is:",mean,"\nThe variance is:",var(sales),"\nthe standard deviation is:",stdev(sales))
###Output
The mean is: 3052.714285714286
The variance is: 183761.06122448976
the standard deviation is: 428.67360686714756
###Markdown
1.2 Find the covariance between last week's sales numbers and the number of customers that entered the store last week: [127, 80, 105, 92, 120, 115, 93] (you may use library functions for calculating the covariance since we didn't specifically talk about its formula)
###Code
#using numpy for covariance output
customers = [127, 80, 105, 92, 120, 115, 93]
np.cov(sales,customers)
#creating dataframe
data = {"sales": [3505, 2400, 3027, 2798, 3700, 3250, 2689], "entered": [127, 80, 105, 92, 120, 115, 93]}
df = pd.DataFrame(data)
df
###Output
_____no_output_____
###Markdown
Covariance: interpreting the covariance matrix
###Code
#question: why are there only two distinct values in this 2x2 matrix, and what does each entry tell us? (the diagonal holds the variances, the off-diagonal the covariance)
#calculating covariance via pandas
cov = df.cov()
cov
###Output
_____no_output_____
###Markdown
1.3 Find the standard deviation of customers who entered the store last week. Then, use the standard deviations of both sales and customers to standardize the covariance to find the correlation coefficient that summarizes the relationship between sales and customers. (You may use library functions to check your work.)
###Code
import statistics
# sample standard deviation computed manually: the mean uses n, while the squared
# deviations are averaged over (n - 1), so the result matches statistics.stdev
def stdev(n):
  l = []
  m = sum(n) / len(n)
  for i in n:
    l.append((i - m) ** 2)
  return (sum(l) / (len(n) - 1)) ** .5
print("custy stdev:",stdev(customers),"sales stdev:",stdev(sales))
prod1 = stdev(customers) * stdev(sales)
prod2 = statistics.stdev(customers) * statistics.stdev(sales)
# dividing the covariance matrix by the product of the standard deviations
# standardizes it into the correlation matrix
display(cov.div(prod1),cov.div(prod2))
#The correlation coefficients computed directly from the data
df.corr()
###Output
_____no_output_____
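###Markdown
As a quick cross-check (my own addition, not part of the original prompt), the correlation coefficient is just the covariance standardized by the product of the sample standard deviations, and it should agree with numpy's built-in Pearson correlation:
###Code
import numpy as np
import statistics

sales = [3505, 2400, 3027, 2798, 3700, 3250, 2689]
customers = [127, 80, 105, 92, 120, 115, 93]

# sample covariance between sales and customers (off-diagonal entry of np.cov)
sample_cov = np.cov(sales, customers)[0][1]

# standardize the covariance by the product of the sample standard deviations
r = sample_cov / (statistics.stdev(sales) * statistics.stdev(customers))
print(r)

# should match numpy's Pearson correlation coefficient
print(np.corrcoef(sales, customers)[0][1])
###Output
_____no_output_____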
###Markdown
1.4 Use pandas to import a cleaned version of the titanic dataset from the following link: [Titanic Dataset](https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv) Calculate the variance-covariance matrix and correlation matrix for the titanic dataset's numeric columns. (you can encode some of the categorical variables and include them as a stretch goal if you finish early)
###Code
dft = pd.read_csv("https://raw.githubusercontent.com/Geoyi/Cleaning-Titanic-Data/master/titanic_clean.csv")
dft = dft.fillna(0)
dft.head()
dft2 = dft.drop(['name', 'sex',"age","cabin","embarked","boat","home.dest","has_cabin_number"],axis=1)
dft2.head()
dft2.cov()
dft2.corr()
###Output
_____no_output_____
###Markdown
Orthogonality 2.1 Plot two vectors that are orthogonal to each other. What is a synonym for orthogonal?
###Code
vector_1 = [0, 1]
vector_2 = [1, 0]
# Plot the Scaled Vectors
plt.arrow(0,0, vector_1[0], vector_1[1],head_width=.05, head_length=0.05, color ='orange')
plt.arrow(0,0, vector_2[0], vector_2[1],head_width=.05, head_length=0.05, color ='green')
plt.xlim(-1,3)
plt.ylim(-1,3)
plt.title("Orthogonal Vectors AKA PERPENDICULAR Vectors")
plt.show()
#clearly shows these two are orthogonal:
np.dot(vector_1,vector_2)
###Output
_____no_output_____
###Markdown
2.2 Are the following vectors orthogonal? Why or why not?\begin{align}a = \begin{bmatrix} -5 \\ 3 \\ 7 \end{bmatrix}\qquadb = \begin{bmatrix} 6 \\ -8 \\ 2 \end{bmatrix}\end{align}
###Code
#calculating dot product of the two vectors
a = [-5,3,7]
b = [6,-8,2]
print("vecotrs a and b are not orthogonal because their dot product does not equal zero:",np.dot(a,b))
###Output
vecotrs a and b are not orthogonal because their dot product does not equal zero: -40
###Markdown
2.3 Compute the following values: What do these quantities have in common? What is $||c||^2$? What is $c \cdot c$? What is $c^{T}c$?\begin{align}c = \begin{bmatrix} 2 & -15 & 6 & 20 \end{bmatrix}\end{align}
###Code
#NEED HELP WITH THIS
from numpy import linalg as LA
c = [2,-15,6,20]
print("these values are all identical because it is performing the same calculation \n","squared norm of c:", LA.norm(c)**2,"\ndot product of c,c:",np.dot(c,c),"\ndot product of c transposed and c:",np.dot(np.transpose(c),c))
###Output
these values are all identical because it is performing the same calculation
squared norm of c: 665.0
dot product of c,c: 665
dot product of c transposed and c: 665
###Markdown
Unit Vectors 3.1 Using Latex, write the following vectors as a linear combination of scalars and unit vectors:\begin{align}d = \begin{bmatrix} 7 \\ 12 \end{bmatrix}\qquade = \begin{bmatrix} 2 \\ 11 \\ -8 \end{bmatrix}\end{align} (Zhenya) Answer\begin{align}d = 7\hat{i} + 12\hat{j}\end{align}\begin{align}e = 2\hat{i} + 11\hat{j} - 8\hat{k} \end{align} 3.2 Turn vector $f$ into a unit vector:\begin{align}f = \begin{bmatrix} 4 & 12 & 11 & 9 & 2 \end{bmatrix}\end{align}
###Code
#Yes: any nonzero vector can be turned into a unit vector by dividing it by its norm
f = [4,12,11,9,2]
fn = np.linalg.norm(f)
f / fn
###Output
_____no_output_____
###Markdown
Linear Independence / Dependence 4.1 Plot two vectors that are linearly dependent and two vectors that are linearly independent (bonus points if done in $\mathbb{R}^3$).
###Code
#how do you plot an R3 version of this?!
# Plot Linearly Dependent Vectors
# Axis Bounds
plt.xlim(-1.1,4)
plt.ylim(-1.1,4)
# Original Vector
v = [1,0]
# Scaled Vectors
v2 = np.multiply(3, v)
v3 = np.multiply(-1,v)
# Get Vals for L
axes = plt.gca()
x_vals = np.array(axes.get_xlim())
y_vals = 0*x_vals
# Plot Vectors and L
plt.plot(x_vals, y_vals, '--', color='b', linewidth=1)
plt.arrow(0,0, v2[0], v2[1], linewidth=3, head_width=.05, head_length=0.05, color ='yellow')
plt.arrow(0,0, v[0], v[1], linewidth=3, head_width=.05, head_length=0.05, color ='green')
plt.arrow(0,0, v3[0], v3[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.title("Linearly Dependent Vectors")
plt.show()
# Plot Linearly Independent Vectors
# Axis Bounds
plt.xlim(-2,3.5)
plt.ylim(-1,3)
# Original Vector
a = [-1.5,.5]
b = [3, 1]
# Plot Vectors
plt.arrow(0,0, a[0], a[1], linewidth=3, head_width=.05, head_length=0.05, color ='blue')
plt.arrow(0,0, b[0], b[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.title("Linearly Independent Vectors")
plt.show()
###Output
_____no_output_____
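###Markdown
To answer the question in the comment above, here is a sketch (my addition) of how the same idea can be drawn in $\mathbb{R}^3$ using matplotlib's 3D axes; the specific vectors chosen are just illustrative:
###Code
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(6, 6))
ax = fig.add_subplot(111, projection='3d')
a = np.array([1, 2, 3])
b = 2 * a                      # linearly dependent with a (a scalar multiple)
c = np.array([0, 1, -1])       # linearly independent of a and b
for vec, color in [(a, 'green'), (b, 'red'), (c, 'blue')]:
    ax.quiver(0, 0, 0, vec[0], vec[1], vec[2], color=color)
ax.set_xlim(-1, 6)
ax.set_ylim(-1, 6)
ax.set_zlim(-2, 6)
plt.title("Dependent (a, 2a) and Independent (c) Vectors in R^3")
plt.show()
###Output
_____no_output_____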
###Markdown
Span 5.1 What is the span of the following vectors?\begin{align}g = \begin{bmatrix} 1 & 2 \end{bmatrix}\qquadh = \begin{bmatrix} 4 & 8 \end{bmatrix}\end{align}
###Code
#The span is a single line through the origin (a 1-dimensional subspace of R^2): h = 4g, so the two vectors are scalar multiples of each other
###Output
_____no_output_____
###Markdown
5.2 What is the span of $\{l, m, n\}$?\begin{align}l = \begin{bmatrix} 1 & 2 & 3 \end{bmatrix}\qquadm = \begin{bmatrix} -1 & 0 & 7 \end{bmatrix}\qquadn = \begin{bmatrix} 4 & 8 & 2\end{bmatrix}\end{align}
###Code
#the span here is R^3 because the three vectors are linearly independent: none of them is a linear combination of the others
###Output
_____no_output_____
###Markdown
Basis 6.1 Graph two vectors that form a basis for $\mathbb{R}^2$
###Code
# Plot Linearly Independent Vectors
# Axis Bounds
plt.xlim(-2,3.5)
plt.ylim(-1,3)
# Original Vector
a = [-1.5,.5]
b = [3, 1]
# Plot Vectors
plt.arrow(0,0, a[0], a[1], linewidth=3, head_width=.05, head_length=0.05, color ='blue')
plt.arrow(0,0, b[0], b[1], linewidth=3, head_width=.05, head_length=0.05, color ='red')
plt.title("Linearly Independent Vectors")
plt.show()
###Output
_____no_output_____
###Markdown
6.2 What does it mean to form a basis? Rank 7.1 What is the Rank of P?\begin{align}P = \begin{bmatrix} 1 & 2 & 3 \\ -1 & 0 & 7 \\4 & 8 & 2\end{bmatrix}\end{align} 7.2 What does the rank of a matrix tell us? Linear Projections 8.1 Line $L$ is formed by all of the vectors that can be created by scaling vector $v$ \begin{align}v = \begin{bmatrix} 1 & 3 \end{bmatrix}\end{align}\begin{align}w = \begin{bmatrix} -1 & 2 \end{bmatrix}\end{align} find $proj_{L}(w)$ graph your projected vector to check your work (make sure your axis are square/even)
###Code
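# (Added sketch of answers to 6.2, 7.1/7.2 and 8.1.)
# A set of vectors forms a basis when it is linearly independent and spans the space;
# the rank of a matrix is the number of linearly independent rows/columns, i.e. the
# dimension of the space they span.
import numpy as np
import matplotlib.pyplot as plt

# 7.1: rank of P (the three rows are linearly independent, so the rank is 3)
P = np.array([[1, 2, 3],
              [-1, 0, 7],
              [4, 8, 2]])
print("rank of P:", np.linalg.matrix_rank(P))

# 8.1: projection of w onto the line L spanned by v
v = np.array([1, 3])
w = np.array([-1, 2])
proj_w = (np.dot(w, v) / np.dot(v, v)) * v
print("proj_L(w):", proj_w)  # expected [0.5, 1.5]

# quick visual check with square axes
plt.arrow(0, 0, v[0], v[1], head_width=.05, head_length=0.05, color='blue')
plt.arrow(0, 0, w[0], w[1], head_width=.05, head_length=0.05, color='red')
plt.arrow(0, 0, proj_w[0], proj_w[1], head_width=.05, head_length=0.05, color='green')
plt.xlim(-2, 4)
plt.ylim(-2, 4)
plt.gca().set_aspect('equal')
plt.title("Projection of w (red) onto the Line Spanned by v (blue)")
plt.show()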
###Output
_____no_output_____
###Markdown
Stretch Goal For vectors that begin at the origin, the coordinates of where the vector ends can be interpreted as regular data points. (See 3Blue1Brown videos about Spans, Basis, etc.) Write a function that can calculate the linear projection of each point (x,y) (vector) onto the line y=x. run the function and plot the original points in blue and the new projected points on the line y=x in red. For extra points plot the orthogonal vectors as a dashed line from the original blue points to the projected red points.
###Code
import pandas as pd
import matplotlib.pyplot as plt
# Creating a dataframe for you to work with -Feel free to not use the dataframe if you don't want to.
x_values = [1, 4, 7, 3, 9, 4, 5 ]
y_values = [4, 2, 5, 0, 8, 2, 8]
data = {"x": x_values, "y": y_values}
df = pd.DataFrame(data)
df.head()
plt.scatter(df.x, df.y)
plt.show()
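# --- Added sketch (my own, for the stretch goal): project each point onto the line y = x ---
# Projecting (x, y) onto the direction (1, 1)/sqrt(2) gives the point ((x+y)/2, (x+y)/2).
def project_onto_y_equals_x(x, y):
    p = (x + y) / 2
    return p, p

proj_x, proj_y = project_onto_y_equals_x(df.x, df.y)
plt.scatter(df.x, df.y, color='blue', label='original')
plt.scatter(proj_x, proj_y, color='red', label='projected')
# dashed segments from each original point to its projection (orthogonal to y = x)
for x0, y0, x1, y1 in zip(df.x, df.y, proj_x, proj_y):
    plt.plot([x0, x1], [y0, y1], 'k--', linewidth=0.5)
plt.plot([0, 10], [0, 10], color='green', linewidth=0.5)  # the line y = x
plt.gca().set_aspect('equal')
plt.legend()
plt.show()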
###Output
_____no_output_____ |
Diversity_Using_Vegetarian_and_Tidyverse.ipynb | ###Markdown
A Tidyverse Approach to Alpha, Beta and Gamma Diversities Computed and Visualized Using Formulas and Using the Package `vegetarian` Amir Barghi The FormulasFor the formulas used in this file, see the following articles:- L. Jost, "Entropy and diversity", *Oikos*, vol. 113, pp. 363--375, Jan. 2006.- L. Jost, "Partitioning diversity into independent alpha and beta components", *Ecology*, vol. 88, pp. 2427--2439, Oct. 2008. Loading Packages
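For reference (my addition; notation follows Jost 2006), the diversity of order $q$ of a community with species frequencies $p_1, \dots, p_S$ is the Hill number $${}^{q}D = \left( \sum_{i=1}^{S} p_i^{\,q} \right)^{1/(1-q)},$$ where $q = 0$, $q \to 1$ and $q = 2$ give species richness, the exponential of the Shannon entropy and the inverse Simpson concentration (the quantity called Greenberg below), and gamma diversity is partitioned multiplicatively as $D_{\gamma} = D_{\alpha} \times D_{\beta}$.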
###Code
library(vegetarian)
library(tidyverse)
library(latex2exp)
# about the vegetarian package
?vegetarian
###Output
_____no_output_____
###Markdown
Loading the Data Set `vegetarian::simesants`
###Code
data(simesants)
df <- simesants
###Output
_____no_output_____
###Markdown
Tidying the Data Set
###Code
# adding a new column `Weight`
# `Weight` is the proportion of the total population in each `Habitat`
df <- df %>%
rowwise() %>%
mutate(Count = sum(c_across(where(is.numeric)))) %>%
ungroup()
df <- df %>%
mutate(Total_Count = sum(Count))
df <- df %>%
mutate(Weight = Count / Total_Count)
df <- df %>%
select(Habitat, Weight, everything(), -Count, -Total_Count)
df
var_names <- df %>%
select(-Habitat, -Weight) %>% names()
DF <- df %>%
gather(all_of(var_names), key = 'Species', value = 'Count') %>%
filter(Count > 0) %>%
select(Habitat, Species, everything())
DF # gathered data set, with zero counts removed
# number of habitats or communities
N <- DF %>% select(Habitat) %>% unique() %>% nrow()
###Output
_____no_output_____
###Markdown
Gamma Diversities Unweighted Gamma Diversities Computed Using Formulas and Using `vegetarian::d`
###Code
giml_df <- DF %>%
group_by(Habitat) %>%
mutate(Habitat_Pop = sum(Count)) %>%
ungroup()
giml_u_m <- giml_df %>%
mutate(Total_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = 1 / N)
giml_u_m <- giml_u_m %>%
select(-Habitat) %>%
group_by(Species) %>%
mutate(Weighted_Prop = sum(Prop * Weight)) %>%
ungroup() %>%
select(Species, Weighted_Prop) %>%
unique()
giml_u_m <- giml_u_m %>%
summarise(Gamma_Richness = n(),
Gamma_Shannon = exp(-sum(Weighted_Prop * log(Weighted_Prop))),
Gamma_Greenberg = 1 / sum(Weighted_Prop ** 2))
giml_u_m
giml_u_v <- df %>%
summarise(Gamma_Richness = d(.[, -c(1, 2)], lev = 'gamma', q = 0),
Gamma_Shannon = d(.[, -c(1, 2)], lev = 'gamma', q = 1),
Gamma_Greenberg = d(.[, -c(1, 2)], lev = 'gamma', q = 2))
giml_u_v
###Output
_____no_output_____
###Markdown
Weighted Gamma Diversities Computed Using Formulas and Using `vegetarian::d`
###Code
giml_w_m <- giml_df %>%
mutate(Total_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = Habitat_Pop / Total_Pop)
giml_w_m <- giml_w_m %>%
select(-Habitat) %>%
group_by(Species) %>%
mutate(Weighted_Prop = sum(Prop * Weight)) %>%
ungroup() %>%
select(Species, Weighted_Prop) %>%
unique()
giml_w_m <- giml_w_m %>%
summarise(Gamma_Richness = n(),
Gamma_Shannon = exp(-sum(Weighted_Prop * log(Weighted_Prop))),
Gamma_Greenberg = 1 / sum(Weighted_Prop ** 2))
giml_w_m
giml_w_v <- df %>%
summarise(Gamma_Richness = d(.[, -c(1, 2)], lev = 'gamma', wt = .$Weight, q = 0),
Gamma_Shannon = d(.[, -c(1, 2)], lev = 'gamma', wt = .$Weight, q = 1),
Gamma_Greenberg = d(.[, -c(1, 2)], lev = 'gamma', wt = .$Weight, q = 2))
giml_w_v
###Output
_____no_output_____
###Markdown
Alpha Diversities Unweighted Alpha Diversities Computed Using Formulas and Using `vegetarian::d`
###Code
alep_df <- DF %>% mutate(Total_Pop = sum(Count))
alep_u_m <- alep_df %>%
group_by(Habitat) %>%
mutate(Pop = sum(Count),
Prop = Count / Pop,
Total_Prop = Count / Total_Pop,
Weight = 1 / N)
suppressMessages(alep_u_m <- alep_u_m %>%
summarise(Richness = n(),
Shannon = -sum(Prop * log(Prop)),
Greenberg = sum(Prop ** 2)) %>%
ungroup() %>%
unique())
alep_u_m <- alep_u_m %>%
summarise(Alpha_Richness = mean(Richness),
Alpha_Shannon = exp(mean(Shannon)),
Alpha_Greenberg = 1 / mean(Greenberg))
alep_u_m
alep_u_v <- df %>%
summarise(Alpha_Richness = d(.[, -c(1, 2)], lev = 'alpha', q = 0),
Alpha_Shannon = d(.[, -c(1, 2)], lev = 'alpha', q = 1),
Alpha_Greenberg = d(.[, -c(1, 2)], lev = 'alpha', q = 2))
alep_u_v
###Output
_____no_output_____
###Markdown
Weighted Alpha Diversities Computed Using Formulas and Using `vegetarian::d`
###Code
alep_df <- DF %>% mutate(Total_Pop = sum(Count))
alep_w_m <- alep_df %>%
group_by(Habitat) %>%
mutate(Habitat_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = Habitat_Pop / Total_Pop,
Richness = n(),
Shannon = -sum(Prop * log(Prop)),
Greenberg = sum(Prop ** 2)) %>%
ungroup() %>%
select(Habitat,
Richness,
Shannon,
Greenberg,
Habitat_Pop,
Total_Pop,
Weight) %>%
unique()
suppressMessages(alep_w_m <- alep_w_m %>%
summarise(Alpha_Richness = mean(Richness),
Alpha_Shannon = exp(sum(Shannon * Weight)),
Alpha_Greenberg = sum(Weight ** 2) / sum(Weight ** 2 * Greenberg )))
alep_w_m
alep_w_v <- df %>%
summarise(Alpha_Richness = d(.[, -c(1, 2)], lev = 'alpha', wt = .$Weight, q = 0),
Alpha_Shannon = d(.[, -c(1, 2)], lev = 'alpha', wt = .$Weight, q = 1),
Alpha_Greenberg = d(.[, -c(1, 2)], lev = 'alpha', wt = .$Weight, q = 2))
alep_w_v
###Output
_____no_output_____
###Markdown
Beta Diversities Unweighted Beta Diversities Computed Using Formulas and Using `vegetarian::d`
###Code
bet_u_m <- giml_u_m / alep_u_m
names(bet_u_m) <- c('Beta_Richness', 'Beta_Shannon', 'Beta_Greenberg')
bet_u_v <- giml_u_v / alep_u_v
names(bet_u_v) <- c('Beta_Richness', 'Beta_Shannon', 'Beta_Greenberg')
bet_u_m
bet_u_v
# alternatively
beta_u_v <- df %>%
summarise(Beta_Richness = d(.[, -c(1, 2)], lev = 'beta', q = 0),
Beta_Shannon = d(.[, -c(1, 2)], lev = 'beta', q = 1),
Beta_Greenberg = d(.[, -c(1, 2)], lev = 'beta', q = 2))
beta_u_v
###Output
_____no_output_____
###Markdown
Weighted Beta Diversities Computed Using Formulas and Using `vegetarian::d`
###Code
bet_w_m <- giml_w_m / alep_w_m
names(bet_w_m) <- c('Beta_Richness', 'Beta_Shannon', 'Beta_Greenberg')
bet_w_v <- giml_w_v / alep_w_v
names(bet_w_v) <- c('Beta_Richness', 'Beta_Shannon', 'Beta_Greenberg')
bet_w_m
bet_w_v
# alternatively
beta_u_v <- df %>%
summarise(Beta_Richness = d(.[, -c(1, 2)], lev = 'beta', wt = .$Weight, q = 0),
Beta_Shannon = d(.[, -c(1, 2)], lev = 'beta', wt = .$Weight, q = 1),
Beta_Greenberg = d(.[, -c(1, 2)], lev = 'beta', wt = .$Weight, q = 2))
beta_u_v
###Output
_____no_output_____
###Markdown
Visualizing Diversities
###Code
# defining two ranges: (0, 1) and (1, 5)
range_1 <- seq(0.001, 1, .01)
range_2 <- seq(1.001, 5, .01)
###Output
_____no_output_____
###Markdown
Visualizing Unweighted Gamma Diversity Computed Using Formulas
###Code
qsum1_g_u_m <- NULL
giml <- giml_df %>%
mutate(Total_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = 1 / N)
for (q in range_1) {
df2 <- NULL
df2 <- giml %>%
select(-Habitat) %>%
group_by(Species) %>%
mutate(Weighted_Prop = sum(Prop * Weight)) %>%
ungroup() %>%
select(Species, Weighted_Prop) %>%
unique()
df2 <- df2 %>%
summarise(Giml_Manual = sum(Weighted_Prop ** q) ** (1/ (1 - q)),
q = q)
qsum1_g_u_m <- rbind(qsum1_g_u_m, df2)
}
print(qsum1_g_u_m %>%
ggplot(aes(x = q, y = Giml_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = giml_u_m$Gamma_Shannon, color = 'green') +
geom_vline(xintercept = 1, color= 'green') +
geom_hline(yintercept = giml_u_m$Gamma_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color= 'yellow') +
        labs(title = 'Unweighted Gamma Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\gamma$ Diversity')))
qsum2_g_u_m <- NULL
giml <- giml_df %>%
mutate(Total_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = 1 / N)
for (q in range_2) {
df2 <- NULL
df2 <- giml %>%
select(-Habitat) %>%
group_by(Species) %>%
mutate(Weighted_Prop = sum(Prop * Weight)) %>%
ungroup() %>%
select(Species, Weighted_Prop) %>%
unique()
df2 <- df2 %>%
summarise(Giml_Manual = sum(Weighted_Prop ** q) ** (1 / (1 - q)),
q = q)
qsum2_g_u_m <- rbind(qsum2_g_u_m, df2)
}
print(qsum2_g_u_m %>%
ggplot(aes(x = q, y = Giml_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = giml_u_m$Gamma_Shannon, color = 'green') +
geom_vline(xintercept = 1, color= 'green') +
geom_hline(yintercept = giml_u_m$Gamma_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color= 'yellow') +
        labs(title = 'Unweighted Gamma Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\gamma$ Diversity')))
###Output
_____no_output_____
###Markdown
Visualizing Weighted Gamma Diversity Computed Using Formulas
###Code
qsum1_g_w_m <- NULL
giml <- giml_df %>%
mutate(Total_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = Habitat_Pop / Total_Pop)
for (q in range_1) {
df2 <- NULL
df2 <- giml %>%
select(-Habitat) %>%
group_by(Species) %>%
mutate(Weighted_Prop = sum(Prop * Weight)) %>%
ungroup() %>%
select(Species, Weighted_Prop) %>%
unique()
df2 <- df2 %>%
summarise(Giml_Manual = sum(Weighted_Prop ** q) ** (1/ (1 - q)),
q = q)
qsum1_g_w_m <- rbind(qsum1_g_w_m, df2)
}
print(qsum1_g_w_m %>%
ggplot(aes(x = q, y = Giml_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = giml_w_m$Gamma_Shannon, color = 'green') +
geom_vline(xintercept = 1, color= 'green') +
geom_hline(yintercept = giml_w_m$Gamma_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color= 'yellow') +
        labs(title = 'Weighted Gamma Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\gamma$ Diversity')))
qsum2_g_w_m <- NULL
giml <- giml_df %>%
mutate(Total_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = Habitat_Pop / Total_Pop)
for (q in range_2) {
df2 <- NULL
df2 <- giml %>%
select(-Habitat) %>%
group_by(Species) %>%
mutate(Weighted_Prop = sum(Prop * Weight)) %>%
ungroup() %>%
select(Species, Weighted_Prop) %>%
unique()
df2 <- df2 %>%
summarise(Giml_Manual = sum(Weighted_Prop ** q) ** (1 / (1 - q)),
q = q)
qsum2_g_w_m <- rbind(qsum2_g_w_m, df2)
}
print(qsum2_g_w_m %>%
ggplot(aes(x = q, y = Giml_Manual)) +
geom_line(color = 'blue') +
geom_hline(yintercept = giml_w_m$Gamma_Shannon, color = 'green') +
geom_vline(xintercept = 1, color= 'green') +
geom_hline(yintercept = giml_w_m$Gamma_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color= 'yellow') +
        labs(title = 'Weighted Gamma Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\gamma$ Diversity')))
###Output
_____no_output_____
###Markdown
Visualizing Unweighted Gamma Diversity Computed Using `vegetarian::d`
###Code
qsum1_g_u_v <- NULL
for (q in range_1) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'gamma', wt = 1 / N, q = q), q = q)
names(df2) <- c('Giml_Vegetarian', 'q')
qsum1_g_u_v <- rbind(qsum1_g_u_v, df2)
}
print(qsum1_g_u_v %>%
ggplot(aes(x = q, y = Giml_Vegetarian)) +
geom_line(color = 'blue') +
geom_hline(yintercept = giml_u_v$Gamma_Shannon, color = 'green') +
geom_vline(xintercept = 1, color= 'green') +
geom_hline(yintercept = giml_u_v$Gamma_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color= 'yellow') +
labs(title = 'Unweighted Gamma Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\gamma$ Diversity')))
qsum2_g_u_v <- NULL
for (q in range_2) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'gamma', wt = 1 / N, q = q), q = q)
names(df2) <- c('Giml_Vegetarian', 'q')
qsum2_g_u_v <- rbind(qsum2_g_u_v, df2)
}
print(qsum2_g_u_v %>%
ggplot(aes(x = q, y = Giml_Vegetarian)) +
geom_line(color = 'blue') +
geom_hline(yintercept = giml_u_v$Gamma_Shannon, color = 'green') +
geom_vline(xintercept = 1, color= 'green') +
geom_hline(yintercept = giml_u_v$Gamma_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color= 'yellow') +
labs(title = 'Unweighted Gamma Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\gamma$ Diversity')))
###Output
_____no_output_____
###Markdown
Visualizing Weighted Gamma Diversity Computed Using `vegetarian::d`
###Code
qsum1_g_w_v <- NULL
for (q in range_1) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'gamma', wt = df$Weight, q = q), q = q)
names(df2) <- c('Giml_Vegetarian', 'q')
qsum1_g_w_v <- rbind(qsum1_g_w_v, df2)
}
print(qsum1_g_w_v %>%
ggplot(aes(x = q, y = Giml_Vegetarian)) +
geom_line(color = 'blue') +
geom_hline(yintercept = giml_w_v$Gamma_Shannon, color = 'green') +
geom_vline(xintercept = 1, color= 'green') +
geom_hline(yintercept = giml_w_v$Gamma_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color= 'yellow') +
labs(title = 'Weighted Gamma Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\gamma$ Diversity')))
qsum2_g_w_v <- NULL
for (q in range_2) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'gamma', wt = df$Weight, q = q), q = q)
names(df2) <- c('Giml_Vegetarian', 'q')
qsum2_g_w_v <- rbind(qsum2_g_w_v, df2)
}
print(qsum2_g_w_v %>%
ggplot(aes(x = q, y = Giml_Vegetarian)) +
geom_line(color = 'blue') +
geom_hline(yintercept = giml_w_v$Gamma_Shannon, color = 'green') +
geom_vline(xintercept = 1, color= 'green') +
geom_hline(yintercept = giml_w_v$Gamma_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color= 'yellow') +
labs(title = 'Weighted Gamma Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\gamma$ Diversity')))
alep <- DF %>% mutate(Total_Pop = sum(Count))
###Output
_____no_output_____
###Markdown
Visualizing Unweighted Alpha Diversity Computed Using Formulas
###Code
qsum1_a_u_m <- NULL
for (q in range_1) {
df2 <- NULL
df2 <- alep %>%
group_by(Habitat) %>%
mutate(Habitat_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = 1 / N,
Smallqsum = sum((Prop * Weight) ** q),
q = q) %>%
ungroup() %>%
select(Habitat, Smallqsum, Weight, q) %>%
unique()
suppressMessages(df2 <- df2 %>%
group_by(q) %>%
summarise(Alep_Manual = (sum(Smallqsum) / sum(Weight ** q)) ** (1 / (1 - q)),
q = q))
qsum1_a_u_m <- rbind(qsum1_a_u_m, df2)
}
print(qsum1_a_u_m %>%
ggplot(aes(x = q, y = Alep_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = alep_u_m$Alpha_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = alep_u_m$Alpha_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color = 'yellow') +
        labs(title = 'Unweighted Alpha Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\alpha$ Diversity')))
qsum2_a_u_m <- NULL
for (q in range_2) {
df2 <- NULL
df2 <- alep %>%
group_by(Habitat) %>%
mutate(Habitat_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = 1 / N,
Smallqsum = sum((Prop * Weight) ** q),
q = q) %>%
ungroup() %>%
select(Habitat, Smallqsum, Weight, q) %>%
unique()
suppressMessages(df2 <- df2 %>%
group_by(q) %>%
summarise(Alep_Manual = (sum(Smallqsum) / sum(Weight ** q)) ** (1 / (1 - q)),
q = q))
qsum2_a_u_m <- rbind(qsum2_a_u_m, df2)
}
print(qsum2_a_u_m %>%
ggplot(aes(x = q, y = Alep_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = alep_u_m$Alpha_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = alep_u_m$Alpha_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color = 'yellow') +
        labs(title = 'Unweighted Alpha Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\alpha$ Diversity')))
###Output
_____no_output_____
###Markdown
Visualizing Weighted Alpha Diversity Computed Using Formulas
###Code
qsum1_a_w_m <- NULL
for (q in range_1) {
df2 <- NULL
df2 <- alep %>%
group_by(Habitat) %>%
mutate(Habitat_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = Habitat_Pop / Total_Pop,
Smallqsum = sum((Prop * Weight) ** q),
q = q) %>%
ungroup() %>%
select(Habitat, Smallqsum, Weight, q) %>%
unique()
suppressMessages(df2 <- df2 %>%
group_by(q) %>%
summarise(Alep_Manual = (sum(Smallqsum) / sum(Weight ** q)) ** (1 / (1 - q)),
q = q))
qsum1_a_w_m <- rbind(qsum1_a_w_m, df2)
}
print(qsum1_a_w_m %>%
ggplot(aes(x = q, y = Alep_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = alep_w_m$Alpha_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = alep_w_m$Alpha_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color = 'yellow') +
        labs(title = 'Weighted Alpha Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\alpha$ Diversity')))
qsum2_a_w_m <- NULL
for (q in range_2) {
df2 <- NULL
df2 <- alep %>%
group_by(Habitat) %>%
mutate(Habitat_Pop = sum(Count),
Prop = Count / Habitat_Pop,
Weight = Habitat_Pop / Total_Pop,
Smallqsum = sum((Prop * Weight) ** q),
q = q) %>%
ungroup() %>%
select(Habitat, Smallqsum, Weight, q) %>%
unique()
suppressMessages(df2 <- df2 %>%
group_by(q) %>%
summarise(Alep_Manual = (sum(Smallqsum) / sum(Weight ** q)) ** (1 / (1 - q)),
q = q))
qsum2_a_w_m <- rbind(qsum2_a_w_m, df2)
}
print(qsum2_a_w_m %>%
ggplot(aes(x = q, y = Alep_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = alep_w_m$Alpha_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = alep_w_m$Alpha_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color = 'yellow') +
        labs(title = 'Weighted Alpha Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\alpha$ Diversity')))
###Output
_____no_output_____
###Markdown
Visualizing Unweighted Alpha Diversity Computed Using `vegetarian::d`
###Code
qsum1_a_u_v <- NULL
for (q in range_1) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'alpha', q = q), q = q)
names(df2) <- c('Alep_Vegetarian', 'q')
qsum1_a_u_v <- rbind(qsum1_a_u_v, df2)
}
print(qsum1_a_u_v %>%
ggplot(aes(x = q, y = Alep_Vegetarian)) +
geom_line(color = 'blue') +
geom_hline(yintercept = alep_u_v$Alpha_Shannon, color = 'green') +
geom_vline(xintercept = 1, color= 'green') +
geom_hline(yintercept = alep_u_v$Alpha_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color= 'yellow') +
labs(title = 'Unweighted Alpha Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\alpha$ Diversity')))
qsum2_a_u_v <- NULL
for (q in range_2) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'alpha', q = q), q = q)
names(df2) <- c('Alep_Vegetarian', 'q')
qsum2_a_u_v <- rbind(qsum2_a_u_v, df2)
}
print(qsum2_a_u_v %>%
ggplot(aes(x = q, y = Alep_Vegetarian)) +
geom_line(color = 'blue') +
geom_hline(yintercept = alep_u_v$Alpha_Shannon, color = 'green') +
geom_vline(xintercept = 1, color= 'green') +
geom_hline(yintercept = alep_u_v$Alpha_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color= 'yellow') +
labs(title = 'Unweighted Alpha Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\alpha$ Diversity')))
###Output
_____no_output_____
###Markdown
Visualizing Weighted Alpha Diversity Computed Using `vegetarian::d`
###Code
qsum1_a_w_v <- NULL
for (q in range_1) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'alpha', wt = df$Weight, q = q), q = q)
names(df2) <- c('Alep_Vegetarian', 'q')
qsum1_a_w_v <- rbind(qsum1_a_w_v, df2)
}
print(qsum1_a_w_v %>%
ggplot(aes(x = q, y = Alep_Vegetarian)) +
geom_line( color = 'blue') +
geom_hline(yintercept = alep_w_v$Alpha_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = alep_w_v$Alpha_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color = 'yellow') +
labs(title = 'Weighted Alpha Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\alpha$ Diversity')))
qsum2_a_w_v <- NULL
for (q in range_2) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'alpha', wt = df$Weight, q = q), q = q)
names(df2) <- c('Alep_Vegetarian', 'q')
qsum2_a_w_v <- rbind(qsum2_a_w_v, df2)
}
print(qsum2_a_w_v %>%
ggplot(aes(x = q, y = Alep_Vegetarian)) +
geom_line( color = 'blue') +
geom_hline(yintercept = alep_w_v$Alpha_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = alep_w_v$Alpha_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color = 'yellow') +
labs(title = 'Weighted Alpha Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\alpha$ Diversity')))
###Output
_____no_output_____
###Markdown
Visualizing Unweighted Beta Diversity Computed Using Formulas
###Code
qsum1_b_u_m <- inner_join(qsum1_g_u_m, qsum1_a_u_m, by = 'q')
qsum1_b_u_m <- qsum1_b_u_m %>% mutate(Bet_Manual = Giml_Manual / Alep_Manual)
print(qsum1_b_u_m %>%
ggplot(aes(x = q, y = Bet_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_u_m$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_u_m$Beta_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color = 'yellow') +
        labs(title = 'Unweighted Beta Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\beta$ Diversity')))
qsum2_b_u_m <- inner_join(qsum2_g_u_m, qsum2_a_u_m, by = 'q')
qsum2_b_u_m <- qsum2_b_u_m %>% mutate(Bet_Manual = Giml_Manual / Alep_Manual)
print(qsum2_b_u_m %>%
ggplot(aes(x = q, y = Bet_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_u_m$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_u_m$Beta_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color = 'yellow') +
        labs(title = 'Unweighted Beta Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\beta$ Diversity')))
# alternatively
qsum1_b_u_v <- NULL
for (q in range_1) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'beta', q = q), q = q)
names(df2) <- c('Bet_Vegetarian', 'q')
qsum1_b_u_v <- rbind(qsum1_b_u_v, df2)
}
print(qsum1_b_u_v %>%
ggplot(aes(x = q, y = Bet_Vegetarian)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_u_v$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_u_v$Beta_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color = 'yellow') +
        labs(title = 'Unweighted Beta Diversity Computed Using vegetarian::d') +
        xlab(TeX('$q$')) +
        ylab(TeX('$\\beta$ Diversity')))
# alternatively
qsum2_b_u_v <- NULL
for (q in range_2) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'beta', q = q), q = q)
names(df2) <- c('Bet_Vegetarian', 'q')
qsum2_b_u_v <- rbind(qsum2_b_u_v, df2)
}
print(qsum2_b_u_v %>%
ggplot(aes(x = q, y = Bet_Vegetarian)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_u_v$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_u_v$Beta_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color = 'yellow') +
        labs(title = 'Unweighted Beta Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\beta$ Diversity')))
###Output
_____no_output_____
###Markdown
Visualizing Weighted Beta Diversity Computed Using Formulas
###Code
qsum1_b_w_m <- inner_join(qsum1_g_w_m, qsum1_a_w_m, by = 'q')
qsum1_b_w_m <- qsum1_b_w_m %>% mutate(Bet_Manual = Giml_Manual / Alep_Manual)
print(qsum1_b_w_m %>%
ggplot(aes(x = q, y = Bet_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_w_m$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_w_m$Beta_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color = 'yellow') +
        labs(title = 'Weighted Beta Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\beta$ Diversity')))
qsum2_b_w_m <- inner_join(qsum2_g_w_m, qsum2_a_w_m, by = 'q')
qsum2_b_w_m <- qsum2_b_w_m %>% mutate(Bet_Manual = Giml_Manual / Alep_Manual)
print(qsum2_b_w_m %>%
ggplot(aes(x = q, y = Bet_Manual)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_w_m$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_w_m$Beta_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color = 'yellow') +
        labs(title = 'Weighted Beta Diversity Computed Using Formulas') +
xlab(TeX('$q$')) +
ylab(TeX('$\\beta$ Diversity')))
###Output
_____no_output_____
###Markdown
Visualizing Unweighted Beta Diversity Computed Using `vegetarian::d`
###Code
qsum1_b_u_v <- inner_join(qsum1_g_u_v, qsum1_a_u_v, by = 'q')
qsum1_b_u_v <- qsum1_b_u_v %>% mutate(Bet_Vegetarian = Giml_Vegetarian / Alep_Vegetarian)
print(qsum1_b_u_v %>%
ggplot(aes(x = q, y = Bet_Vegetarian)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_u_v$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_u_v$Beta_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color = 'yellow') +
        labs(title = 'Unweighted Beta Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\beta$ Diversity')))
qsum2_b_u_v <- inner_join(qsum2_g_u_v, qsum2_a_u_v, by = 'q')
qsum2_b_u_v <- qsum2_b_u_v %>% mutate(Bet_Vegetarian = Giml_Vegetarian / Alep_Vegetarian)
print(qsum2_b_u_v %>%
ggplot(aes(x = q, y = Bet_Vegetarian)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_u_v$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_u_v$Beta_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color = 'yellow') +
        labs(title = 'Unweighted Beta Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\beta$ Diversity')))
###Output
_____no_output_____
###Markdown
Visualizing Weighted Beta Diversity Computed Using `vegetarian::d`
###Code
qsum1_b_w_v <- inner_join(qsum1_g_w_v, qsum1_a_w_v, by = 'q')
qsum1_b_w_v <- qsum1_b_w_v %>% mutate(Bet_Vegetarian = Giml_Vegetarian / Alep_Vegetarian)
print(qsum1_b_w_v %>%
ggplot(aes(x = q, y = Bet_Vegetarian)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_w_v$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_w_v$Beta_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color = 'yellow') +
labs(title = 'Weighted Beta Diversity Computed Using `vegetarian::d`') +
xlab(TeX('$q$')) +
ylab(TeX('$\\beta$ Diversity')))
qsum2_b_w_v <- inner_join(qsum2_g_w_v, qsum2_a_w_v, by = 'q')
qsum2_b_w_v <- qsum2_b_w_v %>% mutate(Bet_Vegetarian = Giml_Vegetarian / Alep_Vegetarian)
print(qsum2_b_w_v %>%
ggplot(aes(x = q, y = Bet_Vegetarian)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_w_v$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_w_v$Beta_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color = 'yellow') +
labs(title = 'Weighted Beta Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\beta$ Diversity')))
# alternatively
qsum1_b_w_v <- NULL
for (q in range_1) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'beta', wt = df$Weight, q = q), q = q)
names(df2) <- c('Bet_Vegetarian', 'q')
qsum1_b_w_v <- rbind(qsum1_b_w_v, df2)
}
print(qsum1_b_w_v %>%
ggplot(aes(x = q, y = Bet_Vegetarian)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_w_v$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_w_v$Beta_Richness, color = 'yellow') +
geom_vline(xintercept = 0, color = 'yellow') +
labs(title = 'Weighted Beta Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
        ylab(TeX('$\\beta$ Diversity')))
# alternatively
qsum2_b_w_v <- NULL
for (q in range_2) {
df2 <- NULL
df2 <- data.frame(d(df[, -c(1, 2)], lev = 'beta', wt = df$Weight, q = q), q = q)
names(df2) <- c('Bet_Vegetarian', 'q')
qsum2_b_w_v <- rbind(qsum2_b_w_v, df2)
}
print(qsum2_b_w_v %>%
ggplot(aes(x = q, y = Bet_Vegetarian)) +
geom_line( color = 'blue') +
geom_hline(yintercept = bet_w_v$Beta_Shannon, color = 'green') +
geom_vline(xintercept = 1, color = 'green') +
geom_hline(yintercept = bet_w_v$Beta_Greenberg, color = 'yellow') +
geom_vline(xintercept = 2, color = 'yellow') +
labs(title = 'Weighted Beta Diversity Computed Using vegetarian::d') +
xlab(TeX('$q$')) +
ylab(TeX('$\\beta$ Diversity')))
###Output
_____no_output_____
###Markdown
MacArthur’s Homogeneity Measure Order 0 (Richness)
###Code
# using formulas
alep_u_m$Alpha_Richness / giml_u_m$Gamma_Richness
# using vegetarian::M.homog
M.homog(df[, -c(1, 2)], q = 0)
# alternatively
alep_u_v$Alpha_Richness / giml_u_v$Gamma_Richness
###Output
_____no_output_____
###Markdown
Order 1 (Shannon)
###Code
# using formulas
alep_u_m$Alpha_Shannon / giml_u_m$Gamma_Shannon
# using vegetarian::M.homog
M.homog(df[, -c(1, 2)])
# using vegetarian
alep_u_v$Alpha_Shannon / giml_u_v$Gamma_Shannon
###Output
_____no_output_____
###Markdown
Order 2 (Greenberg)
###Code
# using formulas
alep_u_m$Alpha_Greenberg / giml_u_m$Gamma_Greenberg
# using vegetarian::M.homog
M.homog(df[, -c(1, 2)], q = 2)
# alternatively
alep_u_v$Alpha_Greenberg / giml_u_v$Gamma_Greenberg
###Output
_____no_output_____
###Markdown
Relative Homogeneity Unweighted
###Code
# using formulas
homog_u_m <- data.frame(list((1 / bet_u_m$Beta_Richness - 1 / N) / (1 - 1 / N),
(1 / bet_u_m$Beta_Shannon - 1 / N) / (1 - 1 / N),
(1 / bet_u_m$Beta_Greenberg - 1 / N) / (1 - 1 / N)))
names(homog_u_m) <- c('Order 0 Homogeneity',
'Order 1 Homogeneity',
'Order 2 Homogeneity')
homog_u_m
# using vegetarian::Rel.homog
Rel.homog(df[, -c(1, 2)])
###Output
_____no_output_____
###Markdown
Weighted
###Code
# using formulas
d_1_w_m <- exp(-sum(df$Weight * log(df$Weight)))
(1 / bet_w_m$Beta_Shannon - 1 / d_1_w_m) / (1 - 1 / d_1_w_m)
(1 / bet_u_m$Beta_Shannon - 1 / d_1_w_m) / (1 - 1 / d_1_w_m)
# using vegetarian::Rel.homog
Rel.homog(df[, -c(1, 2)], wt = df$Weight)
# is there a bug in vegetarian::Rel.homog?
###Output
_____no_output_____
###Markdown
Turnover Order 0 (Richness)
###Code
# using formulas
(bet_u_m$Beta_Richness - 1) / (N - 1)
# using vegetarian::turnover
turnover(df[, -c(1, 2)], q = 0)
# alternatively
(bet_u_v$Beta_Richness - 1) / (N - 1)
###Output
_____no_output_____
###Markdown
Order 1 (Shannon)
###Code
# using formulas
(bet_u_m$Beta_Shannon - 1) / (N - 1)
# using vegetarian::turnover
turnover(df[, -c(1, 2)])
# alternatively
(bet_u_v$Beta_Shannon - 1) / (N - 1)
###Output
_____no_output_____
###Markdown
Order 2 (Greenberg)
###Code
# using formulas
(bet_u_m$Beta_Greenberg - 1) / (N - 1)
# using vegetarian::turnover
turnover(df[, -c(1, 2)], q = 2)
# alternatively
(bet_u_v$Beta_Greenberg - 1) / (N - 1)
###Output
_____no_output_____ |
ml-exercises/sparsity_and_l1_regularization.ipynb | ###Markdown
Sparsity and L1 RegularizationOnce again, we'll work on our logistic regression model. We'll use feature columns and add a significant number of features. This model will be pretty complex. Let's see if we can keep this complexity in check.One way to reduce complexity is to use a regularization function that encourages weights to be exactly zero. For linear models such as regression, a zero weight is equivalent to not using the corresponding feature at all. In addition to avoiding overfitting, the resulting model will be more efficient.L1 regularization is a good way to increase sparsity.Run the cell below to load the data and create feature definitions.
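To see why an L1 penalty produces exact zeros while an L2 penalty only shrinks weights, here is a tiny stand-alone illustration (my addition, not part of the exercise) using the closed-form solution for a single weight:
###Code
import numpy as np

# For one weight w with "data-fit" term 0.5 * (w - a)**2:
#   adding lam * |w|   (L1) gives the soft-threshold solution, exactly 0 when |a| <= lam
#   adding lam * w**2  (L2) gives a / (1 + 2 * lam), which shrinks but never reaches 0
def l1_solution(a, lam):
    return np.sign(a) * np.maximum(np.abs(a) - lam, 0.0)

def l2_solution(a, lam):
    return a / (1.0 + 2.0 * lam)

a = np.array([-1.5, -0.3, 0.05, 0.4, 2.0])
print(l1_solution(a, 0.5))  # the three small weights become exactly 0
print(l2_solution(a, 0.5))  # every weight is merely scaled toward 0
###Output
_____no_output_____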
###Code
import math
from IPython import display
from matplotlib import cm
from matplotlib import gridspec
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
from sklearn import metrics
import tensorflow as tf
tf.logging.set_verbosity(tf.logging.ERROR)
pd.options.display.max_rows = 10
pd.options.display.float_format = '{:.1f}'.format
california_housing_dataframe = pd.read_csv("https://storage.googleapis.com/ml_universities/california_housing_train.csv", sep=",")
california_housing_dataframe = california_housing_dataframe.reindex(
np.random.permutation(california_housing_dataframe.index))
def preprocess_features(california_housing_dataframe):
"""Prepares input features from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the features to be used for the model, including
synthetic features.
"""
selected_features = california_housing_dataframe[
["latitude",
"longitude",
"housing_median_age",
"total_rooms",
"total_bedrooms",
"population",
"households",
"median_income"]]
processed_features = selected_features.copy()
# Create a synthetic feature.
processed_features["rooms_per_person"] = (
california_housing_dataframe["total_rooms"] /
california_housing_dataframe["population"])
return processed_features
def preprocess_targets(california_housing_dataframe):
"""Prepares target features (i.e., labels) from California housing data set.
Args:
california_housing_dataframe: A Pandas DataFrame expected to contain data
from the California housing data set.
Returns:
A DataFrame that contains the target feature.
"""
output_targets = pd.DataFrame()
# Create a boolean categorical feature representing whether the
# medianHouseValue is above a set threshold.
output_targets["median_house_value_is_high"] = (
california_housing_dataframe["median_house_value"] > 265000).astype(float)
return output_targets
training_examples = preprocess_features(california_housing_dataframe.head(12000))
training_targets = preprocess_targets(california_housing_dataframe.head(12000))
validation_examples = preprocess_features(california_housing_dataframe.tail(5000))
validation_targets = preprocess_targets(california_housing_dataframe.tail(5000))
def input_function(examples_df, targets_df, single_read=False):
"""Converts a pair of examples/targets `DataFrame`s to `Tensor`s.
The `Tensor`s are reshaped to `(N,1)` where `N` is number of examples in the `DataFrame`s.
Args:
examples_df: A `DataFrame` that contains the input features. All its columns will be
transformed into corresponding input feature `Tensor` objects.
targets_df: A `DataFrame` that contains a single column, the targets corresponding to
each example in `examples_df`.
single_read: A `bool` that indicates whether this function should stop after reading
through the dataset once. If `False`, the function will loop through the data set.
      This stop mechanism is used by the estimator's `predict()` to limit the number of
values it reads.
Returns:
A tuple `(input_features, target_tensor)`:
input_features: A `dict` mapping string values (the column name of the feature) to
`Tensor`s (the actual values of the feature).
target_tensor: A `Tensor` representing the target values.
"""
features = {}
for column_name in examples_df.keys():
batch_tensor = tf.to_float(
tf.reshape(tf.constant(examples_df[column_name].values), [-1, 1]))
if single_read:
features[column_name] = tf.train.limit_epochs(batch_tensor, num_epochs=1)
else:
features[column_name] = batch_tensor
target_tensor = tf.to_float(
tf.reshape(tf.constant(targets_df[targets_df.keys()[0]].values), [-1, 1]))
return features, target_tensor
def get_quantile_based_buckets(feature_values, num_buckets):
quantiles = feature_values.quantile(
[(i+1.)/(num_buckets + 1.) for i in xrange(num_buckets)])
return [quantiles[q] for q in quantiles.keys()]
bucketized_households = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("households"),
boundaries=get_quantile_based_buckets(training_examples["households"], 10))
bucketized_longitude = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("longitude"),
boundaries=get_quantile_based_buckets(training_examples["longitude"], 50))
bucketized_latitude = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("latitude"),
boundaries=get_quantile_based_buckets(training_examples["latitude"], 50))
bucketized_housing_median_age = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("housing_median_age"),
boundaries=get_quantile_based_buckets(
training_examples["housing_median_age"], 10))
bucketized_total_rooms = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("total_rooms"),
boundaries=get_quantile_based_buckets(training_examples["total_rooms"], 10))
bucketized_total_bedrooms = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("total_bedrooms"),
boundaries=get_quantile_based_buckets(training_examples["total_bedrooms"], 10))
bucketized_population = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("population"),
boundaries=get_quantile_based_buckets(training_examples["population"], 10))
bucketized_median_income = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("median_income"),
boundaries=get_quantile_based_buckets(training_examples["median_income"], 10))
bucketized_rooms_per_person = tf.contrib.layers.bucketized_column(
tf.contrib.layers.real_valued_column("rooms_per_person"),
boundaries=get_quantile_based_buckets(
training_examples["rooms_per_person"], 10))
long_x_lat = tf.contrib.layers.crossed_column(
set([bucketized_longitude, bucketized_latitude]), hash_bucket_size=1000)
feature_columns = set([
long_x_lat,
bucketized_longitude,
bucketized_latitude,
bucketized_housing_median_age,
bucketized_total_rooms,
bucketized_total_bedrooms,
bucketized_population,
bucketized_households,
bucketized_median_income,
bucketized_rooms_per_person])
###Output
_____no_output_____
###Markdown
Calculate the model sizeTo calculate the model size, we simply count the number of parameters that are non-zero. We provide a helper function below to do that. The function uses intimate knowledge of the Estimators API - don't worry about understanding how it works.
###Code
def model_size(estimator):
variables = estimator.get_variable_names()
size = 0
for variable in variables:
if not any(x in variable
for x in ['global_step',
'centered_bias_weight',
'bias_weight',
'Ftrl']
):
size += np.count_nonzero(estimator.get_variable_value(variable))
return size
###Output
_____no_output_____
###Markdown
Reduce the model sizeYour team needs to build a highly accurate Logistic Regression model on the *SmartRing*, a ring that is so smart it can sense the demographics of a city block ('median_income', 'avg_rooms', 'households', ..., etc.) and tell you whether the given city block is high cost city block or not.Since the SmartRing is small, the engineering team has determined that it can only handle a model that has **no more than 600 parameters**. On the other hand, the product management team has determined that the model is not launchable unless the **LogLoss is less than 0.35** on the holdout test set.Can you use your secret weapon — L1 regularization — to tune the model to satisfy both the size and accuracy constraints? Task 1: Find a good regularization coefficient.**Find an L1 regularization strength parameter which satisfies both constraints — model size is less than 600 and log-loss is less than 0.35 on validation set.**The following code will help you get started. There are many ways to apply regularization to your model. Here, we chose to do it using `FtrlOptimizer`, which is designed to give better results with L1 regularization than standard gradient descent.Again, the model will train on the entire data set, so expect it to run slower than normal.
###Code
def train_linear_classifier_model(
learning_rate,
regularization_strength,
steps,
feature_columns,
training_examples,
training_targets,
validation_examples,
validation_targets):
"""Trains a linear regression model.
In addition to training, this function also prints training progress information,
as well as a plot of the training and validation loss over time.
Args:
learning_rate: A `float`, the learning rate.
regularization_strength: A `float` that indicates the strength of the L1
regularization. A value of `0.0` means no regularization.
steps: A non-zero `int`, the total number of training steps. A training step
consists of a forward and backward pass using a single batch.
feature_columns: A `set` specifying the input feature columns to use.
training_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for training.
training_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for training.
validation_examples: A `DataFrame` containing one or more columns from
`california_housing_dataframe` to use as input features for validation.
validation_targets: A `DataFrame` containing exactly one column from
`california_housing_dataframe` to use as target for validation.
Returns:
A `LinearClassifier` object trained on the training data.
"""
periods = 7
steps_per_period = steps / periods
# Create a linear classifier object.
linear_classifier = tf.contrib.learn.LinearClassifier(
feature_columns=feature_columns,
optimizer=tf.train.FtrlOptimizer(
learning_rate=learning_rate,
l1_regularization_strength=regularization_strength),
gradient_clip_norm=5.0
)
training_input_function = lambda: input_function(
training_examples, training_targets)
training_input_function_for_predict = lambda: input_function(
training_examples, training_targets, single_read=True)
validation_input_function_for_predict = lambda: input_function(
validation_examples, validation_targets, single_read=True)
# Train the model, but do so inside a loop so that we can periodically assess
# loss metrics.
print "Training model..."
print "LogLoss (on validation data):"
training_log_losses = []
validation_log_losses = []
for period in range (0, periods):
# Train the model, starting from the prior state.
linear_classifier.fit(
input_fn=training_input_function,
steps=steps_per_period
)
# Take a break and compute predictions.
training_probabilities = np.array(list(linear_classifier.predict_proba(
input_fn=training_input_function_for_predict)))
validation_probabilities = np.array(list(linear_classifier.predict_proba(
input_fn=validation_input_function_for_predict)))
# Compute training and validation loss.
training_log_loss = metrics.log_loss(training_targets, training_probabilities[:, 1])
validation_log_loss = metrics.log_loss(validation_targets, validation_probabilities[:, 1])
# Occasionally print the current loss.
print " period %02d : %0.2f" % (period, validation_log_loss)
# Add the loss metrics from this period to our list.
training_log_losses.append(training_log_loss)
validation_log_losses.append(validation_log_loss)
print "Model training finished."
# Output a graph of loss metrics over periods.
plt.ylabel("LogLoss")
plt.xlabel("Periods")
plt.title("LogLoss vs. Periods")
plt.tight_layout()
plt.plot(training_log_losses, label="training")
plt.plot(validation_log_losses, label="validation")
plt.legend()
return linear_classifier
linear_classifier = train_linear_classifier_model(
learning_rate=0.1,
regularization_strength=0.0,
steps=300,
feature_columns=feature_columns,
training_examples=training_examples,
training_targets=training_targets,
validation_examples=validation_examples,
validation_targets=validation_targets)
print "Model size:", model_size(linear_classifier)
###Output
_____no_output_____ |
2S2018/Ex05 Filtros de Agucamento.ipynb | ###Markdown
Ex05 - Sharpening Filters 1. Unsharp mask A filter that is widely used to sharpen an image is the *unsharp mask*. It enhances edges by computing the difference between the original image and a smoothed version of the image obtained with a Gaussian filter. To obtain the edge enhancement, do the following:- First compute the *unsharp mask* ($df$)- Take a weighted combination of the original image and the difference image: $$((1-k)*f + k*df)$$ where $f$ is the image, $df$ is the *unsharp mask* and $k$ is the weighting factor - Change the weighting factor $k$ and observe the effect on the final image 2. Sobel filter There are several filters designed to enhance the edges of an image. One of the best known is the Sobel operator, composed of a vertical mask (Sv) and a horizontal mask (Sh).
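A minimal sketch of the unsharp-mask procedure described above (my addition; it assumes `scipy` is available and that `f` is a grayscale image stored as a NumPy array):
###Code
import numpy as np
from scipy import ndimage

def unsharp(f, sigma=2.0, k=0.5):
    f = f.astype(float)
    # smoothed version of the image (Gaussian filter)
    f_smooth = ndimage.gaussian_filter(f, sigma=sigma)
    # unsharp mask: difference between the original and the smoothed image
    df = f - f_smooth
    # weighted combination of the original image and the difference image
    return (1 - k) * f + k * df

# synthetic step-edge image, just to exercise the function
f = np.zeros((64, 64))
f[:, 32:] = 255.0
g = unsharp(f, sigma=2.0, k=0.3)
###Output
_____no_output_____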
###Code
import numpy as np
Sv = np.array([[1,0,-1],[2,0,-2],[1,0,-1]])
print('Sv =\n',Sv)
Sh = np.array([[1,2,1],[0,0,0],[-1,-2,-1]])
print('Sh =\n',Sh)
###Output
Sv =
[[ 1 0 -1]
[ 2 0 -2]
[ 1 0 -1]]
Sh =
[[ 1 2 1]
[ 0 0 0]
[-1 -2 -1]]
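###Markdown
A short sketch (my addition) of applying the two Sobel masks by convolution and combining the responses into a gradient magnitude; it reuses `Sv` and `Sh` from the cell above and assumes `scipy` is available:
###Code
import numpy as np
from scipy import ndimage

def sobel_edges(f):
    f = f.astype(float)
    gv = ndimage.convolve(f, Sv)        # response of the vertical mask
    gh = ndimage.convolve(f, Sh)        # response of the horizontal mask
    return np.sqrt(gv ** 2 + gh ** 2)   # gradient magnitude

# synthetic image with a bright square, so all four edges show up
f = np.zeros((64, 64))
f[16:48, 16:48] = 255.0
edges = sobel_edges(f)
###Output
_____no_output_____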
|
students/acengel/Project3.ipynb | ###Markdown
Project: Week 3 This week, I continued my approach from last week. I wanted to investigate - once COVID-19 got into the US, how did it initially spread? Import Stuff
###Code
!pip install biopython
import pandas as pd
import numpy as np
import yaml
import urllib
from Bio.Phylo.TreeConstruction import DistanceMatrix
from Bio.Phylo.TreeConstruction import DistanceTreeConstructor
from Bio import Phylo
import matplotlib
import random
import matplotlib.pylab as plt
import matplotlib.patches as mpatches
%matplotlib inline
###Output
_____no_output_____
###Markdown
Get Given Data - aligned sequences Once again, I started with the data given by my professor. Below is a table of all of the COVID-19 genomes made available by the galaxy project. Specifically, this table contains the aligned sequences corresponding to the spike protein.
###Code
position_table = pd.read_csv('../../data/position_table.csv')
position_table
###Output
_____no_output_____
###Markdown
Get location and date data Last time, I wanted to get extra info about these genomes, like when and where the sample was taken. After rooting around on the galaxy project website, I found a script called _fetch_sra_acc.sh_. In this file, I found out that they get their data from the NCBI. Specifically, they pull info on the genomes from a file called [ncov-sequences.yaml](https://www.ncbi.nlm.nih.gov/core/assets/genbank/files/ncov-sequences.yaml). I loaded the data into a dictionary to look up in.
###Code
with urllib.request.urlopen("https://www.ncbi.nlm.nih.gov/core/assets/genbank/files/ncov-sequences.yaml") as response:
text = response.read()
    lookup = yaml.safe_load(text)
name_to_description = {}
for description in lookup["genbank-sequences"]:
name = description["accession"]
name_to_description[name] = description
###Output
_____no_output_____
###Markdown
Combine Data Next, I swept over my genome table to add the extra location and date info. They changed their format since last time I used the data, so I had to update my code.
###Code
position_table["name"] = position_table["seqid"].str.replace("\.\d", "")
def get_location(name):
if name in name_to_description:
description = name_to_description[name]
location = description["country"]
splits = [fragment.strip() for fragment in location.split(":")]
country = splits[0]
state = "".join(splits[1:])
if state == "":
state = np.nan
return country, state
    else:
        # return a pair so the zip(*...) unpacking below also works for missing names
        return np.nan, np.nan
position_table["country"], position_table["state"] = zip(*position_table["name"].apply(get_location))
position_table["country"].isna().sum(), position_table["state"].isna().sum()
position_table["country"].unique()
position_table["state"].unique()
###Output
_____no_output_____
###Markdown
Looks like all genomes have a country, but many don't have a state.
###Code
def get_date(name):
if name in name_to_description:
description = name_to_description[name]
date = description["collection_date"]
return date
else:
return np.nan
position_table["date"] = position_table["name"].apply(get_date)
position_table["date"].isna().sum()
###Output
_____no_output_____
###Markdown
All of the genomes also have a date. I had to get rid of one malformed date, though.
###Code
position_table = position_table[position_table["date"] != "2020"]
position_table["date"] = pd.to_datetime(position_table["date"])
###Output
_____no_output_____
###Markdown
Problem and Sequence Selection I wanted to take a look at how COVID-19 started spreading inside the US. To do this, I selected the first 100 recorded COVID-19 genomes in the US (by date).
###Code
position_table = position_table.set_index("seqid")
subset_seqs = position_table[position_table.country == "USA"]["date"].nsmallest(100).index
subset_seqs
###Output
_____no_output_____
###Markdown
Select our distance metric Next, I got a distance matrix among all of the selected sequences. I decided to stick with a simple distance metric - just the number of differences between the two. This seemed appropriate because the sequences were relatively short and well-aligned.
###Code
def get_distance(seq1, seq2):
return sum(seq1 != seq2)
distances = {}
for i,seqid1 in enumerate(subset_seqs):
distances[seqid1,seqid1]=0
for j in range(i+1,len(subset_seqs)):
seqid2 = subset_seqs[j]
distances[seqid1,seqid2] = get_distance(position_table.loc[seqid1], position_table.loc[seqid2])
distances[seqid2,seqid1] = distances[seqid1,seqid2]
distances = pd.Series(distances).unstack()
distances.head()
position_table.loc[subset_seqs]["state"]
###Output
_____no_output_____
###Markdown
Select colors for each state I knew that I would want to visualize my results, so I assigned a color to each of the states in my subset of genomes.
###Code
states = position_table.loc[subset_seqs]["state"].unique()
states
state_to_color = {}
all_colors = [color for color in Phylo.BaseTree.BranchColor.color_names
if len(color) > 1 and color not in ["white", "grey"]]
all_colors
#colors = random.sample(all_colors, len(countries))
colors = all_colors[:len(states)]
for state, color in zip(states, colors):
state_to_color[state] = color
state_to_color
###Output
_____no_output_____
###Markdown
I also made a legend for my graph at the end.
###Code
patches = []
for state in state_to_color:
patch = mpatches.Patch(color=state_to_color[state], label=state)
patches.append(patch)
###Output
_____no_output_____
###Markdown
Use Biopython to construct a phylogenetic tree Finally, I used Biopython to construct a phylogenetic tree from my distance matrix.
###Code
matrix = np.tril(distances.values).tolist()
for i in range(len(matrix)):
matrix[i] = matrix[i][:i+1]
distance_matrix = DistanceMatrix(list(distances.index), matrix)
tree_constructor = DistanceTreeConstructor()
###Output
_____no_output_____
###Markdown
Neighbor Joining tree I tried out a Neighbor Joining tree and a UPGMA tree, and the Neighbor Joining tree worked better in the end.
###Code
nj_tree = tree_constructor.nj(distance_matrix)
nj_tree.ladderize()
for clade in nj_tree.get_terminals():
state = get_location(clade.name.split(".")[0])[1]
clade.color = state_to_color[state]
fig = plt.figure(figsize=(15, 15))
axes = fig.add_subplot(1, 1, 1)
plt.legend(handles=patches)
Phylo.draw(nj_tree, axes=axes)
###Output
_____no_output_____ |
Tournments/LiarsGame.ipynb | ###Markdown
Game
1. In Liars Game, a community of $n$ players start with the same amount of money each.
2. Every round each individual selects how much they want to contribute to the central pot.
3. After each round the central pot is distributed evenly between the entire community.
4. The community does not like selfish people, so they kick out the person who gives the least amount of money each round.
5. This also means that if you give too much in the beginning, you will not have enough in the end to survive.
6. What strategy increases the odds of you coming out victorious?
Getting started
There's a cell below titled *Your Custom Player*. Make the improvements you want to it, uncomment your player from the strategies list, and then run the entire notebook :)
Links
- GitRepo: https://github.com/migueltorrescosta/nash_equilibria/tree/main/tournments/liars_game
Remarks
- The Uniformly Random function might not perform very well, but for now it is the only one introducing a significant amount of randomness. Without it the other strategies become deterministic and provide for a very boring analysis.
- In a game where all opponents always put everything, you are limited to doing the same. However, as soon as someone else has a different strategy, you can beat it by putting slightly more at first and then always putting almost everything.
Current Best Strategies:
1. Slightly More
2. Exponential Decay
3. Two Over N Players
4. Everything
5. Everything Except First Round
6. Ninety Percentile
7. Half
8. Uniformly Random
9. Tenth Percentile
(ordered by average number of rounds survived; descriptions and more details below)
All imports
###Code
import random
import pandas as pd
from abc import abstractmethod
import seaborn as sns
from tqdm import tqdm
import itertools
cm = sns.light_palette("green", as_cmap=True)
###Output
_____no_output_____
###Markdown
Game Class
###Code
class LiarsGame:
def __init__(self, strategies, verbose=False):
assert len(strategies) > 2, "You need at least 3 players to start a game"
# Secret attributes
self.__players = [strategy(name=strategy.__name__) for strategy in strategies]
self.__initial_n_players = len(self.__players)
self.__money = {player: 100 for player in self.__players}
self.__game_history = pd.DataFrame(
columns=[player.name for player in self.__players],
index=[],
data=0
)
self.__verbose = verbose
self.__eliminations = []
self.__run_game()
def __repr__(self):
return f"LiarsGame: {self.n_players} with {self.total_money}¥"
def __weight(self, coin_list):
return sum([self.__coin_weights[i] for i in coin_list])
# Accessible attributes
@property
def money(self):
return self.__money
@property
def total_money(self):
return sum(self.__money.values())
@property
def players(self):
return self.__players
@property
def n_players(self):
return len(self.__players)
@property
def eliminations(self):
return self.__eliminations
@property
def game_history(self):
return self.__game_history
def show_game_history_heatmap(self):
return self.__game_history.T.style.background_gradient(cmap=cm).set_precision(2).highlight_null('red')
def show_game_history_bar_plot(self):
return self.__game_history.plot.bar(
stacked=True,
figsize=(20,10),
width=.95
).legend(
loc="center right",
bbox_to_anchor=(0, 0.5),
prop={'size': 18}
)
def my_money(self, player):
return self.__money[player]
# Key Methods
def __run_round(self):
self.__game_history.loc[self.__initial_n_players - self.n_players] = {
player.name: self.money[player] for player in self.players}
current_move = {
player: max([min([player.move(self), 1]), 0])*self.money[player]
for player in self.players
}
if self.__verbose:
for player in self.players:
print(
f"{player.name}: {current_move[player]:.2f} / {self.money[player]:.2f}")
print("\n" + "="*50 + "\n")
lowest_contribution = min(current_move.values())
smallest_contributor = random.choice([
player
for player in self.__players
if current_move[player]==lowest_contribution
])
self.__eliminations.append(smallest_contributor)
current_move[smallest_contributor] = self.__money[smallest_contributor]
pot = sum(current_move.values())
self.__players = [
player for player in self.__players if player != smallest_contributor]
self.__money = {
player: self.__money[player] -
current_move[player] + pot/self.n_players
for player in self.players
}
def __run_game(self):
while self.n_players > 1:
self.__run_round()
winner = self.players[0]
self.__game_history.loc[self.__initial_n_players - self.n_players] = {winner.name: self.money[winner]}
self.__eliminations.append(winner)
if self.__verbose:
print(f"Winner: {winner.name}")
return winner.name
###Output
_____no_output_____
###Markdown
Player Parent Class
###Code
class Player:
def __init__(self, name):
self.name = name
def __repr__(self):
return self.name
@abstractmethod
def move(self):
pass
###Output
_____no_output_____
###Markdown
Submitted Strategies
###Code
# Always contributes half of their wealth
class half(Player):
def move(self, status):
return .5
# Always contributes all their wealth
class everything(Player):
def move(self, status):
return 1
# Always contributes 90% of their wealth
class ninety_percentile(Player):
def move(self, status):
return .9
# Always contributes 10% of their wealth
class tenth_percentile(Player):
def move(self, status):
return .1
# Contributes an uniformly random amount of their wealth
class uniformly_random(Player):
def move(self, status):
return random.random()
# Contributes a weird amount
class two_over_n_players(Player):
def __init__(self, name):
self.name = name
self.initial_number_of_players = None
def move(self, status):
if not self.initial_number_of_players:
self.initial_number_of_players = status.n_players
return (1 - (status.n_players-2)/self.initial_number_of_players)
# In the first random it contributes a random amount, otherwise it contributes everything
class everything_except_first_round(Player):
def __init__(self, name):
self.name = name
self.is_first_move = True
def move(self, status):
if self.is_first_move:
self.is_first_move = False
return random.random()
else:
return 1
# Contributes an amount that converges to 1 exponentially
class exponential_decay(Player):
def __init__(self, name):
self.name = name
self.initial_number_of_players = None
def move(self, status):
if not self.initial_number_of_players:
self.initial_number_of_players = status.n_players
return 1 - 0.3**(1 + self.initial_number_of_players - status.n_players)
# First round it contributes an uniformly random amount, after that it contributes the minimum needed to ensure survival.
class slightly_more(Player):
def __init__(self, name):
self.name = name
self.is_first_move = True
def move(self, status):
if self.is_first_move:
self.is_first_move = False
return random.random()
if status.n_players == 2:
return 1
else:
least_money = min(status.money.values())
return least_money/status.my_money(self) + 10e-9
class slightly_less(Player):
def __init__(self, name):
self.name = name
self.is_first_move = True
def move(self, status):
least_money = min(status.money.values())
if self.is_first_move:
self.is_first_move = False
return random.random()
if status.n_players == 2:
return 1
else:
least_money = min(status.money.values())
return least_money - 10e-9
class hrna_ox(Player):
def __init__(self, name, num_risk_rounds = 2, risk_min_frac = 0.4, risk_bias = 0.1):
self.name = name
self.risk_rounds = num_risk_rounds
self.risk_min_frac = risk_min_frac
self.risk_bias = risk_bias
self.current_round = 1
def move(self, status):
# For the first risk_rounds, adopt an agressive, risky strategy based on fraction of the minimum with bias. Otherwise, minimum
min_money = min(status.money.values())
if self.current_round <= self.risk_rounds:
# Risky bet!
bet = min_money * (self.risk_bias + self.risk_min_frac * random.random())
self.current_round += 1
else:
bet = min_money + 1e-9
return bet
###Output
_____no_output_____
###Markdown
Your playgroundEdit the code below with your own ideas :)
###Code
class your_custom_player(Player):
def __init__(self, name):
self.name = name
self.is_first_move = True
def move(self, status):
if self.is_first_move:
self.is_first_move = False
return random.random()*.9 + .1
if status.n_players == 2:
return 1
else:
least_money = min(status.money.values())
return least_money/status.my_money(self) + 10e-9
###Output
_____no_output_____
###Markdown
Setup variables
###Code
best_strategies = [
exponential_decay,
half,
everything,
# tenth_percentile,
ninety_percentile,
two_over_n_players,
uniformly_random,
everything_except_first_round,
slightly_more,
slightly_less,
your_custom_player,
hrna_ox
]
###Output
_____no_output_____
###Markdown
Sample Run
###Code
x = LiarsGame(
strategies=best_strategies
)
x.show_game_history_heatmap()
x.show_game_history_bar_plot()
###Output
_____no_output_____
###Markdown
Distribution of Rounds survivedThe more rounds one survives, the better the strategy is
###Code
runs = int(10e4)
strategy_names = [strategy.__name__ for strategy in best_strategies]
rounds_survived = pd.DataFrame(
columns=range(len(strategy_names)),
index=strategy_names,
data=0
)
for _ in tqdm(range(runs)):
eliminations = LiarsGame(strategies=best_strategies).eliminations
for (i, player) in enumerate(eliminations):
rounds_survived[i][player.name] += 1
rounds_survived.T.plot.bar(
stacked=True,
figsize=(20,5),
width=.5
).legend(
loc="center right",
bbox_to_anchor=(0, 0.5),
prop={'size': 18}
)
average_survival = {
name: rounds_survived.loc[name].dot(rounds_survived.columns)/(runs*(len(best_strategies)-1))
for name in strategy_names
}
rounds_survived["mean"] = rounds_survived.index.map(average_survival)
rounds_survived = rounds_survived.sort_values(by="mean", ascending=False)
print(f"Results from {runs} runs")
rounds_survived.style.background_gradient(cmap=cm)
###Output
Results from 100000 runs
###Markdown
Online PlayYou can use this section to play directly against the above strategies
###Code
import pprint
pp = pprint.PrettyPrinter()
class online_player(Player):
def move(self, status):
pp.pprint(status.money)
x = float(input())
return x/status.my_money(self)
best_strategies.append(online_player)
x = LiarsGame(
strategies=best_strategies
)
x.show_game_history_heatmap()
###Output
{everything: 100,
everything_except_first_round: 100,
exponential_decay: 100,
half: 100,
hrna_ox: 100,
ninety_percentile: 100,
online_player: 100,
slightly_less: 100,
slightly_more: 100,
two_over_n_players: 100,
uniformly_random: 100,
your_custom_player: 100}
|
content/lectures/lecture07/notebook/cs109b_Bayes.ipynb | ###Markdown
Title: Bayesian Example Notebook. Description: This notebook provides example code based on the lecture material. If you wish to run or edit the notebook, we recommend downloading it and running it either on your local machine or on JupyterHub.
###Code
import pymc3 as pm
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import pandas as pd
import scipy.stats as st
import seaborn as sns
import matplotlib.pyplot as plt
n_theta = 10000
# generate 10,000 values from Beta(2,5)
theta = np.random.beta(2,5,n_theta)
print("First five values of theta:\n\t", theta[0:5])
print("Sample mean:\n\t", np.mean(theta))
print("The 2.5% and 97.5% of quantiles:\n\t", np.percentile(theta,[2.5,97.5]))
plt.hist(theta,50)
plt.xlabel("Value of Theta")
plt.ylabel("Count")
plt.show()
# simulate y from posterior predictive distribution
y = np.random.binomial(1, theta, n_theta) # generate a heads/tails value from each of the 10,000 thetas
print("First 5 heads/tails values (tails=0, heads=1)\n\t", y[0:10])
print("Overall frequency of Tails and Heads, accounting for uncertainty about theta itself\n\t", np.bincount(y)/10000)
plt.hist(y, density=True)
plt.xticks([.05,.95],["Tails","Heads"])
plt.show()
###Output
First 5 heads/tails values (tails=0, heads=1)
[0 0 0 0 0 1 0 0 0 0]
Overall frequency of Tails and Heads, accounting for uncertainty about theta itself
[0.7163 0.2837]
###Markdown
Rejection sampling and Weighted bootstrapExample adapted from https://wiseodd.github.io/techblog/2015/10/21/rejection-sampling/
###Code
sns.set()
def h(x):
return st.norm.pdf(x, loc=30, scale=10) + st.norm.pdf(x, loc=80, scale=20)
def g(x):
return st.norm.pdf(x, loc=50, scale=30)
x = np.arange(-50, 151)
M = max(h(x) / g(x)) # for rejection sampling
###Output
_____no_output_____
###Markdown
h is a mixture of two normal densities (unnormalized), and g is a normal density with mean 50 and standard deviation 30.
###Code
plt.plot(x, h(x))
plt.show()
# Superimpose h and g on same plot
plt.plot(x,h(x))
plt.plot(x,g(x))
plt.show()
# Superimpose h and M*g on same plot - now M*g envelopes h
plt.plot(x,h(x))
plt.plot(x,M*g(x))
plt.show()
def rejection_sampling(maxiter=10000,sampsize=1000):
samples = []
sampcount = 0 # counter for accepted samples
maxcount = 0 # counter for proposal simulation
# sampcount/maxcount at any point in the iteration is the acceptance rate
while (sampcount < sampsize and maxcount < maxiter):
z = np.random.normal(50, 30)
u = np.random.uniform(0, 1)
maxcount += 1
if u <= h(z)/(M*g(z)):
samples.append(z)
sampcount += 1
print('Rejection rate is',100*(1-sampcount/maxcount))
if maxcount == maxiter: print('Maximum iterations achieved')
return np.array(samples)
s = rejection_sampling(maxiter=10000,sampsize=1000)
sns.displot(s)
# weighted bootstrap computation involving h and g
import random
def weighted_bootstrap(iter=1000,size=100):
w = []
y = []
for i in range(iter):
z = np.random.normal(50, 30)
y.append(z)
wz = h(z)/g(z)
w.append(wz)
v = random.choices(y,weights=w,k=size) # do not need to renormalize w
return np.array(v)
wb = weighted_bootstrap(iter=10000,size=1000)
sns.displot(wb)
###Output
_____no_output_____
###Markdown
Beetles
###Code
beetles_x = np.array([1.6907, 1.7242, 1.7552, 1.7842, 1.8113, 1.8369, 1.8610, 1.8839])
beetles_x_mean = beetles_x - np.mean(beetles_x)
beetles_n = np.array([59, 60, 62, 56, 63, 59, 62, 60])
beetles_y = np.array([6, 13, 18, 28, 52, 53, 61, 60])
beetles_N = np.array([8]*8)
from scipy.special import expit
expit(2)
with pm.Model() as beetle_model:
# The intercept (log probability of beetles dying when dose=0)
# is centered at zero, and wide-ranging (easily anywhere from 0 to 100%)
# If we wanted, we could choose something like Normal(-3,2) for a no-dose
# death rate roughly between .007 and .25
alpha_star = pm.Normal('alpha*', mu=0, sigma=100)
# the effect on the log-odds of each unit of the dose is wide-ranging:
# we're saying we've got little idea what the effect will be, and it could
# be strongly negative.
beta = pm.Normal('beta', mu=0, sigma=100)
# given alpha, beta, and the dosage, the probability of death is deterministic:
# it's the inverse logit of the intercept+slope*dosage
# Because beetles_x has 8 entries, we end up with 8 p_i values
p_i = pm.Deterministic('$P_i$', pm.math.invlogit(alpha_star + beta*beetles_x_mean))
# finally, the number of bettles we see killed is Binomial(n=number of beetles, p=probability of death)
deaths = pm.Binomial('obs_deaths', n=beetles_n, p=p_i, observed=beetles_y)
trace = pm.sample(2000, tune=2000, target_accept=0.9)
pm.traceplot(trace, compact=False);
def trace_summary(trace, var_names=None):
if var_names is None:
var_names = trace.varnames
quants = [0.025,0.25,0.5,0.75,0.975]
colnames = ['mean', 'sd', *["{}%".format(x*100) for x in quants]]
rownames = []
series = []
for cur_var in var_names:
var_trace = trace[cur_var]
if var_trace.ndim == 1:
vals = [np.mean(var_trace, axis=0), np.std(var_trace, axis=0), *np.quantile(var_trace, quants, axis=0)]
series.append(pd.Series(vals, colnames))
rownames.append(cur_var)
else:
for i in range(var_trace.shape[1]):
cur_col = var_trace[:,i]
vals = [np.mean(cur_col, axis=0), np.std(cur_col, axis=0), *np.quantile(cur_col, quants, axis=0)]
series.append(pd.Series(vals, colnames))
rownames.append("{}[{}]".format(cur_var,i))
return pd.DataFrame(series, index=rownames)
trace_summary(trace)
###Output
_____no_output_____
###Markdown
We can also plot the density each chain explored. Any major deviations between chains are signs of difficulty converging.
###Code
for x in trace.varnames:
pm.plot_forest(trace, var_names=[x], combined=True)
###Output
_____no_output_____
###Markdown
In addition to the above summaries of the distribution, pymc3 has statistics intended to summarize the quality of the samples. The most common of these is r_hat, which measures whether the different chains seem to be exploring the same space or if they're stuck in different spaces. R-hat above 1.3 is a strong sign the sample isn't good yet. Values close to 1 are ideal.
###Code
pm.summary(trace)
###Output
_____no_output_____
###Markdown
Sleep Study
###Code
import pandas as pd
sleepstudy = pd.read_csv("sleepstudy.csv")
sleepstudy
# adding a column that numbers the subjects from 0 to n
raw_ids = np.unique(sleepstudy['Subject'])
raw2newid = {x:np.where(raw_ids == x)[0][0] for x in raw_ids}
sleepstudy['SeqSubject'] = sleepstudy['Subject'].map(raw2newid)
sleepstudy
with pm.Model() as sleep_model:
# In this model, we're going to say the alphas (individuals' intercepts; their starting reaction time)
# and betas (individuals' slopes; how much worse they get with lack of sleep) are normally distributed.
# We'll specify that we're certain about the mean of those distribution [more on that later], but admit
# we're uncertain about how much spread there is (i.e. uncertain about the SD). Tau_alpha and Tau_beta
# will be the respective SD.
#
# Of course, the SDs must be positive (negative SD isn't mathematically possible), so we draw them from
# a Gamma, which cannot ever output negative numbers. Here, we use alpha and beta values that spread the
# distribution: "the SD could be anything!". If we had more intuition (e.g. "the starting reaction times can't
# have SD above 3,000") we would plot Gamma(a,b) and tune the parameters so that there was little mass
# above 3,000, then use those values below)
tau_alpha = pm.Gamma('tau_alpha', alpha=.001, beta=.001)
tau_beta = pm.Gamma('tau_beta', alpha=.001, beta=.001)
# Across the population of people, we suppose that
# the slopes are normally distributed, as are the intercepts,
# and the two are drawn independently
#
# (Here, we hard-code assumed means, but we don't have to.
# In general, these should be set from our pre-data intuition,
# rather than from plots/exploration of the data)
alpha = pm.Normal('alpha', mu=300, tau=tau_alpha, shape=len(raw_ids))
beta = pm.Normal('beta', mu=10, tau=tau_beta, shape=len(raw_ids))
# Remember: there's only one alpha/beta per person, but
# we have lots of observations per person. The below
# builds a vector with one entry per observation, recording
# the alpha/beta we want to use with that observation.
#
# That is, the length is 180, but it only has 17 unique values,
# matching the 17 unique patients' personal slopes or intercepts
intercepts = alpha[sleepstudy['SeqSubject']]
slopes = beta[sleepstudy['SeqSubject']]
# now we have the true/predicted response time for each observation (each row of original data)
# (Here we use pm.Deterministic to signal that this is something we'll care about)
mu_i = pm.Deterministic('mu_i', intercepts + slopes*sleepstudy['Days'])
# The _observed_ values are noisy versions of the hidden true values, however!
# Specifically, we model them as a normal at the true value and single unknown variance
# (one explanation: we're saying the measurement equipment adds normally-distributed noise tau_obs
# so noise doesn't vary from observation to observation or person to person: there's just one universal
# noise level)
tau_obs = pm.Gamma('tau_obs', 0.001, 0.001)
obs = pm.Normal('observed', mu=mu_i, tau=tau_obs, observed=sleepstudy['Reaction'])
trace = pm.sample(2000, tune=2000, target_accept=0.9)
# this command can take a few minutes to finish... or never :-/
#pm.traceplot(trace);
trace_summary(trace, var_names=['tau_alpha', 'tau_beta', 'alpha', 'beta', 'tau_obs'])
pm.summary(trace, var_names=['tau_alpha', 'tau_beta', 'alpha', 'beta', 'tau_obs'])
import statsmodels.formula.api as sm
import seaborn as sns
from matplotlib import gridspec
ymin,ymax = np.min(sleepstudy["Reaction"]),np.max(sleepstudy["Reaction"])
plt.figure(figsize=(11,8.5))
gs = gridspec.GridSpec(3, 6)
gs.update(wspace=0.5, hspace=0.5)
for i, subj in enumerate(np.unique(sleepstudy['Subject'])):
ss_extract = sleepstudy.loc[sleepstudy['Subject']==subj]
ss_extract_ols = sm.ols(formula="Reaction~Days",data=ss_extract).fit()
#new subplot
subplt = plt.subplot(gs[i])
#plot without confidence intervals
sns.regplot(x='Days', y='Reaction', ci=None, data=ss_extract).set_title('Subject '+str(subj))
if i not in [0,6,12]:
plt.ylabel("")
i+=1
subplt.set_ylim(ymin,ymax)
_ = plt.figlegend(['Estimated from each subject alone'],loc = 'lower center', ncol=6)
_ = plt.show()
plt.figure(figsize=(11,8.5))
for i, subj in enumerate(np.unique(sleepstudy['Subject'])):
ss_extract = sleepstudy.loc[sleepstudy['Subject']==subj]
#new subplot
subplt = plt.subplot(gs[i])
#plot without confidence intervals
sns.regplot(x='Days', y='Reaction', ci=None, data=ss_extract).set_title('Subject '+str(subj))
sns.regplot(x='Days', y='Reaction', ci=None, scatter=False, data=sleepstudy)
if i not in [0,6,12]:
plt.ylabel("")
i+=1
subplt.set_ylim(ymin,ymax)
_ = plt.figlegend(['Estimated from each subject alone','Pooling all subjects'],loc = 'lower center', ncol=6)
_ = plt.show()
plt.figure(figsize=(11,8.5))
subj_arr = np.unique(sleepstudy['Subject'])
for i, subj in enumerate(subj_arr):
ss_extract = sleepstudy.loc[sleepstudy['Subject']==subj]
#new subplot
subplt = plt.subplot(gs[i])
#plot without confidence intervals
sns.regplot(x='Days', y='Reaction', ci=None, data=ss_extract).set_title('Subject '+str(subj))
sns.regplot(x='Days', y='Reaction', ci=None, scatter=False, data=sleepstudy)
subj_num = int(np.where(subj_arr==subj)[0])
subjects_avg_intercept = np.mean(trace['alpha'][:,i])
subjects_avg_slope = np.mean(trace['beta'][:,i])
hmodel_fit = [subjects_avg_intercept + subjects_avg_slope*x for x in range(-1,11)]
sns.lineplot(x=range(-1,11),y=hmodel_fit)
if i not in [0,6,12]:
plt.ylabel("")
i+=1
subplt.set_ylim(ymin,ymax)
_ = plt.figlegend(['Estimated from each subject alone','Pooling all subjects','Hierarchical (partial pooling)'],loc = 'lower center', ncol=6)
_ = plt.show()
model_predictions = trace['mu_i'].mean(axis=0)
obs_reactions = sleepstudy['Reaction']
plt.figure(figsize=(11,8.5))
plt.scatter(sleepstudy['Reaction'], model_predictions)
plt.plot(plt.xlim(), plt.ylim(), c='black')
plt.xlabel("Observed Reaction Time (ms)")
plt.ylabel("Predicted Reaction Time [Mean of Posterior] (ms)")
plt.title("Observed and Fitted Reaction Times from . Bayesian Hierarchical Model")
plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/logistic-regression-pH-white-checkpoint.ipynb | ###Markdown
Logistic Regression
Logistic regression is a statistical method for predicting binary outcomes from data. Examples of this are "yes" vs "no" or "young" vs "old". These are categories that translate to a probability of being a 0 or a 1. Source: Logistic Regression. We can calculate logistic regression by adding an activation function as the final step to our linear model. This converts the linear regression output to a probability.
###Code
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
import pandas as pd
import os
###Output
_____no_output_____
###Markdown
Generate some data
###Code
df = pd.read_csv(os.path.join(".", "datasets", "winequality-white.csv"))
df.head()
y = df["quality"]
y
X = df.drop("quality", axis=1)
X.head()
print(f"Labels: {y[:10]}")
print(f"Data: {X[:10]}")
# Visualizing both classes
#plt.scatter(X[:, 0], X[:, 1], c=y)
y_arr = y.to_numpy()
y_arr
X_arr = X.to_numpy()
X_arr
X_arr[:,8]
###Output
_____no_output_____
###Markdown
Split our data into training and testing
###Code
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_arr, y_arr, random_state=1)
from sklearn.linear_model import LogisticRegression
classifier = LogisticRegression(max_iter=40000)
classifier
classifier.fit(X_train, y_train)
print(f"Training Data Score: {classifier.score(X_train, y_train)}")
print(f"Testing Data Score: {classifier.score(X_test, y_test)}")
X = X_arr
X
y = y_arr
y
# Review for Volatile Acidity
import numpy as np
plt.scatter(X[:, 8], y, c=y)
print(X, y)
predictions = classifier.predict(X_test)
pd.DataFrame({"Prediction": predictions, "Actual": y_test})
###Output
_____no_output_____ |
solutions/intro-to-python.ipynb | ###Markdown
Introduction to Python for Natural Language Processing
In this notebook, we're going to go over some of the basics of Python. This is so that in later sessions we can focus on the big ideas behind the methods, rather than the implementation details.
[Data Types & Operations](section 1)
[A few tricks up your sleeve](section 2)
Time
- Teaching: 20 minutes
- Exercises: 15 minutes
Python code
In the data directory, you will find a text file of an English dictionary. We can use this to count how many English words end in "ing".
###Code
dictionary_file = 'data/dictionary.txt'
total = 0
for line in open(dictionary_file):
word = line.strip()
if word.endswith('ing'):
total = total + 1
print(total)
###Output
_____no_output_____
###Markdown
Data types & Operations Arithmetic
###Code
5+2
print(5+2)
print(5-2)
print(5*2)
print(5/2)
5>2
###Output
_____no_output_____
###Markdown
Variable assignmentAssigning variables is something that we do all the time in programming. These aren't quite like the variables from high school algebra, where x represents an unknown to solve for. Instead these are like notes to ourselves that we want to save some value(s) for later use.Note that the equals sign is directional, like an arrow, telling the computer to give a certain value to a certain label.
###Code
# 'a' is being given the value 2; 'b' is given 5
a = 2
b = 5
# Let's perform an operation on the variables
a+b
# Variables can have many different kinds of names
this_number = 2
b/this_number
###Output
_____no_output_____
###Markdown
StringsIn Python, human language text gets represented as a string. These contain sequential sets of characters and they are offset by quotation marks, either double (") or single (').We will explore different kinds of operations in Python that are specific to human language objects, but it is useful to start by trying to see them as the computer does, as numerical representations.
###Code
# The iconic string
print("Hello, World!")
# Assign these strings to variables
a = "Hello"
b = 'World'
# Try out arithmetic operations.
# When we add strings we call it 'concatenation'
print(a+b)
print(a*5)
# Unlike a number that consists of a single value, a string is an ordered
# sequence of characters. We can find out the length of that sequence.
len("Hello, World!")
###Output
_____no_output_____
###Markdown
ListsThe _numbers_ and _strings_ we have just looked at are the two basic data types that we will focus our attention on in this workshop. When we are working with just a few numbers or strings, it is easy to keep track of them, but as we collect more we will want a system to organize them.One such organizational system is a _list_. This contains values (regardless of type) in order, and we can perform operations on it very similarly to the way we did with numbers.
###Code
# A list in which each element is a string
['Call', 'me', 'Ishmael']
# Let's assign a couple lists to variables
list1 = ['Call', 'me', 'Ishmael']
list2 = ['In', 'the', 'beginning']
###Output
_____no_output_____
###Markdown
Challenge: What will happen when we run the following cell?
###Code
print(list1+list2)
print(list1*5)
# As with a string, we can find out the length of a list
len(list1)
# Sometimes we just want a single value from the list at a time
print(list1[0])
print(list1[1])
print(list1[2])
# Or maybe we want the first few
print(list1[0:2])
print(list1[:2])
# Of course, lists can contain numbers or even a mix of numbers and strings
list3 = [7,8,9]
list4 = [7,'ate',9]
# And python is smart with numbers, so we can add them easily!
sum(list3)
###Output
_____no_output_____
###Markdown
Challenge
- Concatenate 'list1' and 'list2' into a single list.
- Retrieve the third element from the combined list.
- Retrieve the fourth through sixth elements from the combined list.
(One possible solution is sketched in the next code cell.)
A few tricks up your sleeve
String Methods
The creators of Python recognize that human language has many important yet idiosyncratic features, so they have tried to make it easy for us to identify and manipulate them. For example, in the demonstration at the very beginning of the workshop, we referred to the idea of the suffix: the final letters of a word tell us something about its grammatical role and potentially the author's argument. We can analyze or manipulate certain features of a string using its methods. These are basically internal functions that every string automatically possesses. Note that even though a method may transform the string at hand, it doesn't change it permanently!
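###Code
# One possible solution to the list challenge above (for reference)
combined = list1 + list2
print(combined[2])     # third element
print(combined[3:6])   # fourth through sixth elements
###Output
_____no_output_____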
###Code
# Let's assign a variable to perform methods upon
greeting = "Hello, World!"
# We saw the 'endswith' method at the very beginning
# Note the type of output that gets printed
greeting.startswith('H'), greeting.endswith('d')
# We can check whether the string is a letter or a number
this_string = 'f'
this_string.isalpha()
# When there are multiple characters, it checks whether *all*
# of the characters belong to that category
greeting.isalpha(), greeting.isdigit()
# Similarly, we can check whether the string is lower or upper case
greeting.islower(), greeting.isupper(), greeting.istitle()
# Sometimes we want not just to check, but to change the string
greeting.lower(), greeting.upper()
# The case of the string hasn't changed!
greeting
# But if we want to permanently make it lower case we re-assign it
greeting = greeting.lower()
greeting
# Oh hey. And strings are kind of like lists, so we can slice them similarly
greeting[:3]
# Strings may be like lists of characters, but as humans we often treat them as
# lists of words. We tell the computer to can perform that conversion.
greeting.split()
###Output
_____no_output_____
###Markdown
Challenge
- Return the second through eighth characters in 'greeting'
Challenge
Split the string below into a list of words and assign this to a new variable.
_NB: A backslash at the end of a line allows a string to continue unbroken onto the next._
###Code
new_string = "It, is a truth universally acknowledged, that a single \
man in possession of a good fortune must be in want of a wife."
###Output
_____no_output_____
###Markdown
List Comprehension
You can think of list comprehensions as list filters. Often, we don't need every value in a list, just a few that fulfill certain criteria.
###Code
# 'list1' had contained three words, two of which were in title case.
# We can automatically return those words using a list comprehension
[word for word in list1 if word.istitle()]
# Or we can include all the words in the list but just take their first letters
[word[0] for word in list1]
###Output
_____no_output_____
###Markdown
Challenge
Using the list of words you produced by splitting 'new_string', create a new list that contains only the words whose last letter is "e".
Challenge
Create a new list that contains the first letter of each word.
Challenge
Create a new list that contains only words longer than two letters.
BONUS: Exploratory Natural Language Processing Tasks
Now that we have some of Python's basics in our toolkit, we can immediately perform the kinds of tasks that are the digital humanist's bread and butter. When we first meet a text in the wild, we often wish to find out a little about it before digging in deeply, so we start with simple questions like "How many words are in this text?" or "How long is the average word?"
Challenge
Run the cell below to read in the text of "Pride and Prejudice" and answer the following questions (one possible solution is sketched at the end of that cell):
- How many words are in the novel?
- How many words in the novel appear in title case?
- Approximately how long is the average word in the novel?
###Code
austen_file = 'data/pride-and-prejudice.txt'
with open(austen_file) as f:
contents = f.read()
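
# One possible solution sketch (added for reference)
words = contents.split()
print(len(words))                                # how many words are in the novel
print(sum(w.istitle() for w in words))           # how many words appear in title case
print(sum(len(w) for w in words) / len(words))   # approximate average word length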
###Output
_____no_output_____ |
My notebooks/T1 - 3 - Data Cleaning - Plots.ipynb | ###Markdown
Plots and data visualization
###Code
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
mainpath = "/Users/diegogarcia-viana/Desktop/Curso machine learning & data scientist/python-ml-course/datasets/"
filename = "customer-churn-model/Customer Churn Model.txt"
data = pd.read_csv(os.path.join(mainpath, filename))
data.head()
%matplotlib inline
# savefig("path_where_to_save_the_image")
###Output
_____no_output_____
###Markdown
Scatter Plot (point cloud)
###Code
data.plot(kind = "scatter", x = "Day Mins", y = "Day Charge")
data.plot(kind = "scatter", x = "Night Mins", y = "Night Charge")
figure, axs = plt.subplots(2,2, sharey = True, sharex = True)
data.plot(kind = "scatter", x = "Day Mins", y = "Day Charge", ax = axs[0][0])
data.plot(kind = "scatter", x = "Night Mins", y = "Night Charge", ax = axs[0][1])
data.plot(kind = "scatter", x = "Day Calls", y = "Day Charge", ax = axs[1][0])
data.plot(kind = "scatter", x = "Night Calls", y = "Night Charge", ax = axs[1][1])
###Output
_____no_output_____
###Markdown
Frequency histograms
###Code
#plt.hist(data["Day Calls"], bins = 20) #Bins es el número de rangos (escalones) en los que queremos subdividir los datos
k = int(np.ceil(1 + np.log2(3333))) #np.ceil trunca el resultado (12 y pico) a 13
plt.hist(data["Day Calls"], bins = k) #bins = [0,30,60,...,200]
plt.xlabel("Número de llamadas al día")
plt.ylabel("Frecuencia")
plt.title("Histograma de número de llamadas al día")
###Output
_____no_output_____
###Markdown
Sturges' rule tells us how many bins are needed in a histogram. Boxplot: box-and-whisker diagram
###Code
plt.boxplot(data["Day Calls"])
plt.ylabel("Número de llamadas diarias")
plt.title("Boxplot de las llamadas diarias")
data["Day Calls"].describe()
IQR = data["Day Calls"].quantile(0.75) - data["Day Calls"].quantile(0.25)  # interquartile range (IQR)
data["Day Calls"].quantile(0.25) - 1.5*IQR
data["Day Calls"].quantile(0.75) + 1.5*IQR
###Output
_____no_output_____ |
module3-databackedassertions/Xander_Bennett_DS7_LS_DS_113_Making_Data_backed_Assertions_Assignment.ipynb | ###Markdown
Lambda School Data Science - Making Data-backed AssertionsThis is, for many, the main point of data science - to create and support reasoned arguments based on evidence. It's not a topic to master in a day, but it is worth some focused time thinking about and structuring your approach to it. Assignment - what's going on here?Consider the data in `persons.csv` (already prepared for you, in the repo for the week). It has four columns - a unique id, followed by age (in years), weight (in lbs), and exercise time (in minutes/week) of 1200 (hypothetical) people.Try to figure out which variables are possibly related to each other, and which may be confounding relationships.Try and isolate the main relationships and then communicate them using crosstabs and graphs. Share any cool graphs that you make with the rest of the class in Slack!
###Code
# TODO - your code here
# Use what we did live in lecture as an example
# HINT - you can find the raw URL on GitHub and potentially use that
# to load the data with read_csv, or you can upload it yourself
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
!pip freeze
!pip install pandas==0.23.4
persons_url = 'https://raw.githubusercontent.com/xander-bennett/DS-Unit-1-Sprint-1-Dealing-With-Data/master/module3-databackedassertions/persons.csv'
persons_data = pd.read_csv(persons_url)
# Making sure we 'got' something
persons_data.head()
# Going to cut the labels from the frame to make it easier
age = pd.cut(persons_data['age'], 6)
exercise = pd.cut(persons_data['exercise_time'], 5)
weight = pd.cut(persons_data['weight'], 5)
ct = pd.crosstab([age, weight], exercise)
import numpy as np
import matplotlib.pyplot as plt  # pyplot (not the bare matplotlib package) is needed for plt.figure below
ct.plot();
persons_data.plot();
ct.plot(kind='bar');
# That's messy. Gonna try another one
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
plt.gca().patch.set_facecolor('Grey')
ax.w_xaxis.set_pane_color((0.8, 0.8, 0.8, 1.0))
ax.w_yaxis.set_pane_color((0.8, 0.8, 0.8, 1.0))
ax.w_zaxis.set_pane_color((0.8, 0.8, 0.8, 1.0))
xs = persons_data['weight']
ys = persons_data['age']
zs = persons_data['exercise_time']
ax.scatter(xs, ys, zs, s=50, alpha=0.6, c = 'Red', edgecolors='White')
ax.set_xlabel('Weight')
ax.set_ylabel('Age')
ax.set_zlabel('Exercise Time')
plt.show()
###Output
_____no_output_____ |
doc/nb/MHW_Cube.ipynb | ###Markdown
Cube of MHW_Events
###Code
# imports
import numpy as np
import os
from matplotlib import pyplot as plt
from datetime import date
import pandas
import sqlalchemy
import iris
###Output
_____no_output_____
###Markdown
Load
###Code
mhw_file = '/home/xavier/Projects/Oceanography/MHW/db/mhws_allsky_defaults.db'
tst_file = '/home/xavier/Projects/Oceanography/MHW/db/test_mhws_allsky.db'
#
engine = sqlalchemy.create_engine('sqlite:///'+mhw_file)
connection = engine.connect()
connection
mhw_events = pandas.read_sql_table('MHW_Events', con=engine,
columns=['date', 'lon', 'lat', 'duration' ,
'ievent', 'time_peak', 'time_start'])
mhw_events.head()
mhw_events = mhw_events.set_index('date')
mhw_events.head()
###Output
_____no_output_____
###Markdown
Size the Cube Load climate for spatial dimensions
###Code
climate_file = '/home/xavier/Projects/Oceanography/MHW/db/NOAA_OI_climate_1983-2012.nc'
cube = iris.load(climate_file)
climate = cube[0]
climate
###Output
_____no_output_____
###Markdown
lat, lon
###Code
lat = climate.coord('latitude').points
lat[0:5]
lon = climate.coord('longitude').points
lon[0:5]
###Output
_____no_output_____
###Markdown
Time
###Code
min_time = np.min(mhw_events['time_start'])
min_time
date(1982,1,1).toordinal()
max_time = np.max(mhw_events['time_start'] + mhw_events['duration'])
max_time
date(2019,12,31).toordinal()
max_time-min_time
ntimes = date(2019,12,31).toordinal() - date(1982,1,1).toordinal() + 1
ntimes
###Output
_____no_output_____
###Markdown
Cube Init
###Code
cube = np.zeros((720,1440,ntimes), dtype=bool)
###Output
_____no_output_____
###Markdown
Do It!
###Code
ilon = ((mhw_events['lon'].values-0.125)/0.25).astype(np.int32)
ilon
jlat = ((mhw_events['lat'].values+89.975)/0.25).astype(np.int32)
jlat
tstart = mhw_events['time_start'].values
durs = mhw_events['duration'].values
cube[:] = False
for kk in range(len(mhw_events)):
# Convenience
#iilon, jjlat, tstart, dur = ilon[kk], jlat[kk], time_start[kk], durations[kk]
#
if kk % 1000000 == 0:
print('kk = {}'.format(kk))
cube[jlat[kk], ilon[kk], tstart[kk]-min_time:tstart[kk]-min_time+durs[kk]] = True
###Output
kk = 0
kk = 1000000
kk = 2000000
kk = 3000000
kk = 4000000
kk = 5000000
kk = 6000000
kk = 7000000
kk = 8000000
kk = 9000000
kk = 10000000
kk = 11000000
kk = 12000000
kk = 13000000
kk = 14000000
kk = 15000000
kk = 16000000
kk = 17000000
kk = 18000000
kk = 19000000
kk = 20000000
kk = 21000000
kk = 22000000
kk = 23000000
kk = 24000000
kk = 25000000
kk = 26000000
kk = 27000000
kk = 28000000
kk = 29000000
kk = 30000000
kk = 31000000
kk = 32000000
kk = 33000000
kk = 34000000
kk = 35000000
kk = 36000000
kk = 37000000
kk = 38000000
kk = 39000000
kk = 40000000
kk = 41000000
kk = 42000000
kk = 43000000
kk = 44000000
kk = 45000000
kk = 46000000
kk = 47000000
kk = 48000000
kk = 49000000
kk = 50000000
kk = 51000000
kk = 52000000
kk = 53000000
kk = 54000000
kk = 55000000
kk = 56000000
kk = 57000000
###Markdown
Save
###Code
np.savez_compressed('/home/xavier/Projects/Oceanography/MHW/db/MHWevent_cube.npz', cube=cube)
###Output
_____no_output_____
###Markdown
Load
###Code
tmp = np.load('/home/xavier/Projects/Oceanography/MHW/db/MHWevent_cube.npz')
tmp2 = tmp['cube']
tmp2.itemsize
np.sum(tmp2)
###Output
_____no_output_____ |
scratch/Lecture10.ipynb | ###Markdown
`numpy.vectorize` Threading and multi-core processing
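###Markdown
The title mentions `numpy.vectorize`; as a quick added illustration (not part of the original timing experiments below), it wraps a scalar function so it can be applied elementwise to arrays. Note that it is a convenience wrapper, not a performance optimization:
###Code
import numpy as np

def add_and_scale(x, y):
    # plain scalar function
    return (x + y) * 0.5

vectorized = np.vectorize(add_and_scale)
vectorized(np.arange(5), np.arange(5))   # elementwise application over arrays
###Output
_____no_output_____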
###Code
import numpy as np
import multiprocessing as mp
import matplotlib.pyplot as plt
from concurrent.futures import ThreadPoolExecutor
from joblib import Parallel, delayed

def plot_one(data, name):
xs, ys = data.T
plt.scatter(xs, ys, s=1, edgecolor=None)
plt.savefig('%s.png' % name)
data = np.random.random((10, 10000, 2))
###Output
_____no_output_____
###Markdown
Single core
###Code
%%time
for i, M in enumerate(data):
plot_one(M, i)
###Output
CPU times: user 12.8 s, sys: 103 ms, total: 12.9 s
Wall time: 12.8 s
###Markdown
Threads
###Code
%%time
args = [(x, i) for i, x in enumerate(data)]
with ThreadPoolExecutor() as pool:
pool.map(lambda x: plot_one(*x), args)
###Output
CPU times: user 8.83 s, sys: 1.27 s, total: 10.1 s
Wall time: 9.4 s
###Markdown
Processes
###Code
%%time
args = [(x, i) for i, x in enumerate(data)]
with mp.Pool() as pool:
pool.starmap(plot_one, args)
###Output
CPU times: user 29.4 ms, sys: 93 ms, total: 122 ms
Wall time: 3.49 s
###Markdown
Parallel comprehensions with `joblib`
###Code
%%time
Parallel(n_jobs=-1)(delayed(plot_one)(x, i) for i, x in enumerate(data))
pass
###Output
CPU times: user 139 ms, sys: 114 ms, total: 253 ms
Wall time: 3.72 s
|
COAD-DRD/scripts/top50_Table.ipynb | ###Markdown
Create dynamic table for Top 50 interactions for COAD
###Code
import ipywidgets as widgets
from ipywidgets import interact, interact_manual
import pandas as pd
import numpy as np
# get data
database = "../db/dbCOAD-DRD.csv"
df = pd.read_csv(database)
df
# order by AE and get only AE < -8.5 kcal/mol
df_repurposing = df[df['AE'] <= -8.5].sort_values(by='AE')  # sort so that head(50) really keeps the best 50
# get only the best 50 interactions
df_repurposing = df_repurposing.head(50)
# use other order of columns
df_repurposing = df_repurposing[['AE', 'HGNC_symbol', 'DrugName', 'ProteinID', 'DrugCID', 'Drug']]
df_repurposing
###Output
_____no_output_____
###Markdown
From 23272 interactions (pairs of PDB - compound), we selected only the best 50 interactions with AE < -8.5 kcal/mol.
###Code
# data export to HTML
print(df_repurposing.to_html())
# saving as HTML file the same result
fout = open("../extras/top50_table.html","w")
fout.write(df_repurposing.to_html(index=False))
fout.close()
# counting the elements
print('No of genes:', len(list(set(df_repurposing['HGNC_symbol']))))
print('No of PDBs:', len(list(set(df_repurposing['ProteinID']))))
print('No of drug names:', len(list(set(df_repurposing['DrugName']))))
print('No of drug compounds:', len(list(set(df_repurposing['DrugCID']))))
# save dataset with Top50
df_repurposing.to_csv("../db/dbCOAD-DRD_Top50.csv", index=False)
###Output
_____no_output_____ |
module-3/Supervised-Learning/your-code/.ipynb_checkpoints/main-checkpoint.ipynb | ###Markdown
Before you start:- Read the README.md file- Comment as much as you can and use the resources in the README.md file- Happy learning!
###Code
# Import your libraries:
%matplotlib inline
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from datetime import timedelta
from sklearn.metrics import accuracy_score
# accuracy_score(pred_y, y)
###Output
_____no_output_____
###Markdown
In this lab, we will explore a dataset that describes websites with different features and labels them either benign or malicious. We will use supervised learning algorithms to figure out what feature patterns malicious websites are likely to have and use our model to predict malicious websites. Challenge 1 - Explore The Dataset. Let's start by exploring the dataset. First load the data file:
###Code
websites = pd.read_csv('../data/website.csv')
###Output
_____no_output_____
###Markdown
Explore the data from a bird's-eye view. You should already be very familiar with the procedures by now, so we won't provide the instructions step by step. Reflect on what you did in the previous labs and explore the dataset. Things you'll be looking for:
* What does the dataset look like?
* What are the data types?
* Which columns contain the features of the websites?
* Which column contains the feature we will predict? What is the code standing for benign vs malicious websites?
* Do we need to transform any of the columns from categorical to ordinal values? If so, what are these columns?
Feel free to add additional cells for more exploration. Make sure to comment on what you find!
###Code
# Your code here
websites.shape
'''
sense check
do value count remove url, 2nd keep, drop charset or make 3-4 categories, server make 4 categories,
check number of uniques and based on categories -from county, drop whois state,
decide on extracted years range before 1995 and after you dod that then drop column,
tcp...do heatmap
'''
'''
how many 1s and 0s are there:
from the original dataset-
216 1s - malicious
1563 0s - benign
imbalanced data set: benign outnumbers malicious almost 7 to 1 (overrepresented), so we may have to balance the data.
'''
websites.head()
websites.dtypes
'''
date time type examples
from datetime import timedelta
d = timedelta(microseconds=-1)
(d.days, d.seconds, d.microseconds)
(-1, 86399, 999999)
pd.to_timedelta(df.hour_in + ':00', errors='coerce')
df['date'] = pd.to_datetime(df['date'], errors='coerce')
print (df)
df['date'] = pd.to_datetime(df['date'], format="%m/%d/%Y")
print (df)
'''
# CONVERTING TO DATETIME WITH SPECIFIED FORMAT. REPLACING COLUMN INFO. UTC IS UNIVERSAL TIME
websites['WHOIS_REGDATE'] = pd.to_datetime(websites['WHOIS_REGDATE'], format='%d/%m/%Y %H:%M', errors='coerce', utc=True)
websites
websites['WHOIS_UPDATED_DATE'] = pd.to_datetime(websites['WHOIS_UPDATED_DATE'], format='%d/%m/%Y %H:%M', errors='coerce', utc=True)
websites
# Dropped newly made columns
# websites.drop(['NEW_WHOIS_REGDATE', 'NEW_WHOIS_UPDATED_DATE', 'NEW_WHOIS_REGDATE2'], axis=1, inplace=True)
# updated WHOIS_REGDATE & WHOIS_UPDATED_DATE
websites.dtypes
websites.head()
# Your comment here
###Output
_____no_output_____
###Markdown
Next, evaluate if the columns in this dataset are strongly correlated.In class, we discussed that we are concerned if our dataset has strongly correlated columns because if this is the case we need to choose certain ML algorithms instead of others. We need to evaluate this for our dataset now.Luckily, most of the columns in this dataset are ordinal which makes things a lot easier for us. In the cells below, evaluate the level of collinearity of the data.We provide some general directions for you to consult in order to complete this step:1. You will create a correlation matrix using the numeric columns in the dataset.1. Create a heatmap using `seaborn` to visualize which columns have high collinearity.1. Comment on which columns you might need to remove due to high collinearity.
###Code
# Your code here
numericals = websites._get_numeric_data()
numericals
len(numericals.columns)
# use corr function, will untilize number of numerical columns
corr_matrix = numericals.corr()
# set fig size to have better readibility of heatmap
fig, ax = plt.subplots(figsize=(14,14))
heatmap = sns.heatmap(corr_matrix, annot =True, ax=ax)
heatmap
numericals.shape
# Your comment here
# high collinearity in columns 'REMOTE_APP_PACKETS' & 'SOURCE_APP_PACKETS' & 'TCP_CONVERSATION_EXCHANGE' & 'APP_PACKETS'
# TCP_CONVERSATION_EXCHANGE & SOURCE_APP_PACKETS have correlation 1
# TCP_CONVERSATION_EXCHANGE & APP_PACKETS have correlation 1
# APP_BYTES & REMOTE_APP_BYTES have correlation 1
# SOURCE_APP_PACKETS & APP_PACKETS have correlation 1
# top 3 that appear twice: TCP_CONVERSATION_EXCHANGE / SOURCE_APP_PACKETS / APP_PACKETS
# while
# REMOTE_APP_PACKETS & SOURCE_APP_PACKETS have 0.99
# TCP_CONVERSATION_EXCHANGE & REMOTE_APP_PACKETS have 0.99
# APP_PACKETS & REMOTE_APP_PACKETS have 0.99
# 'REMOTE_APP_PACKETS' touches all 3
###Output
_____no_output_____
###Markdown
Challenge 2 - Remove Column Collinearity.From the heatmap you created, you should have seen at least 3 columns that can be removed due to high collinearity. Remove these columns from the dataset.Note that you should remove as few columns as you can. You don't have to remove all the columns at once. But instead, try removing one column, then produce the heatmap again to determine if additional columns should be removed. As long as the dataset no longer contains columns that are correlated for over 90%, you can stop. Also, keep in mind when two columns have high collinearity, you only need to remove one of them but not both.In the cells below, remove as few columns as you can to eliminate the high collinearity in the dataset. Make sure to comment on your way so that the instructional team can learn about your thinking process which allows them to give feedback. At the end, print the heatmap again.
###Code
numericals.columns
# Your code here
# 'REMOTE_APP_PACKETS' touch all 3
numericals = numericals.drop(columns='TCP_CONVERSATION_EXCHANGE')
numericals
sns.heatmap(numericals.corr(), annot=True)
# Your comment here
# Print heatmap again
###Output
_____no_output_____
###Markdown
Challenge 3 - Handle Missing ValuesThe next step would be handling missing values. **We start by examining the number of missing values in each column, which you will do in the next cell.**
###Code
# Your code here
numericals.isnull().sum()
len(numericals.isnull().sum())
numericals.dtypes
###Output
_____no_output_____
###Markdown
If you remember in the previous labs, we drop a column if the column contains a high proportion of missing values. After dropping those problematic columns, we drop the rows with missing values. In the cells below, handle the missing values from the dataset. Remember to comment the rationale of your decisions.
###Code
# Your code here
numericals["CONTENT_LENGTH"].isnull().sum()/len(numericals["CONTENT_LENGTH"])
# Your comment here
# Basically half of the CONTENT_LENGTH column is filled with NaN values. So, we're dropping the whole column:
numericals = numericals.drop(columns = "CONTENT_LENGTH")
numericals.head()
###Output
_____no_output_____
###Markdown
Again, examine the number of missing values in each column. If all cleaned, proceed. Otherwise, go back and do more cleaning.
###Code
# Examine missing values in each column
numericals.isnull().sum()
numericals["DNS_QUERY_TIMES"].isnull().sum()/len(numericals["DNS_QUERY_TIMES"])
# There's only one row with a null value in the DNS_QUERY_TIMES column. So, we're dropping that one row:
numericals = numericals.drop(numericals.loc[numericals["DNS_QUERY_TIMES"].isnull()].index, axis = 0)
numericals
numericals.isnull().sum()
###Output
_____no_output_____
###Markdown
Challenge 4 - Handle `WHOIS_*` Categorical Data
###Code
categoricals = websites.select_dtypes(object)
categoricals.head()
# plus the 2 converted datetimes columns already done.
###Output
_____no_output_____
###Markdown
There are several categorical columns we need to handle. These columns are:* `URL`* `CHARSET`* `SERVER`* `WHOIS_COUNTRY`* `WHOIS_STATEPRO`* `WHOIS_REGDATE`* `WHOIS_UPDATED_DATE`How to handle string columns is always case by case. Let's start by working on `WHOIS_COUNTRY`. Your steps are:1. List out the unique values of `WHOIS_COUNTRY`.1. Consolidate the country values with consistent country codes. For example, the following values refer to the same country and should use consistent country code: * `CY` and `Cyprus` * `US` and `us` * `SE` and `se` * `GB`, `United Kingdom`, and `[u'GB'; u'UK']` In the cells below, fix the country values as intructed above.
###Code
# Your code here
websites["WHOIS_COUNTRY"].unique()
# Replacing the variables correctly
websites["WHOIS_COUNTRY"] = (
    websites["WHOIS_COUNTRY"]
    .str.replace("United Kingdom", "GB", regex=False)
    .str.replace("[u'GB'; u'UK']", "GB", regex=False)
    .str.replace("Cyprus", "CY", regex=False)
    .str.upper()
)
websites["WHOIS_COUNTRY"].unique()
###Output
_____no_output_____
###Markdown
Since we have fixed the country values, can we convert this column to ordinal now?Not yet. If you reflect on the previous labs how we handle categorical columns, you probably remember we ended up dropping a lot of those columns because there are too many unique values. Too many unique values in a column is not desirable in machine learning because it makes prediction inaccurate. But there are workarounds under certain conditions. One of the fixable conditions is: If a limited number of values account for the majority of data, we can retain these top values and re-label all other rare values.The `WHOIS_COUNTRY` column happens to be this case. You can verify it by print a bar chart of the `value_counts` in the next cell to verify:
###Code
# Your code here
websites["WHOIS_COUNTRY"].value_counts()
plt.hist(websites["WHOIS_COUNTRY"], bins = len(websites["WHOIS_COUNTRY"].unique()))
plt.xticks(rotation='vertical')
###Output
_____no_output_____
###Markdown
After verifying, now let's keep the top 10 values of the column and re-label all other values with `OTHER`.
###Code
# Your code here
top10_values = list(websites["WHOIS_COUNTRY"].value_counts().head(10).index)
other_index = websites[~websites["WHOIS_COUNTRY"].isin(top10_values)]["WHOIS_COUNTRY"].index
websites.iloc[other_index, websites.columns.get_loc("WHOIS_COUNTRY")] = "OTHER"
top10_values
other_index
websites.iloc[other_index, websites.columns.get_loc("WHOIS_COUNTRY")]
###Output
_____no_output_____
###Markdown
Now since `WHOIS_COUNTRY` has been re-labelled, we don't need `WHOIS_STATEPRO` any more because the values of the states or provinces may not be relevant any more. We'll drop this column.In addition, we will also drop `WHOIS_REGDATE` and `WHOIS_UPDATED_DATE`. These are the registration and update dates of the website domains. Not of our concerns. In the next cell, drop `['WHOIS_STATEPRO', 'WHOIS_REGDATE', 'WHOIS_UPDATED_DATE']`.
###Code
# Your code here
websites = websites.drop(columns = ['WHOIS_STATEPRO', 'WHOIS_REGDATE', 'WHOIS_UPDATED_DATE'])
websites
###Output
_____no_output_____
###Markdown
Challenge 5 - Handle Remaining Categorical Data & Convert to OrdinalNow print the `dtypes` of the data again. Besides `WHOIS_COUNTRY` which we already fixed, there should be 3 categorical columns left: `URL`, `CHARSET`, and `SERVER`.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
`URL` is easy. We'll simply drop it because it has too many unique values and there's no way for us to consolidate them.
###Code
# Your code here
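# One possible sketch (assumes the `websites` dataframe from the earlier cells):
websites = websites.drop(columns=['URL'])
websites.head()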
###Output
_____no_output_____
###Markdown
Print the unique value counts of `CHARSET`. You see there are only a few unique values. So we can keep it as it is.
###Code
# Your code here
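# One possible sketch: inspect how many unique CHARSET values there are
websites['CHARSET'].value_counts()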
###Output
_____no_output_____
###Markdown
`SERVER` is a little more complicated. Print its unique values and think about how you can consolidate those values. Try to come up with your own solution before reading the instructions that come next.
###Code
# Your code here
###Output
_____no_output_____
###Markdown

###Code
# Your comment here
###Output
_____no_output_____
###Markdown
Although there are so many unique values in the `SERVER` column, there are actually only 3 main server types: `Microsoft`, `Apache`, and `nginx`. Just check if each `SERVER` value contains any of those server types and re-label them. For `SERVER` values that don't contain any of those substrings, label with `Other`.At the end, your `SERVER` column should only contain 4 unique values: `Microsoft`, `Apache`, `nginx`, and `Other`.
###Code
# Your code here
# Count `SERVER` value counts here
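# One possible sketch (assumes `websites` is in scope): case-insensitive match on the three main server families
import numpy as np
server = websites['SERVER'].astype(str)
conditions = [server.str.contains('Microsoft', case=False),
              server.str.contains('Apache', case=False),
              server.str.contains('nginx', case=False)]
websites['SERVER'] = np.select(conditions, ['Microsoft', 'Apache', 'nginx'], default='Other')
websites['SERVER'].value_counts()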
###Output
_____no_output_____
###Markdown
OK, all our categorical data are fixed now. **Let's convert them to ordinal data using Pandas' `get_dummies` function ([documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.get_dummies.html)).** Make sure you drop the categorical columns by passing `drop_first=True` to `get_dummies` as we don't need them any more. **Also, assign the data with dummy values to a new variable `website_dummy`.**
###Code
# Your code here
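# One possible sketch (assumes pandas is imported as `pd` in an earlier cell):
website_dummy = pd.get_dummies(websites, drop_first=True)
website_dummy.head()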
###Output
_____no_output_____
###Markdown
Now, inspect `website_dummy` to make sure the data and types are intended - there shouldn't be any categorical columns at this point.
###Code
# Your code here
###Output
_____no_output_____
###Markdown
Challenge 6 - Modeling, Prediction, and EvaluationWe'll start off this section by splitting the data to train and test. **Name your 4 variables `X_train`, `X_test`, `y_train`, and `y_test`. Select 80% of the data for training and 20% for testing.**
###Code
from sklearn.model_selection import train_test_split
# Your code here:
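# One possible sketch; the label column name ('Type') is an assumption about this dataset
X = website_dummy.drop(columns=['Type'])
y = website_dummy['Type']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)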
###Output
_____no_output_____
###Markdown
In this lab, we will try two different models and compare our results.The first model we will use in this lab is logistic regression. We have previously learned about logistic regression as a classification algorithm. In the cell below, load `LogisticRegression` from scikit-learn and initialize the model.
###Code
# Your code here:
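# One possible sketch
from sklearn.linear_model import LogisticRegression
lr = LogisticRegression(max_iter=1000)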
###Output
_____no_output_____
###Markdown
Next, fit the model to our training data. We have already separated our data into 4 parts. Use those in your model.
###Code
# Your code here:
###Output
_____no_output_____
###Markdown
Finally, import `confusion_matrix` and `accuracy_score` from `sklearn.metrics` and predict on our testing data. Assign the predictions to `y_pred` and print the confusion matrix as well as the accuracy score.
###Code
# Your code here:
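# One possible sketch (assumes the logistic regression model `lr` has already been fitted on X_train/y_train)
from sklearn.metrics import confusion_matrix, accuracy_score
y_pred = lr.predict(X_test)
print(confusion_matrix(y_test, y_pred))
print(accuracy_score(y_test, y_pred))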
###Output
_____no_output_____
###Markdown
What are your thoughts on the performance of the model? Write your conclusions below.
###Code
# Your conclusions here:
###Output
_____no_output_____
###Markdown
Our second algorithm is K-Nearest Neighbors. Though it is not required, we will fit a model using the training data and then test the performance of the model using the testing data. Start by loading `KNeighborsClassifier` from scikit-learn and then initializing and fitting the model. We'll start off with a model where k=3.
###Code
# Your code here:
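# One possible sketch
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)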
###Output
_____no_output_____
###Markdown
To test your model, compute the predicted values for the testing sample and print the confusion matrix as well as the accuracy score.
###Code
# Your code here:
###Output
_____no_output_____
###Markdown
We'll create another K-Nearest Neighbors model with k=5. Initialize and fit the model below and print the confusion matrix and the accuracy score.
###Code
# Your code here:
###Output
_____no_output_____
###Markdown
Did you see an improvement in the confusion matrix when increasing k to 5? Did you see an improvement in the accuracy score? Write your conclusions below.
###Code
# Your conclusions here:
###Output
_____no_output_____
###Markdown
Bonus Challenge - Feature ScalingProblem-solving in machine learning is iterative. You can improve your model prediction with various techniques (though there is a sweet spot between the time you spend and the improvement you receive). So far you've completed only one iteration of ML analysis, and there are more iterations you can conduct to make improvements. In order to do that, you will need deeper knowledge in statistics and more data analysis techniques. In this bootcamp, we don't have time to achieve that advanced goal, but you can keep working after the bootcamp to eventually get there.However, we do want you to learn one of the advanced techniques now, which is called *feature scaling*. The idea of feature scaling is to standardize/normalize the range of independent variables or features of the data. This can also make outliers more apparent so that you can remove them. This step needs to happen during Challenge 6 after you split the training and test data, because you don't want to split the data again, which would make it impossible to compare your results with and without feature scaling. For general concepts about feature scaling, click [here](https://en.wikipedia.org/wiki/Feature_scaling). To read deeper, click [here](https://medium.com/greyatom/why-how-and-when-to-scale-your-features-4b30ab09db5e).In the next cell, attempt to improve your model prediction accuracy by means of feature scaling. A library you can utilize is `sklearn.preprocessing.RobustScaler` ([documentation](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.RobustScaler.html)). You'll use the `RobustScaler` to fit and transform your `X_train`, then transform `X_test`. You will use logistic regression to fit and predict on your transformed data and obtain the accuracy score in the same way. Compare the accuracy score on your scaled data with the previous accuracy score. Is there an improvement?
###Code
# Your code here
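# One possible sketch (assumes LogisticRegression and accuracy_score were imported in the earlier challenges)
from sklearn.preprocessing import RobustScaler
scaler = RobustScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)
lr_scaled = LogisticRegression(max_iter=1000).fit(X_train_scaled, y_train)
print(accuracy_score(y_test, lr_scaled.predict(X_test_scaled)))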
###Output
_____no_output_____ |
datastructure/collections_ops.ipynb | ###Markdown
Tutorial: https://www.liaoxuefeng.com/wiki/897692888725344/973805065315456
###Code
from collections import namedtuple
from collections import deque
from collections import OrderedDict
from collections import Counter
from collections import defaultdict
###Output
_____no_output_____
###Markdown
namedtuple`namedtuple` is a function that creates a custom tuple subclass with a fixed number of elements, whose fields can be referenced by attribute name instead of by index. This makes it easy to define a data type that keeps the immutability of a tuple while also supporting attribute access, which is very convenient. We can verify that the created Point object is a subclass of tuple:
###Code
Point = namedtuple('Point', ['x', 'y'])
p = Point(1, 2)
print(p.x, p.y)
print(isinstance(p, Point))
print(isinstance(p, tuple))
###Output
1 2
True
True
###Markdown
dequeWhen data is stored in a list, access by index is fast, but inserting and deleting elements is slow, because a list is stored linearly; with a large amount of data, insertion and deletion become inefficient. `deque` is a double-ended list designed for efficient insertion and deletion, suitable for queues and stacks:
###Code
q = deque(['a', 'b', 'c'])
q.append('x')
q.appendleft('y')
print(q) # deque(['y', 'a', 'b', 'c', 'x'])
###Output
deque(['y', 'a', 'b', 'c', 'x'])
###Markdown
OrderedDictThe keys of an `OrderedDict` are kept in insertion order, not sorted by the keys themselves. When using a plain dict, referencing a key that does not exist raises a KeyError; if you want a default value to be returned for missing keys, use `defaultdict`.
###Code
od = OrderedDict([('a', 1), ('b', 2), ('c', 3)])
print(od)#OrderedDict([('a', 1), ('b', 2), ('c', 3)])
od['b']
od.values()
list(od.values())
###Output
_____no_output_____
###Markdown
Counter`Counter` is a simple counter, for example for counting how many times each character appears. `Counter` is actually a subclass of dict. A `Counter` object has a method called `elements()` that returns a sequence in which each element is repeated as many times as its count; the order of elements is arbitrary.
###Code
c = Counter()
c['a']
for ch in 'programming':
c[ch] = c[ch] + 1
c
Counter('programming')
d = dict(Counter('programming'))
d
import pandas as pd
pd.Series(list(d.keys()))
Counter('programming').most_common(3)
Counter('programming').elements()
list(Counter('programming').elements())
###Output
_____no_output_____
###Markdown
defaultdictWhen using a dict, referencing a key that does not exist raises a KeyError. If you want a default value to be returned when a key is missing, use `defaultdict`:
###Code
from collections import defaultdict
dd = defaultdict(lambda: 'N/A')
dd['key1'] = 'abc'
dd['key1'] # 'key1' exists
dd['key2'] # 'key2' does not exist, so the default value is returned
# when the key does not exist, the default is 0
dd = defaultdict(int)
dd['a']
###Output
_____no_output_____ |
02_02_backtracking_nqueens.ipynb | ###Markdown
Backtracking N-Queens
###Code
board = [4, 7, 2, 6, 1, 0, 3, 5]
for row, col in enumerate(board):
print(f"Hay una reina en la fila {row} columna {col}")
import numpy as np
import matplotlib.pyplot as plt
def draw(board):
n = len(board)
b = np.zeros((n, n, 3), dtype=int)
b += [255, 128, 80]
b[::2, ::2] = [255, 225, 120]
b[1::2, 1::2] = [255, 225, 120]
_, ax = plt.subplots()
ax.imshow(b)
for row, col in enumerate(board):
ax.text(col, row, u"\u265b", fontsize=200/n, va="center", ha="center")
ax.set(xticks=[], yticks=[])
draw(board)
def valid(board, row, col):
    # A queen can be placed at (row, col) only if no queen in an earlier row
    # shares the same column or lies on one of the two diagonals.
    for row_i in range(row):
        col_i = board[row_i]
        delta = row - row_i
        if col in [col_i, col_i + delta, col_i - delta]:
            return False
    return True
board = [1, 3, -1, -1]
assert valid(board, 2, 0) == True
draw(board)
board = [1, 3, -1, -1]
assert valid(board, 2, 1) == False
draw(board)
board = [1, 3, -1, -1]
assert valid(board, 2, 2) == False
draw(board)
def nqueens(board, row):
n = len(board)
if row == n:
draw(board)
else:
for col in range(n):
if valid(board, row, col):
board[row] = col
nqueens(board, row+1)
[-1]*10
n = 4
nqueens([-1]*n, 0)
###Output
_____no_output_____ |
1.Chapter-Python/2-Python_Basis/courses/43-Enumerate.ipynb | ###Markdown
EnumerateEnumerate allows you to keep a count as you iterate through an object. It does this by returning a tuple in the form (count,element). The function itself is equivalent to: def enumerate(sequence, start=0): n = start for elem in sequence: yield n, elem n += 1 Example
###Code
lst = ['a','b','c']
for number,item in enumerate(lst):
print(number)
print(item)
###Output
0
a
1
b
2
c
###Markdown
enumerate() becomes particularly useful when you have a case where you need to have some sort of tracker. For example:
###Code
for count,item in enumerate(lst):
if count >= 2:
break
else:
print(item)
###Output
a
b
###Markdown
enumerate() takes an optional "start" argument to override the default value of zero:
###Code
months = ['March','April','May','June']
list(enumerate(months,start=3))
###Output
_____no_output_____ |
notebooks/Data Distributions.ipynb | ###Markdown
MNIST Experiments
###Code
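# Assumed setup for this excerpt (the defining cells are not shown here):
# - `MNISTVAE` is a project-specific model class imported from the repository's code
# - `data_save_dir` is a path variable defined in an earlier cell
# - standard imports: os, numpy as np, tensorflow as tf, tensorflow_datasets as tfds,
#   tensorflow_probability distributions as tfd, and tqdm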
mnist_gauss_save_dir = "../../../models/relative-entropy-coding/empirical-bayes-experiments/mnist/gaussian"
# Standard Gaussian VAE
mnist_gauss_vae = MNISTVAE(name="gaussian_mnist_vae",
prior=tfd.Normal(loc=tf.zeros(50), scale=tf.ones(50)))
ckpt = tf.train.Checkpoint(model=mnist_gauss_vae)
if not os.path.exists(mnist_gauss_save_dir):
print(f"{mnist_gauss_save_dir} has not been trained yet!")
manager = tf.train.CheckpointManager(ckpt, mnist_gauss_save_dir, max_to_keep=3)
mnist_gauss_vae(tf.zeros([1, 28, 28, 1]))
ckpt.restore(manager.latest_checkpoint)
if manager.latest_checkpoint:
print(f"Restored {manager.latest_checkpoint}")
###Output
Restored ../../../models/relative-entropy-coding/empirical-bayes-experiments/mnist/gaussian/ckpt-287
###Markdown
Load MNIST and pass it through every model
###Code
dataset = tfds.load("binarized_mnist",
data_dir="/scratch/gf332/datasets/binarized_mnist",
with_info=True,)
beta = 1
latent_size = 50
model = mnist_gauss_vae
for ds_folder in ["train", "test"]:
print(f"Saving {ds_folder} set!")
ds = dataset[0][ds_folder]
ds = ds.map(lambda x: tf.cast(x["image"], tf.float32))
for i, img in tqdm(enumerate(ds), total=dataset[1].splits[ds_folder].num_examples):
save_dir = f"{data_save_dir}/mnist/beta_{beta}_latents_{latent_size}/{ds_folder}/img_{i}"
if not os.path.exists(save_dir):
os.makedirs(save_dir)
reconstruction = model(img[None, ...], training=True)[0,...,0]
samples = model.posterior.sample()
np.save(f"{save_dir}/post_loc.npy", model.posterior.loc.numpy())
np.save(f"{save_dir}/post_scale.npy", model.posterior.scale.numpy())
np.save(f"{save_dir}/prior_loc.npy", model.prior.loc.numpy())
np.save(f"{save_dir}/prior_scale.npy", model.prior.scale.numpy())
prior_prob = model.prior.log_prob(samples)
prior_prob = tf.reduce_sum(prior_prob, axis=1)
###Output
Saving train set!
|
analysis/Ishita Gupta/ImportFunctions.ipynb | ###Markdown
This is the file I use to import functions - Ishita Gupta
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from pandas_profiling import ProfileReport
from scripts import project_functions
%load_ext autoreload
%autoreload 2
from importlib import reload
reload(project_functions)
project_functions.load_and_process('/Users/ISHITA GUPTA/Documents/COSC301/group29-project/data/raw/Medical_Cost.csv')
###Output
_____no_output_____ |
preparacion.ipynb | ###Markdown
Digital Signal Processing Dr. Mariano Llamedo Soria First steps Setting up the working environmentAll students are encouraged to have the working environment ready during the first week:* Spyder* Jupyter notebook* The repository manager [Git](https://git-scm.com/downloads)* A [Github account](https://github.com/), where you will create a folder/repository through which you will submit your work. Step 1: Installing Python and friends ...1. Install Python and the PIP package manager```bashsudo apt install python3 python3-pip```2. Now we can use PIP. It will install in your user environment all the modules and the IDE you need to work in PDS.```bashpip3 install --user scipy numpy spyder jupyter matplotlib```3. If everything finishes correctly, you have almost the whole working environment. Whenever you want to update a package you should run the following; the Spyder IDE in particular is a very active project that releases several updates per month:```bashpip3 install --user --upgrade spyder``` Step 2: Installing GIT and the repositories1. Install GIT```bashsudo apt install git```2. Now we can clone the course repositories:```bashcd tu_directorio_de_trabajogit clone https://github.com/marianux/pdstestbench.git```3. Now you can review the following scripts to get familiar with Python: a. Open Spyder. If you have used Matlab, you will see it is a clone. Set your working directory to *tu_directorio_de_trabajo*. b. Open the following example scripts and try running them: * testbench0.py * testbench1.py c. If you are a Matlab user, [this document](https://numpy.org/doc/stable/user/numpy-for-matlab-users.html) will surely help you.If something fails, I share recipes from some students who solved it by following this procedure for [Linux](recetas_ubuntu.ipynb), and this other one for [Windows](recetas_windows.ipynb). Likewise, the following videos may help you get oriented. **Good luck!**
###Code
from IPython.display import YouTubeVideo
YouTubeVideo('CJAZCYvigVc', width=800, height=450, list='PLlD2eDv5CIe9l0cjBJ1USQnC3gvV3n_Ga', index=1)
YouTubeVideo('yQ3KuMepMTM', width=800, height=450, list='PLlD2eDv5CIe9l0cjBJ1USQnC3gvV3n_Ga', index=2)
###Output
_____no_output_____
###Markdown
Step 4: Jupyter Notebooks The document you are reading was created with Jupyter Notebooks. It is basically a text editor with the *great* advantage of having available in a single working environment:* A text editor with *Markdown* formatting* The ability to include mathematical language via $ \LaTeX $* The ability to include code and (*re*)generate plots and tables automatically.* The ability to embed audio, Youtube videos and other multimedia content.* Easy and elegant viewing from *ANY* device.Surely, once you start using them, you will not be able to go back to any other format for technical articles. In the following video you can watch a tutorial to take the first steps:
###Code
# in Spanish
YouTubeVideo('6Vr9ZUntCyE', width=800, height=450)
# or in English
YouTubeVideo('HW29067qVWk', width=800, height=450)
###Output
_____no_output_____ |
nbs/02-01-chexnet-knn.ipynb | ###Markdown
Split train test
###Code
X_ = X/7847.1504
split_pct = 0.8
indexes = list(range(len(X)))
random.shuffle(indexes)
train_idx = indexes[:int(split_pct*len(indexes))]
test_idx = indexes[int(split_pct*len(indexes)):]
print(len(train_idx), len(test_idx))
np.sum(labels_[train_idx]==1), np.sum(labels_[train_idx]==0)
np.sum(labels_[test_idx]==1), np.sum(labels_[test_idx]==0)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_score, recall_score, f1_score, precision_recall_curve, confusion_matrix
from sklearn.metrics import plot_confusion_matrix
knn = KNeighborsClassifier(n_neighbors=8, p=2)
knn.fit(X_[train_idx], labels_[train_idx])
knn.score(X_[train_idx], labels_[train_idx])
knn.score(X_[test_idx], labels_[test_idx])
test_output = knn.predict(X_[test_idx])
precision_score(labels_[test_idx], test_output), recall_score(labels_[test_idx], test_output)
f1_score(labels_[test_idx], test_output)
# knn.predict_proba(X_[test_idx])
precision, recall, thresh = precision_recall_curve(
labels_[test_idx],
knn.predict_proba(X_[test_idx])[..., 1]
)
precision
plt.plot(precision)
plt.plot(recall)
plt.legend(('precision', 'recall'))
plt.show()
cm = confusion_matrix(
y_true=labels_[test_idx],
y_pred=knn.predict(X_[test_idx])
)
disp = plot_confusion_matrix(knn, X_[test_idx], labels_[test_idx],
labels=(0, 1),
display_labels=('not covid', 'covid'),
cmap=plt.cm.Oranges
)
disp.ax_.set_title('Confusion Matrix of KNN score')
plt.savefig('../images/knn-confusion-matrix.png')
plt.show()
###Output
_____no_output_____ |
climate_starter_assignment.ipynb | ###Markdown
Reflect Tables into SQLAlchemy ORM
###Code
# Dependencies used throughout this notebook
import datetime as dt
import pandas as pd
import matplotlib.pyplot as plt
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
base = automap_base()
# reflect the tables
base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
base.classes.keys()
# Save references to each table
Measurement = base.classes.measurement
Station = base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
###Output
_____no_output_____
###Markdown
Exploratory Climate Analysis
###Code
# Design a query to retrieve the last 12 months of precipitation data and plot the results
last = session.query(Measurement.date).order_by(Measurement.date.desc()).first()
last_date, = last
year, month, day = last_date.split('-')
query_date = dt.date(int(year), int(month), int(day)) - dt.timedelta(days=365)
pt_data = session.query(Measurement.date, Measurement.prcp).filter(Measurement.date >= query_date).group_by(Measurement.date).all()
pt_data
# Calculate the date 1 year ago from the last data point in the database
print("Last data point: ",last_date)
# Perform a query to retrieve the data and precipitation scores
pt_data = pd.DataFrame(pt_data)
pt_data.head()
# Save the query results as a Pandas DataFrame and set the index to the date column
df = pt_data.set_index("date")
df.head()
# Sort the dataframe by date
pt_data.sort_values(["date"]).head()
# Use Pandas Plotting with Matplotlib to plot the data
pt_data.plot()
plt.xlabel("Date")
plt.ylabel("Inches")
plt.title("Analysis")
plt.legend(["Precipitation"])
plt.show()
# Use Pandas to calcualte the summary statistics for the precipitation data
pt_data.describe()
# Design a query to show how many stations are available in this dataset?
stations = session.query(Measurement.station).group_by(Measurement.station).count()
print(f'There are {stations} stations available in this dataset.')
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
active_stations = session.query(Measurement.station,
func.count(Measurement.station))\
.group_by(Measurement.station)\
.order_by(func.count(Measurement.station).desc())
for row in active_stations:
print(row)
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
active_stat = active_stations[0][0]
station = session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.station == active_stat).all()
print(f"Station: {active_stat}\n\
Lowest temperature in dataset : {station[0][0]}\n\
Average temperature in dataset : {station[0][1]}\n\
Highest temperature in dataset : {station[0][2]}")
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
temp = session.query(Measurement.station, Measurement.date, Measurement.tobs).\
filter(Measurement.station == active_stat).\
filter(Measurement.date > query_date).\
order_by(Measurement.date).all()
temp_df=pd.DataFrame(temp)
plt.hist(temp_df['tobs'],12)
plt.xlabel("Temperature")
plt.ylabel("Frequency")
plt.title("Highest number of temperature observations")
plt.legend(["tobs"], loc="best")
plt.savefig("temp.png")
plt.show()
###Output
_____no_output_____
###Markdown
Bonus Challenge Assignment
###Code
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
prev_trip = calc_temps('2012-02-28','2012-03-05')
prev_trip
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
min_temp = prev_trip[0][0]
avg_temp = prev_trip[0][1]
max_temp = prev_trip[0][2]
min_error = avg_temp - min_temp
max_error = max_temp - avg_temp
plt.figure(figsize=(3,6))
plt.bar(0, avg_temp, yerr=[max_temp-min_temp], color = 'yellow', alpha=.3)
plt.title('Trip Avg Temp')
plt.ylim(0,100)
plt.ylabel('Temp (F)')
plt.xticks([])
plt.show()
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
start_date = '2012-01-01'
end_date = '2012-01-07'
sel = [Station.station, Station.name, Station.latitude,
Station.longitude, Station.elevation, func.sum(Measurement.prcp)]
results = session.query(*sel).\
filter(Measurement.station == Station.station).\
filter(Measurement.date >= start_date).\
filter(Measurement.date <= end_date).\
group_by(Station.name).order_by(func.sum(Measurement.prcp).desc()).all()
print(results)
# Create a query that will calculate the daily normals
# (i.e. the averages for tmin, tmax, and tavg for all historic data matching a specific month and day)
def daily_normals(date):
"""Daily Normals.
Args:
date (str): A date string in the format '%m-%d'
Returns:
A list of tuples containing the daily normals, tmin, tavg, and tmax
"""
sel = [func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)]
return session.query(*sel).filter(func.strftime("%m-%d", Measurement.date) == date).all()
daily_normals("01-01")
# calculate the daily normals for your trip
trip_start = '2018-01-01'
trip_end = '2018-01-07'
# Use the start and end date to create a range of dates
trip_dates = pd.date_range(trip_start, trip_end, freq='D')
# Stip off the year and save a list of %m-%d strings
trip_month_day = trip_dates.strftime('%m-%d')
# Loop through the list of %m-%d strings and calculate the normals for each date
normals = []
for date in trip_month_day:
normals.append(*daily_normals(date))
normals
# Load the previous query results into a Pandas DataFrame and add the `trip_dates` range as the `date` index
df = pd.DataFrame(normals, columns=['tmin', 'tavg', 'tmax'])
df['date'] = trip_dates
df.set_index(['date'],inplace=True)
df.head()
# Plot the daily normals as an area plot with `stacked=False`
df.plot(kind='area', stacked=False, x_compat=True, alpha=.3)
plt.xlabel("Date")
plt.ylabel("Temperature")
plt.tight_layout()
###Output
_____no_output_____ |
79. Word Search.ipynb | ###Markdown
Given a 2D board and a word, find if the word exists in the grid.The word can be constructed from letters of sequentially adjacent cell, where "adjacent" cells are those horizontally or vertically neighboring. The same letter cell may not be used more than once.**Example:** board = [ ['A','B','C','E'], ['S','F','C','S'], ['A','D','E','E'] ]Given word = "ABCCED", return true.Given word = "SEE", return true.Given word = "ABCB", return false. Thought[Discussion form leetcode](https://leetcode.com/problems/word-search/discuss/27660/Python-dfs-solution-with-comments.)^Very easy to understand^- using DFS
###Code
# Time Complexity: roughly O(M*N*4^L), where M*N is the board size and L is the word length
# Space Complexity: O(L) for the recursion stack
class Solution:
    def exist(self, board, word):
        """
        :type board: List[List[str]]
        :type word: str
        :rtype: bool
        """
        if not board:
            return False
        for i in range(len(board)):
            for j in range(len(board[0])):
                if self.dfs(board, word, i, j):
                    return True
        return False
    def dfs(self, board, word, i, j):
        if not word:
            return True
        if i < 0 or i >= len(board) or j < 0 or j >= len(board[0]) or word[0] != board[i][j]:
            return False
        tmp = board[i][j]  # save the current character so it can be restored after the search
        board[i][j] = '#'  # mark as visited to prevent reusing this cell
        res = self.dfs(board, word[1:], i+1, j) or self.dfs(board, word[1:], i-1, j) \
            or self.dfs(board, word[1:], i, j+1) or self.dfs(board, word[1:], i, j-1)  # explore the four neighbours
        board[i][j] = tmp
        return res
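# Quick check using the examples from the problem statement
board = [['A','B','C','E'],
         ['S','F','C','S'],
         ['A','D','E','E']]
print(Solution().exist(board, "ABCCED"))  # expected: True
print(Solution().exist(board, "SEE"))     # expected: True
print(Solution().exist(board, "ABCB"))    # expected: False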
###Output
_____no_output_____ |
Appendix/other_gradient_boosting.ipynb | ###Markdown
Appendix. Other Gradient Boosting Libraries *You can view this notebook in the Jupyter notebook viewer (nbviewer.org) or run it on Google Colab (colab.research.google.com) via the links below.* View in Jupyter notebook viewer Run in Google Colab
###Code
# Check whether this notebook is running on Colab.
import sys
if 'google.colab' in sys.modules:
!pip install -q --upgrade xgboost lightgbm catboost
!wget -q https://raw.githubusercontent.com/rickiepark/handson-gb/main/Appendix/student-por.csv
###Output
_____no_output_____
###Markdown
LightGBM
###Code
import pandas as pd
df = pd.read_csv('student-por.csv', sep=';')
df.head()
y = df.iloc[:, -1]
X = df.iloc[:, :-3]
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
import lightgbm as lgb
lgbr = lgb.LGBMRegressor(random_state=42)
cat_columns = X_train.columns[X_train.dtypes==object].tolist()
for c in cat_columns:
X_train[c] = X_train[c].astype('category')
X_test[c] = X_test[c].astype('category')
X_train.info()
from sklearn.model_selection import cross_validate
scores = cross_validate(lgbr, X_train, y_train, scoring='neg_root_mean_squared_error')
-scores['test_score'].mean()
###Output
_____no_output_____
###Markdown
Histogram-based boosting with XGBRegressor
###Code
X_oe = pd.get_dummies(X)
X_oe.info()
import xgboost as xgb
X_train_oe, X_test_oe = train_test_split(X_oe, random_state=42)
xgbr = xgb.XGBRegressor(tree_method='hist', grow_policy='lossguide')
scores = cross_validate(xgbr, X_train_oe, y_train, scoring='neg_root_mean_squared_error')
-scores['test_score'].mean()
###Output
_____no_output_____
###Markdown
Starting with xgboost 1.6, categorical features are supported by the `'approx'`, `'hist'`, and `'gpu_hist'` tree methods.
###Code
xgbr = xgb.XGBRegressor(tree_method='hist', grow_policy='lossguide', enable_categorical=True)
scores = cross_validate(xgbr, X_train, y_train, scoring='neg_root_mean_squared_error')
-scores['test_score'].mean()
###Output
_____no_output_____
###Markdown
Tuning LightGBM
###Code
# loguniform is available directly from scipy.stats (sklearn.utils.fixes.loguniform was removed in recent scikit-learn versions)
from scipy.stats import randint, loguniform
from sklearn.model_selection import RandomizedSearchCV
param_grid = {
'num_leaves': randint(10, 100),
'max_depth': randint(1, 10),
'min_child_samples': randint(10, 40),
'n_estimators': randint(50, 300),
'learning_rate': loguniform(1e-3, 0.1),
'subsample': loguniform(0.6, 1.0),
'subsample_freq': randint(1, 5),
}
rs = RandomizedSearchCV(lgbr, param_grid, n_iter=300,
scoring='neg_root_mean_squared_error',
n_jobs=-1, random_state=42)
rs.fit(X_train, y_train)
print('Best parameters:', rs.best_params_)
print('Best CV score:', -rs.best_score_)
###Output
Best parameters: {'learning_rate': 0.021887293880411753, 'max_depth': 3, 'min_child_samples': 17, 'n_estimators': 193, 'num_leaves': 45, 'subsample': 0.8656809331397646, 'subsample_freq': 2}
Best CV score: 2.63508853549706
###Markdown
Saving the model
###Code
import joblib
lgbr = rs.best_estimator_
joblib.dump(lgbr, 'lightgbm_model.joblib')
lgbr = joblib.load('lightgbm_model.joblib')
from sklearn.metrics import mean_squared_error
y_pred = lgbr.predict(X_test)
mean_squared_error(y_pred, y_test, squared=False)
###Output
_____no_output_____
###Markdown
Feature importance
###Code
import matplotlib.pyplot as plt
lgb.plot_importance(lgbr, figsize=(10,10))
plt.show()
xgbr.fit(X_train_oe, y_train)
fig, ax = plt.subplots(figsize=(10, 10))
xgb.plot_importance(xgbr, ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
Tree plots
###Code
lgb.plot_tree(lgbr, tree_index=0, figsize=(20,10),
orientation='vertical',
show_info=['internal_count', 'leaf_count'])
plt.show()
fig, axs = plt.subplots(3, 2, figsize=(20,20))
for i in range(0, 3):
for j in range(0, 2):
lgbr2 = lgb.LGBMRegressor(num_leaves=i*2+j+3)
lgbr2.fit(X_train, y_train)
lgb.plot_tree(lgbr2, tree_index=0, show_info=['split_gain'],
orientation='vertical', ax=axs[i, j])
axs[i, j].set_title('num_leaves={}'.format(i*2+j+3))
fig, ax = plt.subplots(figsize=(20, 20))
xgb.plot_tree(xgbr, num_trees=0, ax=ax)
plt.show()
###Output
_____no_output_____
###Markdown
HistGradientBoostingRegressor
###Code
from sklearn.preprocessing import OrdinalEncoder
from sklearn.compose import ColumnTransformer
cat_columns_bool = X_train.dtypes=='category'
ct = ColumnTransformer([('ord', OrdinalEncoder(), cat_columns_bool)],
remainder='passthrough')
X_train_ord = ct.fit_transform(X_train)
import numpy as np
cat_num_names = np.append(ct.feature_names_in_[cat_columns_bool],
ct.feature_names_in_[~cat_columns_bool])
X_train_ord = pd.DataFrame(X_train_ord, columns=cat_num_names)[X_train.columns]
X_train_ord.head()
from sklearn.ensemble import HistGradientBoostingRegressor
hgbr = HistGradientBoostingRegressor(categorical_features=cat_columns_bool,
random_state=42)
scores = cross_validate(hgbr, X_train_ord, y_train,
scoring='neg_root_mean_squared_error')
-scores['test_score'].mean()
###Output
_____no_output_____
###Markdown
Feature importance
###Code
from sklearn.inspection import permutation_importance
hgbr.fit(X_train_ord, y_train)
result = permutation_importance(hgbr, X_train_ord, y_train, random_state=42)
sorted_idx = result.importances_mean.argsort()
plt.figure(figsize=(10,10))
plt.barh(X_train.columns[sorted_idx], result.importances_mean[sorted_idx])
plt.show()
###Output
_____no_output_____
###Markdown
CatBoost
###Code
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
mode_imputer = SimpleImputer(strategy='most_frequent')
mean_imputer = SimpleImputer(strategy='mean')
ct2 = ColumnTransformer([('str', mode_imputer, cat_columns_bool),
('num', mean_imputer, ~cat_columns_bool)])
X_train_ct = pd.DataFrame(ct2.fit_transform(X_train),
columns=cat_num_names)
X_train_ct = X_train_ct[X_train.columns]
X_train_ct.head()
param_grid = {
'n_estimators': randint(100, 300),
'depth': randint(4, 10),
'learning_rate': loguniform(1e-3, 0.1),
'min_child_samples': randint(10, 40),
'grow_policy': ['SymmetricTree', 'Lossguide', 'Depthwise']
}
import catboost as cb
cat_columns_idx = np.where(cat_columns_bool)[0]
cbr = cb.CatBoostRegressor(cat_features=cat_columns_idx,
verbose=False, random_seed=42)
rs = RandomizedSearchCV(cbr, param_grid, n_iter=100,
scoring='neg_root_mean_squared_error',
n_jobs=-1, random_state=42)
rs.fit(X_train_ct, y_train)
print('Best parameters:', rs.best_params_)
print('Best CV score:', -rs.best_score_)
cbr = cb.CatBoostRegressor(cat_features=cat_columns_idx, verbose=False, random_seed=42)
result = cbr.randomized_search(param_grid, X_train_ct, y_train,
cv=5, n_iter=100, verbose=False)
print('Best parameters:', result['params'])
print('Best CV score:', result['cv_results']['test-RMSE-mean'][-1])
plt.plot(result['cv_results']['train-RMSE-mean'], label='train-RMSE-mean')
plt.plot(result['cv_results']['test-RMSE-mean'], label='test-RMSE-mean')
plt.xlabel('Boosting Round')
plt.ylabel('RMSE')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Saving and restoring the model
###Code
cbr.save_model('catboost_model.cbm')
cbr = cb.CatBoostRegressor().load_model('catboost_model.cbm')
cbr.save_model('catboost_model.json', format='json')
cbr = cb.CatBoostRegressor().load_model('catboost_model.json', format='json')
X_test_ct = pd.DataFrame(ct2.transform(X_test),
columns=cat_num_names)
X_test_ct = X_test_ct[X_test.columns]
y_pred = cbr.predict(X_test_ct)
mean_squared_error(y_pred, y_test, squared=False)
###Output
_____no_output_____
###Markdown
Feature importance
###Code
feature_importances = cbr.get_feature_importance()
sorted_idx = feature_importances.argsort()
plt.figure(figsize=(10,10))
plt.barh(X_train.columns[sorted_idx], feature_importances[sorted_idx])
plt.show()
###Output
_____no_output_____ |
examples/composite_operator_examples.ipynb | ###Markdown
Composite Operator DemoQuantum algorithms typically contain a subroutine that involves running a quantum circuit, which is constructed using quantum gates. However, often we are not necessarily interested in knowing precisely what gates compose the algorithm's circuit and are only interested in the high-level design. For example, in the below circuit for Quantum Phase Estimation, a group of gates has been summarized as a block corresponding to the inverse Quantum Fourier Transform. This also helps improve clarity in a circuit's design when visualizing. In general, the composite operator feature lets us do circuit construction at higher levels of abstraction. This notebook focuses on demonstrating some example usages. IMPORTS and SETUP
###Code
# general imports
import numpy as np
import math
# AWS imports: Import Amazon Braket SDK modules
from braket.circuits import Circuit, circuit
from braket.circuits.composite_operators import *
###Output
_____no_output_____
###Markdown
Composite Operator DefinitionA composite operator is defined to be a composition of quantum operators (i.e. gates or other composite operators), and it can accept a variable number of target qubits. To give a simple demonstration of how they work, first, here is how one would normally construct the circuit that prepares the Greenberger-Horne-Zeilinger (GHZ) state with gates:
###Code
def ghz(qubits):
ghzcirc = Circuit().h(qubits[0])
for i in range(0, len(qubits) - 1):
ghzcirc.cnot(qubits[i], qubits[i + 1])
return ghzcirc
qubits = [0, 1, 2, 3]
print(ghz(qubits))
###Output
T : |0|1|2|3|
q0 : -H-C-----
|
q1 : ---X-C---
|
q2 : -----X-C-
|
q3 : -------X-
T : |0|1|2|3|
###Markdown
However, a composite operator corresponding to the construction of the GHZ state has already been implemented in the composite_operators module. Thus, we can construct the same circuit with one line of code by calling its corresponding circuit subroutine as shown below. The printed circuit groups all the gates into a GHZ block, with asterisks denoting the target qubits. We can decompose it to verify that it indeed corresponds to the same circuit as above.
###Code
# Call the GHZ circuit subroutine
ghzcirc = Circuit().ghz(qubits)
print('GHZ operator:')
print(ghzcirc)
print('Decomposed GHZ circuit:')
print(ghzcirc.decompose())
###Output
GHZ operator:
T : | 0 |
q0 : -GHZ-
| |
q1 : -|*|-
| |
q2 : -|*|-
| |
q3 : -|*|-
T : | 0 |
Decomposed GHZ circuit:
T : |0|1|2|3|
q0 : -H-C-----
|
q1 : ---X-C---
|
q2 : -----X-C-
|
q3 : -------X-
T : |0|1|2|3|
###Markdown
Composing a composite operators with other operatorsBelow is the implementation of the Quantum Fourier Transform given in the Amazon Braket tutorials repository (https://github.com/aws/amazon-braket-examples/blob/main/examples/advanced_circuits_algorithms/QFT/QFT.ipynb).
###Code
def qft(qubits):
"""
Construct a circuit object corresponding to the Quantum Fourier Transform (QFT)
algorithm, applied to the argument qubits. Does not use recursion to generate the QFT.
Args:
        qubits (list): The list of qubits on which to apply the QFT
"""
qftcirc = Circuit()
# get number of qubits
num_qubits = len(qubits)
for k in range(num_qubits):
# First add a Hadamard gate
qftcirc.h(qubits[k])
# Then apply the controlled rotations, with weights (angles) defined by the distance to the control qubit.
# Start on the qubit after qubit k, and iterate until the end. When num_qubits==1, this loop does not run.
for j in range(1,num_qubits - k):
angle = 2*math.pi/(2**(j+1))
qftcirc.cphaseshift(qubits[k+j],qubits[k], angle)
# Then add SWAP gates to reverse the order of the qubits:
for i in range(math.floor(num_qubits/2)):
qftcirc.swap(qubits[i], qubits[-i-1])
return qftcirc
###Output
_____no_output_____
###Markdown
Here, we construct a circuit consisting of a Hadamard gate and the gates corresponding to the QFT circuit. However, it isn't very easy to distinguish the QFT process from the other operator in the resulting diagram.
###Code
qubits = [0, 1, 2, 3]
qftcirc = Circuit().h(1)
qftcirc.add(qft(qubits))
print(qftcirc)
###Output
T : |0| 1 | 2 | 3 | 4 | 5 | 6 | 7 |
q0 : -H-PHASE(1.57)-PHASE(0.785)---PHASE(0.393)---------------------------------------------SWAP-
| | | |
q1 : -H-C-----------|------------H-|------------PHASE(1.57)-PHASE(0.785)---------------SWAP-|----
| | | | | |
q2 : ---------------C--------------|------------C-----------|------------H-PHASE(1.57)-SWAP-|----
| | | |
q3 : ------------------------------C------------------------C--------------C-----------H----SWAP-
T : |0| 1 | 2 | 3 | 4 | 5 | 6 | 7 |
###Markdown
If we instead add QFT as a composite operator rather than a series of gates and print the circuit, we see that the circuit diagram gets compartmentalized, making it easier to distinguish the QFT part from the extra hadamard gate. Calling `decompose` on this circuit shows that it is the same as the above circuit.
###Code
qftcirc = Circuit().h(1).qft(qubits)
print('QFT operator and hadamard:')
print(qftcirc)
print('Decomposed circuit:')
print(qftcirc.decompose())
###Output
QFT operator and hadamard:
T : |0| 1 |
q0 : ---QFT-
| |
q1 : -H-|*|-
| |
q2 : ---|*|-
| |
q3 : ---|*|-
T : |0| 1 |
Decomposed circuit:
T : |0| 1 | 2 | 3 | 4 | 5 | 6 | 7 |
q0 : -H-PHASE(1.57)-PHASE(0.785)---PHASE(0.393)---------------------------------------------SWAP-
| | | |
q1 : -H-C-----------|------------H-|------------PHASE(1.57)-PHASE(0.785)---------------SWAP-|----
| | | | | |
q2 : ---------------C--------------|------------C-----------|------------H-PHASE(1.57)-SWAP-|----
| | | |
q3 : ------------------------------C------------------------C--------------C-----------H----SWAP-
T : |0| 1 | 2 | 3 | 4 | 5 | 6 | 7 |
###Markdown
We can also take things a step further and put multiple quantum algorithms in the same circuit. The decompose method decomposes all composite operators in the circuit.
###Code
qft_ghz_circ = Circuit().ghz(qubits[:-1]).qft(qubits)
print('QFT and GHZ operators:')
print(qft_ghz_circ)
print('Decomposed circuit:')
print(qft_ghz_circ.decompose())
###Output
QFT and GHZ operators:
T : | 0 | 1 |
q0 : -GHZ-QFT-
| | | |
q1 : -|*|-|*|-
| | | |
q2 : -|*|-|*|-
| |
q3 : -----|*|-
T : | 0 | 1 |
Decomposed circuit:
T : |0|1|2| 3 | 4 | 5 | 6 | 7 | 8 | 9 |
q0 : -H-C-H-PHASE(1.57)-PHASE(0.785)---PHASE(0.393)---------------------------------------------SWAP-
| | | | |
q1 : ---X-C-C-----------|------------H-|------------PHASE(1.57)-PHASE(0.785)---------------SWAP-|----
| | | | | | |
q2 : -----X-------------C--------------|------------C-----------|------------H-PHASE(1.57)-SWAP-|----
| | | |
q3 : ----------------------------------C------------------------C--------------C-----------H----SWAP-
T : |0|1|2| 3 | 4 | 5 | 6 | 7 | 8 | 9 |
###Markdown
Multiple levels of decompositionBelow is the implementation of the Quantum Phase Estimation given in the Amazon Braket tutorials repository (https://github.com/aws/amazon-braket-examples/blob/main/examples/advanced_circuits_algorithms/QPE/QPE.ipynb). The advantage of using the composite operator feature over constructing the circuit from gates is most apparent in this example.
###Code
# Define Pauli matrices
Id = np.eye(2) # Identity matrix
X = np.array([[0., 1.],
[1., 0.]]) # Pauli X
Y = np.array([[0., -1.j],
[1.j, 0.]]) # Pauli Y
Z = np.array([[1., 0.],
[0., -1.]]) # Pauli Z
def inverse_qft(qubits):
"""
Construct a circuit object corresponding to the inverse Quantum Fourier Transform (QFT)
algorithm, applied to the argument qubits. Does not use recursion to generate the circuit.
Args:
        qubits (list): The list of qubits on which to apply the inverse QFT
"""
# instantiate circuit object
qftcirc = Circuit()
# get number of qubits
num_qubits = len(qubits)
# First add SWAP gates to reverse the order of the qubits:
for i in range(math.floor(num_qubits/2)):
qftcirc.swap(qubits[i], qubits[-i-1])
# Start on the last qubit and work to the first.
for k in reversed(range(num_qubits)):
# Apply the controlled rotations, with weights (angles) defined by the distance to the control qubit.
# These angles are the negative of the angle used in the QFT.
# Start on the last qubit and iterate until the qubit after k.
# When num_qubits==1, this loop does not run.
for j in reversed(range(1, num_qubits - k)):
angle = -2*math.pi/(2**(j+1))
qftcirc.cphaseshift(qubits[k+j],qubits[k], angle)
# Then add a Hadamard gate
qftcirc.h(qubits[k])
return qftcirc
def controlled_unitary(control, target_qubits, unitary):
"""
Construct a circuit object corresponding to the controlled unitary
Args:
control: The qubit on which to control the gate
target_qubits: List of qubits on which the unitary U acts
unitary: matrix representation of the unitary we wish to implement in a controlled way
"""
# Define projectors onto the computational basis
p0 = np.array([[1., 0.],
[0., 0.]])
p1 = np.array([[0., 0.],
[0., 1.]])
# Instantiate circuit object
circ = Circuit()
# Construct numpy matrix
id_matrix = np.eye(len(unitary))
controlled_matrix = np.kron(p0, id_matrix) + np.kron(p1, unitary)
# Set all target qubits
targets = [control] + target_qubits
# Add controlled unitary
circ.unitary(matrix=controlled_matrix, targets=targets)
return circ
def qpe(precision_qubits, query_qubits, unitary, control_unitary=True):
"""
Function to implement the QPE algorithm using two registers for precision (read-out) and query.
Register qubits need not be contiguous.
Args:
precision_qubits: list of qubits defining the precision register
query_qubits: list of qubits defining the query register
unitary: Matrix representation of the unitary whose eigenvalues we wish to estimate
control_unitary: Optional boolean flag for controlled unitaries,
with C-(U^{2^k}) by default (default is True),
or C-U controlled-unitary (2**power) times
"""
qpe_circ = Circuit()
# Get number of qubits
num_precision_qubits = len(precision_qubits)
num_query_qubits = len(query_qubits)
# Apply Hadamard across precision register
qpe_circ.h(precision_qubits)
# Apply controlled unitaries. Start with the last precision_qubit, and end with the first
for ii, qubit in enumerate(reversed(precision_qubits)):
# Set power exponent for unitary
power = ii
        # Alternative 1: Implement C-(U^{2^k})
if control_unitary:
# Define the matrix U^{2^k}
Uexp = np.linalg.matrix_power(unitary,2**power)
# Apply the controlled unitary C-(U^{2^k})
qpe_circ.add_circuit(controlled_unitary(qubit, query_qubits, Uexp))
        # Alternative 2: One can instead apply controlled-unitary (2**power) times to get C-U^{2^power}
else:
for _ in range(2**power):
qpe_circ.add_circuit(controlled_unitary(qubit, query_qubits, unitary))
# Apply inverse qft to the precision_qubits
qpe_circ.add_circuit(inverse_qft(precision_qubits))
return qpe_circ
# set total number of qubits
precision_qubits = [0, 1, 2, 3]
query_qubits = [4]
# prepare query register
my_qpe_circ = Circuit()
# set unitary
unitary = X
# show small QPE example circuit
my_qpe_circ = qpe(precision_qubits, query_qubits, unitary)
print('QPE CIRCUIT:')
print(my_qpe_circ)
###Output
QPE CIRCUIT:
T : |0|1|2|3| 4 | 5 |6| 7 | 8 | 9 | 10 | 11 |12|
q0 : -H-------U------SWAP---------------------------------------------PHASE(-0.393)---PHASE(-0.785)-PHASE(-1.57)-H--
| | | | |
q1 : -H-----U-|-SWAP-|---------------------PHASE(-0.785)-PHASE(-1.57)-|-------------H-|-------------C---------------
| | | | | | | |
q2 : -H---U-|-|-SWAP-|------PHASE(-1.57)-H-|-------------C------------|---------------C-----------------------------
| | | | | | |
q3 : -H-U-|-|-|------SWAP-H-C--------------C--------------------------C---------------------------------------------
| | | |
q4 : ---U-U-U-U-----------------------------------------------------------------------------------------------------
T : |0|1|2|3| 4 | 5 |6| 7 | 8 | 9 | 10 | 11 |12|
###Markdown
Equivalently, the entirety of the above can be implemented by simply adding the QPE circuit using the circuit subroutine method instead. The circuit in this case has two levels of decomposition since QPE itself contains a decomposable composite operator (inverse QFT). Since the circuit decomposes by one level for every decomposition pass called, we must call the `decompose` method twice to fully decompose to gates.
###Code
# prepare query register
my_qpe_circ = Circuit()
# set unitary
unitary = X
# show small QPE example circuit
my_qpe_circ = my_qpe_circ.qpe(precision_qubits, query_qubits, unitary)
print('QPE CIRCUIT:')
print(my_qpe_circ)
print('QPE Circuit - One level decomposed:')
print(my_qpe_circ.decompose())
print('QPE Circuit - Two levels decomposed:')
print(my_qpe_circ.decompose().decompose())
###Output
QPE CIRCUIT:
T : | 0 |
q0 : -QPE-
| |
q1 : -|*|-
| |
q2 : -|*|-
| |
q3 : -|*|-
| |
q4 : -|*|-
T : | 0 |
QPE Circuit - One level decomposed:
T : |0|1|2|3|4| 5 |
q0 : -H-------U-iQFT-
| | |
q1 : -H-----U-|-|* |-
| | | |
q2 : -H---U-|-|-|* |-
| | | | |
q3 : -H-U-|-|-|-|* |-
| | | |
q4 : ---U-U-U-U------
T : |0|1|2|3|4| 5 |
QPE Circuit - Two levels decomposed:
T : |0|1|2|3| 4 | 5 |6| 7 | 8 | 9 | 10 | 11 |12|
q0 : -H-------U------SWAP---------------------------------------------PHASE(-0.393)---PHASE(-0.785)-PHASE(-1.57)-H--
| | | | |
q1 : -H-----U-|-SWAP-|---------------------PHASE(-0.785)-PHASE(-1.57)-|-------------H-|-------------C---------------
| | | | | | | |
q2 : -H---U-|-|-SWAP-|------PHASE(-1.57)-H-|-------------C------------|---------------C-----------------------------
| | | | | | |
q3 : -H-U-|-|-|------SWAP-H-C--------------C--------------------------C---------------------------------------------
| | | |
q4 : ---U-U-U-U-----------------------------------------------------------------------------------------------------
T : |0|1|2|3| 4 | 5 |6| 7 | 8 | 9 | 10 | 11 |12|
|
analysis/Attention_Scores_Distribution_Analysis_fltr-individual.ipynb | ###Markdown
Pseudo-code/logicFor batch in seqposes: attn_matrices_batch = read attention matrices for the given batch using arg(batch) for seq_inf_dict,attn_matrix in zip(batch,attn_matrices_batch): query_filter_attn_val_matrix = get_attn_values_function(seq_inf_dict, attn_matrix) Once we have attn values for each query filter, use the analysis to generate a scatter plot using query filter IC values vs. the corresponding non-zero attention scores.
###Code
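# Standard imports for this analysis; `experiment`, `seqposes`, `num_filters`, `filter_ICs`
# and `get_query_filter_attn` are assumed to be defined in earlier (not shown) cells of the notebook.
import pickle
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from scipy import stats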
all_filters_dict = {f'filter{i}':[] for i in range(0, num_filters)}
num_batches = 10 #len(seqposes) for all batches to check
for i in range(0, len(seqposes)):
batch_seqposes = seqposes[i]
with open(f'../results/{experiment}/Stored_Values/PAttn_batch-{i}.pckl','rb') as f:
batch_pattn = pickle.load(f)
num_multiheads = int(batch_pattn.shape[2]/batch_pattn.shape[1])
feat_size = batch_pattn.shape[1]
seq_info_dicts = batch_seqposes[0]
seq_info_dict_indices = batch_seqposes[1]
info_dict_keys = list(seq_info_dicts.keys())
for j in range(0, len(seq_info_dict_indices)):
seq_info_dict = seq_info_dicts[info_dict_keys[j]]
attn_mat = batch_pattn[seq_info_dict_indices[j]]
all_filters_dict = get_query_filter_attn(all_filters_dict,
attn_mat,
seq_info_dict,
feat_size,
num_multiheads)
if i >= num_batches:
break
all_filters_attn = pd.Series(all_filters_dict)
all_filters_attn = all_filters_attn.apply(lambda x: np.array(x))
all_filters_attn = all_filters_attn.apply(lambda x: np.median(x[x>0]))#[x>=0]))
x = filter_ICs['ic'].values.astype(float)
y = all_filters_attn[filter_ICs['filter']].values
def r2(x, y):
return stats.pearsonr(x, y)[0] ** 2
res = pd.DataFrame([x,y]).T
res.columns = ['Information content', 'Median attention score']
sns.jointplot(res['Information content'], res['Median attention score'], kind="reg")
r2val = r2(res['Information content'], res['Median attention score'])
plt.text(min(x),max(y),r'$R^2=$'+str(round(r2val,2)))
plt.xlabel('Information content)')
plt.tight_layout()
plt.savefig('output/'+experiment+'.png')
plt.show()
###Output
_____no_output_____ |
dayoung_trial1/06_Service_code_regression.ipynb | ###Markdown
Regression analysis of total sales without income variables
###Code
#Commercial-district data (no NaN, total sales, with income): sang_income_nan.csv
s_df = pd.read_csv("sang_income_nan.csv", encoding='euc-kr')
s_df=s_df[s_df.columns[:-2]]
s_df.columns= ['당월_매출_금액', '기준_분기_코드', '상권_코드', '서비스_업종_코드', '총_직장_인구_수',
'employee_rate', '집객시설_수', '총_유동인구_수', 'floating_pop_rate',
'운영_영업_개월_평균', '폐업_영업_개월_평균', '총_상주인구_수', '총_가구_수', '점포_수',
'유사_업종_점포_수']
s_df.columns
for i in s_df[s_df.columns[4:]].columns:
print(i, end="+")
#Apply robust scaling
from sklearn.preprocessing import RobustScaler
dfX=s_df[s_df.columns[4:]]
rb = RobustScaler()
rb.fit(dfX)
X_robust_scaled = rb.transform(dfX)
dfX2=pd.DataFrame(X_robust_scaled, columns= s_df.columns[4:])
s_df_scaled = pd.concat([s_df[s_df.columns[[0,3]]],dfX2], axis=1)
#Regression analysis (OLS)
model = sm.OLS.from_formula("당월_매출_금액 ~ 총_직장_인구_수+employee_rate+집객시설_수+총_유동인구_수+\
운영_영업_개월_평균+폐업_영업_개월_평균+총_상주인구_수+총_가구_수+점포_수+유사_업종_점포_수+C(서비스_업종_코드)+0",data= s_df_scaled)
result = model.fit()
print(result.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: 당월_매출_금액 R-squared: 0.439
Model: OLS Adj. R-squared: 0.439
Method: Least Squares F-statistic: 1181.
Date: Mon, 02 Dec 2019 Prob (F-statistic): 0.00
Time: 19:59:30 Log-Likelihood: -1.7290e+06
No. Observations: 81553 AIC: 3.458e+06
Df Residuals: 81498 BIC: 3.459e+06
Df Model: 54
Covariance Type: nonrobust
==========================================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------------------
C(서비스_업종_코드)[CS100001] 2.342e+08 7.64e+06 30.638 0.000 2.19e+08 2.49e+08
C(서비스_업종_코드)[CS100002] 1.718e+08 8.02e+06 21.423 0.000 1.56e+08 1.88e+08
C(서비스_업종_코드)[CS100003] 1.475e+08 1.05e+07 14.095 0.000 1.27e+08 1.68e+08
C(서비스_업종_코드)[CS100004] 1.982e+08 1.11e+07 17.856 0.000 1.76e+08 2.2e+08
C(서비스_업종_코드)[CS100005] -1.306e+07 7.19e+06 -1.816 0.069 -2.72e+07 1.04e+06
C(서비스_업종_코드)[CS100006] 3.09e+07 1.11e+07 2.782 0.005 9.13e+06 5.27e+07
C(서비스_업종_코드)[CS100007] -6.301e+07 9.7e+06 -6.497 0.000 -8.2e+07 -4.4e+07
C(서비스_업종_코드)[CS100008] 8.766e+07 1.02e+07 8.608 0.000 6.77e+07 1.08e+08
C(서비스_업종_코드)[CS100009] -4.346e+07 7.07e+06 -6.145 0.000 -5.73e+07 -2.96e+07
C(서비스_업종_코드)[CS100010] 4.915e+07 7.09e+06 6.938 0.000 3.53e+07 6.3e+07
C(서비스_업종_코드)[CS200001] 1.825e+08 8.09e+06 22.565 0.000 1.67e+08 1.98e+08
C(서비스_업종_코드)[CS200002] 1.301e+08 1.24e+07 10.511 0.000 1.06e+08 1.54e+08
C(서비스_업종_코드)[CS200003] 2.068e+07 7.35e+06 2.815 0.005 6.28e+06 3.51e+07
C(서비스_업종_코드)[CS200004] 3.204e+08 9.51e+06 33.689 0.000 3.02e+08 3.39e+08
C(서비스_업종_코드)[CS200005] 1.833e+08 9.13e+06 20.064 0.000 1.65e+08 2.01e+08
C(서비스_업종_코드)[CS200006] 4.38e+08 9.23e+06 47.458 0.000 4.2e+08 4.56e+08
C(서비스_업종_코드)[CS200007] 9.743e+07 1.88e+07 5.175 0.000 6.05e+07 1.34e+08
C(서비스_업종_코드)[CS200008] -2.928e+08 3.13e+07 -9.354 0.000 -3.54e+08 -2.31e+08
C(서비스_업종_코드)[CS200009] 8.257e+07 1.13e+07 7.334 0.000 6.05e+07 1.05e+08
C(서비스_업종_코드)[CS200010] 3.218e+07 7.89e+06 4.076 0.000 1.67e+07 4.77e+07
C(서비스_업종_코드)[CS200011] 2.537e+08 1.21e+07 20.911 0.000 2.3e+08 2.77e+08
C(서비스_업종_코드)[CS200012] 6.044e+07 1.26e+07 4.789 0.000 3.57e+07 8.52e+07
C(서비스_업종_코드)[CS200013] -2.924e+07 8.8e+06 -3.324 0.001 -4.65e+07 -1.2e+07
C(서비스_업종_코드)[CS200014] 1.452e+08 1.08e+07 13.454 0.000 1.24e+08 1.66e+08
C(서비스_업종_코드)[CS200015] 1.422e+08 8.45e+06 16.821 0.000 1.26e+08 1.59e+08
C(서비스_업종_코드)[CS200016] -1.407e+08 6.63e+06 -21.201 0.000 -1.54e+08 -1.28e+08
C(서비스_업종_코드)[CS200017] 5.579e+07 1.1e+07 5.082 0.000 3.43e+07 7.73e+07
C(서비스_업종_코드)[CS200018] 2.523e+07 9.54e+06 2.644 0.008 6.53e+06 4.39e+07
C(서비스_업종_코드)[CS300001] 4.598e+08 6.74e+06 68.207 0.000 4.47e+08 4.73e+08
C(서비스_업종_코드)[CS300002] 3.509e+08 1.26e+07 27.871 0.000 3.26e+08 3.76e+08
C(서비스_업종_코드)[CS300003] 8.097e+07 1.83e+07 4.424 0.000 4.51e+07 1.17e+08
C(서비스_업종_코드)[CS300004] 5.847e+07 1.11e+07 5.261 0.000 3.67e+07 8.02e+07
C(서비스_업종_코드)[CS300005] 1.748e+08 7.48e+06 23.382 0.000 1.6e+08 1.89e+08
C(서비스_업종_코드)[CS300006] 1.763e+08 1.48e+07 11.902 0.000 1.47e+08 2.05e+08
C(서비스_업종_코드)[CS300007] 1.435e+07 7.42e+06 1.933 0.053 -1.99e+05 2.89e+07
C(서비스_업종_코드)[CS300008] 7.921e+07 8.78e+06 9.019 0.000 6.2e+07 9.64e+07
C(서비스_업종_코드)[CS300009] 3.776e+08 7.81e+06 48.359 0.000 3.62e+08 3.93e+08
C(서비스_업종_코드)[CS300010] 1.13e+08 9.92e+06 11.388 0.000 9.36e+07 1.32e+08
C(서비스_업종_코드)[CS300011] -4.115e+07 8.63e+06 -4.766 0.000 -5.81e+07 -2.42e+07
C(서비스_업종_코드)[CS300012] 2.302e+08 1.34e+07 17.219 0.000 2.04e+08 2.56e+08
C(서비스_업종_코드)[CS300013] 8.018e+07 1.12e+07 7.169 0.000 5.83e+07 1.02e+08
C(서비스_업종_코드)[CS300014] 3.843e+07 9.33e+06 4.117 0.000 2.01e+07 5.67e+07
C(서비스_업종_코드)[CS300015] 5.219e+08 1.36e+07 38.281 0.000 4.95e+08 5.49e+08
C(서비스_업종_코드)[CS300016] 3.255e+06 7.64e+06 0.426 0.670 -1.17e+07 1.82e+07
C(서비스_업종_코드)[CS300017] 1.215e+08 1.49e+07 8.135 0.000 9.22e+07 1.51e+08
총_직장_인구_수 4.77e+06 5.38e+05 8.862 0.000 3.71e+06 5.82e+06
employee_rate -2.93e+06 1.94e+06 -1.510 0.131 -6.73e+06 8.74e+05
집객시설_수 3.13e+07 1.66e+06 18.840 0.000 2.8e+07 3.46e+07
총_유동인구_수 6.562e+06 1.66e+06 3.962 0.000 3.32e+06 9.81e+06
운영_영업_개월_평균 -3.469e+07 1.86e+06 -18.640 0.000 -3.83e+07 -3.1e+07
폐업_영업_개월_평균 1.225e+07 1.86e+06 6.581 0.000 8.6e+06 1.59e+07
총_상주인구_수 -2.584e+06 4.87e+06 -0.531 0.595 -1.21e+07 6.95e+06
총_가구_수 -3.256e+07 4.87e+06 -6.689 0.000 -4.21e+07 -2.3e+07
점포_수 -3.932e+08 7.73e+06 -50.877 0.000 -4.08e+08 -3.78e+08
유사_업종_점포_수 5.572e+08 7.34e+06 75.919 0.000 5.43e+08 5.72e+08
==============================================================================
Omnibus: 119230.480 Durbin-Watson: 1.971
Prob(Omnibus): 0.000 Jarque-Bera (JB): 157937354.533
Skew: 8.386 Prob(JB): 0.00
Kurtosis: 217.936 Cond. No. 63.1
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
|
Traffic Anomaly Classification by Support Vector Machine with Radial Basis Function on Chula-SSS Urban Road Network/AnomalyDetection.ipynb | ###Markdown
A Machine Learning Project may not be linear, but it has a number of well known steps:1.Define Problem.2.Prepare Data.3.Evaluate Algorithms.4.Improve Results.5.Present Results.In this section, we are going to work through a small machine learning project end-to-end.Here is an overview of what we are going to cover:1.Installing the Python and SciPy platform.2.Loading the dataset.3.Summarizing the dataset.4.Evaluating some algorithms.5.Making some predictions. 1.Downloading, Installing and Starting Python SciPy
###Code
# from xml.dom import minidom
import numpy as np
import pandas
import csv
from pandas.plotting import scatter_matrix
import matplotlib.pyplot as plt
from sklearn import model_selection
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.svm import SVC
###Output
_____no_output_____
###Markdown
2. Load The Data
###Code
#Load dataset
url ='dataset/LaneClosure/L10130/2 s interval(including2hops)/1,2,3 close/L30.csv'
names = ["Time","Edge ID","Edge Length","NumberOfLane","Lane Name","Jam Length","Density","Mean Speed","Mean Occupancy","Flow","Road State(basedOnJamLength)","Road State(basedOnFlow)"]
dataset = pandas.read_csv(url, names=names,skiprows=1)
plt.scatter(dataset['Flow'], dataset['Mean Speed'])
# class distribution
dataset.groupby('Road State(basedOnFlow)').head()
abnormalcase = dataset.loc[dataset['Road State(basedOnFlow)']==1];
normalcase = dataset.loc[dataset['Road State(basedOnFlow)']==0];
plt.scatter(normalcase['Flow'], normalcase['Mean Speed']);
plt.xlim(0,30); plt.ylim(0,1.1)
plt.figure(); plt.scatter(abnormalcase['Flow'], abnormalcase['Mean Speed']);
###Output
_____no_output_____
###Markdown
3. In this step we are going to take a look at the data in a few different ways:
- Dimensions of the dataset.
- Peek at the data itself.
- Statistical summary of all attributes.
- Breakdown of the data by the class variable.
###Code
# descriptions
print(dataset.describe())
# class distribution
print(dataset.groupby('Road State(basedOnFlow)').size())
###Output
Road State(basedOnFlow)
0 3505
1 1896
dtype: int64
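###Markdown
For completeness, the dimensions of the dataset and a peek at its first rows (the first two items listed above) can be checked with the small extra cell below; it is an addition to the original notebook and only uses the already-loaded `dataset` DataFrame.
###Code
# dimensions of the dataset and a peek at the first rows
print(dataset.shape)
print(dataset.head())
###Output
_____no_output_____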
###Markdown
4. Evaluate Some Algorithms
Now it is time to create some models of the data and estimate their accuracy on unseen data. Here is what we are going to cover in this step:
- Separate out a validation dataset.
- Set up the test harness to use 10-fold cross-validation.
- Build a model (an RBF-kernel SVM) to classify the road state from the traffic measurements.
- Evaluate the model.
###Code
# Split-out validation dataset
array = dataset.values
X = array[:,[7,9]]
Y = array[:,11]
Y=Y.astype('int')
validation_size = 0.30
seed = 7
X_train, X_validation, Y_train, Y_validation = model_selection.train_test_split(X, Y, test_size=validation_size, random_state=seed)
print(X)
# Test options and evaluation metric
seed = 7
scoring = 'accuracy'
# Spot Check Algorithms
models=[]
svc = SVC(kernel='rbf', C=1,gamma='auto').fit(X_train, Y_train)
models.append(('SVM', svc))
# evaluate each model in turn
results = []
names = []
for name, model in models:
kfold = model_selection.KFold(n_splits=10, random_state=seed)
#kfold = model_selection.KFold(n_splits=10)
cv_results = model_selection.cross_val_score(svc, X_train, Y_train, cv=kfold, scoring=scoring)
results.append(cv_results)
names.append(name)
msg = "%s: %f (%f)" % (name, cv_results.mean(), cv_results.std())
print(msg)
###Output
SVM: 0.729630 (0.021058)
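###Markdown
The `models` list above is set up so that additional classifiers could be spot-checked alongside the SVM. The cell below is an optional sketch of that idea (it is not part of the original experiment, and the two extra classifiers are my own choice), reusing the same 10-fold setup and scoring metric.
###Code
# optional: spot-check two extra baseline classifiers with the same 10-fold setup
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
extra_models = [('LR', LogisticRegression(solver='liblinear')),
                ('KNN', KNeighborsClassifier())]
for extra_name, extra_model in extra_models:
    extra_kfold = model_selection.KFold(n_splits=10, shuffle=True, random_state=seed)
    extra_results = model_selection.cross_val_score(extra_model, X_train, Y_train, cv=extra_kfold, scoring=scoring)
    print("%s: %f (%f)" % (extra_name, extra_results.mean(), extra_results.std()))
###Output
_____no_output_____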
###Markdown
5.Make Predictions
###Code
# Make predictions on validation dataset
predictions = svc.predict(X_validation)
print('Accuracy :',accuracy_score(Y_validation, predictions))
cm=confusion_matrix(Y_validation, predictions)
plt.clf()
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Wistia)
classNames = ['Normal','Abnormal']
plt.title('Normal or Abnormal Confusion Matrix')
plt.ylabel('True label')
plt.xlabel('Predicted label')
tick_marks = np.arange(len(classNames))
plt.xticks(tick_marks, classNames, rotation=45)
plt.yticks(tick_marks, classNames)
s = [['TN','FP'], ['FN', 'TP']]
for i in range(2):
for j in range(2):
plt.text(j,i, str(s[i][j])+" = "+str(cm[i][j]))
plt.show()
# create a mesh to plot in
x_min, x_max = X[:, 0].min() - 0, X[:, 0].max() + 0
y_min, y_max = X[:, 1].min() - 0, X[:, 1].max() + 0
h = (x_max - x_min)/100
xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
np.arange(y_min, y_max, h))
plt.subplot(1, 1, 1)
Z = svc.predict(np.c_[xx.ravel(), yy.ravel()])
Z = Z.reshape(xx.shape)
plt.contourf(xx, yy, Z, cmap=plt.cm.Paired, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
plt.xlabel('Mean Speed')
plt.ylabel('Flow')
plt.xlim(xx.min(), xx.max())
plt.title('SVC with RBF kernel')
plt.show()
###Output
_____no_output_____ |
analysis/June/milestone3/Analysis.ipynb | ###Markdown
Research Question
Top 5 factors that have the greatest impact on the Life Ladder.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
data=pd.read_csv('whreport.csv')
data.head()
data.isna().sum()
data.describe()
plt.figure(figsize=(20,10))
ax1=sns.lineplot(data.year,data['Life Ladder'])
ax2=sns.lineplot(data.year,data['Log GDP per capita'])
plt.grid()
plt.legend(labels=['Life Ladder','Log GDP per capita'])
plt.show()
plt.figure(figsize=(20,10))
ax1=sns.lineplot(data.year,data['Life Ladder'])
ax3=sns.lineplot(data.year,data['Social support'])
plt.grid()
plt.legend(labels=['Life Ladder','Social support'])
plt.show()
plt.figure(figsize=(20,10))
ax1=sns.lineplot(data.year,data['Life Ladder'])
ax4=sns.lineplot(data.year,data['Healthy life expectancy at birth'])
plt.grid()
plt.legend(labels=['Life Ladder','Healthy life expectancy at birth'])
plt.show()
plt.figure(figsize=(20,10))
ax1=sns.lineplot(data.year,data['Life Ladder'])
ax5=sns.lineplot(data.year,data['Freedom to make life choices'])
plt.grid()
plt.legend(labels=['Life Ladder','Freedom to make life choices'])
plt.show()
plt.figure(figsize=(20,10))
ax1=sns.lineplot(data.year,data['Life Ladder'])
ax6=sns.lineplot(data.year,data['Generosity'])
plt.grid()
plt.legend(labels=['Life Ladder','Generosity'])
plt.show()
plt.figure(figsize=(20,10))
ax1=sns.lineplot(data.year,data['Life Ladder'])
ax7=sns.lineplot(data.year,data['Perceptions of corruption'])
plt.grid()
plt.legend(labels=['Life Ladder','Perceptions of corruption'])
plt.show()
plt.figure(figsize=(20,10))
ax1=sns.lineplot(data.year,data['Life Ladder'])
ax8=sns.lineplot(data.year,data['Positive affect'])
plt.grid()
plt.legend(labels=['Life Ladder','Positive affect'])
plt.show()
plt.figure(figsize=(20,10))
ax1=sns.lineplot(data.year,data['Life Ladder'])
ax9=sns.lineplot(data.year,data['Negative affect'])
plt.grid()
plt.legend(labels=['Life Ladder','Negative affect'])
plt.show()
plt.figure(figsize=(20,10))
ax2=sns.lineplot(data.year,data['Log GDP per capita'])
ax3=sns.lineplot(data.year,data['Social support'])
ax4=sns.lineplot(data.year,data['Healthy life expectancy at birth'])
ax5=sns.lineplot(data.year,data['Freedom to make life choices'])
ax6=sns.lineplot(data.year,data['Generosity'])
ax7=sns.lineplot(data.year,data['Perceptions of corruption'])
ax8=sns.lineplot(data.year,data['Positive affect'])
ax9=sns.lineplot(data.year,data['Negative affect'])
plt.grid()
plt.legend(labels=['Log GDP per capita','Social support','Healthy life expectancy at birth','Freedom to make life choices','Generosity','Perceptions of corruption','Positive affect','Negative affect'])
plt.show()
data.plot()
plt.figure(figsize=(20,10))
ax2=sns.scatterplot(data['Life Ladder'],data['Log GDP per capita'])
ax3=sns.scatterplot(data['Life Ladder'],data['Social support'])
ax4=sns.scatterplot(data['Life Ladder'],data['Healthy life expectancy at birth'])
ax5=sns.scatterplot(data['Life Ladder'],data['Freedom to make life choices'])
ax6=sns.scatterplot(data['Life Ladder'],data['Generosity'])
ax7=sns.scatterplot(data['Life Ladder'],data['Perceptions of corruption'])
ax8=sns.scatterplot(data['Life Ladder'],data['Positive affect'])
ax9=sns.scatterplot(data['Life Ladder'],data['Negative affect'])
plt.grid()
plt.legend(labels=['Log GDP per capita','Social support','Healthy life expectancy at birth','Freedom to make life choices','Generosity','Perceptions of corruption','Positive affect','Negative affect'])
plt.show()
###Output
C:\Users\J\miniconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
C:\Users\J\miniconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
C:\Users\J\miniconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
C:\Users\J\miniconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
C:\Users\J\miniconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
C:\Users\J\miniconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
C:\Users\J\miniconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
C:\Users\J\miniconda3\lib\site-packages\seaborn\_decorators.py:36: FutureWarning: Pass the following variables as keyword args: x, y. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
warnings.warn(
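###Markdown
To connect the plots back to the research question, the cell below is a small supplementary sketch (not part of the original notebook) that ranks the numeric factors by the absolute strength of their correlation with the Life Ladder and prints the top 5. Correlation is only one simple way to quantify "impact", so this is a rough check rather than a definitive answer.
###Code
# supplementary: rank factors by absolute correlation with Life Ladder (top 5)
corr = data.corr()['Life Ladder'].drop(['Life Ladder', 'year'])
print(corr.abs().sort_values(ascending=False).head(5))
###Output
_____no_output_____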
|
Brain tumor prediction using cnn.ipynb | ###Markdown
Brain Tumor Detection From MRI Images
Step 1: Importing Libraries
###Code
Defining Model architecture: For image classification we use
Convolution neural network
###Output
_____no_output_____
###Markdown
from keras.models import Sequential   # for initializing the network
from keras.layers import Dense
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
import imutils
import cv2
from matplotlib import pyplot as plt
###Code
# Data Preparation & Preprocessing
In order to crop the part that contains only the brain of the image, I used a cropping technique to find the extreme top, bottom, left and right points of the brain.
###Output
_____no_output_____
###Markdown
def crop_brain_contour(image, plot=False):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.erode(thresh, None, iterations=2)
    thresh = cv2.dilate(thresh, None, iterations=2)
    cnts = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    cnts = imutils.grab_contours(cnts)
    c = max(cnts, key=cv2.contourArea)
    extLeft = tuple(c[c[:, :, 0].argmin()][0])
    extRight = tuple(c[c[:, :, 0].argmax()][0])
    extTop = tuple(c[c[:, :, 1].argmin()][0])
    extBot = tuple(c[c[:, :, 1].argmax()][0])
    new_image = image[extTop[1]:extBot[1], extLeft[0]:extRight[0]]
    if plot:
        plt.figure()
        plt.subplot(1, 2, 1)
        plt.imshow(image)
        plt.tick_params(axis='both', which='both', top=False, bottom=False, left=False, right=False,
                        labelbottom=False, labeltop=False, labelleft=False, labelright=False)
        plt.title('Original Image')
        plt.subplot(1, 2, 2)
        plt.imshow(new_image)
        plt.tick_params(axis='both', which='both', top=False, bottom=False, left=False, right=False,
                        labelbottom=False, labeltop=False, labelleft=False, labelright=False)
        plt.title('Cropped Image')
        plt.show()
    return new_image
###Code
In order to better understand what it's doing, let's grab an image from the dataset and apply this cropping function to see the result:
###Output
_____no_output_____
###Markdown
ex_img = cv2.imread(r'C:\Users\asus\Desktop\uploads\dataset\test_set\yes/Y1.jpg')
ex_new_img = crop_brain_contour(ex_img, True)
###Code
# Step2: Initializing the model
###Output
_____no_output_____
###Markdown
The Sequential class is used to define a linear stack of network layers which then, collectively, constitute a model. We will use the Sequential constructor to create a model.
###Code
model=Sequential()
###Output
WARNING:tensorflow:From C:\Users\asus\anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:74: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
###Markdown
Step3: Adding Convolution Layers
###Code
The ImageDataGenerator accepts the original data, randomly transforms
it, and returns only the new, transformed data.
###Output
_____no_output_____
###Markdown
from keras.preprocessing.image import ImageDataGenerator
###Code
The image is resized to an optimal size and is fed as input to the
convolutional layer.
###Output
_____no_output_____
###Markdown
model.add(Conv2D(32,3,3,input_shape=(64,64,3),activation='relu'))
###Code
# Step4:Adding Maxpooling Layer
###Output
_____no_output_____
###Markdown
Max pooling is a sample-based discretization process. The objective is to down-sample an input representation (image, hidden-layer output matrix, etc.). It reduces the computational cost by reducing the number of parameters.
###Code
model.add(MaxPooling2D(pool_size=(2,2)))
###Output
WARNING:tensorflow:From C:\Users\asus\anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py:3976: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.
###Markdown
Step5: Flatten
###Code
Flattening is the process of converting all the resultant 2 dimensional
arrays into a single long continuous linear vector.
###Output
_____no_output_____
###Markdown
model.add(Flatten())
model.summary()
###Code
# Step6 :Ann Layers
###Output
_____no_output_____
###Markdown
model.add(Dense(units=128,activation='relu',kernel_initializer='random_uniform'))
###Code
The ImageDataGenerator transforms each image in the batch by a series of random translations; these translations are based on the arguments supplied to it.
###Output
_____no_output_____
###Markdown
train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2, zoom_range=0.2, horizontal_flip=True)
test_datagen = ImageDataGenerator(rescale=1./255)
###Code
Dense Layers can be reduced back to linear layers if we use a linear
activation!
###Output
_____no_output_____
###Markdown
model.add(Dense(units=128, activation='relu', kernel_initializer='random_uniform'))
model.add(Dense(units=1, activation='sigmoid', kernel_initializer='random_uniform'))
model.summary()
###Code
With both the training data and the model defined, it's time to configure the learning process. This is accomplished with a call to the compile() method of the Sequential model class. Compilation requires 3 arguments: an optimizer, a loss function, and a list of metrics.
###Output
_____no_output_____
###Markdown
model.compile(optimizer='adam',loss='binary_crossentropy',metrics=['accuracy'])
###Code
Getting all the training and test images from the directories of the given dataset.
###Output
_____no_output_____
###Markdown
x_train = train_datagen.flow_from_directory(r'C:\Users\asus\Desktop\dataset\training_set', target_size=(64,64), batch_size=2, class_mode='binary')
x_test = test_datagen.flow_from_directory(r'C:\Users\asus\Desktop\dataset\test_set', target_size=(64,64), batch_size=2, class_mode='binary')
print(x_train.class_indices)
###Code
To commence the training process, we pass the data to the model; the process is completed by iterating on the training data. Training begins by calling the fit() method.
###Output
_____no_output_____
###Markdown
history=model.fit_generator(x_train,samples_per_epoch=215,epochs=8,validation_data=x_test,nb_val_samples=29)
###Code
Graph plots for model accuracy and model loss
###Output
_____no_output_____
###Markdown
print(history.history.keys())
"Accuracy"
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
"Loss"
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.show()
###Code
Your model is to be saved for future use. This saved model can also be integrated with an Android application or a web application in order to make predictions.
###Output
_____no_output_____
###Markdown
model.save('cnn_Yes_No.h5')
###Code
# prediction
###Output
_____no_output_____
###Markdown
The last and final step is to make use of the saved model to do predictions. We use the load_model class to load the model. We use the imread() function from the OpenCV library to read an image and give it to the model to predict the result. Before giving the original image to the model to predict the class, we have to pre-process that image and then apply the prediction to get accurate results.
###Code
from keras.models import load_model
import numpy as np
import cv2
model=load_model('cnn_Yes_No.h5')
from skimage.transform import resize
def detect(frame):
try:
img= resize(frame,(64,64))
#print(img)
img= np.expand_dims(img,axis=0)
#print(img)
if(np.max(img)>1):
img= img/255.0
prediction= model.predict(img)
print(prediction)
print(model.predict_classes(img))
except AttributeError:
print("No tumor found")
frame= cv2.imread(r'C:\Users\asus\Desktop\uploads\dataset\test_set\yes\test_set\no\1 no.jpg')
data= detect(frame)
###Output
No tumor found
###Markdown
conclusion:
###Code
Finally, based on the analysis, it has been found that the overall accuracy of classification is above 75%.
The results show whether the person is affected by a brain tumor or not.
###Output
_____no_output_____ |
GMM_SoftAllocation.ipynb | ###Markdown
GMM Soft Allocation
Implement the GMM soft-allocation procedure to cluster data.
###Code
#Libraries required for my code to run
#Display your graphs inline within the notebook in a proper format
%matplotlib inline
#Switch to %matplotlib (without inline) instead if you want to display your graphs in a backend window
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from pandas import DataFrame
import pandas as pd
import numpy as np
from numpy.random import randn
import glob
import sys
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
1.Synthetic Data Generation first method
###Code
#Synthetic Data Generation first method
K=4 # Step1:Choose the Number of clusters
#%matplotlib
from sklearn.datasets import make_blobs  # samples_generator is deprecated in newer scikit-learn versions
Data, y_true = make_blobs(n_samples=400, centers=K,
cluster_std=0.70, random_state=0)
df = pd.DataFrame(data=Data, columns=["X", "Y"])
#Plot 2D
plt.scatter(Data[:, 0], Data[:, 1], s=5)
plt.title("SampleDataGeneration-2D")
plt.xlabel("X-axis")
plt.ylabel("Y-axis")
plt.grid()
#plt.scatter(centers[:, 0], centers[:, 1],c='red', s=50)
#plt.title("SampleDataGeneration-2D")
#plt.xlabel("X-axis")
#plt.ylabel("Y-axis")
#plt.grid()
#3D Plot
fig = plt.figure(figsize=(6,6))
ax = fig.add_subplot(111, projection='3d')
ax.scatter(Data[:, 0],Data[:, 1],
linewidths=1, alpha=0.5,
#edgecolor='k',
s =10,
)
ax.set_title("SampleDataGeneration-3D")
ax.set_xlabel('X-axis')
ax.set_ylabel('Y-axis')
ax.set_zlabel('Z-axis')
plt.show()
def plot_hist(data):
for x in data:
plt.hist(x, bins = 80, alpha = 0.7)
plot_hist([Data[:, 0], Data[:, 1]])
plt.title("Histogram")
from scipy.stats import multivariate_normal
X=Data
class GaussianMixture():
    #Here you will create a reference to all the parameters, which get stored against the declared attributes
def __init__(self, gaussians: int, n_iters: int, tol: float, seed: int):
self.gaussians = gaussians
self.n_iters = n_iters
self.tol = tol
self.seed = seed
def fit(self, X):
# data's dimensionality and probability vector initialization
self.n_row, self.n_col = X.shape
self.probability = np.zeros((self.n_row, self.gaussians))
#print(self.probability)
##Below multicommented block can be used if you want to apply GMM on a dataset without Kmeans result
# initialize parameters
np.random.seed(self.seed)
chosen = np.random.choice(self.n_row, self.gaussians, replace = False)
#print("Chosen:",chosen)
self.means = X[chosen]
#print("Initial Means:",self.means)
self.weights = np.full(self.gaussians, 1 / self.gaussians)
#print("Initial weights:",self.weights)
# for np.cov, rowvar = False,
        # indicates that the rows represent observations
shape = self.gaussians, self.n_col, self.n_col
self.covs = np.full(shape, np.cov(X, rowvar = False))
# print("Initial Covariance:",self.covs)
"""
self.means=m
self.weights=pi
self.covs=c
"""
log_likelihood = 0 #Initializing for iteration
self.converged = False
self.log_likelihood_trace = []
print("...Entering GMM Clustering...\n")
for i in range(self.n_iters):
log_likelihood_new = self.Estep(X)
self.Mstep(X)
if (abs(log_likelihood_new - log_likelihood) <= self.tol):
self.converged = True
break
log_likelihood = log_likelihood_new
self.log_likelihood_trace.append(log_likelihood)
print("Iteration: ",i," log_likelihood: ", log_likelihood)
plt.plot(self.log_likelihood_trace)
plt.title("Loglikelihood Convergence Graph")
#print("log_likelihood_trace:",self.log_likelihood_trace)
last=self.log_likelihood_trace[-1]
#print(last)
return self.means,self.weights,self.covs,self.probability
def Estep(self, X):
"""
E-step: compute probability,
update probability matrix so that probability[i, j] is the probability of cluster k
for data point i,
to compute likelihood of data point i belonging to given cluster k,
use multivariate_normal.pdf
"""
self._compute_log_likelihood(X)
self.log_likelihood1 = np.sum(np.log(np.sum(self.probability, axis = 1)))
#Normalization
self.probability = self.probability / self.probability.sum(axis = 1, keepdims = 1)
#print("Normalised probability",self.probability)
return self.log_likelihood1
def _compute_log_likelihood(self, X):
for k in range(self.gaussians):
prior = self.weights[k]
#print("prior_weight",prior)
likelihood = multivariate_normal(self.means[k], self.covs[k]).pdf(X)
#print("Likelihood/probability"+str(k),likelihood)
self.probability[:, k] = prior * likelihood
#print(" Size of Initial Probability of all the datapoints in cluster"+str(k),self.probability.shape)
return self
def compute_log_likelihood(self, X):
self.probs = np.zeros((X.shape[0] , self.gaussians))
for k in range(self.gaussians):
prior = self.weights[k]
#print("prior_weight",prior)
self.likeli = multivariate_normal(self.means[k], self.covs[k]).pdf(X)
#print("Likelihood/probability"+str(k),likelihood)
self.probs[:,k]= prior * self.likeli
#print(" Size of Initial Probability of all the datapoints in cluster"+str(k),self.probability.shape)
self.probs = self.probs / (np.sum(self.probs, axis=1)[:, np.newaxis])
return self.probs
def compute_log_likelihood_newmean(self, X, nmean, nvar, nweights):
self.probs1 = np.zeros((X.shape[0], self.gaussians))
for k in range(self.gaussians):
prior = nweights[k]
#print("prior_weight",prior)
self.likeli = multivariate_normal(nmean[k], nvar[k]).pdf(X)
#print("Likelihood/probability"+str(k),likelihood)
self.probs1[:,k]= prior * self.likeli
#print(" Size of Initial Probability of all the datapoints in cluster"+str(k),self.probability.shape)
self.probs1 = self.probs1 / (np.sum(self.probs1, axis=1)[:, np.newaxis])
return self.probs1
def Mstep(self, X):
"""M-step, update parameters"""
        # total probability assigned to each cluster, soft allocation (N^soft)
#print("probability assigned to each cluster",self.probability.sum(axis = 0))
resp_weights = self.probability.sum(axis = 0)
# updated_weights
self.weights = resp_weights / X.shape[0]
# updated_means
weighted_sum = np.dot(self.probability.T, X)
self.means = weighted_sum / resp_weights.reshape(-1, 1)
# updated_covariance
for k in range(self.gaussians):
diff = (X - self.means[k]).T
weighted_sum = np.dot(self.probability[:, k] * diff, diff.T)
self.covs[k] = weighted_sum / resp_weights[k]
return self
def predict(self, X):
post_proba = np.zeros((X.shape[0], self.gaussians))
for c in range(self.gaussians):
post_proba [:,c] = self.weights[c] * multivariate_normal.pdf(X, self.means[c,:], self.covs[c])
#print("Posterior_probability:", post_proba)
labels = post_proba.argmax(1)
#print("Labels/Classes:",labels)
return labels
K=4
model = GaussianMixture(gaussians=K, n_iters = 50, tol = 0.0, seed = 4)
#fitted_values = model.fit(X)
mean,weight,covar,probability = model.fit(X)
print("Means",mean)
print("Weights",weight)
print("Covs",covar)
print("Prob",probability)
predicted_values = model.predict(X)
centers = np.zeros((K,2))
for i in range(model.gaussians):
density = multivariate_normal(cov=model.covs[i], mean=model.means[i]).logpdf(X)
centers[i, :] = X[np.argmax(density)]
plt.figure(figsize = (10,8))
plt.scatter(X[:, 0], X[:, 1],c=predicted_values ,s=10, cmap='viridis')
plt.scatter(centers[:, 0], centers[:, 1],c='Red', s=300, alpha=0.6);
plt.grid()
print('converged iteration:', len(model.log_likelihood_trace))
from matplotlib.patches import Ellipse
def draw_ellipse(position, covariance, ax=None, **kwargs):
"""Draw an ellipse with a given position and covariance"""
ax = ax or plt.gca()
# Convert covariance to principal axes
if covariance.shape == (2, 2):
U, s, Vt = np.linalg.svd(covariance)
angle = np.degrees(np.arctan2(U[1, 0], U[0, 0]))
width, height = 2 * np.sqrt(s)
else:
angle = 0
width, height = 2 * np.sqrt(covariance)
# Draw the Ellipse
for nsig in range(1, 4):
ax.add_patch(Ellipse(position, nsig * width, nsig * height,
angle, **kwargs))
plt.figure(figsize = (10,8))
plt.scatter(X[:, 0], X[:, 1],c=predicted_values ,s=10, cmap='viridis')
plt.scatter(centers[:, 0], centers[:, 1],c='Red', s=300, alpha=0.6);
w_factor = 0.2 / model.weights.max()
for pos, covar, w in zip(model.means, model.covs, model.weights):
draw_ellipse(pos, covar, alpha=w * w_factor)
from sklearn import mixture
gmm = mixture.GaussianMixture(n_components = 4, covariance_type = 'full',
max_iter = 600, random_state = 3)
gmm.fit(X)
print('converged or not: ', gmm.converged_)
###Output
converged or not: True
###Markdown
Validation wrt original GMM
###Code
from sklearn import mixture
model = mixture.GaussianMixture(n_components=4, covariance_type='full').fit(Data)
labels = model.predict(X)
plt.scatter(X[:, 0], X[:, 1], c=labels, s=10, cmap='viridis');
###Output
_____no_output_____ |
algorithms/Container With Most Water/scrips.ipynb | ###Markdown
Brute-force search version; it times out.
###Code
class Solution(object):
def maxArea(self, height):
"""
:type height: List[int]
:rtype: int
"""
max_area = 0
for m, a_m in enumerate(height):
for n, a_n in enumerate(height):
if n==m: continue
temp = abs(m-n)*min(a_m, a_n)
if temp>max_area:
max_area = temp
return max_area
s = Solution()
s.maxArea([1,1])
import json
f= open("demo.json", "r")
demo_input = json.loads(f.read())
demo_input[:5],len(demo_input)
s.maxArea([1,2]),s.maxArea([2,2,2]),
%time s.maxArea([1,2])
%time s.maxArea(demo_input)
###Output
CPU times: user 6.66 s, sys: 7.59 ms, total: 6.66 s
Wall time: 6.68 s
###Markdown
Pruning 1
###Code
class Solution(object):
def maxArea(self, height):
"""
:type height: List[int]
:rtype: int
"""
max_area = 0
for m, a_m in enumerate(height):
for n in range(m+1, len(height)):
a_n = height[n]
temp = abs(m-n)*min(a_m, a_n)
if temp>max_area:
max_area = temp
return max_area
s = Solution()
%time s.maxArea(demo_input)
###Output
CPU times: user 3.22 s, sys: 3.48 ms, total: 3.22 s
Wall time: 3.22 s
###Markdown
Pruning 2: [a new way to see it](https://discuss.leetcode.com/topic/3462/yet-another-way-to-see-what-happens-in-the-o-n-algorithm). What to do next? I spent half an hour thinking about whether dynamic programming could solve this problem, but could not prove it, so I switched entirely to pruning. The problem becomes max(abs(m-n)*min(a[m],a[n])), a two-dimensional search over a single array. The most intuitive approach is to complete this task with a pruned matrix search, where each element of the matrix represents the maximum amount of water held by the two lines indexed by its row and column.
###Code
class Solution(object):
def maxArea(self, height):
"""
:type height: List[int]
:rtype: int
"""
max_area = 0
        # initialize the two pointers (one at each end of the array)
x = 0
y = len(height)-1
ret = []
cal =0
while x!=y:
if height[x]>height[y]:
ret.append(abs(y-x)*height[y])
y-=1
elif height[x]<=height[y]:
ret.append(abs(y-x)*height[x])
x+=1
cal+=1
print cal
max_area = max(ret)
return max_area
s = Solution()
%time s.maxArea(demo_input)
s.maxArea([1,2]),
s.maxArea([2,1])
s.maxArea([2,2,2])
###Output
_____no_output_____
###Markdown
Question: with a similar idea, why does it time out?
###Code
def test4(m=10000):
for i in iter(range(m)):
for j in iter(range(m)): break
def test1(m=10000):
for i in range(m):
for j in range(m): break
def test3(m=10000):
i=j=0
while i<m:
i+=1
j=0
while j<m:
j+=1
break
def test2(m=10000):
for i in range(m):
break
%time test2()
%time test3()
%time test1()
%time test4()
###Output
CPU times: user 267 µs, sys: 3 µs, total: 270 µs
Wall time: 271 µs
CPU times: user 1.44 ms, sys: 345 µs, total: 1.78 ms
Wall time: 1.51 ms
CPU times: user 1.18 s, sys: 5.82 ms, total: 1.19 s
Wall time: 1.21 s
CPU times: user 1.2 s, sys: 8.05 ms, total: 1.2 s
Wall time: 1.21 s
###Markdown
The examples above illustrate one point: iterating with Python's range in a nested loop dynamically generates a list each time, so although the code below expresses essentially the same idea as the correct solution, its execution efficiency is not even on the same order of magnitude.
###Code
class Solution(object):
def maxArea(self, height):
"""
:type height: List[int]
:rtype: int
"""
max_area = 0
        # initialize the search bounds (the dimensions of the conceptual matrix)
len_h = len(height)
len_v = len(height)
ret = []
cal = 0
for m in range(0, len_h):
for n in range(len_v-1, m,-1):
cal+=1
if height[m]>=height[n]:
len_v-=1
ret.append((n-m)*height[n])
elif height[m]<height[n]:
ret.append((n-m)*height[m])
break
max_area = max(ret)
print len(ret), cal
return max_area
s = Solution()
%time s.maxArea(demo_input)
###Output
14999 14999
CPU times: user 7.48 ms, sys: 980 µs, total: 8.46 ms
Wall time: 7.62 ms
|
src/Get_game_info.ipynb | ###Markdown
Get info of each game
Here are some examples of API responses:
- https://api.rawg.io/api/games/rimworld
- https://api.rawg.io/api/games/grand-theft-auto-v
- https://rawg.io/games/grand-theft-auto-v
###Code
import csv
import requests
import json
from pprint import pprint
from time import time
import concurrent.futures
import functools
import os
###Output
_____no_output_____
###Markdown
Load the CSV file which has each game's id and name.
###Code
csv_data = []
with open("../data/game_id.csv", "r") as f:
csv_data = list(csv.reader(f))
# Preview
for i, val in enumerate(csv_data):
print(val)
if i==10: break
###Output
['3498', 'grand-theft-auto-v']
['4200', 'portal-2']
['3328', 'the-witcher-3-wild-hunt']
['5286', 'tomb-raider']
['5679', 'the-elder-scrolls-v-skyrim']
['12020', 'left-4-dead-2']
['802', 'borderlands-2']
['4062', 'bioshock-infinite']
['13536', 'portal']
['3439', 'life-is-strange-episode-1-2']
['4291', 'counter-strike-global-offensive']
###Markdown
Multithreading
This function is responsible for requesting each game and saving it as a JSON file in `/data/game_info/`.
###Code
def worker(start_index, games_per_worker, urls, downloaded_files, headers):
for url in urls[start_index : start_index + games_per_worker]:
if url.rsplit("/")[-1] in downloaded_files: continue
try:
# Request API
json_data = json.loads(requests.get(url, headers=headers).text)
# Only include wanted keys
D = {k:v for k,v in json_data.items() if k in include}
# Clean up dictionary
D["platforms"] = []
for platform in json_data["platforms"]:
D["platforms"].append(platform["platform"]["name"])
for key in ("developers", "genres", "publishers"):
D[key] = []
for data in json_data[key]:
D[key].append(data["name"])
if json_data["esrb_rating"]:
D["esrb_rating"] = json_data["esrb_rating"]["name"]
# Save as JSON file
name = D["id"]
with open(f"../data/game_info/{name}.json","w", encoding="utf-8") as f:
json.dump(D, f)
except:
print(f"Error with {url}")
# Create folder if not existed
if not os.path.exists('../data/game_info/'):
os.makedirs('../data/game_info/')
###Output
_____no_output_____
###Markdown
Threading Preparation
###Code
headers = { 'User-Agent': 'App Name: Education purpose',}
include = {"id",
"slug",
"name",
"metacritic",
"released",
"tba",
"updated",
"website",
"rating",
"rating_top",
"added_by_status",
"playtime",
"achievements_count",
"ratings_count",
"suggestions_count",
"game_series_count",
"reviews_count",
"platforms",
"developers",
"genres",
"publishers",
"esrb_rating",
}
# Set up number of workers
max_workers = 64
start_game_index = 0
end_game_index = len(csv_data)
number_of_games = end_game_index - start_game_index
games_per_worker = int(number_of_games/max_workers) + 1
start_index = range(start_game_index, end_game_index, games_per_worker)
# Make urls
base_url = "https://api.rawg.io/api/games/"
urls = [base_url + csv_data[i][0] for i in range(len(csv_data))]
# Skip downloaded files
downloaded_files = {file.split(".",1)[0] for file in os.listdir("../data/game_info/")}
# Time
t0 = time()
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
temp = functools.partial(worker,
games_per_worker=games_per_worker,
urls=urls,
downloaded_files=downloaded_files,
headers=headers,
)
executor.map(temp, start_index)
# Time
print(f"Time taken: {time()-t0}")
###Output
Error with https://api.rawg.io/api/games/55494
Error with https://api.rawg.io/api/games/55172
Error with https://api.rawg.io/api/games/267083
Error with https://api.rawg.io/api/games/471035
Error with https://api.rawg.io/api/games/440682
Error with https://api.rawg.io/api/games/367202
Error with https://api.rawg.io/api/games/29079
Error with https://api.rawg.io/api/games/517088
Error with https://api.rawg.io/api/games/312611
Error with https://api.rawg.io/api/games/28446
Error with https://api.rawg.io/api/games/79200
Error with https://api.rawg.io/api/games/413880
Error with https://api.rawg.io/api/games/29123
Error with https://api.rawg.io/api/games/55027
Error with https://api.rawg.io/api/games/59025
Error with https://api.rawg.io/api/games/28703
Error with https://api.rawg.io/api/games/275734
Error with https://api.rawg.io/api/games/266581
Error with https://api.rawg.io/api/games/517387
Time taken: 1.2313072681427002
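###Markdown
As a quick sanity check (an extra step, not part of the original pipeline), one of the saved JSON files can be read back to confirm that the selected fields were captured; the file chosen here is simply the first one listed in the output folder, and the `json` and `os` modules are the ones already imported above.
###Code
# optional sanity check: read one saved file back and inspect its fields
sample_file = os.listdir("../data/game_info/")[0]
with open(f"../data/game_info/{sample_file}", "r", encoding="utf-8") as f:
    sample = json.load(f)
print(sample["name"], "->", sorted(sample.keys()))
###Output
_____no_output_____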
|
statistics_test/t_test_paired_two_tailed.ipynb | ###Markdown
Dependent design
* different conditions or treatments
* pre-test, post-test (before vs after)
* longitudinal study (tracking __individual__ subject growth over time)
###Code
df = pd.read_csv("data/keyboard.csv")
sample_size = len(df)
df.head()
q_err = df["QWERTY errors"]
a_err = df["Alphabetical errors"]
###Output
_____no_output_____
###Markdown
Point estimate
###Code
q_err.mean()
a_err.mean()
diff = q_err.mean() - a_err.mean()
diff
###Output
_____no_output_____
###Markdown
Compute paired difference
###Code
d = q_err - a_err
d_std = d.std(ddof=1)
d_se = d_std / np.sqrt(sample_size)
t = diff / d_se
# note that since t < 0, we are not taking the complement
p = stats.t.cdf(t, df=sample_size - 1) * 2
alpha = 0.05
if p < alpha:
print("reject null: p-val = %.3f"%p)
else:
print("cannot reject null: p-val = %.3f"%p)
###Output
reject null: p-val = 0.001
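###Markdown
As a quick cross-check (an extra cell, not part of the original walkthrough), scipy's built-in paired t-test should reproduce the same statistic and p-value as the manual calculation above.
###Code
# paired (dependent) t-test computed directly by scipy, for comparison
t_stat, p_val = stats.ttest_rel(q_err, a_err)
print("t = %.3f, p = %.3f" % (t_stat, p_val))
###Output
_____no_output_____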
###Markdown
Cohen's d
###Code
diff / d_std
###Output
_____no_output_____
###Markdown
Confidence interval
###Code
# critical t-value for a two-tailed 95% confidence interval with df = n - 1
t = 2.064
# margin of error
me = t * d_se
diff + me
diff - me
###Output
_____no_output_____ |
timeseries/MatplotlibGraphAesthetics.ipynb | ###Markdown
Preliminaries
We load matplotlib and execute the inline magic command first so that we can get to work.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
The Worst Graph You Have Ever Seen!
The graph in the first cell below plots random numbers within a specified range. Let's assume, for an example, that these are time series data. I have also created x data which are string data: these are used for x-axis tickmark labels. The result is illegible x-axis tickmark labels because every x label is printed out and they are on top of one another. We will fix that problem, and also we'll fix some other issues as well.
I have created a number of issues here, on purpose, in case you need to control these aspects of a graph in the future. First, I have turned on tickmarks for all the spines. This is not the current default setting for matplotlib, although it has been in past versions. So, in case the old default returns, you should know how to control turning tickmarks off on axes where they are not needed. Note, also, the tickmarks I have created are on the inside of the axes. We will change the orientation so that they are on the outside of the axes. To make the graph even less aesthetic, I have turned on grid lines, in case you need to know how to turn these off if they become the default. There are so many vertical grid lines that it looks as though there is a gray background. For practice, let's also change the color of the line to black.
Let's, further, suppose that there is something special about the data points with indices 50 through 150 and so we want to highlight those points in some manner. We will try vertical lines and shading. While a legend for a graph with only one line series is pointless, we will also demonstrate how to add a legend and position it in a way that it does not obstruct the line being plotted.
###Code
import random
xLo = 20.0
xHi = 100.0
numPoints = 1000
y = [xLo + (xHi-xLo)*random.random() if i < 50 or i >150 else 50.0 for i in range(numPoints)]
x = [str(i) for i in range(len(y))]
fig,ax = plt.subplots()
""" I have created non-aesthetic qualities with the lines of code below """
ax.grid(True)
ax.xaxis.set_tick_params(which = 'both', top = True, bottom = True, labelbottom = True, direction='in')
ax.yaxis.set_tick_params(which = 'both', right = True, left = True, labelleft = True, direction='in')
ax.plot(x,y)
import numpy as np
#import random
xLo = 20.0
xHi = 100.0
numPoints = 1000
y = [xLo + (xHi-xLo)*random.random() if i < 50 or i >150 else 50.0 for i in range(numPoints)]
x = [str(i) for i in range(len(y))]
fig,ax = plt.subplots()
ax.plot(x,y)
#ax.plot(x,y,color='k',linewidth=.5,label='Time Series Plot',marker='.') # https://matplotlib.org/api/markers_api.html
""" I have created non-aesthetic qualities with the lines of code below """
ax.grid(True)
#ax.grid(False)
ax.xaxis.set_tick_params(which = 'both', top = True, bottom = True, labelbottom = True, direction='in')
ax.yaxis.set_tick_params(which = 'both', right = True, left = True, labelleft = True, direction='in')
#ax.xaxis.set_tick_params(which = 'both', top = False, bottom = True, labelbottom = True, direction='in')
#ax.yaxis.set_tick_params(which = 'both', right = False, left = True, labelleft = True, direction='in')
""" Enhance axes object """
#ax.spines['right'].set_visible(False)
#ax.spines['top'].set_visible(False)
#ax.set_xlabel('Day',fontsize='14',fontname='Times New Roman')
#ax.set_ylabel('Temperature',fontsize='14',fontname='Times New Roman')
#ax.set_xticks(x[::100])
#ax.set_xticklabels(x[::100])
#ax.set_ylim(0,max(y)+10)
#ax.axvline(x=50,linewidth=1,color='k',label='Low',linestyle='-')
#ax.axvline(x=150,linewidth=1,color='k',label='High',linestyle='--')
#ax.legend(loc=1)
#ax.axvspan(50,150,ymin=0,ymax = 300,facecolor='k',alpha=0.1)
#ax.text(100,85,'Atypical\nStability',fontsize=14,verticalalignment='center',horizontalalignment='center',fontname='Times New Roman')
""" Enhance figure object """
#fig.set_size_inches(14,6)
#fig.suptitle("Time Series Plot of Temperature",fontsize='18',fontname='Times New Roman')
#fig.savefig('lineplot.jpg',dpi=600)
###Output
_____no_output_____
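###Markdown
For reference, here is the same plot with the fixes discussed above actually applied. This cell simply gathers a subset of the calls that are left commented out in the cell above into one working version; it is an added illustration rather than part of the original walkthrough.
###Code
# the same data re-plotted with the aesthetic fixes applied
fig, ax = plt.subplots()
ax.plot(x, y, color='k', linewidth=.5, label='Time Series Plot')
ax.grid(False)
ax.xaxis.set_tick_params(which='both', top=False, bottom=True, labelbottom=True, direction='out')
ax.yaxis.set_tick_params(which='both', right=False, left=True, labelleft=True, direction='out')
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
ax.set_xlabel('Day')
ax.set_ylabel('Temperature')
ax.set_xticks(x[::100])
ax.axvspan(50, 150, facecolor='k', alpha=0.1)
ax.legend(loc=1)
fig.set_size_inches(14, 6)
###Output
_____no_output_____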
###Markdown
Formatting Numerical Axis Tickmark Labels
We saw this graph previously. Note that the x-axis tickmark labels overlap.
###Code
# Data
bdata = [1.28,1.05,0.6093,0.22195,0.16063,0.1357,0.10226,0.08499,0.06148,0.05022,0.04485,0.02981]
blabels = ['Unemp','Health','Mil.','Interest', 'Veterans','Agri.','Edu','Trans','Housing','Intl','EnergyEnv','Science']
xs = range(len(bdata))
bdata_cum = []
for i in range(len(bdata)):
bdata_cum.append(sum(bdata[0:i+1])/sum(bdata))
fig, ax = plt.subplots()
fig.suptitle('United States Budget Analysis', fontsize = 16)
# Set bar chart parameters
ax.bar(xs,bdata, align='center')
ax.set_ylim(0,sum(bdata))
ax.set_xticks(xs)
ax.set_xticklabels(blabels)
ax.spines['top'].set_visible(False)
ax.grid(False)
ax.tick_params(axis = 'y', which = 'both', direction = 'in', width = 2, color = 'black')
# Set line chart paramters and assign the second y axis
ax1 = ax.twinx()
ax1.plot(xs,bdata_cum,color='k')
ax1.set_ylim(0,1)
ax1.set_yticklabels(['{:1.1f}%'.format(x*100) for x in ax1.get_yticks()])
ax1.spines['top'].set_visible(False)
ax1.grid(False)
#fig.set_figwidth(9)
#fig.set_figheight(5)
fig.set_size_inches(9,5)
fig.savefig('pareto.jpg', dpi=2000)
plt.show()
###Output
_____no_output_____
###Markdown
This is not a good solution, but you can rotate the axis tickmark labels by 45 degrees as done in the cell below.
###Code
# Data
bdata = [1.28,1.05,0.6093,0.22195,0.16063,0.1357,0.10226,0.08499,0.06148,0.05022,0.04485,0.02981]
blabels = ['Unemp','Health','Mil.','Interest', 'Veterans','Agri.','Edu','Trans','Housing','Intl','EnergyEnv','Science']
xs = range(len(bdata))
bdata_cum = []
for i in range(len(bdata)):
bdata_cum.append(sum(bdata[0:i+1])/sum(bdata))
fig, ax = plt.subplots()
fig.suptitle('United States Budget Analysis', fontsize = 16)
# Set bar chart parameters
ax.bar(xs,bdata, align='center')
ax.set_ylim(0,sum(bdata))
ax.set_xticks(xs)
ax.set_xticklabels(blabels, rotation = 45)
ax.spines['top'].set_visible(False)
ax.grid(False)
ax.tick_params(axis = 'y', which = 'both', direction = 'in', width = 2, color = 'black')
# Set line chart paramters and assign the second y axis
ax1 = ax.twinx()
ax1.plot(xs,bdata_cum,color='k')
ax1.set_ylim(0,1)
ax1.set_yticklabels(['{:1.1f}%'.format(x*100) for x in ax1.get_yticks()])
ax1.spines['top'].set_visible(False)
ax1.grid(False)
#fig.set_figwidth(9)
#fig.set_figheight(5)
fig.set_size_inches(9,5)
fig.savefig('pareto.jpg', dpi=2000)
plt.show()
###Output
_____no_output_____
###Markdown
The best solution is not to rotate the labels, but to limit how many bars are plotted. After all, the point is to focus on the tallest bars. The Pareto chart below plots only the most frequent items. But, there still are some issues to resolve:
- We need to possibly delete the top spine, although one might argue we should keep it given we must keep the right spine.
- Insert x-axis caption
- Insert y-axis caption
- Indicate units on (1st) y-axis
- Put 2nd y-axis tickmark labels in percentage format
- We want to save a very high resolution JPG image for a presentation
Formatting Numbers 101
###Code
x = 43210.123456789
'{:.2f}'.format(x)
y = [random.random() for i in range(10)]
print(y)
yStr = ['{:.6f}'.format(val) for val in y]
yStr
perc = ['{:.2f}%'.format(val*100) for val in y]
perc
###Output
_____no_output_____
###Markdown
Fix the 2nd y-axis
###Code
# U.S. Budget Data in Trillions of Dollars
bdata = [1.28,1.05,0.6093,0.22195,0.16063,0.1357,0.10226,0.08499,0.06148,0.05022,0.04485,0.02981]
blabels = ['Unemp','Health','Mil.','Interest', 'Veterans','Agri.','Edu','Trans','Housing','Intl','EnergyEnv','Science']
""" Create cumulative percentage data series """
bdata_cum = []
for i in range(len(bdata)):
bdata_cum.append(sum(bdata[0:i+1])/sum(bdata))
# Create plot and set figure object settings
fig, ax = plt.subplots()
fig.suptitle('United States Budget Analysis', fontsize = 16)
fig.set_size_inches(9,5)
#fig.set_figwidth(9)
#fig.set_figheight(5)
# Set bar chart parameters
ax.bar(blabels,bdata, align='center')
ax.set_ylim(0,sum(bdata)) # set limits on first y-axis to align with second y-axis
# Construct a second y-axis
ax1 = ax.twinx()
ax1.plot(bdata_cum,color='k')
ax1.set_ylim(0,1) # The second y-axis is a percentage scale from zero to 100%
plt.show()
import matplotlib.pyplot as plt
# Data
blabels1 = ['SS','Health','Mil.','Interest', 'Vet.','Agri.','Other']
bindex = 6
bother = sum(bdata[bindex:])
bdata1 = bdata[:bindex] + [bother]
xs = range(len(bdata1))
bdata_cum = []
for i in range(len(bdata1)):
bdata_cum.append(sum(bdata1[0:i+1])/sum(bdata1))
fig, ax = plt.subplots()
fig.set_figwidth(9)
fig.set_figheight(5)
# Bar chart settings
ax.set_xticks(xs)
ax.set_xticklabels(blabels1)
ax.bar(xs,bdata1, align='center')
ax.set_ylim(0,sum(bdata1))
ax.set_ylabel('Budget Amount (in Trillions of $)')
# Line chart settings
ax1 = ax.twinx()
ax1.plot(xs,bdata_cum,color='k')
ax1.set_ylim(0,1)
ax1.set_yticklabels(['{:.0f}%'.format(x*100) for x in ax1.get_yticks()])
plt.show()
[x for x in ax1.get_yticks()]
###Output
_____no_output_____ |
nbs/09a-data-augmentation-pets.ipynb | ###Markdown
We will train 2 models to compare one that uses Data Augmentation against one that does not, and see how the Validation Loss differs. 0. Magic Commands
###Code
%reload_ext autoreload
%autoreload 2
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Import Library
###Code
from fastai import *
from fastai.vision import *
from fastai.metrics import accuracy
###Output
_____no_output_____
###Markdown
2. Data We will use the [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) to classify dog and cat breeds as before.
###Code
path = untar_data(URLs.PETS)
path_images = path/'images'
filenames = get_image_files(path_images)
###Output
_____no_output_____
###Markdown
Declare a function to create the DataBunch and a function to display images. We will sample only 5,000 examples (the `sample` value in the code below).
###Code
def get_databunch(transform):
batchsize = 32
sample = 5000
np.random.seed(555)
regex_pattern = r'/([^/]+)_\d+.jpg$'
return ImageDataBunch.from_name_re(path_images,
random.sample(filenames, sample),
regex_pattern,
ds_tfms=transform,
size=224, bs=batchsize).normalize(imagenet_stats)
def get_ex(): return open_image(f'{path_images}/pug_147.jpg')
def plots_f(rows, cols, width, height, **kwargs):
[get_ex().apply_tfms(transform[0], **kwargs).show(ax=ax) for i,ax in enumerate(plt.subplots(
rows,cols,figsize=(width,height))[1].flatten())]
###Output
_____no_output_____
###Markdown
3. Prepare the data
We will create the DataBunch together with each model, which makes the comparison convenient.
4. Create the model
In this case we will use a not-so-modern model with no skip connections, VGG, and no Dropout (ps=0.0) or Weight Decay (wd=0.0), so the comparison is as clear as possible.
Without Data Augmentation
Turn off all data augmentation with two empty lists: the transforms for the Training Set and the Validation Set.
###Code
transform = ([], [])
databunch = get_databunch(transform)
learner = cnn_learner(databunch, models.vgg16_bn, ps=0.0, wd=0.0,
metrics=accuracy, callback_fns=ShowGraph)#.to_fp16()
plots_f(3, 3, 9, 9, size=224)
learner.fit_one_cycle(1, max_lr=1e-2)
learner.unfreeze()
learner.fit_one_cycle(8, max_lr=slice(3e-6, 3e-3))
###Output
_____no_output_____
###Markdown
Clear memory
###Code
learner = None
gc.collect()
###Output
_____no_output_____
###Markdown
With Data Augmentation
Turn on all data augmentation.
###Code
# transform = get_transform()
transform = get_transforms(do_flip=True, flip_vert=False, max_rotate=10.0, max_zoom=1.1, max_lighting=0.2, max_warp=0.2, p_affine=0.75, p_lighting=0.75)
databunch = get_databunch(transform)
learner = cnn_learner(databunch, models.vgg16_bn, ps=0.0, wd=0.0,
metrics=accuracy, callback_fns=ShowGraph)#.to_fp16()
plots_f(3, 3, 9, 9, size=224)
learner.fit_one_cycle(1, 1e-2)
learner.unfreeze()
learner.fit_one_cycle(8, max_lr=slice(3e-6, 3e-3))
###Output
_____no_output_____
###Markdown
5. Conclusion
1. For the model without Data Augmentation, after training for several epochs the Training Loss keeps decreasing but the Validation Loss does not, and Accuracy does not improve - a sign of overfitting.
2. For the model with Data Augmentation, trained for the same number of epochs, the Training Loss decreases together with the Validation Loss and Accuracy keeps improving - no overfitting.
3. Modern models are designed quite well, which makes them rather hard to overfit.
Credit
* [Data Augmentation | How to use Deep Learning when you have Limited Data — Part 2](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced)
* [vision.transform](https://docs.fast.ai/vision.transform.htmlget_transforms)
* [Oxford-IIIT Pet Dataset](http://www.robots.ox.ac.uk/~vgg/data/pets/) by [O. M. Parkhi et al., 2012](http://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/parkhi12a.pdf)
* [FastAI: Practical Deep Learning for Coders, v3 - Lesson 1](https://course.fast.ai/videos/?lesson=1)
###Code
###Output
_____no_output_____ |
posts/markov_chains/Markov_chains.ipynb | ###Markdown
Markov chains and stochastic recurrence relations Some recurrant dynamic systems are naturally *stochastic* (or - in other words - involve a bit of randomness). In this post - continuing our discussion of [recurrence relations](https://jermwatt.github.io/control-notes/posts/recurrence_relations/Recurrence_relations.html) - we introduce the basic version of such a model via its most popular application - as a model of written text. This kind of dynamic system is often referred to as a *Markov Chain*.You can skip around this document to particular subsections via the hyperlinks below.- [Natural ordering in text](text-natural-order)- [Stochastic choices, histograms, and Markov chains](stochastic-choices)- [Markov chains on the character-level](character-level)- [The mathematics of a Markov chain](modeling)- [Examples and `Python` implementation](code)- [Fixed order and limited memory](limited-memory)- [Markov chains with one-hot encoded vectors](one-hot)- [Markov chains with unlimited memory](unlimited)
###Code
# This code cell will not be shown in the HTML version of this notebook
# imports from custom library for animations
from library import word_level_markov_model
from library import markov_words_demo
from library import markov_chars_demo
from library import text_parsing_utils
# import standard libs
import numpy as np
import pandas as pd
# path to data
datapath = '../../datasets/markov_chains/'
# This is needed to compensate for matplotlib notebook's tendency to blow up images when plotted inline
%matplotlib notebook
from matplotlib import rcParams
rcParams['figure.autolayout'] = True
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Natural ordering in text Below we animate a simple sentence "my dog runs". Each *word* in this sentence does - intuitively - seem to follow its predecessor in an natural, orderly, and predictable way. In the animation we walk through the sentence word-by-word, highlighting the fact that each word follows its immediate predecessor. Figure 1: Each word in a simple English sentence follows its predecessor in an natural, orderly, and predictive fashion. This kind natural ordering holds with text more generally as well, and (in the context of dynamic systems) provokes a natural question: can text be modeled as a *recurrence relation*? As [discussed in a previous post](https://jermwatt.github.io/control-notes/posts/recurrence_relations/Recurrence_relations.html), limited memory recurrence relations take in a window of $D$ elements of an input sequence $x_{p-1},x_{p-2},...,x_{p-D}$ and generate the next element (here a word) $x_p$ as\begin{equation}x_{p} = f\left(x_{p-1},x_{p-2},...,x_{p-D}\right).\end{equation} If this were possible we would of course need to define an appropriate value for $D$ as well as an appropriate form for the function $f$. But more fundamentally, while text certainly seems to have the structure of a recurrence relation (with words in a sentence being reasonably predicted by its predecessors), text does have one attribute that we have not seen thus far: *choice*. That is, a word or set of words does not often uniquely determine the word that follows it. For example, in the sentence above we could imagine a range of words following the word "dog" instead of "runs", like e.g., "eats". This would give us the sentence "my dog eats" instead of "my dog runs", which is a perfectly valid and meaningful English sentence. Of course there are other words that could follow "dog" as well, some of which like e.g., the word "sighs" that while valid would be less common than "eats". However some words, like e.g., "dracula", making the sentence "my dog dracula" do not make any sense at all. Thus with text while we do have *choices* for each follow-up word, some choices are more likely to occur than others. These choices are *stochastic* in nature - meaning each arises with a different *probability* of occurrence. Figure 2: Many words could potentially follow the word "dog" in this sentence. However some words - like "runs" or "eats" - are far more likely than others like "sighs". Further, some words like e.g., "dracula" do not make any sense. Stochastic choices, histograms, and Markov chains To illustrate this point, below we show the four choices of words following "dog" in the animation shown above assigning a probability of occurrence to each. We made up for these probabilities for this simple example as well as the examples that follow, but in practice these can be computed *empirically*. More specifically this can be done by taking a large body of text (like e.g., a long book), scanning it for all occurrences of each of its words, and for each forming a normalized histogram of probabilities of its follow-up words. In any case, here we suppose the word "runs" is the most probable, "eats" the next, and so on. The probabilities shown do not add up to $1$ because there could be other reasonable words that could follow "dog" other than those shown here (like "barks", for example). 
This set of words following "dog", along with their respective probabilities of occurrence (that is, the probability that they occur following the word "dog"), can be nicely represented as a *normalized histogram* (also known as a discrete probability distribution) as shown in the right panel below. This normalized histogram visualizes the probability of each follow-up word as a vertical bar whose height is proportional to a word's probability of occurrence in following the word "dog". Figure 3: (left panel) Here we show a selection of words that could possibly follow the word "dog", along with a probability of the word occurring (which here we assign ourselves via our intuition). In this case the word "runs" has the highest probability of occurring after "dog", then "eats", and so on. The probabilities do not add up to $1$ because there are other words that could reasonably follow "dog" that we do not list here. (right panel) The set of possible words following the word "dog" shown in the left viewed as a *histogram*, where the height of each bar is proportional to each following word's probability of occurrence. Notice that while many words *can* follow the word "dog", if we had to make a single *educated guess* as to the next best word to follow-up the word "dog" based on this discrete probability distribution we would choose the highest probably follow-up word "run". This sort of model for text, from the capturing of its stochastic nature of follow-up words via a discrete probability distribution to the probability-maximizing "educated guess" for the next best word, is called a *stochastic recurrence relation* or (more popularly) a *Markov chain*. Notice that we can also think of each word in a sentence following logically based not just on its immediate predecessor, but on several words (in other words, we can use a larger window). Text certainly is generally structured like this - with preceding words in a sentence generally determining those that follow. In the Figure below we show an illustration that mirrors that of Figure 3 above using a window size (or *order*) $D = 2$. Now the probabilities of occurrence for each follow-up word reflects how frequently each word follows the phrase "my dog". Once again if we wanted to make an educated guess - based on this discrete probability - of what word likely follows the phrase "my dog" we would choose the most probable follow-up word "sleeps". Figure 4: An illustration mirroring Figure 3 above using a window size (or *order*) of $D = 2$. Here we look at the statistics of follow-up words for the phrase "my dog" precisely as we did in the order $D = 1$ case - by forming a normalized histogram / discrete probability distribution over all the words following them. Notice as we increase the number of words, or likewise our window size $D$, that it gets easier and easier to make an "educated guess" as to the correct follow-up word *using our probability-maximizing choice*. This is because the phrase in our length $D$ window becomes more and more unique with each added word, and eventually only a single follow-up word will ever follow it, and the distribution of follow-up words collapses to an *impulse* or [dirac delta](https://en.wikipedia.org/wiki/Dirac_delta_function) function. This means that once the window becomes large enough our probability-maximizing follow-up word model indeed defines a perfect recurrence relation, We illustrate this idea with a longer (and more popular) phrase below: "a rose by any other name would smell as ". 
This popular Shakespearean phrase has only a single possible follow-up word: "sweet". Figure 5: As the window length $D$ is increased the phrase in our window becomes more and more unique, with the distribution of follow-up words collapsing to a unit impulse (meaning only one kind of follow-up word ever follows our input phrase). Markov chains on the character-level The same thought process leading to the word-by-word model of text detailed above also leads to a similar idea: modeling text *character-by-character*. All of the same logic developed for the word-wise model applies here as well. If we examine a string of English text - on the level of characters this time - once again upon reflection we see a natural ordering to the characters (as in the example shown in the Figure below). Some characters - as with words - seem to naturally follow others. Of course just as with words, the issue here is once again at each step we have *stochasticity* or multiple choices for what the next character should be. In other words, we have a histogram / discrete probability distribution over the full set of possible English characters that reflects the likelihood of each character occurring (with respect to a large corpus on which we build such histograms). As we increase the order of our model - just as in the word-wise case - our generated text looks more and more like the corpus on which we compute our histograms. Figure 5: Illustrations of the concepts described in the prior three Figures, only here for modeling text as a stochastic recurrence relation *character-wise* instead of word-wise. In each case to make a reasonable prediction about the next character we form a normalized histogram / discrete probability distribution over all the characters that follow each input character(s). Note in the window size $D = 2$ case shown here the empty circle contains the 'space' character. As we increase the window size here - just as with the word-wise model detailed above - our generated sequence will start to look more and more like the text on which we computed our histograms. The mathematics of a Markov chain Let us mathematically codify the Markov model - using words as our fundamental unit and beginning with the first simple word-wise example illustrated in Figure 3 as a jumping off point. Note however that everything we discuss here generalizes to all word-level and character-level modeling as well.As shown in Figure 3 above we form a histogram of follow-up words to the input word "dog", along with their probability of occurrence. A normalized histogram is just *vector-valued* output representing the occurrence probability of each possible follow-up word to our input. For example, we can formally jot down the histogram of possible outputs of words following "dog" as \begin{equation}\text{histogram}\left(\text{"dog"}\right) = \begin{cases}\text{"runs"} \,\,\,\,\,\,\,\,\,\,\,\, \text{with probability} \,\,\,\mathscr{y} = 0.4 \\\text{"eats"} \,\,\,\,\,\,\,\,\,\,\,\,\, \text{with probability} \,\,\,\mathscr{y} = 0.3 \\\text{"sighs"} \,\,\,\,\,\,\,\,\,\, \text{with probability} \,\,\,\mathscr{y} = 0.05 \\\text{"dracula"} \,\,\,\,\, \text{with probability} \,\,\,\mathscr{y}= 0 \\\,\,\,\,\,\, \vdots\end{cases}\end{equation} Here in each case $\mathscr{y}$ stands for the *probability* of the corresponding follow-up word occurring. As mentioned previously, the best follow-up word based on this histogram is simply the one with the *maximum probability* of occurrence. Here suppose that word is "runs". 
Translating this statement into math, we predict the word following "dog" by taking the $\text{argmax}$ over all of the choices above as \begin{equation}\text{(word we predict to follow "dog")} \,\,\,\,\, \text{"runs"} = \underset{\mathscr{y}}{\text{argmax}}\,\,\,\, \text{histogram}\left(\text{"dog"}\right).\end{equation} More generally, if we denote by $x_{p-1}$ the $\left(p-1\right)^{th}$ word in a sentence, then the choice of the next word $h_{p}$ can likewise be written in general as\begin{equation}h_{p} = \underset{\mathscr{y}}{\text{argmax}}\,\, \text{histogram}\left(x_{p-1}\right).\end{equation} Denoting $f\left(x_{p-1}\right) = \underset{\mathscr{y}}{\text{argmax}}\,\, \text{histogram}\left(x_{p-1}\right)$, we can express our Markov chain model as a [dynamic system with limited memory](https://jermwatt.github.io/control-notes/posts/dynamic_systems_limited_memory/dynamic_systems_limited_memory.html) as\begin{equation}h_{p} = f\left(x_{p-1}\right).\end{equation} Using the same function $f$ we can define a general order $D$ Markov model via the general update step\begin{equation}h_{p} = f\left(x_{p-1},...,x_{p - D}\right)\end{equation}only here our $\text{histogram}$ function computes the histogram of follow-up words of the input sequence $x_{p-1},...,x_{p - D}$. Finally - as mentioned above - when $D$ is large enough this model becomes a perfect [recurrence relation](https://jermwatt.github.io/control-notes/posts/recurrence_relations/Recurrence_relations.html), since as $D$ increases the number of possible follow-up words narrows more and more, eventually diminishing to a single word. Thus the formula above reduces to\begin{equation}x_{p} = f\left(x_{p-1},...,x_{p - D}\right)\end{equation}since indeed the output of $f$ is the next actual word $x_p$. Examples and `Python` implementation Example 1. Generating text word-by-word via a Markov chain In this example we generate a Markov chain model of text using the classic novel *War of the Worlds* by H.G. Wells to define our transition probabilities (the text of which can be found legally for free online [e.g., here](https://archive.org/stream/TheWarOfTheWorlds-H.G.Wells/war-worlds_djvu.txt)). Below we print out the first $500$ characters of the novel. Note some pre-processing has been done here - in particular we removed any strange characters introduced when converting this book to its e-version, and lower-cased all alphabetical characters.
###Code
csvname = datapath + "war_of_the_worlds.txt"
model = word_level_markov_model.Markov(csvname)
model.text[:500]
###Output
_____no_output_____
###Markdown
The `Python` function used to pre-process the text is shown below - and requires minimal functionality.
###Code
## load and preprocess text ##
def load_preprocess(csvname):
# load in text dataset - lower case all
text = open(csvname).read().lower()
    # cut out first chunk of gibberish text - for this text non-gibberish began at the 948th character
text = text[947:]
# remove some obvious tag-related gibberish throughout
characters_to_remove = ['0','1','2','3','4','5','6','7','8','9','_','[',']','}','. . .','\\']
for i in characters_to_remove:
text = text.replace(i,'')
# some gibberish that looks like it needs to be replaced with a ' '
text = text.replace('\n',' ')
text = text.replace('\r',' ')
text = text.replace('--',' ')
text = text.replace(',,',' ')
text = text.replace(' ',' ')
return text
###Output
_____no_output_____
###Markdown
To produce an order $D$ Markov model we then run through the text and compute our discrete transition probabilities. To do this we first produce a dictionary `words_to_keys` that maps each word (also called a "token") to a discrete integer (called a "key"), and another dictionary `keys_to_words` for the reverse mapping (from keys back to words). These dictionaries are computed via the `parse_words` function below, which employs a simple function called `CountVectorizer` from `sklearn`.
###Code
# imports
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
## parse a text into words - producing mapping dictionaries ##
def parse_words(text):
    # build a CountVectorizer from scikit-learn to tokenize the text
vectorizer = CountVectorizer()
X = vectorizer.fit_transform([text])
analyze = vectorizer.build_analyzer()
# get all unique words in input corpus
tokens = analyze(text)
unique_words = vectorizer.get_feature_names()
    # unique nums to map words to
unique_nums = np.arange(len(unique_words))
# this dictionary is a function mapping each unique word to a unique integer
words_to_keys = dict((i, n) for (i,n) in zip(unique_words,unique_nums))
# this dictionary is a function mapping each unique integer to a unique word
keys_to_words = dict((i, n) for (i,n) in zip(unique_nums,unique_words))
# convert all of our tokens (words) to keys
keys = [words_to_keys[a] for a in tokens]
return tokens,keys,words_to_keys,keys_to_words
###Output
_____no_output_____
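###Markdown
As a quick sanity check, here is a minimal usage sketch of `parse_words` (assuming the `model.text` string loaded earlier is used as the corpus): we build the two dictionaries and round-trip a single word through them.
###Code
# build the token / key mappings for the corpus and round-trip one word through them
tokens, keys, words_to_keys, keys_to_words = parse_words(model.text)
example_word = tokens[0]
example_key = words_to_keys[example_word]
print(example_word, '-->', example_key, '-->', keys_to_words[example_key])
###Output
_____no_output_____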
###Markdown
With these two dictionaries, our keys, and tokens in hand we can now easily generate our desired set of transition probabilities.
###Code
# make transition probabilities based on discrete count of input text
def make_transition_probabilities(order):
# get unique keys - for dimension of transition matrix
unique_keys = np.unique(keys)
num_unique_words = len(unique_keys)
num_words = len(tokens)
# generate initial zeros order O transition matrix
# use a dictionary - or else this for sure won't scale
# to any order > 1
transition_matrix = {}
# sweep through tokens list, update each individual distribution
# as you go - each one a column
for i in range(order,num_words):
# grab current key, and previous order keys
next_key = keys[i]
prev_keys = tuple(keys[i-order:i])
## update transition matrix
        # we have seen this window of previous keys already
if prev_keys in transition_matrix.keys():
if next_key in transition_matrix[prev_keys].keys():
transition_matrix[prev_keys][next_key] += 1
else:
transition_matrix[prev_keys][next_key] = 1
        else: # we haven't seen this window of previous keys yet, so create a new subdict
transition_matrix[prev_keys] = {}
transition_matrix[prev_keys][next_key] = 1
return transition_matrix
###Output
_____no_output_____
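###Markdown
With the transition counts in hand, generating text is just a matter of repeatedly looking up the current window of keys and choosing the follow-up key with the largest count - the probability-maximizing choice described above. Below is a minimal greedy generator sketched along these lines; it assumes the `keys` and `keys_to_words` objects produced by `parse_words` are available, and it is only a simplified stand-in for the `markov_words_demo` functionality used in the cells that follow.
###Code
# greedy text generation from the transition counts (simplified sketch)
def generate_greedy(transition_matrix, seed_keys, num_generated, order):
    generated = list(seed_keys)
    for _ in range(num_generated):
        window = tuple(generated[-order:])
        # stop if this window never occurred in the corpus
        if window not in transition_matrix:
            break
        # probability-maximizing choice = follow-up key with the largest count
        next_key = max(transition_matrix[window], key=transition_matrix[window].get)
        generated.append(next_key)
    return ' '.join([keys_to_words[k] for k in generated])
order = 1
transition_matrix = make_transition_probabilities(order)
print(generate_greedy(transition_matrix, keys[:order], 30, order))
###Output
_____no_output_____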
###Markdown
Using an order $D = 1$ model we then pick a word randomly from the text, and start generating text. Below we compare a chunk of $30$ words from the text following this initial input, and below it we show the result of the Markov model. Here the input word $x_1$ is colored red, and the $30$ words generated using it are colored blue. Note this means that we first plug the word "he" into our model, which returns the word "said". We then feed "said" back in and it returns "the", etc.,
###Code
## This code cell will not be shown in the HTML version of this notebook
order = 1; num_words = 30;
demo = markov_words_demo.show_order(csvname,order,num_words)
###Output
-------- TRUE TEXT -------
he came up to the fence and extended handful of strawberries for his gardening was as generous as it was enthusiastic at the same time he told me of the burning
-------- ORDER = 1 MODEL TEXT -------
[31mhe[0m [34msaid the martians were the martians were the martians were the martians were the martians were the martians were the martians were the martians were the martians were the martians[0m
###Markdown
Clearly we can see that the Markov model, having only a single word in the past to base the next word on, does not generate anything meaningful. However as we increase the order to e.g., $D = 2$ we can see that the generated sentence starts to make more sense, better matching the original as shown below. Here the two initial words are colored red, with the remaining generated words colored blue. Notice this means that we first plug in the first two words (here the phrase "amount of") and it returns "scientific", then we plug in the next two words (here "of scientific") and it returns "education", etc.,
###Code
## This code cell will not be shown in the HTML version of this notebook
order = 2; num_words = 30;
demo = markov_words_demo.show_order(csvname,order,num_words)
###Output
-------- TRUE TEXT -------
amount of scientific education to perceive that the grey scale of the thing was no common oxide that the yellowish white metal that gleamed in the crack between the lid and the
-------- ORDER = 2 MODEL TEXT -------
[31mamount of[0m [34mscientific education to perceive that the martians had been at work upon my mind was blank wonder my muscles and nerves seemed drained of their houses got back to the[0m
###Markdown
As we increase the order $D$ of the model the generated text will begin to match the original more and more. For example, by the time we crank up the order to $D = 10$ the text generated by the model is identical to the original.
###Code
## This code cell will not be shown in the HTML version of this notebook
order = 10; num_words = 30;
demo = markov_words_demo.show_order(csvname,order,num_words)
###Output
-------- TRUE TEXT -------
the sky and after time their talk died out and gave place to an uneasy state of anticipation several wayfarers came along the lane and of these my brother gathered such news as he could every broken answer he had
-------- ORDER = 10 MODEL TEXT -------
[31mthe sky and after time their talk died out and[0m [34mgave place to an uneasy state of anticipation several wayfarers came along the lane and of these my brother gathered such news as he could every broken answer he had[0m
###Markdown
Why does this happen? Notice that when we increase the order of the model the number of unique input sequences proliferates rapidly. Eventually, when we increase the order enough, there remains only a single exemplar (input/output pair) in the text to construct each associated histogram (i.e., every input sequence used to determine the transition probabilities is *unique*). Past this point we only have a single example of each input, thus only one choice for its associated output: whatever follows it in the text (with probability $\mathscr{p} = 1$). Example 2. Generating text character-by-character using a Markov chain Just as we modeled text by *words* above using a Markov chain, we can likewise model it via *characters* (indeed we will not repeat the `Python` functionality introduced above for the word-wise Markov example, as it is entirely similar). For the same intuitive reasons as discussed in the context of the word-wise modeling scheme - characters often logically follow one another in succession - we can model text as a stochastic dynamic system (a Markov chain) over characters as well. Below we show the result of an order $D = 1$ Markov chain model using the characters instead of words, and the same text (H.G. Wells' classic *War of the Worlds*) to calculate our transition probabilities. Of course using only a single character as precedent we cannot capture much about the text, as reflected in the generated text below. Here the single character used as input is colored red, with $300$ generated characters using the order $1$ model colored blue. Note that this means that the first character "t" is plugged into the model and returns "h". This is then plugged in to generate "e", etc.,
###Code
## This code cell will not be shown in the HTML version of this notebook
order = 1; num_chars = 300;
demo = markov_chars_demo.show_order(csvname,order,num_chars)
###Output
-------- TRUE TEXT -------
they saw the gaunt figures separating and rising out of the water as they retreated shoreward, and one of them raised the camera-like generator of the heat-ray. he held it pointing obliquely downward, and a bank of steam sprang from the water at its touch. it must have driven through the iron of th
-------- ORDER = 1 MODEL TEXT -------
[31mt[0m[34mhe the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the the t[0m
###Markdown
As we saw in the previous example, as we increase the order $D$ of the model we capture more and more about the text, and can therefore generate more and more meaningful sentences. For example, below we show the result of an order $D = 5$ model, comparing to the similar chunk of the true text. With this many characters we actually start to generate a number of real words.
###Code
## This code cell will not be shown in the HTML version of this notebook
order = 5; num_chars = 300;
demo = markov_chars_demo.show_order(csvname,order,num_chars)
###Output
-------- TRUE TEXT -------
dawn of the great panic. london, which had gone to bed on sunday night oblivious and inert, was awakened, in the small hours of monday morning, to a vivid sense of danger. unable from his window to learn what was happening, my brother went down and out into the street, just as the sky between the parap
-------- ORDER = 5 MODEL TEXT -------
[31mdawn [0m[34mgrew small hours of the street cobham road to the street cobham road to the street cobham road to the street cobham road to the street cobham road to the street cobham road to the street cobham road to the street cobham road to the street cobham road to the street cobham road to the street cobham ro[0m
###Markdown
Increasing the order to $D = 10$ we can further observe this trend.
###Code
## This code cell will not be shown in the HTML version of this notebook
order = 10; num_chars = 300;
demo = markov_chars_demo.show_order(csvname,order,num_chars)
###Output
-------- TRUE TEXT -------
past eight. we hurried across the exposed bridge, of course, but i noticed floating down the stream a number of red masses, some many feet across. i did not know what these were there was no time for scrutiny and i put a more horrible interpretation on them than they deserved. here again on the surrey sid
-------- ORDER = 10 MODEL TEXT -------
[31m past eigh[0m[34mt, when the tragedy happened, and the strange and terrible as was the deputation. there was a strong feeling in the streets that the martians were setting fire to everything was to be done. in london that night another invisible to me because it was something of my schoolboy dreams of battle and h[0m
###Markdown
Finally, just as in the word generating case, if we increase the order $D$ past a certain point our model will generate the text exactly. Below we show the result of an order $D = 50$ model, which generates precisely the true text shown above it. This happens for exactly the same reasoning given previously in the context of the word based model: as we increase the order of the model the number of unique input sequences balloons rapidly, until each input sequence of the text is *unique*. This means that there is only one example of each used to determine the transition probabilities, i.e., precisely the one present in the text.
###Code
## This code cell will not be shown in the HTML version of this notebook
order = 50; num_chars = 300;
demo = markov_chars_demo.show_order(csvname,order,num_chars)
###Output
-------- TRUE TEXT -------
g down the broad, sunlit roadway, between the tall buildings on each side. i turned northwards, marvelling, towards the iron gates of hyde park. i had half a mind to break into the natural history museum and find my way up to the summits of the towers, in order to see across the park. but i decided to keep to the ground, where quick hiding was p
-------- ORDER = 50 MODEL TEXT -------
[31mg down the broad, sunlit roadway, between the tall[0m[34m buildings on each side. i turned northwards, marvelling, towards the iron gates of hyde park. i had half a mind to break into the natural history museum and find my way up to the summits of the towers, in order to see across the park. but i decided to keep to the ground, where quick hiding was p[0m
###Markdown
Fixed order and limited memory In both our posts on [dynamic systems with limited memory](https://jermwatt.github.io/control-notes/posts/dynamic_systems_limited_memory/dynamic_systems_limited_memory.html) and deterministic [recurrence relations](https://jermwatt.github.io/control-notes/posts/recurrence_relations/Recurrence_relations.html) we discussed the impact of the finite window size on the "memory" of such systems. In the present case the consequence of such systems being limited by their order is perhaps most clearly seen by a simple example of a Markov chain model of text. For example, suppose we have constructed a word-based Markov model of order $D = 1$, whose transition probabilities have been determined using a large text corpus. We then apply our model to both of the sentences shown below, to predict the word following "is". Figure 5: The main shortcoming of fixed order systems is exemplified in this toy example. Here we suppose we have an order $D = 1$ model whose transition probabilities have been determined on a large training corpus. Here we use our order $D = 1$ model to predict the next word of each sentence, following the word "is". However since the model is order $D = 1$ the *same* word will be predicted for each sentence. Given the different subject / context of each, this will likely mean that at least one of the sentences will not make sense. The problem here is that - because we have used an order $D = 1$ model - the *same* word will be predicted to follow the word "is" in both sentences. This will likely mean that at least one (if not both) of the completed sentences will not make sense, since they have completely different subjects. Because a fixed order dynamic system is limited by its order, and cannot use any information from earlier in a sequence, this problem can arise regardless of the order $D$ we choose. Markov chains with one-hot encoded vectors
###Code
## This code cell will not be shown in the HTML version of this notebook
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
def show_series(series,chars):
num_chars,num_samples = series.shape
fig, ax = plt.subplots(figsize = (8,5))
ax = plt.subplot(111)
a,b = np.meshgrid(np.arange(num_samples),np.arange(num_chars))
### x-axis Customize minor tick labels ###
# make custom labels
x_ticker_range = np.arange(0.5,num_samples,1).tolist()
x_char_range = chars[:num_samples]
ax.xaxis.tick_top()
## assign major or minor ticklabels? - chosen major by default
ax.xaxis.set_major_locator(ticker.FixedLocator(x_ticker_range))
ax.xaxis.set_major_formatter(ticker.FixedFormatter(x_char_range))
### y-axis Customize minor tick labels ###
# make custom labels
y_char_range = np.unique(chars)
num_chars = np.size(y_char_range)
y_ticker_range = np.arange(0.5,num_chars,1).tolist()
## assign major or minor ticklabels? - chosen major by default
ax.yaxis.set_major_locator(ticker.FixedLocator(y_ticker_range))
ax.yaxis.set_major_formatter(ticker.FixedFormatter(y_char_range))
cdict = {
'red' : ( (0.0, 0.25, .25), (0.02, .59, .59), (1., 1., 1.)),
'green': ( (0.0, 0.0, 0.0), (0.02, .45, .45), (1., .97, .97)),
'blue' : ( (0.0, 1.0, 1.0), (0.02, .75, .75), (1., 0.8, 0.45))
}
ax.pcolormesh(a, b, -series,cmap = 'hot',edgecolor = 'k') # hot, gist_heat, cubehelix
plt.show()
# parse an input sequence
def window_series(x,order):
# containers for input/output pairs
x_in = []
x_out = []
T = x.size
# window data
for t in range(T - order):
# get input sequence
temp_in = x[:,t:t + order]
x_in.append(temp_in)
# get corresponding target
temp_out = x[:,t + order]
x_out.append(temp_out)
# make array and cut out redundant dimensions
x_in = np.array(x_in)
x_in = x_in.swapaxes(0,1)[0,:,:].T
x_out = np.array(x_out).T
return x_in,x_out
# transform character-based input/output into equivalent numerical versions
def encode_io_pairs_fixed(keys,order):
# count the number of unique characters in the text
keys = np.array(keys)[np.newaxis,:]
unique_keys = np.unique(keys)
num_keys = np.size(unique_keys)
# window series
x,y = window_series(keys,order)
# dimensions of windowed data
order,num_data = x.shape
    # loop over inputs/outputs and transform and store in x
x_onehot = []
for n in range(num_data):
temp = np.zeros((order,num_keys))
for o in range(order):
temp[o,x[:,n][o]] = 1
x_onehot.append(temp.flatten())
return np.array(x_onehot).T,y
# pre-process text
csvname = datapath + "war_of_the_worlds.txt"
text = text_parsing_utils.load_preprocess(csvname)
# parse into characters
chars,keys,chars_to_keys,keys_to_chars = text_parsing_utils.parse_chars(text)
x,y = encode_io_pairs_fixed(keys,1)
x_sample = x[:,:50]
y_sample = y[:,:50]
###Output
_____no_output_____
###Markdown
In discussing Markov chains it's pretty commonplace to represent each fundamental unit (that is, each word or character) by a numerical value. For example if we were working on the character level we could represent each character by a unique integer - this is often called "token-izing" the characters of a text (we did this in our implementation above). Another common numerical translation is the "one-hot encoding" scheme where we translate each character's unique token into a standard basis vector, as shown for a few characters below. \begin{equation}a \longrightarrow 1 \longrightarrow \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \,\,\,\, \,\,\,\, \,\,\,\, b \longrightarrow 2 \longrightarrow \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix} \,\,\,\, \,\,\,\, \,\,\,\, c \longrightarrow 3 \longrightarrow \begin{bmatrix} 0 \\ 0 \\ 1 \\ \vdots \\ 0 \end{bmatrix}\end{equation} We could then visualize the first bit of text from our previously used test corpus - The War of the Worlds by H.G. Wells - as shown below. In this matrix the entire set of unique characters in the text is listed vertically along the left-hand side, the first few characters of the text are listed along the top, and the one-hot encoded versions of these first few characters are shown as columns, with the corresponding square of each column darkened in.
###Code
## This code cell will not be shown in the HTML version of this notebook
# show one-hot encoded characters from test corpus
show_series(x_sample,chars)
###Output
_____no_output_____
###Markdown
If we *one-hot encode* characters (or words) we can pull apart the Markov update formula in Equation (6) quite nicely. First, if we denote by $x_p$ the integer token of the $p^{th}$ character in a text, we produce its equivalent one-hot encoded vector $\mathbf{x}_p$ via an $\text{encoder}$ function \begin{equation}\mathbf{x}_p = \text{encoder}\left(x_p\right).\end{equation} If we suppose that we have one-hot encoded all characters of a text, then the average of the numerical vectors representing a length $D$ window of characters $\mathbf{x}_{p-1},\,\mathbf{x}_{p-2},\,...,\mathbf{x}_{p-D}$, namely\begin{equation}\mathbf{a}_p = \frac{\mathbf{x}_{p-1}+\mathbf{x}_{p-2} +\,...\,+\mathbf{x}_{p-D}}{D}\end{equation}uniquely defines this sequence of one-hot encoded characters. The histogram of one-hot encoded characters following this sequence / uniquely related to their average $\mathbf{a}_p$ is simply an average\begin{equation}\mathbf{b}_p = \frac{1}{\vert\Omega\vert}\sum_{j \in \Omega}\mathbf{x}_j\end{equation}where $\Omega = \left\{j \,\,\vert \,\, \text{if} \,\, \mathbf{x}_j \,\,\text{follows} \,\, \mathbf{a}_p \right\}$ is an index set of all (one-hot encoded) characters following $\mathbf{a}_p$ in a large corpus. To select the next most probable word we then determine the index of the *largest* value from this average - which we can express as\begin{equation}h_p = \underset{j}{\text{argmax}}\,\, \left(\mathbf{b}_p\right)\end{equation}where here the $\text{argmax}$ returns $h_p$ the index of the largest entry of the histogram $\mathbf{b}_p$. If we want our prediction in one-hot encoded form we simply pass the index above through our encoder function as\begin{equation}\mathbf{h}_p = \text{encoder}\left(h_p\right).\end{equation} Markov chains with unlimited memory If we were to replace the moving average process component above $\mathbf{a}_p = \frac{\mathbf{x}_{p-1}+\,\mathbf{x}_{p-2}+\,...\,+\,\mathbf{x}_{p-D}}{D}$ with a [dynamic system with unlimited memory](https://jermwatt.github.io/control-notes/posts/dynamic_systems_unlimited_memory/dynamic_systems_unlimited_memory.html) like the simple exponential average $\mathbf{a}_p = \alpha\mathbf{a}_{p-1} + \left(1 - \alpha\right)\mathbf{x}_p$ (where $0 \leq \alpha \leq 1$) we get an analogous kind of Markov chain with "unlimited memory". Such a Markov chain - technically speaking - makes predictions about future characters based on the entire history of input characters of a text. When $\alpha$ is set to smaller values the exponential average looks more and more like the most recent character $\mathbf{x}_p$ (and more similar to an order $D = 1$ limited memory dynamic system). As $\alpha$ is increased a greater amount of historical context is wrapped up into the summarizing variable $\mathbf{a}_p$. Below we plot the exponential average $\mathbf{a}_p$ of the same set of one-hot encoded characters from our test corpus above. Here we can see how the exponential average with $\alpha = 0.8$ smears out the content of this first bit of text as it progresses, dragging along historical context.
###Code
## This code cell will not be shown in the HTML version of this notebook
# running mean
def running_mean(x,alpha):
# set initial conditions of h to values of x
h = [x[:,0]]
# range over x and create h
for p in range(1,np.shape(x)[1]):
# get current point and prior hidden state
h_p_prev = h[-1]
x_p = x[:,p]
# make next element and store
h_p = alpha*h_p_prev + (1 - alpha)*x_p
h.append(h_p)
return np.array(h).T
h = running_mean(x_sample,alpha = 0.8)
show_series(h,chars)
###Output
_____no_output_____ |
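###Markdown
Coming back to the fixed-window prediction rule written out in the equations above, here is a minimal sketch of it in code: for a query window we collect the index set $\Omega$ of matching windows, average the follow-up keys into the histogram $\mathbf{b}_p$, and take the $\text{argmax}$. It assumes the `x`, `y`, `keys`, and `keys_to_chars` objects built earlier in this notebook are available.
###Code
# one-hot Markov prediction: histogram of follow-ups for a matching window, then argmax (sketch)
def predict_next_key(x_query, x_all, y_all, num_keys):
    # index set Omega: columns whose one-hot window matches the query exactly
    omega = np.where(np.all(x_all == x_query[:, np.newaxis], axis=0))[0]
    # histogram b_p: average of the one-hot encoded follow-up characters
    b = np.zeros(num_keys)
    for j in omega:
        b[int(y_all[0, j])] += 1.0 / len(omega)
    # probability-maximizing prediction
    return np.argmax(b)
num_unique_keys = len(np.unique(keys))
h_1 = predict_next_key(x[:, 0], x, y, num_unique_keys)
print('input character    :', keys_to_chars[int(keys[0])])
print('predicted follow-up:', keys_to_chars[int(h_1)])
###Output
_____no_output_____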
Tutorials/Advanced_NN/7_Object_Detection_Deep_Diving/7_4_Faster_R-CNN.ipynb | ###Markdown
Faster R-CNN--------------------------------------------------------------------- You can find me on GitHub:> [GitHub](https://github.com/lev1khachatryan) ***Fast R-CNN*** depends on an external region proposal method like selective search. However, those algorithms run on the CPU and they are slow. In testing, Fast R-CNN takes 2.3 seconds to make a prediction, of which 2 seconds are for generating 2000 ROIs.
###Code
# pseudocode for the Fast R-CNN pipeline: the external region proposal step dominates the runtime
feature_maps = process(image)        # one CNN pass over the whole image
ROIs = region_proposal(image)        # Expensive! runs on the CPU
for ROI in ROIs:
    patch = roi_pooling(feature_maps, ROI)
    results = detector2(patch)       # per-ROI classification and box refinement
###Output
_____no_output_____ |
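###Markdown
For contrast, here is a hedged sketch of how the same loop is usually written once the external proposal step is replaced by a region proposal network (RPN) running on the shared feature maps - the key change introduced by Faster R-CNN. As in the cell above, the function names (`process`, `rpn`, `roi_pooling`, `detector2`) are placeholders for pipeline stages, not a runnable API.
###Code
# Faster R-CNN style pipeline (placeholder pseudocode, mirroring the cell above)
feature_maps = process(image)       # one shared CNN pass over the whole image
ROIs = rpn(feature_maps)            # region proposal network - cheap, runs on the GPU
for ROI in ROIs:
    patch = roi_pooling(feature_maps, ROI)
    results = detector2(patch)      # per-ROI classification and box refinement
###Output
_____no_output_____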
The Big Game/TV, Halftime Shows, and the Big Game.ipynb | ###Markdown
What are the most extreme game outcomes?
How does the game affect television viewership?
How have viewership, TV ratings, and ad cost evolved over time?
Who are the most prolific musicians in terms of halftime show performances?
###Code
#Load CSVs
import pandas as pd
super_bowls = pd.read_csv('super_bowls.csv')
tv = pd.read_csv('tv.csv')
halftime_muscians = pd.read_csv('halftime_musicians.csv')
display(super_bowls.head())
display(tv.head())
display(halftime_muscians.head())
###Output
_____no_output_____
###Markdown
2. Taking Note of Dataset Issues
###Code
#
# Summary of the TV data to inspect
tv.info()
print('\n') # Linebreak
# Summary of the halftime musician data to inspect
halftime_muscians.info()
print('\n')
# Summary of the Super Bowl data to inspect
super_bowls.info()
display(tv.isnull().sum())
display(tv.isnull().sum().sum())
display(halftime_muscians.isnull().sum())
display(halftime_muscians.isnull().sum().sum())
###Output
_____no_output_____
###Markdown
3. Combined points distribution
###Code
from matplotlib import pyplot as plt
%matplotlib inline
plt.style.use('seaborn')
plt.hist(super_bowls['combined_pts'])
plt.xlabel('Combined Points')
plt.ylabel('Number of Super Bowls')
# Display the Super Bowls with the highest and lowest combined scores
display(super_bowls[super_bowls['combined_pts'] > 70])
display(super_bowls[super_bowls['combined_pts'] < 25])
###Output
_____no_output_____
###Markdown
4. Point difference distribution
###Code
plt.hist(super_bowls['difference_pts'])
plt.xlabel('Point Difference Between Winner and Losers')
plt.ylabel('Number of Superbowls')
plt.show()
close_win = super_bowls[super_bowls['difference_pts'] == 1]
large_win = super_bowls[super_bowls['difference_pts'] >= 30]
display(close_win)
print('\n')
display(large_win)
###Output
_____no_output_____
###Markdown
5. Do blowouts translate to lost viewers? Notes: The linear regression shows that while household share decreases as the difference in points increases, the fit itself is weak. Things to investigate: the r value of the linear regression, the slope and its standard deviation, the correlation coefficient, how well the line fits, and a confidence interval (a quick sketch computing these statistics follows the plots below). Analysis Notes: The closer the difference in points, the more likely it is that a large percentage of viewers see the game through to the end
###Code
filter1 = tv[tv['super_bowl'] > 1]
games_tv = pd.merge(filter1, super_bowls, on='super_bowl')
#games_tv set determines the sns.regplot
#import seaborn
import seaborn as sns
#create scatter plot with linear regression model fit
sns.regplot(x='difference_pts', y='share_household', data = games_tv)
#scatterplot of losing points vs. share household
sns.regplot(x='losing_pts', y='share_household', data = games_tv)
###Output
_____no_output_____
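###Markdown
To put numbers behind the notes above (r value, slope, standard error, and so on), here is a minimal sketch using `scipy.stats.linregress`, assuming the `games_tv` frame built in the previous cell is available. This quantifies how weak or strong the linear relationship between point difference and household share actually is, rather than eyeballing the regression plot.
###Code
# quantify the difference_pts vs. share_household relationship
from scipy import stats
result = stats.linregress(games_tv['difference_pts'], games_tv['share_household'])
print('slope:            ', result.slope)
print('r value:          ', result.rvalue)
print('r squared:        ', result.rvalue ** 2)
print('p value:          ', result.pvalue)
print('std err of slope: ', result.stderr)
###Output
_____no_output_____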
###Markdown
Question: Does in-person attendance increase with viewership? Attendance Distribution by Super Bowl
###Code
from matplotlib import pyplot as plt
%matplotlib inline
plt.style.use('seaborn')
plt.hist(super_bowls['attendance'], bins = 10)
plt.xlabel('Superbowl Attendance')
plt.ylabel('Number of Super Bowls')
plt.show()
super_bowls['attendance'].describe()
lwr = super_bowls[super_bowls['attendance'] < 71419] # lower 25% range (below the first quartile)
upr = super_bowls[super_bowls['attendance'] > 80280] # upper 25% range (above the third quartile)
display(lwr.sort_values('attendance', ascending = False))
print('\n')
display(upr.sort_values('attendance', ascending = False))
###Output
_____no_output_____
###Markdown
Viewership Distribution by Super Bowl
###Code
plt.style.use('seaborn')
#plot histogram of viewership using share of household
plt.hist(tv['share_household'])
plt.xlabel('share_household')
plt.ylabel('Number of Super Bowls')
plt.show()
tv['share_household'].describe()
tv[tv['super_bowl'] > 1]['share_household'].describe()
tv_lwr = tv[tv['share_household']<63]
tv_upr = tv[tv['share_household']>75]
display(tv_lwr)
display(tv_upr)
###Output
_____no_output_____
###Markdown
Linear Regression
###Code
# Join game and TV data, filtering out SB I because it was split over two networks
games_tv = pd.merge(tv[tv['super_bowl'] > 1], super_bowls, on='super_bowl')
# Import seaborn
import seaborn as sns
# Create a scatter plot with a linear regression model fit
sns.regplot(x='attendance', y='share_household', data=games_tv)
plt.title('Linear Regression of Attendance vs. Household Share')
# Set x-axis label
plt.xlabel('In-Game Attendance')
# Set y-axis label
plt.ylabel('Household Share')
###Output
_____no_output_____
###Markdown
States with highest attendance and States with lowest attendance in person????
###Code
super_bowls.groupby('state')['attendance'].sum().sort_values(ascending=False)
super_bowls.groupby('city')['attendance'].sum().sort_values(ascending=False)
df_new = super_bowls[super_bowls['state']=='Florida']
df_new
df_new.sort_values(by=['attendance'], ascending = False)
###Output
_____no_output_____
###Markdown
6. Viewership and the ad industry over time The downward sloping regression line and the 95% confidence interval for that regression suggest that bailing on the game if it is a blowout is common. Though it matches our intuition, we must take it with a grain of salt because the linear relationship in the data is weak due to our small sample size of 52 games. Regardless of the score though, I bet most people stick it out for the halftime show, which is good news for the TV networks and advertisers. A 30-second spot costs a pretty \$5 million now, but has it always been that way? And how have the number of viewers and household ratings trended alongside ad cost? Let's check profitability.
###Code
# Create a figure with 3x1 subplot and activate the top subplot
plt.subplot(3, 1, 1)
plt.plot(tv['super_bowl'], tv['avg_us_viewers'], color='#648FFF')
plt.title('Average Number of US Viewers')
# Activate the middle subplot
plt.subplot(3, 1, 2)
plt.plot(tv['super_bowl'], tv['rating_household'], '#DC267F')
plt.title('Household Rating')
# Activate the bottom subplot
plt.subplot(3, 1, 3)
plt.plot(tv['super_bowl'], tv['ad_cost'], '#FFB000')
plt.title('Ad Cost')
plt.xlabel('SUPER BOWL')
# Improve the spacing between subplots
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Notes: This chart is TERRIBLE. Find a way to make it more easily readable in a few seconds. 7. Halftime shows weren't always this great
###Code
# Display all halftime musicians for Super Bowls up to and including Super Bowl XXVII
halftime_muscians[halftime_muscians['super_bowl'] <= 27]
halftime_muscians[['super_bowl','musician']]
pd.options.display.max_rows
halftime_muscians[halftime_muscians['super_bowl'] >= 27]
###Output
_____no_output_____
###Markdown
9. Who performed the most songs in a halftime show?
###Code
# Filter out most marching bands
no_bands = halftime_muscians[~halftime_muscians.musician.str.contains('Marching')]
no_bands = no_bands[~no_bands.musician.str.contains('Spirit')]
# Plot a histogram of number of songs per performance
most_songs = int(no_bands['num_songs'].max())  # pandas .max() skips the NaN entries
plt.hist(no_bands.num_songs.dropna(), bins=most_songs)
plt.xlabel('Number of Songs Per Halftime Show Performance')
plt.ylabel('Number of Musicians')
plt.show()
# Sort the non-band musicians by number of songs per appearance...
no_bands = no_bands.sort_values('num_songs', ascending=False)
# ...and display the top 15
display(no_bands.head(15))
###Output
_____no_output_____ |
jupyter_notebooks/image_registration.ipynb | ###Markdown
**[Image Registration](image_registration.ipynb)** by Gerd Duscher and Matthew F. Chisholm, Materials Science & Engineering, Joint Institute of Advanced Materials, The University of Tennessee, Knoxville Registration of a Stack of Images We use this notebook **only** for a stack of images. Prerequisites Install pycroscopy
###Code
import sys
from pkg_resources import get_distribution, DistributionNotFound
def test_package(package_name):
"""Test if package exists and returns version or -1"""
try:
version = (get_distribution(package_name).version)
except (DistributionNotFound, ImportError) as err:
version = '-1'
return version
# Colab setup ------------------
if 'google.colab' in sys.modules:
!pip install git+https://github.com/pycroscopy/pyTEMlib/ -q
# pyTEMlib setup ------------------
else:
if test_package('sidpy') < '0.0.4':
print('installing sidpy')
!{sys.executable} -m pip install --upgrade sidpy -q
if test_package('pyNSID') < '0.0.2':
print('installing pyNSID')
!{sys.executable} -m pip install --upgrade pyNSID -q
if test_package('pycroscopy') < '0':
        print('installing pycroscopy')
        !{sys.executable} -m pip install --upgrade pycroscopy -q
# ------------------------------
print('done')
###Output
_____no_output_____
###Markdown
Import the usual libraries You can load the needed libraries with the code cell below:
###Code
# import matplotlib and numpy
# use "inline" instead of "notebook" for non-interactive
# use widget for jupyterlab needs ipympl to be installed
import sys
if 'google.colab' in sys.modules:
%pylab --no-import-all notebook
else:
%pylab --no-import-all widget
from sidpy.io.interface_utils import open_file_dialog
from SciFiReaders import DM3Reader
import SciFiReaders
%load_ext autoreload
%autoreload 2
sys.path.insert(0, '../')
import pycroscopy as px
__notebook__ = 'Image_Registration'
__notebook_version__ = '2021_10_04'
###Output
Populating the interactive namespace from numpy and matplotlib
The autoreload extension is already loaded. To reload it, use:
%reload_ext autoreload
###Markdown
Load an image stack: Please load an image stack. A stack of images is used to reduce noise, but before the images can be added they have to be aligned to compensate for drift and other microscope instabilities. You select here (with the ``open_file_dialog`` parameter) whether an **open file dialog** appears in the code cell below the next one or whether you want to get a list of files (Nion has a weird way of dealing with file names).
###Code
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount("/content/drive")
drive_directory = 'drive/MyDrive/'
else:
drive_directory = '.'
file_widget = open_file_dialog(drive_directory)
file_widget
###Output
_____no_output_____
###Markdown
Plot Image Stack Either we load the file selected in the widget above or a file dialog window appears. This is the point from which the notebook can be repeated with a new file. Either select a file above again (without running the code cell above) or open a file dialog here. Note that the **open file dialog** might not appear in the foreground!
###Code
try:
main_dataset.h5_dataset.file.close()
except:
pass
dm3_reader = DM3Reader(file_widget.selected)
main_dataset = dm3_reader.read()
if main_dataset.data_type.name != 'IMAGE_STACK':
print(f"Please load an image stack for this notebook, this is an {main_dataset.data_type}")
print(main_dataset)
main_dataset.dim_0.dimension_type = 'spatial'
main_dataset.dim_1.dimension_type = 'spatial'
main_dataset.z.dimension_type = 'temporal'
main_dataset.plot() # note this needs a view reference for interaction
main_dataset._axes
frame_dim = []
spatial_dim = []
for i, axis in main_dataset._axes.items():
if axis.dimension_type.name == 'SPATIAL':
spatial_dim.append(i)
else:
frame_dim.append(i)
if len(spatial_dim) != 2:
print('need two spatial dimensions')
if len(frame_dim) != 1:
    print('need one frame dimension')
###Output
_____no_output_____
###Markdown
Complete Registration Takes a while, depending on your computer between 1 and 10 minutes.
###Code
## Do all of registration
notebook_tags ={'notebook': __notebook__, 'notebook_version': __notebook_version__}
non_rigid_registered, rigid_registered_dataset = px.image.complete_registration(main_dataset)
non_rigid_registered.plot()
non_rigid_registered
###Output
Rigid_Registration
Stack contains 20 images, each with 512 pixels in x-direction and 512 pixels in y-direction
###Markdown
Check Drift
###Code
scale_x = (rigid_registered_dataset.x[1]-rigid_registered_dataset.x[0])*1.
drift = rigid_registered_dataset.metadata['drift']
x = np.linspace(0,drift.shape[0]-1,drift.shape[0])
polynom_degree = 2 # 1 is linear fit, 2 is parabolic fit, ...
line_fit_x = np.polyfit(x, drift[:,0], polynom_degree)
poly_x = np.poly1d(line_fit_x)
line_fit_y = np.polyfit(x, drift[:,1], polynom_degree)
poly_y = np.poly1d(line_fit_y)
plt.figure()
# plot drift and fit of drift
plt.axhline(color = 'gray')
plt.plot(x, drift[:,0], label = 'drift x')
plt.plot(x, drift[:,1], label = 'drift y')
plt.plot(x, poly_x(x), label = 'fit_drift_x')
plt.plot(x, poly_y(x), label = 'fit_drift_y')
plt.legend();
# set second axis in pico meter
ax_pixels = plt.gca()
ax_pixels.step(1, 1)
ax_pm = ax_pixels.twinx()
x_1, x_2 = ax_pixels.get_ylim()
ax_pm.set_ylim(x_1*scale_x, x_2*scale_x)
# add labels
ax_pixels.set_ylabel('drift [pixels]')
ax_pm.set_ylabel('drift [nm]')
ax_pixels.set_xlabel('image number');
plt.tight_layout()
###Output
_____no_output_____
###Markdown
Appendix Demon RegistrationHere we use the **Diffeomorphic Demon Non-Rigid Registration** as provided by **simpleITK**. Please Cite: * [simpleITK](http://www.simpleitk.org/SimpleITK/project/parti.html) and * [T. Vercauteren, X. Pennec, A. Perchant and N. Ayache *Diffeomorphic Demons Using ITK\'s Finite Difference Solver Hierarchy* The Insight Journal, 2007](http://hdl.handle.net/1926/510) This Non-Rigid Registration consists of the following steps:- determine ``reference`` image - For this we use the average of the rigid registered stack - this averaged stack is then smeared with a Gaussian of sigma 2 pixel to reduce noise - under the assumption that high frequency scan distortions cancel out over several images, we, therefore, obtained the center of mass of the atoms. - perform the ``demon registration`` filter to determine a distortion matrix - each single image of a stack is first smeared with a Gaussian of sigma of 2pixels - then the deformation matrix is determined for these images - the deformation matrix is a matrix where each pixel has a vector ( x, and y value) for the relative shift of this pixel. - This deformation matrix is used to ``transform`` the image - The transformation is performed on the original image. - Important, here, is to set the interpolator method, (the image needs to be interpolated because the new pixels are not on an integer grid.) Let's see what the different interpolators do.|Method | RMS contrast | Standard | Mean ||-------|:--------------|:-------------|:-------||original |0.1965806 |0.07764114 |0.3949583|Linear |0.20159315 |0.079470366 |0.39421165|BSpline |0.20162606 |0.0794831 |0.39421043|Gaussian |0.14310582 |0.056414302 |0.39421389|Hamming |0.20163293 |0.07948672 |0.39421496The Gaussian interpolator is the only one seems to smear the signal.We will use the ``Bspline`` method a fast and simple method that does not introduce spurious features and does not smear the signal. Full Code of Demon registration
###Code
import SimpleITK as sitk
def DemonReg(cube, verbose = False):
"""
Diffeomorphic Demon Non-Rigid Registration
Usage:
DemReg = DemonReg(cube, verbose = False)
Input:
cube: stack of image after rigid registration and cropping
Output:
DemReg: stack of images with non-rigid registration
    Depends on:
        SimpleITK and numpy
Please Cite: http://www.simpleitk.org/SimpleITK/project/parti.html
and T. Vercauteren, X. Pennec, A. Perchant and N. Ayache
Diffeomorphic Demons Using ITK\'s Finite Difference Solver Hierarchy
The Insight Journal, http://hdl.handle.net/1926/510 2007
"""
DemReg = np.zeros_like(cube)
nimages = cube.shape[0]
print(nimages)
# create fixed image by summing over rigid registration
    fixed_np = np.average(cube, axis=0)
fixed = sitk.GetImageFromArray(fixed_np)
fixed = sitk.DiscreteGaussian(fixed, 2.0)
#demons = sitk.SymmetricForcesDemonsRegistrationFilter()
demons = sitk.DiffeomorphicDemonsRegistrationFilter()
demons.SetNumberOfIterations(200)
demons.SetStandardDeviations(1.0)
resampler = sitk.ResampleImageFilter()
resampler.SetReferenceImage(fixed);
resampler.SetInterpolator(sitk.sitkBspline)
resampler.SetDefaultPixelValue(0)
done = 0
for i in range(nimages):
if done < int((i+1)/nimages*50):
done = int((i+1)/nimages*50)
sys.stdout.write('\r')
# progress output :
sys.stdout.write("[%-50s] %d%%" % ('*'*done, 2*done))
sys.stdout.flush()
moving = sitk.GetImageFromArray(cube[i])
movingf = sitk.DiscreteGaussian(moving, 2.0)
displacementField = demons.Execute(fixed,movingf)
outTx = sitk.DisplacementFieldTransform( displacementField )
resampler.SetTransform(outTx)
out = resampler.Execute(moving)
DemReg[i,:,:] = sitk.GetArrayFromImage(out)
#print('image ', i)
print(':-)')
print('You have succesfully completed Diffeomorphic Demons Registration')
return DemReg
###Output
_____no_output_____ |
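###Markdown
A minimal usage sketch of the appendix function (assuming the `rigid_registered_dataset` stack produced by the "Complete Registration" step above is available): convert the stack to a plain numpy array and pass it through `DemonReg`, which builds its fixed reference image by averaging the stack internally.
###Code
# apply the appendix demon registration to the rigidly registered stack (sketch)
rigid_stack = np.array(rigid_registered_dataset)
DemReg = DemonReg(rigid_stack, verbose=False)
print(DemReg.shape)
###Output
_____no_output_____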
07_Visualization/Titanic_Desaster/Osbert_answer_20210103.ipynb | ###Markdown
Visualizing the Titanic Disaster Introduction: This exercise is based on the Titanic Disaster dataset available at [Kaggle](https://www.kaggle.com/c/titanic). To know more about the variables check [here](https://www.kaggle.com/c/titanic/data) Step 1. Import the necessary libraries
###Code
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
Step 2. Import the dataset from this [address](https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Titanic_Desaster/train.csv) Step 3. Assign it to a variable titanic
###Code
# load the dataset first, then explore it
address = 'https://raw.githubusercontent.com/guipsamora/pandas_exercises/master/07_Visualization/Titanic_Desaster/train.csv'
titanic = pd.read_csv(address)
# split the passengers by class and look at the class distribution by sex
class_1 = titanic[titanic.Pclass == 1]
class_2 = titanic[titanic.Pclass == 2]
class_3 = titanic[titanic.Pclass == 3]
class_3
g = sns.FacetGrid(titanic, col = "Sex")
g.map(plt.hist, "Pclass")
titanic
###Output
_____no_output_____
###Markdown
Step 4. Set PassengerId as the index
###Code
titanic_Pid = titanic.set_index('PassengerId')
titanic_Pid
###Output
_____no_output_____
###Markdown
Step 5. Create a pie chart presenting the male/female proportion
###Code
# group the passengers by sex and count them
chart_sex = titanic_Pid.groupby("Sex")["Sex"].count()
chart_sex
#simple one liner
chart_sex.plot.pie(autopct="%.1f%%")
#Using matplotlib
pie, ax = plt.subplots(figsize=[10,6])
labels = chart_sex.keys()
plt.pie(x=chart_sex, autopct="%.1f%%", labels=labels, shadow = False, colors = ['pink', 'brown'],
explode = (0.15 , 0), pctdistance=0.5, startangle = 90)
plt.title("Male / Female Proportion", fontsize=14)
# pie.savefig("DeliveryPieChart.png")
###Output
_____no_output_____
###Markdown
Step 6. Create a scatterplot with the Fare payed and the Age, differ the plot color by gender
###Code
# create the plot using seaborn's lmplot
lm = sns.lmplot(x = 'Age', y = 'Fare', data = titanic, hue = 'Sex', fit_reg=False)
# set title
lm.set(title = 'Fare x Age')
# get the axes object and tweak it
# axes = lm.axes
# axes[0,0].set_ylim(-5,)
# axes[0,0].set_xlim(-5,85)
# or, equivalently, using seaborn's scatterplot:
plot1 = sns.scatterplot(x = 'Age', y = 'Fare', data = titanic, hue = 'Sex')
plot1.set(title = 'Fare x Age by Sex')
titanic_Pid
###Output
_____no_output_____
###Markdown
Step 7. How many people survived?
###Code
titanic_Pid[titanic_Pid.Survived == 1].shape[0]
# OR
# titanic_Pid.Survived.sum()
###Output
_____no_output_____
###Markdown
Step 8. Create a histogram with the Fare payed
###Code
# create histogram
fare_hist = sns.distplot(titanic_Pid.Fare)
# set lables and titles
fare_hist.set(xlabel = 'Fare', ylabel = 'Frequency', title = "Fare of titanic passagers")
###Output
_____no_output_____ |
algorithms/graphs/dijkstra.ipynb | ###Markdown
Dijkstra's Algorithm
###Code
graph = {
0: {1: 5, 2: 2},
1: {0: 3, 5: 2, 6: 1},
2: {0: 2, 3: 3, 4: 1, 9: 1},
3: {2: 1, 9: 3, 10: 1},
4: {2: 1},
5: {1: 1},
6: {1: 3, 7: 2, 8: 9, 3: 1},
7: {6: 2, 8: 4},
8: {7: 1, 6: 1},
9: {2: 2, 3: 1, 10: 1},
10: {3: 4, 9: 1},
}
###Output
_____no_output_____
###Markdown
Shortest Path
###Code
import heapq
def dijkstra_shortest_path(source, destination, graph):
visited, stack = set(), [(0, source, [])]
while stack:
cost, vertex, path = heapq.heappop(stack)
if vertex not in visited:
visited.add(vertex)
if vertex == destination:
return path + [vertex]
for successor, successor_cost in graph[vertex].items():
if successor not in visited:
heapq.heappush(stack, (cost + successor_cost, successor, path + [vertex]))
return []
dijkstra_shortest_path(0, 10, graph)
###Output
_____no_output_____ |
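###Markdown
Shortest Distance A small variant of the same routine (a sketch built on the same graph) that also returns the accumulated cost of the shortest path, not just the vertices.
###Code
import heapq
def dijkstra_shortest_distance(source, destination, graph):
    # identical lazy-deletion traversal, but the accumulated cost is returned with the path
    visited, stack = set(), [(0, source, [])]
    while stack:
        cost, vertex, path = heapq.heappop(stack)
        if vertex not in visited:
            visited.add(vertex)
            if vertex == destination:
                return cost, path + [vertex]
            for successor, successor_cost in graph[vertex].items():
                if successor not in visited:
                    heapq.heappush(stack, (cost + successor_cost, successor, path + [vertex]))
    return float('inf'), []
dijkstra_shortest_distance(0, 10, graph)
###Output
_____no_output_____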
demoAPP1client.ipynb | ###Markdown
This is a web version of the privacy-preserving Q&A app proposal Client version - i.e. for use if you're only encrypting an answer to send it back to a friend who initiated a privacy-preserving survey--For the complete notebook (which contain both the server and client versions) please see https://bit.ly/ADDdemo1 (along with more context and instructions)*(the code in this page is actually ***entirely*** copy-pasted from the complete notebook, with only the client relevant cells copy-pasted here to make it easier to use for someone who only need to encrypt an answer)*.--**There would be ONE modification to do by hand by you (the user who wants to encrypt an answer using a friend's key, and send it back to him) in each cell. After each hand modification, don't forget to run the cell you modified either by pressing together "Shift+Enter", or by clicking on the Run button (with a triangle logo) in the tool bar on top of this webpage**.Please refer to [email protected] for app proposal, if you don't already have it.
###Code
#### TO ENCRYPT YOUR ANSWER : first copy paste in this cell the key your friend sent you along with his question
# copy paste it here, replacing my long {"public_key": {"g": 1640992552166156555639023.....
json_file = {"public_key": {"g": 16409925521661565556390238590618334010017995591945882024016623289250229878864223438578804734886166366197757127765208858724900291220721261838304627862742331860954766873941598582656135290423583297208666575391025694979285756616902113992149510574424448605283772244745133516183442215548909640262343196164697760414432307030131671947438072743774532420623632094975829024541678168986651936199360901936056978638997767419457953073589453031732001364363272804452146891448511845237515689287030447531727597571967819298107759794267035166910904970528087159824090375214425952049815081366550940897785638304994895423959930209887580374602, "n": 16409925521661565556390238590618334010017995591945882024016623289250229878864223438578804734886166366197757127765208858724900291220721261838304627862742331860954766873941598582656135290423583297208666575391025694979285756616902113992149510574424448605283772244745133516183442215548909640262343196164697760414432307030131671947438072743774532420623632094975829024541678168986651936199360901936056978638997767419457953073589453031732001364363272804452146891448511845237515689287030447531727597571967819298107759794267035166910904970528087159824090375214425952049815081366550940897785638304994895423959930209887580374601}, "enc_value": ["115507727406531563170182464875072149557212854672236838077118322814479647023633389379274159053403803790999098471942261488843089509778241033824931704905343838538999545707727922610299860111483812411414930261399103509664636026414853652065767125145573468596407205822061968089967274837613082551993910478629954260977733133463349604972842615501993766736492323319222431832108659128737186297543785054710028369563626263454385855862539618559300290797011565124436703710453557391048240238903416014192113562060125421118184739634664239098239301747777928363886898263615510658969371642844720504869052215293494797470158504922535794718269889298347985112175865946312264088898424016048864581501131394390666364322465980361517389860386601513099989861009841543726212646208390989920631080957160601405032730192223296328652165940044233234743348723876010128401625765259525401339057365254939147759975599546979086171405202841253217825013728754054689428662925684834394585000241500379836263956536708754074399981192984281963349934958696126005406222521781155434872994483169139140706946396841116231144265162448055943334322200909620940593022732107623410468031137158319372699264987136362871949285104394203660451903080144274180607596678194840183561322255397571532700077273", 0]}
# THEN EXECUTE the cell by clicking here and "Shift+Enter" (or by clicking on the Run button (with a triangle logo) in the tool bar on top of this webpage)
# You should see as output below after you ran this cell a text appearing, something like <PaillierPublicKey 22149f2b94>.
import phe
from phe import paillier
import json
pk = json_file['public_key']
public_key_rec = paillier.PaillierPublicKey(n=int(pk['n']))
print(public_key_rec)
#### WRITE YOUR ANSWER HERE, ENCRYPT IT AND SEND IT BACK TO YOUR FRIEND WHO INITIATED THE SURVEY
# replace my 8 by your number
your_number = 8
# THEN EXECUTE the cell by clicking here and "Shift+Enter" (or by clicking on the Run button (with a triangle logo) in the tool bar on top of this webpage)
# You should see as output below after you ran this cell a big chunk of {"public_key": {"g": 1640992552166156555639023..
# THIS IS YOUR ENCRYPTED ANSWER, WHICH YOU CAN COPY PASTE (the text below the cell once you ran it !) AND SEND BACK TO YOUR FRIEND !
your_encrypted_number = public_key_rec.encrypt(your_number)
enc_with_pub_key = {}
enc_with_pub_key['public_key'] = { 'g':public_key_rec.g, 'n':public_key_rec.n}
enc_with_pub_key['enc_value'] = (str(your_encrypted_number.ciphertext()),your_encrypted_number.exponent)
serialised = json.dumps(enc_with_pub_key)
print(serialised)
#### TO DECRYPT THE FINAL ENCRYPTED SURVEY ANSWER, AND DOUBLE CHECK THAT YOUR FRIEND DIDN'T CHEAT
# your friend now sends you the decryption key, along with the encrypted survey answer.
# Copy paste it here, replacing my long {"public_key": {"g": 1640992552166156555639023.....
file_json = {"public_key": {"g": 16409925521661565556390238590618334010017995591945882024016623289250229878864223438578804734886166366197757127765208858724900291220721261838304627862742331860954766873941598582656135290423583297208666575391025694979285756616902113992149510574424448605283772244745133516183442215548909640262343196164697760414432307030131671947438072743774532420623632094975829024541678168986651936199360901936056978638997767419457953073589453031732001364363272804452146891448511845237515689287030447531727597571967819298107759794267035166910904970528087159824090375214425952049815081366550940897785638304994895423959930209887580374602, "n": 16409925521661565556390238590618334010017995591945882024016623289250229878864223438578804734886166366197757127765208858724900291220721261838304627862742331860954766873941598582656135290423583297208666575391025694979285756616902113992149510574424448605283772244745133516183442215548909640262343196164697760414432307030131671947438072743774532420623632094975829024541678168986651936199360901936056978638997767419457953073589453031732001364363272804452146891448511845237515689287030447531727597571967819298107759794267035166910904970528087159824090375214425952049815081366550940897785638304994895423959930209887580374601}, "enc_value": ["36457261154213502783895431601868805501821998179327885621282458492355827876308754850984820296487436846775535197793376479930882211751852115418177604730709029658007765897433826989022861196653432357230485605383237803845837700644779934028466549907647138532996411728020554368778609975336950091705198002379077570662351380119426187797522726578737747583616464179983276095090518682806291670500387362511402041270067757172492990045517534298343964737119529831824476854984902077050562606496262881266189005296648084108087507877010976007111455352498863545137985144147312838303226763391086473562394176759435403593472781120866481870568395584640935239054466197736468550008269027272822166287353433847757231531796783837305700396360973853461914895252124804932209165747299297113043819993218108107917989691191585703086221842250140497539128381087826075964460396570454581539864184741787976079245939027757025716663895625733024233100941670517178827022724796280863575649854577696434180457620173462859400891561573392582889810779514906269820850970561376113427056720145291891380559202420674236456725886365644928277971139084128366190011690632242813829767400818692655100106287050049844185903141830988080816561065490416234114686233848411415634847613727711375315780666", -14], "private_key": {"p": 126551503790991057968363185273620642641420786596372339816129972684004032436964353653103679410907061845500482968103353877240367547122613009266644320493721195845896138341625832172705708397370688298224094583992801057054563703731164956496880321941856524992127322042031876405831151033009715853398545710238404930529, "q": 129669936982840930112397852811089273984542131021354943511573361090422676421138433294072268595731073507435832260280046569874747910635879265888113659949951986191619075403350538659453340523067622386102689521223556903727471044038012038053383264566337772518648615149498711603736472417353316746499029926727023748969}}
# THEN EXECUTE the cell by clicking here and "Shift+Enter" (or by clicking on the Run button (with a triangle logo) in the tool bar on top of this webpage)
# You should see as output below after you ran this cell the decrypted result appearing,
# along with the decryption of the encrypted answer you initially sent him
# (to double check that it is the same key pair that was used to encrypt your answer, encrypt other friends' answers, and do the computation on top of these encrypted numbers !)
pri_key = file_json['private_key']
private_key_rec = paillier.PaillierPrivateKey(public_key=public_key_rec, p=int(pri_key['p']), q=int(pri_key['q']))
enc_final_result = paillier.EncryptedNumber(public_key_rec, int(file_json['enc_value'][0]), int(file_json['enc_value'][1]))
print(private_key_rec.decrypt(enc_final_result))
print(private_key_rec.decrypt(your_encrypted_number))
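#### (ILLUSTRATION ONLY) What the friend who INITIATED the survey does on their side
# This is a hedged sketch, not part of the client flow above: the initiator generates the
# key pair, adds the encrypted answers together (Paillier encryption is additively
# homomorphic), and only ever decrypts the aggregate - never an individual answer.
# The names and the answers below are made up for illustration; uncomment to try it.
# example_public, example_private = paillier.generate_paillier_keypair()
# encrypted_answers = [example_public.encrypt(a) for a in (8, 3, 5)]
# encrypted_total = sum(encrypted_answers[1:], encrypted_answers[0])  # addition happens on ciphertexts
# print(example_private.decrypt(encrypted_total))                     # -> 16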
###Output
_____no_output_____ |
notebooks/Paper_Figures.ipynb | ###Markdown
Figure 1 Comparison of probability of success for varying $\gamma$ in the case of identical copies, as a function of the number of available systems. Based on the computational results, we observe that as the depolarizing parameter increases, the probability of success levels off for large $N$.
###Code
num_trials = 1000
N_list, g_list = list(range(1, 13)), [0.01, 0.05, 0.1, 0.3]
param_grid = [('N', N_list), ('g', g_list)]
em = ExperimentManager('figure1.pickle', param_grid, num_trials)
# Precompute all the pure states
pure_state = [generate_rhos(max(N_list), identical=True, g=0, dim=2) for _ in range(num_trials)]
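# (Added note) Each pure state below is mixed as (1 - g) * rho + g/2 * I, i.e. it is
# passed through a single-qubit depolarizing channel of strength g (the gamma of the figure captions).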
rho_pool = {
g: [((1 - g) * rp + g / 2 * np.eye(2), (1 - g) * rn + g / 2 * np.eye(2)) for rp, rn in pure_state]
for g in g_list
}
rho_iter = {g: it.cycle(rhos) for g, rhos in rho_pool.items()}
def simul_func(N, g):
q, Qp = 1/2, 100
use_CUDA = False
device, cache = 'cuda:0' if use_CUDA else 'cpu', not use_CUDA
rho_pos, rho_neg = next(rho_iter[g])
rho_pos, rho_neg = rho_pos[:N], rho_neg[:N]
kwargs = {
'N': N, 'rho_pos': rho_pos, 'rho_neg': rho_neg, 'interp_mode': 'linear',
'Qp': Qp, 'device': device, 'cache': cache
}
Asp = Locally_Greedy_ParamSpace(N, rho_pos, rho_neg, Qp, device)
LG_QDP = Quantum_DP(**kwargs, param_space=Asp)
prob_succ_LG = LG_QDP.root.prob_success(q)
return prob_succ_LG
em.run(simul_func, callback)
fig1_data = em.export_xarray(lambda x: x[0]).mean(axis=-1)
plt.axes().set_aspect(20)
plt.gcf().set_size_inches(6, 6)
plt.plot(N_list, fig1_data.sel(g=0.01), 'r+-', label=r'$\gamma=0.01$')
plt.plot(N_list, fig1_data.sel(g=0.05), 'bo-', label=r'$\gamma=0.05$')
plt.plot(N_list, fig1_data.sel(g=0.1), 'kx-', label=r'$\gamma=0.1$')
plt.plot(N_list, fig1_data.sel(g=0.3), 'g^-', label=r'$\gamma=0.3$')
plt.xlabel(r'$N$')
plt.xlim([0.5, 12.5])
plt.xticks([2, 4, 6, 8, 10, 12])
plt.ylabel(r'$P_{\mathrm{succ}}(\gamma)$')
plt.ylim([0.48, 1.02])
plt.legend(loc=4)
plt.grid()
###Output
_____no_output_____
###Markdown
Figure 2 Comparison of probability of success for varying $\gamma$ in the distinct subsystems scenario, as a function of the number of available systems, $N$.
###Code
num_trials = 1000
N_list, g_list = list(range(1, 13)), [0.01, 0.05, 0.1, 0.3]
param_grid = [('N', N_list), ('g', g_list)]
em = ExperimentManager('figure2.pickle', param_grid, num_trials)
# Precompute all the pure states
pure_state = [generate_rhos(max(N_list), identical=False, g=0, dim=2) for _ in range(num_trials)]
rho_pool = {
g: [((1 - g) * rp + g / 2 * np.eye(2), (1 - g) * rn + g / 2 * np.eye(2)) for rp, rn in pure_state]
for g in g_list
}
rho_iter = {g: it.cycle(rhos) for g, rhos in rho_pool.items()}
def simul_func(N, g):
q, Qp = 1/2, 100
use_CUDA = False
device, cache = 'cuda:0' if use_CUDA else 'cpu', not use_CUDA
rho_pos, rho_neg = next(rho_iter[g])
rho_pos, rho_neg = rho_pos[:N], rho_neg[:N]
kwargs = {
'N': N, 'rho_pos': rho_pos, 'rho_neg': rho_neg, 'interp_mode': 'linear',
'Qp': Qp, 'device': device, 'cache': cache
}
Asp = Locally_Greedy_ParamSpace(N, rho_pos, rho_neg, Qp, device)
LG_QDP = Quantum_DP(**kwargs, param_space=Asp)
prob_succ_LG = LG_QDP.root.prob_success(q)
return prob_succ_LG
em.run(simul_func, callback)
fig2_data = em.export_xarray(lambda x: x[0]).mean(axis=-1)
plt.axes().set_aspect(20)
plt.gcf().set_size_inches(6, 6)
plt.plot(N_list, fig2_data.sel(g=0.01), 'r+-', label=r'$\gamma=0.01$')
plt.plot(N_list, fig2_data.sel(g=0.05), 'bo-', label=r'$\gamma=0.05$')
plt.plot(N_list, fig2_data.sel(g=0.1), 'kx-', label=r'$\gamma=0.1$')
plt.plot(N_list, fig2_data.sel(g=0.3), 'g^-', label=r'$\gamma=0.3$')
plt.xlabel(r'$N$')
plt.xlim([0.5, 12.5])
plt.xticks([2, 4, 6, 8, 10, 12])
plt.ylabel(r'$P_{\mathrm{succ}}(\gamma)$')
plt.ylim([0.48, 1.02])
plt.legend(loc=4)
plt.grid()
###Output
_____no_output_____
###Markdown
Figure 3 Comparison of probability of success as a function of the number of available systems, $N$, for depolarizing parameter $\gamma = 0.3$.
###Code
plt.axes().set_aspect(20)
plt.gcf().set_size_inches(6, 6)
plt.plot(N_list, fig1_data.sel(g=0.3), 'bx-', label='Identical Copies')
plt.plot(N_list, fig2_data.sel(g=0.3), 'ro-', label='Distinct Subsystems')
plt.xlabel(r'$N$')
plt.xlim([0.5, 12.5])
plt.xticks([2, 4, 6, 8, 10, 12])
plt.ylabel(r'$P_{\mathrm{succ}}(\gamma)$')
plt.ylim([0.48, 1.02])
plt.legend(loc=4)
plt.grid()
###Output
_____no_output_____
###Markdown
Figure 4 Probability of success for identical copies as a function of $\gamma$ and the number of subsystems measured simultaneously, $m$. Here $N = 6$.
###Code
num_trials = 1000
m_list = [x for x in range(1, 7) if 6 % x == 0]
g_list = [0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 1]
param_grid = [('m', m_list), ('g', g_list)]
em = ExperimentManager('figure4.pickle', param_grid, num_trials)
# Precompute all the pure states
pure_state = [generate_rhos(max(m_list), identical=True, g=0, dim=2) for _ in range(num_trials)]
rho_pool = {
g: [((1 - g) * rp + g / 2 * np.eye(2), (1 - g) * rn + g / 2 * np.eye(2)) for rp, rn in pure_state]
for g in g_list
}
rho_iter = {g: it.cycle(rhos) for g, rhos in rho_pool.items()}
def simul_func(m, g):
if m != 6:
N, q, Qp = 6//m, 1/2, 100
use_CUDA = False
device, cache = 'cuda:0' if use_CUDA else 'cpu', not use_CUDA
cache = True
rho_pos, rho_neg = next(rho_iter[g])
rho_pos, rho_neg = grouping(rho_pos, rho_neg, m)
kwargs = {
'N': N, 'rho_pos': rho_pos, 'rho_neg': rho_neg, 'interp_mode': 'linear',
'Qp': Qp, 'device': device, 'cache': cache
}
Asp = Locally_Greedy_ParamSpace(N, rho_pos, rho_neg, Qp, device)
LG_QDP = Quantum_DP(**kwargs, param_space=Asp)
prob_succ_LG = LG_QDP.root.prob_success(q)
return prob_succ_LG
else:
q = 1/2
rho_pos, rho_neg = next(rho_iter[g])
prob_succ_H = helstrom(q, rho_pos, rho_neg)
return prob_succ_H[0]
em.run(simul_func, callback)
fig4_data = em.export_xarray(lambda x: x).mean(axis=-1)
plt.axes().set_aspect(2)
plt.gcf().set_size_inches(6, 6)
plt.plot(g_list, fig4_data.sel(m=1), 'r+-', label=r'$m=1$')
plt.plot(g_list, fig4_data.sel(m=2), 'bo-', label=r'$m=2$')
plt.plot(g_list, fig4_data.sel(m=3), 'kx-', label=r'$m=3$')
plt.plot(g_list, fig4_data.sel(m=6), 'g^-', label=r'$m=6$')
plt.xlabel(r'$\gamma$')
plt.ylabel(r'$P_{\mathrm{succ}}(\gamma)$')
plt.ylim([0.48, 1.02])
plt.legend(loc=3)
plt.grid()
###Output
_____no_output_____
###Markdown
Figure 5 Probability of success for distinct subsystems as a function of $\gamma$ and the number of subsystems measured simultaneously, $m$. Here $N = 6$.
###Code
num_trials = 1000
m_list = [x for x in range(1, 7) if 6 % x == 0]
g_list = [0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.7, 1]
param_grid = [('m', m_list), ('g', g_list)]
em = ExperimentManager('figure5.pickle', param_grid, num_trials)
# Precompute all the pure states
pure_state = [generate_rhos(max(m_list), identical=True, g=0, dim=2) for _ in range(num_trials)]
rho_pool = {
g: [((1 - g) * rp + g / 2 * np.eye(2), (1 - g) * rn + g / 2 * np.eye(2)) for rp, rn in pure_state]
for g in g_list
}
rho_iter = {g: it.cycle(rhos) for g, rhos in rho_pool.items()}
def simul_func(m, g):
if m != 6:
N, q, Qp = 6//m, 1/2, 100
use_CUDA = False
device, cache = 'cuda:0' if use_CUDA else 'cpu', not use_CUDA
cache = True
rho_pos, rho_neg = next(rho_iter[g])
rho_pos, rho_neg = grouping(rho_pos, rho_neg, m)
kwargs = {
'N': N, 'rho_pos': rho_pos, 'rho_neg': rho_neg, 'interp_mode': 'linear',
'Qp': Qp, 'device': device, 'cache': cache
}
Asp = Locally_Greedy_ParamSpace(N, rho_pos, rho_neg, Qp, device)
LG_QDP = Quantum_DP(**kwargs, param_space=Asp)
prob_succ_LG = LG_QDP.root.prob_success(q)
return prob_succ_LG[0]
else:
q = 1/2
rho_pos, rho_neg = next(rho_iter[g])
prob_succ_H = helstrom(q, rho_pos, rho_neg)
return prob_succ_H[0]
em.run(simul_func, callback)
fig5_data = em.export_xarray(lambda x: x).mean(axis=-1)
plt.axes().set_aspect(2)
plt.gcf().set_size_inches(6, 6)
plt.plot(g_list, fig5_data.sel(m=1), 'r+-', label=r'$m=1$')
plt.plot(g_list, fig5_data.sel(m=2), 'bo-', label=r'$m=2$')
plt.plot(g_list, fig5_data.sel(m=3), 'kx-', label=r'$m=3$')
plt.plot(g_list, fig5_data.sel(m=6), 'g^-', label=r'$m=6$')
plt.xlabel(r'$\gamma$')
plt.ylabel(r'$P_{\mathrm{succ}}(\gamma)$')
plt.ylim([0.48, 1.02])
plt.legend(loc=3)
plt.grid()
###Output
_____no_output_____
###Markdown
Figure 6 Probability of success for the special case $\gamma=0.3$ when the subsystems are not necessarily copies. Here $N=6$. This clearly illustrates the initial dip in probability of success with increasing $m$.
###Code
fig4_data = em.export_xarray(lambda x: x).mean(axis=-1)
plt.axes().set_aspect(500)
plt.gcf().set_size_inches(6, 6)
plt.plot(m_list, fig5_data.sel(g=0.3), 'g^')
plt.xlabel(r'$m$')
plt.ylabel(r'$P_{\mathrm{succ}}(\gamma=0.3)$')
plt.grid()
###Output
_____no_output_____
###Markdown
Figure 7 Comparison of $P_{\mathrm{best}}(3, \gamma)$ and $P_{\mathrm{worst}}(3, \gamma)$ as a function of the depolarizing parameter $\gamma$ for $N=3$. Although $P_{\mathrm{best}}(3, \gamma) \ne P_{\mathrm{worst}}(3, \gamma)$, the relative difference is small.
###Code
num_trials = 1000
g_list = [0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]
N_list = [3, 4, 5, 6, 7]
param_grid = [('N', N_list), ('g', g_list)]
em = ExperimentManager('figure7.pickle', param_grid, num_trials)
# Precompute all the pure states
pure_state = [generate_rhos(max(N_list), identical=False, g=0, dim=2) for _ in range(num_trials)]
rho_pool = {
g: [((1 - g) * rp + g / 2 * np.eye(2), (1 - g) * rn + g / 2 * np.eye(2)) for rp, rn in pure_state]
for g in g_list
}
rho_iter = {g: it.cycle(rhos) for g, rhos in rho_pool.items()}
def simul_func(N, g):
q, Qp = 1/2, 100
use_CUDA = False
device, cache = 'cuda:0' if use_CUDA else 'cpu', not use_CUDA
rho_pos, rho_neg = next(rho_iter[g])
rho_pos, rho_neg = rho_pos[:N], rho_neg[:N]
kwargs = {
'N': N, 'rho_pos': rho_pos, 'rho_neg': rho_neg, 'interp_mode': 'linear',
'Qp': Qp, 'device': device, 'cache': cache
}
Asp = Qubit_Proj_ParamSpace(Qphi=128, device=device)
Qb_QDP = Quantum_DP(**kwargs, param_space=Asp)
prob_succ_B = Qb_QDP.root.prob_success(q)
return prob_succ_B
def callback(pbar, res):
pbar.set_postfix(n=len(res), mean=np.round(np.mean(res, axis=0), 6))
em.run(simul_func, callback)
fig7_best = em.export_xarray(lambda x: x[0]).mean(axis=-1)
fig7_worst = em.export_xarray(lambda x: x[-1]).mean(axis=-1)
plt.axes().set_aspect(2)
plt.gcf().set_size_inches(6, 6)
plt.plot(g_list, fig7_best.sel(N=3), 'rx-', label='max')
plt.plot(g_list, fig7_worst.sel(N=3), 'bo-', label='min')
plt.xlabel(r'$\gamma$')
plt.ylabel(r'$P_{\mathrm{method}}(3, \gamma)$')
plt.ylim([0.48, 1.02])
plt.legend(loc=3)
plt.grid()
###Output
_____no_output_____
###Markdown
Figure 8 Comparison of difference in maximum and minimum probability of success, $P_{\mathrm{diff}}(N, \gamma)$, as a function of the depolarizing parameter $\gamma$ over $200$ trials for $N = 3,4,5,6,7$.
###Code
fig7_diff = fig7_best - fig7_worst
plt.axes().set_aspect(200)
plt.gcf().set_size_inches(6, 6)
plt.plot(g_list, fig7_diff.sel(N=3), 'r+-', label=r'$N=3$')
plt.plot(g_list, fig7_diff.sel(N=4), 'bo-', label=r'$N=4$')
plt.plot(g_list, fig7_diff.sel(N=5), 'kx-', label=r'$N=5$')
plt.plot(g_list, fig7_diff.sel(N=6), 'g^-', label=r'$N=6$')
plt.plot(g_list, fig7_diff.sel(N=7), 'ms-', label=r'$N=7$')
plt.xlabel(r'$\gamma$')
plt.ylabel(r'$P_{\mathrm{diff}}(N,\gamma)$')
plt.legend(loc=1)
plt.grid()
###Output
_____no_output_____
###Markdown
Figure 9 The average probability of success for the best and worst ordering using both ternary and binary projective measurements for qutrit product states when $N=3$. Results are averaged over $100$ trials
###Code
num_trials = 1000
g_list = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
N_list = [3]
param_grid = [('N', N_list), ('g', g_list)]
em = ExperimentManager('figure9.pickle', param_grid, num_trials)
# Precompute all the pure states
pure_state = [generate_rhos(max(N_list), identical=False, g=0, dim=3) for _ in range(num_trials)]
rho_pool = {
g: [((1 - g) * rp + g / 3 * np.eye(3), (1 - g) * rn + g / 3 * np.eye(3)) for rp, rn in pure_state]
for g in g_list
}
rho_iter = {g: it.cycle(rhos) for g, rhos in rho_pool.items()}
def simul_func(N, g):
q, Qp = 1/2, 100
use_CUDA = True
device, cache = 'cuda:0' if use_CUDA else 'cpu', not use_CUDA
rho_pos, rho_neg = next(rho_iter[g])
rho_pos, rho_neg = rho_pos[:N], rho_neg[:N]
kwargs = {
'N': N, 'rho_pos': rho_pos, 'rho_neg': rho_neg, 'interp_mode': 'linear',
'Qp': Qp, 'device': device, 'cache': cache
}
Asp3 = Qutrit_Proj_ParamSpace(d=[2, 2, 2], Q=32, mode='ternary', device=device)
QQDP3 = Quantum_DP(**kwargs, param_space=Asp3)
prob_succ_T = QQDP3.root.prob_success(q)
Asp2 = Qutrit_Proj_ParamSpace(d=[2, 2, 2], Q=32, mode='binary', device=device)
QQDP2 = Quantum_DP(**kwargs, param_space=Asp2)
prob_succ_B = QQDP2.root.prob_success(q)
return np.concatenate([prob_succ_T, prob_succ_B])
def callback(pbar, res):
pbar.set_postfix(n=len(res), mean=np.round(np.mean(res, axis=0), 2))
em.run(simul_func, callback)
fig9_ternary_best = em.export_xarray(lambda x: x[0]).mean(axis=-1)
fig9_ternary_worst = em.export_xarray(lambda x: x[1]).mean(axis=-1)
fig9_binary_best = em.export_xarray(lambda x: x[2]).mean(axis=-1)
fig9_binary_worst = em.export_xarray(lambda x: x[3]).mean(axis=-1)
plt.axes().set_aspect(1)
plt.gcf().set_size_inches(6, 6)
plt.plot(g_list, fig9_ternary_best.sel(N=3), 'r+-', label='ternary,best')
plt.plot(g_list, fig9_ternary_worst.sel(N=3), 'bo-', label='ternary,worst')
plt.plot(g_list, fig9_binary_best.sel(N=3), 'kx-', label='binary,best')
plt.plot(g_list, fig9_binary_worst.sel(N=3), 'g^-', label='binary,worst')
plt.xlabel(r'$\gamma$')
plt.ylabel(r'$P_{\mathrm{method}}(\gamma,\mathcal{A})$')
plt.ylim([0.48, 1.02])
plt.legend(loc=3)
plt.grid()
###Output
_____no_output_____
###Markdown
Figure 10 Difference in average success probability for the various methods, namely, $P_{d, \mathrm{method}}(\gamma, \mathcal{A})$ as a function of $\gamma$ when $N=3$. Results are averaged over $100$ trials.
###Code
plt.axes().set_aspect(50)
plt.gcf().set_size_inches(6, 6)
plt.plot(g_list, (fig9_ternary_best - fig9_ternary_worst).sel(N=3), 'r+-', label='ternary,worst')
plt.plot(g_list, (fig9_ternary_best - fig9_binary_best).sel(N=3), 'bo-', label='binary,best')
plt.plot(g_list, (fig9_ternary_best - fig9_binary_worst).sel(N=3), 'kx-', label='binary,worst')
plt.xlabel(r'$\gamma$')
plt.ylabel(r'$P_{d, \mathrm{method}}(\gamma,\mathcal{A})$')
plt.legend(loc=2)
plt.grid()
###Output
_____no_output_____ |
host_guest/GDCC/input_maker.ipynb | ###Markdown
Get SMILES from source files
###Code
#define source file directory
GDCC_GUEST_PATH = '/home/amezcum1/SAMPL8/host_guest/GDCC/source_files/Guests'
import os
from openeye import oechem
import pandas as pd
import numpy as np
if not os.path.exists('guest_files'):
os.makedirs('guest_files')
if not os.path.exists('host_files'):
os.makedirs('host_files')
#list of smiles
SMILES = []
#get smiles from source files
for root, dirs, files in os.walk(GDCC_GUEST_PATH):
for file in files:
if file.endswith(".sdf"):
ifs = oechem.oemolistream(root + '/' + '{}'.format(file))
mol = oechem.OEMol()
oechem.OEReadMolecule(ifs, mol)
smiles = oechem.OEMolToSmiles(mol)
SMILES.append(smiles)
# update list with names (i.e G1, G2, etc)
names = ['G1', 'G2', 'G3', 'G4', 'G5']
smiles_names = {'SMILES': SMILES, 'name':names}
# make a dataframe
df = pd.DataFrame.from_dict(smiles_names)
#save dataframe, remove headers
df.to_csv('/home/amezcum1/SAMPL8/host_guest/GDCC/guest_files/guest_smiles.csv', index=False,header=False)
#save a txt file
with open('/home/amezcum1/SAMPL8/host_guest/GDCC/guest_files/guest_smiles.csv', 'r') as inputfile, open('/home/amezcum1/SAMPL8/host_guest/GDCC/guest_files/guest_smiles.txt', 'w') as outputfile:
for line in inputfile:
line = line.replace(',', ';')
outputfile.write(line)
###Output
_____no_output_____
###Markdown
Do some simple prep of compound structures from SMILES strings
###Code
file = '/home/amezcum1/SAMPL8/host_guest/GDCC/guest_files/guest_smiles.txt'
file = open(file, 'r')
text = file.readlines()
file.close()
print("Getting list of molecule IDs...")
MoleculeIDs = [] #SAMPL8 Molecule ID
smiles = [] #isomeric SMILES
#Loop over lines and parse
for line in text:
tmp = line.split(';') #Split into columns
MoleculeID = tmp[1].strip() #mol ID
smi = tmp[0] #smiles string
try:
MoleculeIDs.append(MoleculeID)
smiles.append(smi)
except:
print("Error storing line: %s" % line)
print("Done!")
###Output
Getting list of molecule IDs...
Done!
###Markdown
Parse SMILES, store molecules
###Code
from openeye.oechem import *
from openeye.oeomega import * # conformer generation
from openeye.oequacpac import *
from openeye import oequacpac
from openeye import oedepict
from IPython.display import display, Image
mols_by_ID = {}
for idx in range(len(smiles)):
# Generate new OEMol and parse SMILES
mol = OEMol()
OEParseSmiles( mol, smiles[idx])
# Set neutral pH model to pick a "reasonable" neutral protonation state per OpenEye's QuacPac
OESetNeutralpHModel(mol)
mols_by_ID[MoleculeIDs[idx]] = mol
# check 2D versions of molecule
oedepict.OEPrepareDepiction(mol)
# set some display options
image = oedepict.OEImage(300, 300)
opts = oedepict.OE2DMolDisplayOptions(image.GetWidth(), image.GetHeight(), oedepict.OEScale_Default)
# render and display
disp = oedepict.OE2DMolDisplay(mol, opts)
oedepict.OERenderMolecule(image, disp)
# oedepict.OERenderMolecule("title.png", disp) # to save a file
display(Image(oedepict.OEWriteImageToString("png",image)))
###Output
_____no_output_____
###Markdown
Experimental Conditions GDCC system experimental conditions: 10 mM sodium phosphate buffer at 25 C, pH 11.5. pKa values: G1 (3-hydroxy-2-naphthoic acid) - hydroxyl 12.83 (4.48%), carboxyl 2.69 (95.52%); G2 (4-Bromophenol) - 9.17; G3 (cyclopentylacetic acid) - 4.86; G4 (piperonylic acid) - 4.10; G5 (p-toluic acid) - 4.26. Tweak protonation states for some compounds For GDCCs only G2 is affected: the hydroxyl is coming out protonated (neutral), but it should carry a -1 charge.
###Code
for mol_id in ['G2']:
mol = mols_by_ID[mol_id]
for atom in mol.GetAtoms():
if atom.GetAtomicNum()==8:
atom.SetFormalCharge(-1)
mols_by_ID[mol_id]=mol
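# Hedged sanity check (added, not in the original notebook): assuming simple
# Henderson-Hasselbalch behaviour for an independent site, a phenol with pKa 9.17
# (the G2 hydroxyl) is ~99.5% deprotonated at the experimental pH of 11.5,
# which is why the -1 formal charge set above is the expected state.
pH, pKa_G2 = 11.5, 9.17
frac_deprotonated_G2 = 1.0 / (1.0 + 10 ** (pKa_G2 - pH))  # ~0.995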
###Output
_____no_output_____
###Markdown
Generate conformers and assign partial charges
###Code
for mol_id in mols_by_ID:
#initialize omega, this is a conformation generator
omega = OEOmega()
#set the maximum conformer generated to 1
omega.SetMaxConfs(1)
omega.SetIncludeInput(False)
omega.SetStrictAtomTypes(False) #Leniency in assigning atom types
omega.SetStrictStereo(True) #Don't generate conformers if stereochemistry not provided. Setting to false would pick a random stereoisomer
mol = mols_by_ID[mol_id]
# Generate one conformer
status = omega(mol)
if not status:
print("Error generating conformers for %s" % (mol_id))
# Assign charges
OEAssignPartialCharges(mol, OECharges_AM1BCC)
# Write out PDB of molecule
ofile = oemolostream('/home/amezcum1/SAMPL8/host_guest/GDCC/guest_files/%s.pdb'%(mol_id))
OEWriteMolecule(ofile, mol)
ofile.close()
# Write out MOL2 of molecule
ofile = oemolostream('/home/amezcum1/SAMPL8/host_guest/GDCC/guest_files/%s.mol2'%(mol_id))
OEWriteMolecule(ofile, mol)
ofile.close()
# Write out SDF molecule
ofile = oemolostream('/home/amezcum1/SAMPL8/host_guest/GDCC/guest_files/%s.sdf'%(mol_id))
OEWriteMolecule(ofile, mol)
ofile.close()
###Output
Error generating conformers for G2
###Markdown
Make host files
###Code
hosts = ['TEMOA']
for idx in range( len( hosts ) ):
inputfile = oemolistream('/home/amezcum1/SAMPL8/host_guest/GDCC/source_files/Hosts/TEMOA/%s.sdf'%(hosts[idx]) )
mol = OEMol()
OEReadMolecule( inputfile, mol )
inputfile.close()
# Write to a SDF file
ofile = oemolostream( '/home/amezcum1/SAMPL8/host_guest/GDCC/host_files/%s.sdf'%(hosts[idx]) )
OEWriteMolecule( ofile, mol)
ofile.close()
# Write to a mol2 file
ofile = oemolostream( '/home/amezcum1/SAMPL8/host_guest/GDCC/host_files/%s.mol2'%(hosts[idx]) )
OEWriteMolecule( ofile, mol)
ofile.close()
# Write to a PDB file
ofile = oemolostream('/home/amezcum1/SAMPL8/host_guest/GDCC/host_files/%s.pdb'%(hosts[idx]) )
OEWriteMolecule( ofile, mol)
ofile.close()
print("Host done:", hosts[idx])
hosts = ['TEEtOA']
for idx in range( len( hosts ) ):
inputfile = oemolistream('/home/amezcum1/SAMPL8/host_guest/GDCC/source_files/Hosts/TEEtOA/%s.sdf'%(hosts[idx]) )
mol = OEMol()
OEReadMolecule( inputfile, mol )
inputfile.close()
# Write to a SDF file
ofile = oemolostream( '/home/amezcum1/SAMPL8/host_guest/GDCC/host_files/%s.sdf'%(hosts[idx]) )
OEWriteMolecule( ofile, mol)
ofile.close()
# Write to a mol2 file
ofile = oemolostream( '/home/amezcum1/SAMPL8/host_guest/GDCC/host_files/%s.mol2'%(hosts[idx]) )
OEWriteMolecule( ofile, mol)
ofile.close()
# Write to a PDB file
ofile = oemolostream('/home/amezcum1/SAMPL8/host_guest/GDCC/host_files/%s.pdb'%(hosts[idx]) )
OEWriteMolecule( ofile, mol)
ofile.close()
print("Host done:", hosts[idx])
###Output
Host done: TEEtOA
|
YDOblogs.ipynb | ###Markdown

###Code
print("There are {} tag words and here are the top 5".format(len(tagwords.unique())))
print(tagwords.value_counts()[:5])
print()
print()
print("There are {} categories and here are the top 5".format(len(categories.unique())))
print(categories.value_counts()[:5])
fig,ax=plt.subplots(figsize=(8,6))
sns.barplot(x=tag20.values, y=tag20.index, palette="rainbow")
ax.set_title("Top 20 Tags")
ax.set_xlabel("Total Posts")
ax.set_ylabel("Tags")
fig.tight_layout()
fig,ax=plt.subplots(figsize=(8,6))
sns.barplot(x=cat20.values, y=cat20.index, palette="rainbow")
ax.set_title("Top 20 Categories")
ax.set_xlabel("Total Posts")
ax.set_ylabel("Categories")
fig.tight_layout()
###Output
_____no_output_____ |
Examples-mt2204/My_project.ipynb | ###Markdown
This is my new Python project. I don't know yet what the project is going to be; here are some practice problems.
###Code
x = 3
y = x**2
print (x)
print (y)
###Output
3
9
|
utils/converter.ipynb | ###Markdown
Process the layernorm parameters before processing the Transformer layers
###Code
megatron_trans['layers.0.row_input_layernorm.weight'].shape, \
esm['encoder.sentence_encoder.layers.0.column_self_attention.layer_norm.weight'].shape
megatron_trans['layers.0.row_attention.query_key_value.weight'].shape, \
esm['encoder.sentence_encoder.layers.0.column_self_attention.layer.k_proj.weight'].shape
mega_trans_keys
esm_trans_keys
def process_layer_i(i: int):
for p in ['weight', 'bias']:
for rc in ['row', 'col']:
esm_rc = 'column' if rc == 'row' else 'row'
assign(megatron_trans[f'layers.{i}.{rc}_input_layernorm.{p}'], esm[f'encoder.sentence_encoder.layers.{i}.{esm_rc}_self_attention.layer_norm.{p}'])
assign(megatron_trans[f'layers.{i}.{rc}_attention.dense.{p}'], esm[f'encoder.sentence_encoder.layers.{i}.{esm_rc}_self_attention.layer.out_proj.{p}'])
assign(megatron_trans[f'layers.{i}.mlp.dense_h_to_4h.{p}'], esm[f'encoder.sentence_encoder.layers.{i}.feed_forward_layer.layer.fc1.{p}'])
assign(megatron_trans[f'layers.{i}.mlp.dense_4h_to_h.{p}'], esm[f'encoder.sentence_encoder.layers.{i}.feed_forward_layer.layer.fc2.{p}'])
assign(megatron_trans[f'layers.{i}.post_attention_layernorm.{p}'], esm[f'encoder.sentence_encoder.layers.{i}.feed_forward_layer.layer_norm.{p}'])
num_heads = 12
hidden_dim = 768
heads_dim = hidden_dim // num_heads
for rc in ['row', 'col']:
# esm_rc = rc if rc == 'row' else 'column'
esm_rc = 'column' if rc == 'row' else 'row'
# .contiguous()
wq = esm[f'encoder.sentence_encoder.layers.{i}.{esm_rc}_self_attention.layer.q_proj.weight'].view(num_heads, heads_dim, -1)
wk = esm[f'encoder.sentence_encoder.layers.{i}.{esm_rc}_self_attention.layer.k_proj.weight'].view(num_heads, heads_dim, -1)
wv = esm[f'encoder.sentence_encoder.layers.{i}.{esm_rc}_self_attention.layer.v_proj.weight'].view(num_heads, heads_dim, -1)
bq = esm[f'encoder.sentence_encoder.layers.{i}.{esm_rc}_self_attention.layer.q_proj.bias'].view(num_heads, heads_dim)
bk = esm[f'encoder.sentence_encoder.layers.{i}.{esm_rc}_self_attention.layer.k_proj.bias'].view(num_heads, heads_dim)
bv = esm[f'encoder.sentence_encoder.layers.{i}.{esm_rc}_self_attention.layer.v_proj.bias'].view(num_heads, heads_dim)
# print(wq.shape, bq.shape)
# torch.Size([12, 64, 768]) torch.Size([12, 64])
W_mixed = torch.cat((wq, wk, wv), dim=1).reshape(hidden_dim * 3, hidden_dim)
B_mixed = torch.cat((bq, bk, bv), dim=1).reshape(-1)
assign(megatron_trans[f'layers.{i}.{rc}_attention.query_key_value.weight'], W_mixed)
assign(megatron_trans[f'layers.{i}.{rc}_attention.query_key_value.bias'], B_mixed)
for i in range(12):
process_layer_i(i)
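# Hedged sanity check (added, self-contained): rebuild the fused QKV matrix for layer 0
# from the ESM weights and confirm that slicing it back apart recovers the original
# q_proj block, i.e. the packing order is [head][q|k|v][head_dim] along the output dim.
_wq0 = esm['encoder.sentence_encoder.layers.0.column_self_attention.layer.q_proj.weight']
_wk0 = esm['encoder.sentence_encoder.layers.0.column_self_attention.layer.k_proj.weight']
_wv0 = esm['encoder.sentence_encoder.layers.0.column_self_attention.layer.v_proj.weight']
_fused = torch.cat((_wq0.view(12, 64, 768), _wk0.view(12, 64, 768), _wv0.view(12, 64, 768)), dim=1).reshape(3 * 768, 768)
assert torch.equal(_fused.view(12, 3, 64, 768)[:, 0].reshape(768, 768), _wq0)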
torch.save(raw_megatron, '/dataset/ee84df8b/release/ProteinLM/pretrain/dump/iter_0000010/mp_rank_00/model_optim_rng.pt')
process_layer_i(11)
esm['encoder.sentence_encoder.embed_tokens.weight'].shape
# megatron_trans.keys()
fs = "UniRef50-xa-a2m-2017/ UniRef50-xb-a2m-2018/ UniRef50-xc-a2m-2017/ UniRef50-xd-a2m-2018/ UniRef50-xe-a2m-2017/ UniRef50-xf-a2m-2018"
folders = [i.strip() for i in fs.split('/')]
folders = ['/workspace/data/' + f + '/' + f + '.json' for f in folders]
print(f'/bin/cat {" ".join(folders)} > /workspace/data/TOTAL.jsonl')
isinstance(esm['encoder.sentence_encoder.layers.0.feed_forward_layer.layer_norm.bias'], torch.Tensor)
tot = 0
def recursive_print_param_shape(model_dict):
for k in model_dict:
if isinstance(model_dict[k], torch.Tensor):
print(k, model_dict[k].shape)
global tot
tot += model_dict[k].numel()
else:
recursive_print_param_shape(model_dict[k])
# print(tot)
recursive_print_param_shape(esm)
tot
recursive_print_param_shape(megatron)
tot
!factor 741793 # 115641633 - 114899840
###Output
741793: 13 43 1327
|
PreRequsites/matplot/Matplotlib.ipynb | ###Markdown
Line Plot
###Code
import matplotlib.pyplot as plt  # added: required import for the plotting examples below
plt.plot([2,4,6,4])
plt.ylabel("Numbers")
plt.xlabel("Indicies")
plt.title("Title")
plt.show()
plt.plot([1,2,3,4],[1,4,9,16])
#X-values #Y-values
plt.ylabel("Squares")
plt.xlabel("Numbers")
plt.title("Squares of numbers")
plt.grid() #to show grid
plt.show()
plt.plot([1,2,3,4],[1,4,9,16],'ro')
plt.grid()
plt.show()
import numpy as np
t = np.arange(0.,5.,0.2)
plt.plot(t,t**2,'b--',label='^2')
plt.plot(t,t**2.2,'rs',label='^2.2')
plt.plot(t,t**2.5,'g^',label='^2.5')
plt.grid()
plt.legend() #legend provides a detailed view of the labels with the symbols used
plt.show()
x = [1,2,3,4]
y = [1,4,9,16]
plt.plot(x,y,linewidth=5.0)#line width property
plt.show()
x1 = [1,2,3,4]
y1 = [1,4,9,16]
x2 = [1,2,3,4]
y2 = [2,4,6,8]
lines = plt.plot(x1,y1,x2,y2)
plt.setp(lines[0],color='r',linewidth=2.0)
plt.setp(lines[1],'color','g','linewidth',2.0)
plt.grid()
def f(t):
return np.exp(-t) + np.cos(2*np.pi*t)
t1 = np.arange(0.0,5.0,0.1)
t2 = np.arange(0.0,5.0,0.02)
plt.figure(1)
#the subplot() specifies numrows , numcols,
#fignum where fignum ranges 1 to numrows*numcols.
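# e.g. 211 means a grid with 2 rows and 1 column, selecting panel 1 (the top plot);
# 212 below selects panel 2 (the bottom plot).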
plt.subplot(211)
plt.grid()
plt.plot(t1,f(t1),'b-')
plt.subplot(212)
plt.plot(t2,np.cos(2*np.pi*t2),'r--')
plt.show()
###Output
_____no_output_____ |
notebooks/mnist_hello_world.ipynb | ###Markdown
Basic MNIST Model
###Code
# Imports (added): assumes the pytorch-lightning version that provides TrainResult/EvalResult.
import os
import torch
import torch.nn.functional as F
import pytorch_lightning as pl
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import transforms
from torchvision.datasets import MNIST
from pytorch_lightning.metrics.functional import accuracy
class MNISTModel(pl.LightningModule):
def __init__(self):
super(MNISTModel, self).__init__()
self.l1 = torch.nn.Linear(28 * 28, 10)
def forward(self, x):
return torch.relu(self.l1(x.view(x.size(0), -1)))
def training_step(self, batch, batch_nb):
x, y = batch
loss = F.cross_entropy(self(x), y)
return pl.TrainResult(loss)
def configure_optimizers(self):
return torch.optim.Adam(self.parameters(), lr=0.02)
# Init our model
mnist_model = MNISTModel()
# Init DataLoader from MNIST Dataset
train_ds = MNIST(os.getcwd(), train=True, download=True, transform=transforms.ToTensor())
train_loader = DataLoader(train_ds, batch_size=32)
# Initialize a trainer
trainer = pl.Trainer(gpus=1, max_epochs=3, progress_bar_refresh_rate=20)
# Train the model ⚡
trainer.fit(mnist_model, train_loader)
###Output
GPU available: True, used: True
TPU available: False, using: 0 TPU cores
CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
--------------------------------
0 | l1 | Linear | 7 K
###Markdown
More Advanced example
###Code
class LitMNIST(pl.LightningModule):
def __init__(self, data_dir='./', hidden_size=64, learning_rate=2e-4):
super().__init__()
# Set our init args as class attributes
self.data_dir = data_dir
self.hidden_size = hidden_size
self.learning_rate = learning_rate
# Hardcode some dataset specific attributes
self.num_classes = 10
self.dims = (1, 28, 28)
channels, width, height = self.dims
self.transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
# Define PyTorch model
self.model = nn.Sequential(
nn.Flatten(),
nn.Linear(channels * width * height, hidden_size),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(hidden_size, hidden_size),
nn.ReLU(),
nn.Dropout(0.1),
nn.Linear(hidden_size, self.num_classes)
)
def forward(self, x):
x = self.model(x)
return F.log_softmax(x, dim=1)
def training_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
return pl.TrainResult(loss)
def validation_step(self, batch, batch_idx):
x, y = batch
logits = self(x)
loss = F.nll_loss(logits, y)
preds = torch.argmax(logits, dim=1)
acc = accuracy(preds, y)
result = pl.EvalResult(checkpoint_on=loss)
# Calling result.log will surface up scalars for you in TensorBoard
result.log('val_loss', loss, prog_bar=True)
result.log('val_acc', acc, prog_bar=True)
return result
def test_step(self, batch, batch_idx):
# Here we just reuse the validation_step for testing
return self.validation_step(batch, batch_idx)
def configure_optimizers(self):
optimizer = torch.optim.Adam(self.parameters(), lr=self.learning_rate)
return optimizer
####################
# DATA RELATED HOOKS
####################
def prepare_data(self):
# download
MNIST(self.data_dir, train=True, download=True)
MNIST(self.data_dir, train=False, download=True)
def setup(self, stage=None):
# Assign train/val datasets for use in dataloaders
if stage == 'fit' or stage is None:
mnist_full = MNIST(self.data_dir, train=True, transform=self.transform)
self.mnist_train, self.mnist_val = random_split(mnist_full, [55000, 5000])
# Assign test dataset for use in dataloader(s)
if stage == 'test' or stage is None:
self.mnist_test = MNIST(self.data_dir, train=False, transform=self.transform)
def train_dataloader(self):
return DataLoader(self.mnist_train, batch_size=32, num_workers=8)
def val_dataloader(self):
return DataLoader(self.mnist_val, batch_size=32, num_workers=8)
def test_dataloader(self):
return DataLoader(self.mnist_test, batch_size=32, num_workers=8)
model = LitMNIST()
trainer = pl.Trainer(gpus=1, max_epochs=3, progress_bar_refresh_rate=20)
trainer.fit(model)
trainer.test()
###Output
_____no_output_____ |
notebooks/ModelPersistance.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
from sklearn.linear_model import SGDRegressor
from sklearn.model_selection import GridSearchCV
from matplotlib import pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Model Training
###Code
boston_data = load_boston()
X = boston_data.data
y = boston_data.target
model = SGDRegressor()
model
param_grid = {'alpha': [1e-3,1e-2,1e-1,1e0]}
grid_search_cv = GridSearchCV(model, param_grid, n_jobs = -1, cv = 3, return_train_score=True)
grid_search_cv.fit(X,y)
grid_search_cv.best_params_
model.set_params(**grid_search_cv.best_params_)
model.fit(X,y)
from joblib import dump, load
###Output
_____no_output_____
###Markdown
Save the model to the disk. This is the file that should be used for deployment.
###Code
dump(model,"trained_model.gz")
###Output
_____no_output_____
###Markdown
Load the model from the disk
###Code
clf = load("trained_model.gz")
###Output
_____no_output_____
###Markdown
Now use the loaded model to predict on the unseen data
###Code
ypred = clf.predict(X)
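# Hedged sanity check (added, not in the original notebook): the reloaded estimator
# should reproduce the original model's predictions on the same data.
assert np.allclose(model.predict(X), ypred)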
###Output
_____no_output_____ |
exploracion-de-datos/1-Base de Datos.ipynb | ###Markdown
Analysis Report I Importing the Database
###Code
import pandas as pd
# importing
pd.read_csv('datos/alquiler.csv', sep = ';')
datos = pd.read_csv('datos/alquiler.csv', sep = ';')
datos
type(datos)
datos.info()
datos.head(10)
###Output
_____no_output_____
###Markdown
General Information about the Database We create a DataFrame with the data types of each attribute
###Code
datos.dtypes
tipos_de_datos = pd.DataFrame(datos.dtypes, columns = ['Data Types'])
tipos_de_datos.columns.name = 'Variables'
tipos_de_datos
###Output
_____no_output_____
###Markdown
We visualize the dimensions of the database
###Code
datos.shape
datos.shape[0]
datos.shape[1]
print('The database has {} records (properties) and {} variables'.format(datos.shape[0], datos.shape[1]))
###Output
The database has 32960 records (properties) and 9 variables
|
Activation Functions/Linear Function.ipynb | ###Markdown
Linear function The linear function is popular in economics. Linear functions are those whose graph is a straight line. A linear function has the following form: y = f(x) = c + mx. A linear function has one independent variable and one dependent variable. The independent variable is x and the dependent variable is y. c is the constant term or the y-intercept; it is the value of the dependent variable when x = 0. m is the coefficient of the independent variable; it is also known as the slope and gives the rate of change of the dependent variable.
###Code
import numpy as np  # added: required for np.array below
import matplotlib.pyplot as plt  # added: required for the plotting calls below
def line(x,m,c):
return (m*x+c)
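# Worked example (added for clarity): with slope m = 2 and intercept c = 1, the value
# at x = 0 is the intercept and each unit increase in x adds the slope.
assert line(0, 2, 1) == 1
assert line(1, 2, 1) == 3
assert line(2, 2, 1) - line(1, 2, 1) == 2  # the rate of change equals m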
x=np.array(range(-10,11,3))
plt.plot(x,line(x,2,1), linewidth='4')
plt.scatter(x,x,color='r')
x=np.array(range(-10,11,3))
plt.plot(x,line(x,0.5,0.5), linewidth='4')
plt.scatter(x,x,color='r')
x=np.array(range(-10,11,3))
plt.plot(x,line(x,1.5,0.1), linewidth='4')
plt.scatter(x,x,color='r')
x=np.array(range(-10,11,3))
plt.plot(x,line(x,1,0), linewidth='4')
plt.scatter(x,x,color='r')
###Output
_____no_output_____ |
compare_real_bug_finding_ability/create_dataset_from_real_bugs_assignments.ipynb | ###Markdown
Create dataset for wrong assignment bugs, using the commits
###Code
import os
from pathlib import Path
import codecs
import json
from typing import List, Dict, Any
import pandas as pd
from multiprocessing import Pool, cpu_count
from tqdm.notebook import trange, tqdm
benchmarks_dir = '../../../benchmarks'
real_bugs_dataset_file_path = os.path.join(benchmarks_dir, 'assignments_real_bugs.pkl')
real_bugs_dataset_dir = os.path.join(benchmarks_dir, 'assignments_real_bugs')
def read_json_file(json_file_path) -> Dict:
try:
obj_text = codecs.open(json_file_path, 'r', encoding='utf-8').read()
return json.loads(obj_text)
except FileNotFoundError:
print(
"Please provide a correct file p. Eg. ./results/validated-conflicts.json")
return {}
except Exception as e:
# Empty JSON file most likely due to abrupt killing of the process while writing
# print (e)
return {}
def read_dataset_given_files(extracted_data_files: List) -> pd.DataFrame:
d = []
with Pool(cpu_count()) as p:
with tqdm(total=len(extracted_data_files)) as pbar:
pbar.set_description_str(
desc="Reading dataset from files", refresh=False)
for i, each_vars in enumerate(
p.imap_unordered(read_json_file, extracted_data_files, 20)):
pbar.update()
d.extend(each_vars)
p.close()
p.join()
extracted_dataset = pd.DataFrame(d)
return extracted_dataset
def file_path_to_dataset(dataset_file_path, dir_path):
if not Path(dataset_file_path).is_file():
file_paths = list(Path(dir_path).rglob('*.json'))
print(f"Number of files={len(file_paths)}")
dataset = read_dataset_given_files(extracted_data_files=file_paths)
print(f"Saving {dataset_file_path}")
dataset.to_pickle(dataset_file_path, 'gzip')
else:
print(f'Reading from {dataset_file_path}')
dataset = pd.read_pickle(dataset_file_path, 'gzip')
print(f"Dataset contains {len(dataset)} examples")
return dataset
def get_file_loc(row):
d = row.to_dict()
if 'benchmarks/real_bugs_github/buggy_' in d['src']:
file_name = d['src'].replace('benchmarks/real_bugs_github/buggy_', '')
else:
file_name = d['src'].replace('benchmarks/real_bugs_github/correct_', '')
range = str(d['range'][0])
return file_name + '_' + range
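# Example (added for clarity, with a hypothetical file name): for src
# 'benchmarks/real_bugs_github/buggy_foo.js' and range [12, 40], get_file_loc returns
# 'foo.js_12' -- the file name plus the start offset, which is the key later used to
# pair each buggy variant with its corrected counterpart.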
dataset = file_path_to_dataset(dataset_file_path=real_bugs_dataset_file_path, dir_path=real_bugs_dataset_dir)
row_iter = [row for _, row in dataset.iterrows()]
locations = []
for row in tqdm(row_iter):
loc = get_file_loc(row)
locations.append(loc)
dataset['filename_loc'] = locations
dataset['filename_loc']
correct_dataset = dataset[dataset['src'].apply(lambda x: 'correct_' in x)]
buggy_dataset = dataset[dataset['src'].apply(lambda x: 'buggy_' in x)]
correct_dataset.iloc[0, 8]
buggy_dataset.iloc[0, 8]
print(f'Length of correct dataset {len(correct_dataset)}')
print(f'Length of buggy dataset {len(buggy_dataset)}')
merged = correct_dataset.merge(buggy_dataset, left_on='filename_loc', right_on='filename_loc',
suffixes=['_CORRECT', '_BUGGY'])
merged
def get_buggy_non_buggy_data(row):
d = row.to_dict()
correct = {k.replace('_CORRECT', ''): v for k, v in d.items() if '_CORRECT' in k}
correct['probability_that_incorrect'] = 0
buggy = {k.replace('_BUGGY', ''): v for k, v in d.items() if '_BUGGY' in k}
buggy['probability_that_incorrect'] = 1
if correct['lhs'] == buggy['lhs'] and correct['rhs'] != buggy['rhs']:
return [correct, buggy]
else:
return []
correct_assgn = []
buggy_assgn = []
x_y_pair_given = []
for _, row in tqdm(list(merged.iterrows()), desc='Get lines'):
r = get_buggy_non_buggy_data(row)
if len(r):
correct_assgn.append(r[0])
buggy_assgn.append(r[1])
x_y_pair_given.append(r)
print(f'Number of buggy/correct assignments extracted are {len(correct_assgn)}')
def write_json(content, out_file):
with open(out_file, 'w+') as f:
print(f'Writing to {f.name}')
json.dump(content, f)
write_json(x_y_pair_given, os.path.join(benchmarks_dir, 'correct_buggy_real_wrong_assignments.json'))
###Output
_____no_output_____ |
Notebooks_CAVEDU/road_following/train_model.ipynb | ###Markdown
Road Follower - Train Model In this notebook we will train a neural network to take an input image, and output a set of x, y values corresponding to a target. We will be using the PyTorch deep learning framework to train a ResNet-18 neural network architecture model for the road follower application.
###Code
import torch
import torch.optim as optim
import torch.nn.functional as F
import torchvision
import torchvision.datasets as datasets
import torchvision.models as models
import torchvision.transforms as transforms
import glob
import PIL.Image
import os
import numpy as np
###Output
_____no_output_____
###Markdown
Download and extract data Before you start, you should upload the ``road_following_.zip`` file that you created in the ``data_collection.ipynb`` notebook on the robot. > If you're training on the JetBot you collected data on, you can skip this! You should then extract this dataset by calling the command below:
###Code
!unzip -q road_following.zip
###Output
unzip: cannot find or open road_following.zip, road_following.zip.zip or road_following.zip.ZIP.
###Markdown
You should see a folder named ``dataset_all`` appear in the file browser. Create Dataset Instance Here we create a custom ``torch.utils.data.Dataset`` implementation, which implements the ``__len__`` and ``__getitem__`` functions. This class is responsible for loading images and parsing the x, y values from the image filenames. Because we implement the ``torch.utils.data.Dataset`` class, we can use all of the torch data utilities :) We hard-coded some transformations (like color jitter) into our dataset. We made random horizontal flips optional (in case you want to follow a non-symmetric path, like a road where we need to 'stay right'). If it doesn't matter whether your robot follows some convention, you could enable flips to augment the dataset.
###Code
def get_x(path):
"""Gets the x value from the image filename"""
return (float(int(path[3:6])) - 50.0) / 50.0
def get_y(path):
"""Gets the y value from the image filename"""
return (float(int(path[7:10])) - 50.0) / 50.0
class XYDataset(torch.utils.data.Dataset):
def __init__(self, directory, random_hflips=False):
self.directory = directory
self.random_hflips = random_hflips
self.image_paths = glob.glob(os.path.join(self.directory, '*.jpg'))
self.color_jitter = transforms.ColorJitter(0.3, 0.3, 0.3, 0.3)
def __len__(self):
return len(self.image_paths)
def __getitem__(self, idx):
image_path = self.image_paths[idx]
image = PIL.Image.open(image_path)
x = float(get_x(os.path.basename(image_path)))
y = float(get_y(os.path.basename(image_path)))
if float(np.random.rand(1)) > 0.5:
image = transforms.functional.hflip(image)
x = -x
image = self.color_jitter(image)
image = transforms.functional.resize(image, (224, 224))
image = transforms.functional.to_tensor(image)
image = image.numpy()[::-1].copy()
image = torch.from_numpy(image)
image = transforms.functional.normalize(image, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
return image, torch.tensor([x, y]).float()
dataset = XYDataset('dataset_xy', random_hflips=False)
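# Hedged sanity check (added): the slicing in get_x/get_y assumes file names that encode
# the target as three-digit fields, e.g. 'xy_094_070_<uuid>.jpg' (a made-up example name);
# for that name the raw 094/070 values are mapped into the [-1, 1] range.
assert abs(get_x('xy_094_070_example.jpg') - 0.88) < 1e-6
assert abs(get_y('xy_094_070_example.jpg') - 0.40) < 1e-6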
###Output
_____no_output_____
###Markdown
Split dataset into train and test sets Once we read the dataset, we split it into train and test sets. In this example we use a 90%-10% train-test split. The test set will be used to verify the accuracy of the model we train.
###Code
test_percent = 0.1
num_test = int(test_percent * len(dataset))
train_dataset, test_dataset = torch.utils.data.random_split(dataset, [len(dataset) - num_test, num_test])
###Output
_____no_output_____
###Markdown
Create data loaders to load data in batches We use the ``DataLoader`` class to load data in batches, shuffle the data and allow the use of multiple subprocesses. In this example we use a batch size of 16. The batch size will be based on the memory available on your GPU and it can impact the accuracy of the model.
###Code
train_loader = torch.utils.data.DataLoader(
train_dataset,
batch_size=16,
shuffle=True,
num_workers=4
)
test_loader = torch.utils.data.DataLoader(
test_dataset,
batch_size=16,
shuffle=True,
num_workers=4
)
###Output
_____no_output_____
###Markdown
Define Neural Network Model We use the ResNet-18 model available in PyTorch TorchVision. In a process called transfer learning, we can repurpose a pre-trained model (trained on millions of images) for a new task that has possibly much less data available. More details on ResNet-18: https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py More details on Transfer Learning: https://www.youtube.com/watch?v=yofjFQddwHE
###Code
model = models.resnet18(pretrained=True)
###Output
_____no_output_____
###Markdown
The ResNet model has a fully connected (fc) final layer with 512 as ``in_features``, and since we are training a regression that predicts the two values x and y, we set ``out_features`` to 2. Finally, we transfer our model for execution on the GPU.
###Code
model.fc = torch.nn.Linear(512, 2)
device = torch.device('cuda')
model = model.to(device)
###Output
_____no_output_____
###Markdown
Train Regression: We train for 35 epochs and save the best model whenever the test loss improves.
###Code
NUM_EPOCHS = 35
BEST_MODEL_PATH = 'best_steering_model_xy.pth'
best_loss = 1e9
optimizer = optim.Adam(model.parameters())
for epoch in range(NUM_EPOCHS):
model.train()
train_loss = 0.0
for images, labels in iter(train_loader):
images = images.to(device)
labels = labels.to(device)
optimizer.zero_grad()
outputs = model(images)
loss = F.mse_loss(outputs, labels)
train_loss += float(loss)
loss.backward()
optimizer.step()
train_loss /= len(train_loader)
model.eval()
test_loss = 0.0
for images, labels in iter(test_loader):
images = images.to(device)
labels = labels.to(device)
outputs = model(images)
loss = F.mse_loss(outputs, labels)
test_loss += float(loss)
test_loss /= len(test_loader)
print('%f, %f' % (train_loss, test_loss))
if test_loss < best_loss:
torch.save(model.state_dict(), BEST_MODEL_PATH)
best_loss = test_loss
###Output
1.282182, 31.641369
0.629080, 6.712799
0.207432, 14.820867
0.145445, 1.427848
0.075360, 1.088769
0.065170, 0.178638
0.056157, 0.083472
0.053888, 0.103331
0.043079, 0.063917
0.065537, 0.055637
0.027115, 0.046398
0.028136, 0.057677
0.029130, 0.058760
0.024902, 0.056609
0.025495, 0.045249
0.030610, 0.043539
0.035902, 0.051456
0.038679, 0.075726
0.035739, 0.052188
0.022526, 0.071388
0.028849, 0.036784
0.026857, 0.057645
0.019523, 0.077049
0.058323, 0.030911
0.030861, 0.039272
0.027867, 0.066096
0.022365, 0.047160
0.022388, 0.040050
0.013951, 0.033337
0.015337, 0.042537
0.026873, 0.079005
0.012069, 0.033647
0.014868, 0.067680
0.028998, 0.057607
0.042950, 0.028712
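###Markdown
A minimal usage sketch (my addition, not part of the original notebook): reload the best saved weights and run a single batch through the network, assuming the model definition and data loaders from the cells above.
###Code
# Reload the checkpoint that the training loop saved and run one test batch (hedged sketch).
model.load_state_dict(torch.load(BEST_MODEL_PATH))
model = model.to(device)
model.eval()
with torch.no_grad():
    images, labels = next(iter(test_loader))
    preds = model(images.to(device))
print(preds.shape)  # expected: [batch_size, 2] -- one (x, y) prediction per image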
|