path | concatenated_notebook
---|---|
Supervised_Learning/Backpropagation/Backpropagation.ipynb | ###Markdown
BackpropagationThis document will explain the code I used to attempt to implement the backpropagation machine learning algorithm, as well as an attempt to explain my understanding of it. Later on, once I have more experience, I will reflect on this. What is it?Backpropagation is an algorithm that is used in neural networks to learn. It is a form of supervised learning where the user gives it training examples with features and a desired output. The machine tries to predict the output, then it uses a multivariable function in order to calculate the error of each weight and bias. It does this for each training example, then it tries to find the minimum of this function (essentially finding out what weights and biases correspond to the lowest error) by using gradient descent. The algorithm starts at the last layer and then goes back one layer at a time to the first layer, which is why it's called backpropagation. The DatasetThe dataset I used was a simple one: an XOR gate. It has two inputs and each can either be a one or a zero. The output will be zero if both inputs are zeros or both are ones.|Input|Output||-----|------||0, 0 | 0 ||0, 1 | 1 ||1, 0 | 1 ||1, 1 | 0 |This dataset is small, which means that during testing I don't have to wait long periods of time just to see how well the program did. The Neural NetworkThe architecture of the neural network was simple, but I wanted my program to be able to use any size of neural network. In this example I used one with an input layer, one hidden layer, and the output layer which has only one output.W1 and W2 are sets of weights represented as matrices: $$ W^{(1)} = \begin{bmatrix}w_{1} &w_{2}\\ w_{3} &w_{4} \end{bmatrix} $$$$ W^{(2)} = \begin{bmatrix}w_{5} &w_{6}\end{bmatrix} $$ How it predictsThe neural network is given a training example, say (0, 0), to compute the hidden layer. Let's see for example how it calculates the value at h1: $$ x1 * w_{1} + x2 * w_{2} + b^{(1)} $$Where b is a real number called the bias, which is in each perceptron.We can calculate the values of h1 and h2 simultaneously by using matrices: $$ \begin{bmatrix}h1\\ h2\end{bmatrix} = \sigma \left ( \begin{bmatrix}w_{1} &w_{2}\\ w_{3} &w_{4} \end{bmatrix}\begin{bmatrix}x1\\ x2\end{bmatrix} + \begin{bmatrix} b^{(1)} \\ b^{(2)} \end{bmatrix}\right ) $$And for the final output layer: $$ y = \sigma \left ( \begin{bmatrix}w_{5} &w_{6}\end{bmatrix}\begin{bmatrix}h1\\ h2\end{bmatrix} + b^{(y)}\right) $$$ \sigma $ is the activation function, which in this case is the sigmoid function. The activation function takes an input and turns it into a value between zero and one. $$ \sigma (x) = \frac{1}{1+e^{-x}} $$Initially the weights are random values and the biases are zeros, but after training these values will have changed. This whole process is called the forward pass. How it learnsBackpropagation uses a technique called gradient descent, where it tries to find the minimum of the function that takes all the parameters and calculates the error, called the cost function. The cost functionFirstly, the machine needs to know how wrong it was, and to do that an error function is defined. Let $ x^{(i)} $ be the ith training example, $ y^{(i)} $ be the training example's desired output, and $ \hat{y}^{(i)} $ be the machine's prediction.
We define the error of that training example to be the squared error: $$ C(x^{(i)}) = (\hat{y}^{(i)} - y^{(i)})^{2}$$We want to know how well it did over all the training examples, so we take the average squared error over the training examples, letting $ m $ equal the number of training examples and $ W $ be the set of all the weights: $$ C(W) = \frac{1}{m}\sum_{i=1}^{m}(\hat{y}^{(i)}-y^{(i)})^{2} $$ Gradient descentWe cannot plot the cost function because it is a multivariable function whose number of inputs is the number of weights and biases, but I will show a more simplified example in order to give an intuition of what's going on. I will later link a video that shows a really nice animation and explanation of what is going on.A simple cost function with just one parameter is a parabola:We start at a random value of $ w $ and the goal is to minimize the cost, so we need to tell the computer to change $ w $ in a way that gets it closer to the minimum point of the graph. Moving in the direction of the negative of the derivative lowers the value of the cost function, so we take the derivative of the cost function with respect to $ w $. If we have two parameters the simplified version of the cost function will be a paraboloid, so instead of taking a single derivative we have to take a partial derivative for every weight.To compute this derivative we have to use the chain rule. This is where I will let the video do the explaining, as it really helps having animations, and writing out the whole process would make this document really long. (https://www.youtube.com/watch?v=tIeHLnjs5U8)This process repeats itself for the number of iterations specified. CodeI used Python due to its simple syntax and popularity, meaning that there are a lot of resources to help me. The only module I used was NumPy. I wanted to make this as from scratch as possible.First I imported NumPy:
###Code
import numpy as np
###Output
_____no_output_____
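As a quick aside (my addition, not part of the original notebook), the one-parameter gradient-descent idea described above can be illustrated in a few lines. The cost here is the parabola C(w) = (w - 3)^2, whose minimum is at w = 3, and the learning rate is an arbitrary, hypothetical choice:

```python
w = 0.0              # arbitrary starting value
learning_rate = 0.1  # hypothetical step size
for _ in range(100):
    grad = 2 * (w - 3)             # dC/dw for C(w) = (w - 3)**2
    w = w - learning_rate * grad   # step against the gradient
print(w)  # approaches 3, the minimum of the parabola
```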
###Markdown
Then I defined the activation function and its derivative:
###Code
def sigmoid(x):
return 1/(1 + np.exp(-x))
def d_sigmoid(x):
return sigmoid(x) * (1 - sigmoid(x))
###Output
_____no_output_____
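A quick sanity check of these two helpers (illustrative only, not in the original notebook): the sigmoid of 0 is 0.5 and its derivative reaches its maximum of 0.25 there.

```python
print(sigmoid(0))                      # 0.5
print(d_sigmoid(0))                    # 0.25, the maximum of the derivative
print(sigmoid(np.array([-5.0, 5.0])))  # values are squashed into (0, 1)
```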
###Markdown
Made the dataset (XOR gate truth table):
###Code
data_in = np.array([[0, 0],
[0, 1],
[1, 0],
[1, 1]])
data_out = np.array([0, 1, 1, 0])
###Output
_____no_output_____
###Markdown
Initialized the architecture of the weights, making sure to use loops so I can change the architecture easily. The weights started out as random values from the normal distribution and the biases as zeros:
###Code
layer_sizes = (2, 2, 1)
weight_sizes = [(y, x) for y, x in zip(layer_sizes[1:], layer_sizes[:-1])]
weights = [np.random.standard_normal(s) for s in weight_sizes]
biases = [np.zeros((s, 1)) for s in layer_sizes[1:]]
###Output
_____no_output_____
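For reference (an illustrative check, not part of the original notebook), the comprehension above pairs each layer with the one before it, so for layer_sizes = (2, 2, 1) the shapes come out as follows:

```python
print(weight_sizes)                # [(2, 2), (1, 2)]
print([w.shape for w in weights])  # [(2, 2), (1, 2)] -- one matrix per connection between layers
print([b.shape for b in biases])   # [(2, 1), (1, 1)] -- one column of biases per non-input layer
```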
###Markdown
In this notebook the code will be a bit different from now on as I want it to be organized a bit better. I make a function to forward-propagate a sample, which stores all the layers in a list of numpy arrays and then returns them:
###Code
def feedforward(X):
a = X.reshape((layer_sizes[0], 1)) # Turns it into a column matrix
z = a
layers = []
for w, b in zip(weights, biases):
z = np.dot(w, z) + b
layers.append(z)
layers_sigmoid = [sigmoid(i) for i in layers]
layers_sigmoid.insert(0, a) # Adds the input activations to the beginning
return layers, layers_sigmoid
###Output
_____no_output_____
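A small usage sketch (my addition, not in the original notebook): running one XOR sample through the untrained network returns two pre-activation matrices and three activation matrices (the input is prepended).

```python
layers_check, layers_sigmoid_check = feedforward(data_in[0])
print(len(layers_check), len(layers_sigmoid_check))  # 2 and 3
print(layers_sigmoid_check[-1])  # the (currently random) prediction for input [0, 0]
```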
###Markdown
For the backpropagation I will explain each step of the process separately and then put it into a function.First I assign variables to use the returned values from the feedforward step of a random training example:
###Code
ri = np.random.randint(len(data_in))
layers, layers_sigmoid = feedforward(data_in[ri])
###Output
_____no_output_____
###Markdown
I calculate the error $ \hat{y}^{(ri)} - y^{(ri)} $ of each output node and put it in a column matrix (in this case it only has one element):
###Code
cost_matrix = layers_sigmoid[-1] - data_out[ri]
###Output
_____no_output_____
###Markdown
I calculate $ \delta^{L} $, the error of the last layer, as a matrix which in this case will only have one element. I calculate this by multiplying $ \frac{\partial C}{\partial a^{L}} = 2(\hat{y}^{(i)}-y^{(i)}) $ (a matrix of the partial derivatives of the cost function with respect to the final layer outputs) and $ \frac{\partial a^{L}}{\partial z^{L}} = \sigma '(z^{L}) $, where $ z^{L} $ is the last layer's output without the activation function applied and $ a^{L} $ is with the activation function applied. In the end: $$ \delta^{L} = \frac{\partial C}{\partial a^{L}} \frac{\partial a^{L}}{\partial z^{L}} $$
###Code
delC_dela = 2 * cost_matrix
dela_delz = d_sigmoid(layers[-1])
delta_L = delC_dela * dela_delz
print(delta_L)
###Output
[[-0.22331546]]
###Markdown
Having $ \delta^{L} $ means that I can calculate $ \delta $ for each layer, also known as $ \delta^{l} $. I made a list of numpy arrays, with each one holding the values of $ \delta $ for the corresponding layer, so that later on I can apply the changes to the weights all at the same time. The formula used to calculate this is: $$ \delta^{l} = \left (\left (w^{l+1}\right)^{T}\delta^{l+1}\right ) \odot \sigma'\left (z^{l} \right ) $$Where $ w^{l+1} $ and $ \delta^{l+1} $ refer to the next layer's weights (transposed) and deltas. The $ \odot $ is called the Hadamard product and a simple explanation of it would be:$$ \begin{bmatrix}2 \\3 \end{bmatrix} \odot \begin{bmatrix}4 \\5\end{bmatrix} = \begin{bmatrix}2*4 \\3*5\end{bmatrix} $$It is not a common operation but numpy has a good implementation of it. I also needed to make sure that the iteration variable in the loop (i; it's l in the original code but I changed it here because I would sometimes confuse l with 1) would never reach zero, otherwise it would mess up everything. In this case it should return a 2x1 and a 1x1 matrix inside the list, and the last entry of that list should be delta_L.
###Code
delta_l = []
delta_l.append(delta_L) # Adds the last layer deltas
for i in range(1, len(layer_sizes) - 1):
d = -i - 1
if d == 0:
break
g = np.dot(weights[-i].T, delta_l[0]) * d_sigmoid(layers[-i - 1])
delta_l.insert(0, g)
print(delta_l)
###Output
[array([[-0.03535966],
[ 0.03116512]]), array([[-0.22331546]])]
###Markdown
As you can see it does just that. Each matrix of weights is ordered so that each row contains all the weights connected to a perceptron. Now I need to calculate $ \frac{\partial{z}}{\partial{w}} $, which is just equal to the previous layer's activations (the layer the weights stem from). I made a list of numpy arrays with each element being $ \frac{\partial{z}}{\partial{w}} $ for that weight.
###Code
delz_delw = []
for i in range(1, len(layer_sizes)):
r = layers_sigmoid[-i - 1].T * np.ones((weight_sizes[-i]))
delz_delw.insert(0, r)
###Output
_____no_output_____
###Markdown
Remember that the goal was to calculate $ \frac{\partial{C}}{\partial{w}} $ which is $ \frac{\partial C}{\partial a} \frac{\partial a}{\partial z} \frac{\partial{z}}{\partial{w}} $ and if you recall $ \delta = \frac{\partial C}{\partial a} \frac{\partial a}{\partial z} $ then $ \frac{\partial{C}}{\partial{w}} = \frac{\partial{z}}{\partial{w}} \delta $ which is what I calculated.
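Looking ahead (this is my addition, not the original notebook's code): once `delC_delw` is assembled in the next cell, a typical gradient-descent update would subtract a small multiple of it from the weights; the learning rate below is a hypothetical choice, and the biases would be updated analogously using the deltas.

```python
learning_rate = 0.5  # hypothetical value
for i in range(len(weights)):
    weights[i] = weights[i] - learning_rate * delC_delw[i]
```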
###Code
delC_delw = [np.zeros(w) for w in weight_sizes]
for i in range(len(delta_l)):
delC_delw[i] = delta_l[i] * delz_delw[i]
###Output
_____no_output_____ |
pymc3/examples/GLM-linear.ipynb | ###Markdown
The Inference Button: Bayesian GLMs made easy with PyMC3Author: Thomas WieckiThis tutorial appeared as a post in a small series on Bayesian GLMs on my blog: 1. [The Inference Button: Bayesian GLMs made easy with PyMC3](http://twiecki.github.com/blog/2013/08/12/bayesian-glms-1/) 2. [This world is far from Normal(ly distributed): Robust Regression in PyMC3](http://twiecki.github.io/blog/2013/08/27/bayesian-glms-2/) 3. [The Best Of Both Worlds: Hierarchical Linear Regression in PyMC3](http://twiecki.github.io/blog/2014/03/17/bayesian-glms-3/) In this blog post I will talk about: - How the Bayesian Revolution in many scientific disciplines is hindered by poor usability of current Probabilistic Programming languages. - A gentle introduction to Bayesian linear regression and how it differs from the frequentist approach. - A preview of [PyMC3](https://github.com/pymc-devs/pymc/tree/pymc3) (currently in alpha) and its new GLM submodule I wrote to allow creation and estimation of Bayesian GLMs as easy as frequentist GLMs in R.Ready? Let's get started!There is a huge paradigm shift underway in many scientific disciplines: The Bayesian Revolution. While the theoretical benefits of Bayesian over Frequentist stats have been discussed at length elsewhere (see *Further Reading* below), there is a major obstacle that hinders wider adoption -- *usability* (this is one of the reasons DARPA wrote out a huge grant to [improve Probabilistic Programming](http://www.darpa.mil/Our_Work/I2O/Programs/Probabilistic_Programming_for_Advanced_Machine_Learning_%28PPAML%29.aspx)). This is mildly ironic because the beauty of Bayesian statistics is their generality. Frequentist stats have a bazillion different tests for every different scenario. In Bayesian land you define your model exactly as you think is appropriate and hit the *Inference Button(TM)* (i.e. running the magical MCMC sampling algorithm).Yet when I ask my colleagues why they use frequentist stats (even though they would like to use Bayesian stats) the answer is that software packages like SPSS or R make it very easy to run all those individual tests with a single command (and more often than not, they don't know the exact model and inference method being used).While there are great Bayesian software packages like [JAGS](http://mcmc-jags.sourceforge.net/), [BUGS](http://www.mrc-bsu.cam.ac.uk/bugs/), [Stan](http://mc-stan.org/) and [PyMC](http://pymc-devs.github.io/pymc/), they are written for Bayesian statisticians who know very well what model they want to build. Unfortunately, ["the vast majority of statistical analysis is not performed by statisticians"](http://simplystatistics.org/2013/06/14/the-vast-majority-of-statistical-analysis-is-not-performed-by-statisticians/) -- so what we really need are tools for *scientists* and not for statisticians.In the interest of putting my code where my mouth is I wrote a submodule for the upcoming [PyMC3](https://github.com/pymc-devs/pymc/tree/pymc3) that makes construction of Bayesian Generalized Linear Models (GLMs) as easy as Frequentist ones in R.Linear Regression-----------------While future blog posts will explore more complex models, I will start here with the simplest GLM -- linear regression.In general, frequentists think about Linear Regression as follows:$$ Y = X\beta + \epsilon $$where $Y$ is the output we want to predict (or *dependent* variable), $X$ is our predictor (or *independent* variable), and $\beta$ are the coefficients (or parameters) of the model we want to estimate.
$\epsilon$ is an error term which is assumed to be normally distributed. We can then use Ordinary Least Squares or Maximum Likelihood to find the best fitting $\beta$.Probabilistic Reformulation---------------------------Bayesians take a probabilistic view of the world and express this model in terms of probability distributions. Our above linear regression can be rewritten to yield:$$ Y \sim \mathcal{N}(X \beta, \sigma^2) $$In words, we view $Y$ as a random variable (or random vector) of which each element (data point) is distributed according to a Normal distribution. The mean of this normal distribution is provided by our linear predictor with variance $\sigma^2$.While this is essentially the same model, there are two critical advantages of Bayesian estimation: - Priors: We can quantify any prior knowledge we might have by placing priors on the parameters. For example, if we think that $\sigma$ is likely to be small we would choose a prior with more probability mass on low values. - Quantifying uncertainty: We do not get a single estimate of $\beta$ as above but instead a complete posterior distribution about how likely different values of $\beta$ are. For example, with few data points our uncertainty in $\beta$ will be very high and we'd be getting very wide posteriors. Bayesian GLMs in PyMC3----------------------With the new GLM module in PyMC3 it is very easy to build this and much more complex models.First, let's import the required modules.
###Code
%matplotlib inline
from pymc3 import *
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Generating data Create some toy data to play around with and scatter-plot it. Essentially we are creating a regression line defined by intercept and slope, and adding data points by sampling from a Normal with the mean set to the regression line.
###Code
size = 200
true_intercept = 1
true_slope = 2
x = np.linspace(0, 1, size)
# y = a + b*x
true_regression_line = true_intercept + true_slope * x
# add noise
y = true_regression_line + np.random.normal(scale=.5, size=size)
data = dict(x=x, y=y)
fig = plt.figure(figsize=(7, 7))
ax = fig.add_subplot(111, xlabel='x', ylabel='y', title='Generated data and underlying model')
ax.plot(x, y, 'x', label='sampled data')
ax.plot(x, true_regression_line, label='true regression line', lw=2.)
plt.legend(loc=0);
###Output
_____no_output_____
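For comparison with the Bayesian fit below (an illustrative aside, not part of the original post), a frequentist point estimate of the same toy data takes one line with NumPy's polynomial fit; the coefficients should land close to the true intercept of 1 and slope of 2.

```python
slope_hat, intercept_hat = np.polyfit(x, y, 1)  # degree-1 fit returns [slope, intercept]
print(intercept_hat, slope_hat)
```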
###Markdown
Estimating the model Let's fit a Bayesian linear regression model to this data. As you can see, model specifications in `PyMC3` are wrapped in a `with` statement. Here we use the awesome new [NUTS sampler](http://arxiv.org/abs/1111.4246) (our Inference Button) to draw 2000 posterior samples.
###Code
with Model() as model: # model specifications in PyMC3 are wrapped in a with-statement
# Define priors
sigma = HalfCauchy('sigma', beta=10, testval=1.)
intercept = Normal('Intercept', 0, sd=20)
x_coeff = Normal('x', 0, sd=20)
# Define likelihood
likelihood = Normal('y', mu=intercept + x_coeff * x,
sd=sigma, observed=y)
# Inference!
start = find_MAP() # Find starting value by optimization
step = NUTS(scaling=start) # Instantiate MCMC sampling algorithm
trace = sample(2000, step, start=start, progressbar=False) # draw 2000 posterior samples using NUTS sampling
###Output
_____no_output_____
###Markdown
This should be fairly readable for people who know probabilistic programming. However, would my non-statistician friend know what all this does? Moreover, recall that this is an extremely simple model that would be one line in R. Having multiple, potentially transformed regressors, interaction terms or link-functions would also make this much more complex and error prone. The new `glm()` function instead takes a [Patsy](http://patsy.readthedocs.org/en/latest/quickstart.html) linear model specifier from which it creates a design matrix. `glm()` then adds random variables for each of the coefficients and an appropriate likelihood to the model.
###Code
with Model() as model:
# specify glm and pass in data. The resulting linear model, its likelihood and
# and all its parameters are automatically added to our model.
glm.glm('y ~ x', data)
start = find_MAP()
step = NUTS(scaling=start) # Instantiate MCMC sampling algorithm
trace = sample(2000, step, progressbar=False) # draw 2000 posterior samples using NUTS sampling
###Output
_____no_output_____
###Markdown
Much shorter, but this code does the exact same thing as the above model specification (you can change the priors and everything else too if you want). `glm()` parses the `Patsy` model string, adds random variables for each regressor (`Intercept` and slope `x` in this case), adds a likelihood (by default, a Normal is chosen), and all other variables (`sigma`). Finally, `glm()` then initializes the parameters to a good starting point by estimating a frequentist linear model using [statsmodels](http://statsmodels.sourceforge.net/devel/).If you are not familiar with R's syntax, `'y ~ x'` specifies that we have an output variable `y` that we want to estimate as a linear function of `x`. Analyzing the model Bayesian inference does not give us only one best fitting line (as maximum likelihood does) but rather a whole posterior distribution of likely parameters. Let's plot the posterior distribution of our parameters and the individual samples we drew.
###Code
plt.figure(figsize=(7, 7))
traceplot(trace[100:])
plt.tight_layout();
###Output
_____no_output_____
###Markdown
The left side shows our marginal posterior -- for each parameter value on the x-axis we get a probability on the y-axis that tells us how likely that parameter value is.There are a couple of things to see here. The first is that our sampling chains for the individual parameters (left side) seem well converged and stationary (there are no large drifts or other odd patterns).Secondly, the maximum posterior estimate of each variable (the peak in the left side distributions) is very close to the true parameters used to generate the data (`x` is the regression coefficient and `sigma` is the standard deviation of our normal). In the GLM we thus do not only have one best fitting regression line, but many. A posterior predictive plot takes multiple samples from the posterior (intercepts and slopes) and plots a regression line for each of them. Here we are using the `glm.plot_posterior_predictive()` convenience function for this.
###Code
plt.figure(figsize=(7, 7))
plt.plot(x, y, 'x', label='data')
glm.plot_posterior_predictive(trace, samples=100,
label='posterior predictive regression lines')
plt.plot(x, true_regression_line, label='true regression line', lw=3., c='y')
plt.title('Posterior predictive regression lines')
plt.legend(loc=0)
plt.xlabel('x')
plt.ylabel('y');
###Output
_____no_output_____ |
Course-1-Analyze Datasets and Train ML Models using AutoML/Week-1/C1_W1_Assignment.ipynb | ###Markdown
Register and visualize dataset IntroductionIn this lab you will ingest and transform the customer product reviews dataset. Then you will use AWS data stack services such as AWS Glue and Amazon Athena for ingesting and querying the dataset. Finally you will use AWS Data Wrangler to analyze the dataset and plot some visuals extracting insights. Table of Contents- [1. Ingest and transform the public dataset](c1w1-1.) - [1.1. List the dataset files in the public S3 bucket](c1w1-1.1.) - [Exercise 1](c1w1-ex-1) - [1.2. Copy the data locally to the notebook](c1w1-1.2.) - [1.3. Transform the data](c1w1-1.3.) - [1.4 Write the data to a CSV file](c1w1-1.4.)- [2. Register the public dataset for querying and visualizing](c1w1-2.) - [2.1. Register S3 dataset files as a table for querying](c1w1-2.1.) - [Exercise 2](c1w1-ex-2) - [2.2. Create default S3 bucket for Amazon Athena](c1w1-2.2.)- [3. Visualize data](c1w1-3.) - [3.1. Preparation for data visualization](c1w1-3.1.) - [3.2. How many reviews per sentiment?](c1w1-3.2.) - [Exercise 3](c1w1-ex-3) - [3.3. Which product categories are highest rated by average sentiment?](c1w1-3.3.) - [3.4. Which product categories have the most reviews?](c1w1-3.4.) - [Exercise 4](c1w1-ex-4) - [3.5. What is the breakdown of sentiments per product category?](c1w1-3.5.) - [3.6. Analyze the distribution of review word counts](c1w1-3.6.) Let's install the required modules first.
###Code
# please ignore warning messages during the installation
!pip install --disable-pip-version-check -q sagemaker==2.35.0
!pip install --disable-pip-version-check -q pandas==1.1.4
!pip install --disable-pip-version-check -q awswrangler==2.7.0
!pip install --disable-pip-version-check -q numpy==1.18.5
!pip install --disable-pip-version-check -q seaborn==0.11.0
!pip install --disable-pip-version-check -q matplotlib===3.3.3
###Output
/opt/conda/lib/python3.7/site-packages/secretstorage/dhcrypto.py:16: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/opt/conda/lib/python3.7/site-packages/secretstorage/util.py:25: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv[0m
/opt/conda/lib/python3.7/site-packages/secretstorage/dhcrypto.py:16: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/opt/conda/lib/python3.7/site-packages/secretstorage/util.py:25: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv[0m
/opt/conda/lib/python3.7/site-packages/secretstorage/dhcrypto.py:16: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/opt/conda/lib/python3.7/site-packages/secretstorage/util.py:25: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
conda 4.10.3 requires ruamel_yaml_conda>=0.11.14, which is not installed.[0m
[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv[0m
/opt/conda/lib/python3.7/site-packages/secretstorage/dhcrypto.py:16: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/opt/conda/lib/python3.7/site-packages/secretstorage/util.py:25: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv[0m
/opt/conda/lib/python3.7/site-packages/secretstorage/dhcrypto.py:16: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/opt/conda/lib/python3.7/site-packages/secretstorage/util.py:25: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv[0m
/opt/conda/lib/python3.7/site-packages/secretstorage/dhcrypto.py:16: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/opt/conda/lib/python3.7/site-packages/secretstorage/util.py:25: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv[0m
###Markdown
1. Ingest and transform the public datasetThe dataset [Women's Clothing Reviews](https://www.kaggle.com/nicapotato/womens-ecommerce-clothing-reviews) has been chosen as the main dataset.It is shared in a public Amazon S3 bucket, and is available as a comma-separated value (CSV) text format:`s3://dlai-practical-data-science/data/raw/womens_clothing_ecommerce_reviews.csv` 1.1. List the dataset files in the public S3 bucketThe [AWS Command Line Interface (CLI)](https://awscli.amazonaws.com/v2/documentation/api/latest/index.html) is a unified tool to manage your AWS services. With just one tool, you can control multiple AWS services from the command line and automate them through scripts. You will use it to list the dataset files. **View dataset files in CSV format** ```aws s3 ls [bucket_name]``` function lists all objects in the S3 bucket. Let's use it to view the reviews data files in CSV format: Exercise 1View the list of the files available in the public bucket `s3://dlai-practical-data-science/data/raw/`.**Instructions**:Use `aws s3 ls [bucket_name]` function. To run the AWS CLI command from the notebook you will need to put an exclamation mark in front of it: `!aws`. You should see the data file `womens_clothing_ecommerce_reviews.csv` in the list.
###Code
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
!aws s3 ls dlai-practical-data-science/data/raw/
### END SOLUTION - DO NOT delete this comment for grading purposes
# EXPECTED OUTPUT
# ... womens_clothing_ecommerce_reviews.csv
###Output
2021-04-30 02:21:06 8457214 womens_clothing_ecommerce_reviews.csv
###Markdown
1.2. Copy the data locally to the notebook ```aws s3 cp [bucket_name/file_name] [file_name]``` function copies the file from the S3 bucket into the local environment or into another S3 bucket. Let's use it to copy the file with the dataset locally.
###Code
!aws s3 cp s3://dlai-practical-data-science/data/raw/womens_clothing_ecommerce_reviews.csv ./womens_clothing_ecommerce_reviews.csv
###Output
download: s3://dlai-practical-data-science/data/raw/womens_clothing_ecommerce_reviews.csv to ./womens_clothing_ecommerce_reviews.csv
###Markdown
Now use the Pandas dataframe to load and preview the data.
###Code
import pandas as pd
import csv
df = pd.read_csv('./womens_clothing_ecommerce_reviews.csv',
index_col=0)
df.shape
df
###Output
_____no_output_____
###Markdown
1.3. Transform the dataTo simplify the task, you will transform the data into a comma-separated value (CSV) file that contains only a `review_body`, `product_category`, and `sentiment` derived from the original data.
###Code
df_transformed = df.rename(columns={'Review Text': 'review_body',
'Rating': 'star_rating',
'Class Name': 'product_category'})
df_transformed.drop(columns=['Clothing ID', 'Age', 'Title', 'Recommended IND', 'Positive Feedback Count', 'Division Name', 'Department Name'],
inplace=True)
df_transformed.dropna(inplace=True)
df_transformed.shape
###Output
_____no_output_____
###Markdown
Now convert the `star_rating` into the `sentiment` (positive, neutral, negative), which later on will be for the prediction.
###Code
def to_sentiment(star_rating):
if star_rating in {1, 2}: # negative
return -1
if star_rating == 3: # neutral
return 0
if star_rating in {4, 5}: # positive
return 1
# transform star_rating into the sentiment
df_transformed['sentiment'] = df_transformed['star_rating'].apply(lambda star_rating:
to_sentiment(star_rating=star_rating)
)
# drop the star rating column
df_transformed.drop(columns=['star_rating'],
inplace=True)
# remove reviews for product_categories with < 10 reviews
df_transformed = df_transformed.groupby('product_category').filter(lambda reviews : len(reviews) > 10)[['sentiment', 'review_body', 'product_category']]
df_transformed.shape
# preview the results
df_transformed
###Output
_____no_output_____
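A quick sanity check of the mapping (illustrative only, not part of the graded notebook): star ratings 1 through 5 should map to [-1, -1, 0, 1, 1].

```python
print([to_sentiment(star_rating=s) for s in [1, 2, 3, 4, 5]])  # [-1, -1, 0, 1, 1]
```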
###Markdown
1.4 Write the data to a CSV file
###Code
df_transformed.to_csv('./womens_clothing_ecommerce_reviews_transformed.csv',
index=False)
!head -n 5 ./womens_clothing_ecommerce_reviews_transformed.csv
###Output
sentiment,review_body,product_category
1,If this product was in petite i would get the petite. the regular is a little long on me but a tailor can do a simple fix on that. fits nicely! i'm 5'4 130lb and pregnant so i bough t medium to grow into. the tie can be front or back so provides for some nice flexibility on form fitting.,Blouses
1,"Love this dress! it's sooo pretty. i happened to find it in a store and i'm glad i did bc i never would have ordered it online bc it's petite. i bought a petite and am 5'8"". i love the length on me- hits just a little below the knee. would definitely be a true midi on someone who is truly petite.",Dresses
0,I had such high hopes for this dress and really wanted it to work for me. i initially ordered the petite small (my usual size) but i found this to be outrageously small. so small in fact that i could not zip it up! i reordered it in petite medium which was just ok. overall the top half was comfortable and fit nicely but the bottom half had a very tight under layer and several somewhat cheap (net) over layers. imo a major design flaw was the net over layer sewn directly into the zipper - it c,Dresses
1,I love love love this jumpsuit. it's fun flirty and fabulous! every time i wear it i get nothing but great compliments!,Pants
###Markdown
2. Register the public dataset for querying and visualizingYou will register the public dataset into an S3-backed database table so you can query and visualize our dataset at scale. 2.1. Register S3 dataset files as a table for queryingLet's import required modules.`boto3` is the AWS SDK for Python to create, configure, and manage AWS services, such as Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Simple Storage Service (Amazon S3). The SDK provides an object-oriented API as well as low-level access to AWS services. `sagemaker` is the SageMaker Python SDK which provides several high-level abstractions for working with the Amazon SageMaker.
###Code
import boto3
import sagemaker
import pandas as pd
import numpy as np
import botocore
config = botocore.config.Config(user_agent_extra='dlai-pds/c1/w1')
# low-level service client of the boto3 session
sm = boto3.client(service_name='sagemaker',
config=config)
sess = sagemaker.Session(sagemaker_client=sm)
bucket = sess.default_bucket()
role = sagemaker.get_execution_role()
region = sess.boto_region_name
account_id = sess.account_id
print('S3 Bucket: {}'.format(bucket))
print('Region: {}'.format(region))
print('Account ID: {}'.format(account_id))
###Output
S3 Bucket: sagemaker-us-east-1-116199014196
Region: us-east-1
Account ID: <bound method Session.account_id of <sagemaker.session.Session object at 0x7ff05817a290>>
###Markdown
Review the empty bucket which was created automatically for this account.**Instructions**: - open the link- click on the S3 bucket name `sagemaker-us-east-1-ACCOUNT`- check that it is empty at this stage
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://s3.console.aws.amazon.com/s3/home?region={}#">Amazon S3 buckets</a></b>'.format(region)))
###Output
_____no_output_____
###Markdown
Copy the file into the S3 bucket.
###Code
!aws s3 cp ./womens_clothing_ecommerce_reviews_transformed.csv s3://$bucket/data/transformed/womens_clothing_ecommerce_reviews_transformed.csv
###Output
upload: ./womens_clothing_ecommerce_reviews_transformed.csv to s3://sagemaker-us-east-1-116199014196/data/transformed/womens_clothing_ecommerce_reviews_transformed.csv
###Markdown
Review the bucket with the file we uploaded above.**Instructions**: - open the link- check that the CSV file is located in the S3 bucket- check the location directory structure is the same as in the CLI command above- click on the file name and see the available information about the file (region, size, S3 URI, Amazon Resource Name (ARN))
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://s3.console.aws.amazon.com/s3/buckets/{}?region={}&prefix=data/transformed/#">Amazon S3 buckets</a></b>'.format(bucket, region)))
###Output
_____no_output_____
###Markdown
**Import AWS Data Wrangler**[AWS Data Wrangler](https://github.com/awslabs/aws-data-wrangler) is an AWS Professional Service open source python initiative that extends the power of Pandas library to AWS connecting dataframes and AWS data related services (Amazon Redshift, AWS Glue, Amazon Athena, Amazon EMR, Amazon QuickSight, etc).Built on top of other open-source projects like Pandas, Apache Arrow, Boto3, SQLAlchemy, Psycopg2 and PyMySQL, it offers abstracted functions to execute usual ETL tasks like load/unload data from data lakes, data warehouses and databases. Review the AWS Data Wrangler documentation: https://aws-data-wrangler.readthedocs.io/en/stable/
###Code
import awswrangler as wr
###Output
_____no_output_____
###Markdown
**Create AWS Glue Catalog database** The data catalog features of **AWS Glue** and the inbuilt integration to Amazon S3 simplify the process of identifying data and deriving the schema definition out of the discovered data. Using AWS Glue crawlers within your data catalog, you can traverse your data stored in Amazon S3 and build out the metadata tables that are defined in your data catalog.Here you will use `wr.catalog.create_database` function to create a database with the name `dsoaws_deep_learning` ("dsoaws" stands for "Data Science on AWS").
###Code
wr.catalog.create_database(
name='dsoaws_deep_learning',
exist_ok=True
)
dbs = wr.catalog.get_databases()
for db in dbs:
print("Database name: " + db['Name'])
###Output
Database name: dsoaws_deep_learning
###Markdown
Review the created database in the AWS Glue Catalog.**Instructions**:- open the link- on the left side panel notice that you are in the AWS Glue -> Data Catalog -> Databases- check that the database `dsoaws_deep_learning` has been created- click on the name of the database- click on the `Tables in dsoaws_deep_learning` link to see that there are no tables
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://console.aws.amazon.com/glue/home?region={}#catalog:tab=databases">AWS Glue Databases</a></b>'.format(region)))
###Output
_____no_output_____
###Markdown
**Register CSV data with AWS Glue Catalog** Exercise 2Register CSV data with AWS Glue Catalog.**Instructions**:Use ```wr.catalog.create_csv_table``` function with the following parameters```pythonres = wr.catalog.create_csv_table( database='...', AWS Glue Catalog database name path='s3://{}/data/transformed/'.format(bucket), S3 object path for the data table='reviews', registered table name columns_types={ 'sentiment': 'int', 'review_body': 'string', 'product_category': 'string' }, mode='overwrite', skip_header_line_count=1, sep=',' )```
###Code
wr.catalog.create_csv_table(
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
database='dsoaws_deep_learning', # Replace None
### END SOLUTION - DO NOT delete this comment for grading purposes
path='s3://{}/data/transformed/'.format(bucket),
table="reviews",
columns_types={
'sentiment': 'int',
'review_body': 'string',
'product_category': 'string'
},
mode='overwrite',
skip_header_line_count=1,
sep=','
)
###Output
_____no_output_____
###Markdown
Review the registered table in the AWS Glue Catalog.**Instructions**:- open the link- on the left side panel notice that you are in the AWS Glue -> Data Catalog -> Databases -> Tables- check that you can see the table `reviews` from the database `dsoaws_deep_learning` in the list- click on the name of the table- explore the available information about the table (name, database, classification, location, schema etc.)
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://console.aws.amazon.com/glue/home?region={}#">AWS Glue Catalog</a></b>'.format(region)))
###Output
_____no_output_____
###Markdown
Review the table shape:
###Code
table = wr.catalog.table(database='dsoaws_deep_learning',
table='reviews')
table
###Output
_____no_output_____
###Markdown
2.2. Create default S3 bucket for Amazon AthenaAmazon Athena requires this S3 bucket to store temporary query results and improve performance of subsequent queries.The contents of this bucket are mostly binary and human-unreadable.
###Code
# S3 bucket name
wr.athena.create_athena_bucket()
# EXPECTED OUTPUT
# 's3://aws-athena-query-results-ACCOUNT-REGION/'
###Output
_____no_output_____
###Markdown
3. Visualize data**Reviews dataset - column descriptions**- `sentiment`: The review's sentiment (-1, 0, 1).- `product_category`: Broad product category that can be used to group reviews (in this case women's clothing categories).- `review_body`: The text of the review. 3.1. Preparation for data visualization**Imports**
###Code
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format='retina'
###Output
_____no_output_____
###Markdown
**Settings** Set AWS Glue database and table name.
###Code
# Do not change the database and table names - they are used for grading purposes!
database_name = 'dsoaws_deep_learning'
table_name = 'reviews'
###Output
_____no_output_____
###Markdown
Set seaborn parameters. You can review seaborn documentation following the [link](https://seaborn.pydata.org/index.html).
###Code
sns.set_style = 'seaborn-whitegrid'
sns.set(rc={"font.style":"normal",
"axes.facecolor":"white",
'grid.color': '.8',
'grid.linestyle': '-',
"figure.facecolor":"white",
"figure.titlesize":20,
"text.color":"black",
"xtick.color":"black",
"ytick.color":"black",
"axes.labelcolor":"black",
"axes.grid":True,
'axes.labelsize':10,
'xtick.labelsize':10,
'font.size':10,
'ytick.labelsize':10})
###Output
_____no_output_____
###Markdown
Helper code to display values on barplots: **Run SQL queries using Amazon Athena** **Amazon Athena** lets you query data in Amazon S3 using a standard SQL interface. It reflects the databases and tables in the AWS Glue Catalog. You can create interactive queries and perform any data manipulations required for further downstream processing. Standard SQL query can be saved as a string and then passed as a parameter into the Athena query. Run the following cells as an example to count the total number of reviews by sentiment. The SQL query here will take the following form:```sqlSELECT column_name, COUNT(column_name) as new_column_nameFROM table_nameGROUP BY column_nameORDER BY column_name```If you are not familiar with the SQL query statements, you can review some tutorials following the [link](https://www.w3schools.com/sql/default.asp). 3.2. How many reviews per sentiment? Set the SQL statement to find the count of sentiments:
###Code
statement_count_by_sentiment = """
SELECT sentiment, COUNT(sentiment) AS count_sentiment
FROM reviews
GROUP BY sentiment
ORDER BY sentiment
"""
print(statement_count_by_sentiment)
###Output
SELECT sentiment, COUNT(sentiment) AS count_sentiment
FROM reviews
GROUP BY sentiment
ORDER BY sentiment
###Markdown
Query data in Amazon Athena database cluster using the prepared SQL statement:
###Code
df_count_by_sentiment = wr.athena.read_sql_query(
sql=statement_count_by_sentiment,
database=database_name
)
print(df_count_by_sentiment)
###Output
sentiment count_sentiment
0 -1 2370
1 0 2823
2 1 17433
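As a local cross-check (an illustrative aside, assuming `df_transformed` from section 1 is still in memory), the same counts can be reproduced with pandas and should match the Athena result above.

```python
print(df_transformed.groupby('sentiment').size())  # expected: -1 -> 2370, 0 -> 2823, 1 -> 17433
```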
###Markdown
Preview the results of the query:
###Code
df_count_by_sentiment.plot(kind='bar', x='sentiment', y='count_sentiment', rot=0)
###Output
_____no_output_____
###Markdown
Exercise 3Use Amazon Athena query with the standard SQL statement passed as a parameter, to calculate the total number of reviews per `product_category` in the table ```reviews```.**Instructions**: Pass the SQL statement of the form```sqlSELECT category_column, COUNT(column_name) AS new_column_nameFROM table_nameGROUP BY category_columnORDER BY new_column_name DESC```as a triple quote string into the variable `statement_count_by_category`. Please use the column `sentiment` in the `COUNT` function and give it a new name `count_sentiment`.
###Code
# Replace all None
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
statement_count_by_category = """
SELECT product_category, COUNT(sentiment) AS count_sentiment
FROM reviews
GROUP BY product_category
ORDER BY count_sentiment DESC
"""
### END SOLUTION - DO NOT delete this comment for grading purposes
print(statement_count_by_category)
###Output
SELECT product_category, COUNT(sentiment) AS count_sentiment
FROM reviews
GROUP BY product_category
ORDER BY count_sentiment DESC
###Markdown
Query data in Amazon Athena database passing the prepared SQL statement:
###Code
%%time
df_count_by_category = wr.athena.read_sql_query(
sql=statement_count_by_category,
database=database_name
)
df_count_by_category
# EXPECTED OUTPUT
# Dresses: 6145
# Knits: 4626
# Blouses: 2983
# Sweaters: 1380
# Pants: 1350
# ...
###Output
CPU times: user 347 ms, sys: 12.8 ms, total: 359 ms
Wall time: 3.31 s
###Markdown
3.3. Which product categories are highest rated by average sentiment? Set the SQL statement to find the average sentiment per product category, showing the results in the descending order:
###Code
statement_avg_by_category = """
SELECT product_category, AVG(sentiment) AS avg_sentiment
FROM {}
GROUP BY product_category
ORDER BY avg_sentiment DESC
""".format(table_name)
print(statement_avg_by_category)
###Output
SELECT product_category, AVG(sentiment) AS avg_sentiment
FROM reviews
GROUP BY product_category
ORDER BY avg_sentiment DESC
###Markdown
Query data in Amazon Athena database passing the prepared SQL statement:
###Code
%%time
df_avg_by_category = wr.athena.read_sql_query(
sql=statement_avg_by_category,
database=database_name
)
###Output
CPU times: user 228 ms, sys: 23.5 ms, total: 251 ms
Wall time: 3.36 s
###Markdown
Preview the query results in the temporary S3 bucket: `s3://aws-athena-query-results-ACCOUNT-REGION/`**Instructions**: - open the link- check the name of the S3 bucket- briefly check the content of it
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://s3.console.aws.amazon.com/s3/buckets/aws-athena-query-results-{}-{}?region={}">Amazon S3 buckets</a></b>'.format(account_id, region, region)))
###Output
_____no_output_____
###Markdown
Preview the results of the query:
###Code
df_avg_by_category
###Output
_____no_output_____
###Markdown
**Visualization**
###Code
def show_values_barplot(axs, space):
def _show_on_plot(ax):
for p in ax.patches:
_x = p.get_x() + p.get_width() + float(space)
_y = p.get_y() + p.get_height()
value = round(float(p.get_width()),2)
ax.text(_x, _y, value, ha="left")
if isinstance(axs, np.ndarray):
for idx, ax in np.ndenumerate(axs):
_show_on_plot(ax)
else:
_show_on_plot(axs)
# Create plot
barplot = sns.barplot(
data = df_avg_by_category,
y='product_category',
x='avg_sentiment',
color="b",
saturation=1
)
# Set the size of the figure
sns.set(rc={'figure.figsize':(15.0, 10.0)})
# Set title and x-axis ticks
plt.title('Average sentiment by product category')
#plt.xticks([-1, 0, 1], ['Negative', 'Neutral', 'Positive'])
# Helper code to show actual values afters bars
show_values_barplot(barplot, 0.1)
plt.xlabel("Average sentiment")
plt.ylabel("Product category")
plt.tight_layout()
# Do not change the figure name - it is used for grading purposes!
plt.savefig('avg_sentiment_per_category.png', dpi=300)
# Show graphic
plt.show(barplot)
# Upload image to S3 bucket
sess.upload_data(path='avg_sentiment_per_category.png', bucket=bucket, key_prefix="images")
###Output
_____no_output_____
###Markdown
Review the bucket on the account.**Instructions**: - open the link- click on the S3 bucket name `sagemaker-us-east-1-ACCOUNT`- open the images folder- check the existence of the image `avg_sentiment_per_category.png`- if you click on the image name, you can see the information about the image file. You can also download the file with the command on the top right Object Actions -> Download / Download as
###Code
from IPython.core.display import display, HTML
display(HTML('<b>Review <a target="top" href="https://s3.console.aws.amazon.com/s3/home?region={}">Amazon S3 buckets</a></b>'.format(region)))
###Output
_____no_output_____
###Markdown
3.4. Which product categories have the most reviews?Set the SQL statement to find the count of sentiment per product category, showing the results in the descending order:
###Code
statement_count_by_category_desc = """
SELECT product_category, COUNT(*) AS count_reviews
FROM {}
GROUP BY product_category
ORDER BY count_reviews DESC
""".format(table_name)
print(statement_count_by_category_desc)
###Output
SELECT product_category, COUNT(*) AS count_reviews
FROM reviews
GROUP BY product_category
ORDER BY count_reviews DESC
###Markdown
Query data in Amazon Athena database passing the prepared SQL statement:
###Code
%%time
df_count_by_category_desc = wr.athena.read_sql_query(
sql=statement_count_by_category_desc,
database=database_name
)
###Output
CPU times: user 268 ms, sys: 2.65 ms, total: 270 ms
Wall time: 3.42 s
###Markdown
Store the maximum number of reviews (in a single category) for the visualization plot:
###Code
max_sentiment = df_count_by_category_desc['count_reviews'].max()
print('Highest number of reviews (in a single category): {}'.format(max_sentiment))
###Output
Highest number of reviews (in a single category): 6145
###Markdown
**Visualization** Exercise 4Use `barplot` function to plot number of reviews per product category.**Instructions**: Use the `barplot` chart example in the previous section, passing the newly defined dataframe `df_count_by_category_desc` with the count of reviews. Here, please put the `product_category` column into the `y` argument.
###Code
# Create seaborn barplot
barplot = sns.barplot(
### BEGIN SOLUTION - DO NOT delete this comment for grading purposes
data=df_count_by_category_desc, # Replace None
y='product_category', # Replace None
x='count_reviews', # Replace None
### END SOLUTION - DO NOT delete this comment for grading purposes
color="b",
saturation=1
)
# Set the size of the figure
sns.set(rc={'figure.figsize':(15.0, 10.0)})
# Set title
plt.title("Number of reviews per product category")
plt.xlabel("Number of reviews")
plt.ylabel("Product category")
plt.tight_layout()
# Do not change the figure name - it is used for grading purposes!
plt.savefig('num_reviews_per_category.png', dpi=300)
# Show the barplot
plt.show(barplot)
# Upload image to S3 bucket
sess.upload_data(path='num_reviews_per_category.png', bucket=bucket, key_prefix="images")
###Output
_____no_output_____
###Markdown
3.5. What is the breakdown of sentiments per product category? Set the SQL statement to find the count of sentiment per product category and sentiment:
###Code
statement_count_by_category_and_sentiment = """
SELECT product_category,
sentiment,
COUNT(*) AS count_reviews
FROM {}
GROUP BY product_category, sentiment
ORDER BY product_category ASC, sentiment DESC, count_reviews
""".format(table_name)
print(statement_count_by_category_and_sentiment)
###Output
SELECT product_category,
sentiment,
COUNT(*) AS count_reviews
FROM reviews
GROUP BY product_category, sentiment
ORDER BY product_category ASC, sentiment DESC, count_reviews
###Markdown
Query data in Amazon Athena database passing the prepared SQL statement:
###Code
%%time
df_count_by_category_and_sentiment = wr.athena.read_sql_query(
sql=statement_count_by_category_and_sentiment,
database=database_name
)
###Output
CPU times: user 233 ms, sys: 4.09 ms, total: 237 ms
Wall time: 2.75 s
###Markdown
Prepare for stacked percentage horizontal bar plot showing proportion of sentiments per product category.
###Code
# Create grouped dataframes by category and by sentiment
grouped_category = df_count_by_category_and_sentiment.groupby('product_category')
grouped_star = df_count_by_category_and_sentiment.groupby('sentiment')
# Create sum of sentiments per star sentiment
df_sum = df_count_by_category_and_sentiment.groupby(['sentiment']).sum()
# Calculate total number of sentiments
total = df_sum['count_reviews'].sum()
print('Total number of reviews: {}'.format(total))
###Output
Total number of reviews: 22626
###Markdown
Create dictionary of product categories and array of star rating distribution per category.
###Code
distribution = {}
count_reviews_per_star = []
i=0
for category, sentiments in grouped_category:
count_reviews_per_star = []
for star in sentiments['sentiment']:
count_reviews_per_star.append(sentiments.at[i, 'count_reviews'])
i=i+1;
distribution[category] = count_reviews_per_star
###Output
_____no_output_____
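An equivalent, more idiomatic way to build the same per-category breakdown (an illustrative sketch, not part of the graded notebook) is to pivot the aggregated dataframe; note the columns come out ordered -1, 0, 1 rather than 1, 0, -1 as below.

```python
pivot = df_count_by_category_and_sentiment.pivot(index='product_category',
                                                 columns='sentiment',
                                                 values='count_reviews')
pivot_pct = pivot.div(pivot.sum(axis=1), axis=0) * 100  # row-wise percentages
print(pivot_pct.head())
```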
###Markdown
Build array per star across all categories.
###Code
distribution
df_distribution_pct = pd.DataFrame(distribution).transpose().apply(
lambda num_sentiments: num_sentiments/sum(num_sentiments)*100, axis=1
)
df_distribution_pct.columns=['1', '0', '-1']
df_distribution_pct
###Output
_____no_output_____
###Markdown
**Visualization**Plot the distributions of sentiments per product category.
###Code
categories = df_distribution_pct.index
# Plot bars
plt.figure(figsize=(10,5))
df_distribution_pct.plot(kind="barh",
stacked=True,
edgecolor='white',
width=1.0,
color=['green',
'orange',
'blue'])
plt.title("Distribution of reviews per sentiment per category",
fontsize='16')
plt.legend(bbox_to_anchor=(1.04,1),
loc="upper left",
labels=['Positive',
'Neutral',
'Negative'])
plt.xlabel("% Breakdown of sentiments", fontsize='14')
plt.gca().invert_yaxis()
plt.tight_layout()
# Do not change the figure name - it is used for grading purposes!
plt.savefig('distribution_sentiment_per_category.png', dpi=300)
plt.show()
# Upload image to S3 bucket
sess.upload_data(path='distribution_sentiment_per_category.png', bucket=bucket, key_prefix="images")
###Output
_____no_output_____
###Markdown
3.6. Analyze the distribution of review word counts Set the SQL statement to count the number of the words in each of the reviews:
###Code
statement_num_words = """
SELECT CARDINALITY(SPLIT(review_body, ' ')) as num_words
FROM {}
""".format(table_name)
print(statement_num_words)
###Output
SELECT CARDINALITY(SPLIT(review_body, ' ')) as num_words
FROM reviews
###Markdown
Query data in Amazon Athena database passing the SQL statement:
###Code
%%time
df_num_words = wr.athena.read_sql_query(
sql=statement_num_words,
database=database_name
)
###Output
CPU times: user 264 ms, sys: 3.89 ms, total: 268 ms
Wall time: 3.1 s
###Markdown
Print out and analyse some descriptive statistics:
###Code
summary = df_num_words["num_words"].describe(percentiles=[0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90, 1.00])
summary
###Output
_____no_output_____
###Markdown
Plot the distribution of the number of words per review:
###Code
df_num_words["num_words"].plot.hist(xticks=[0, 16, 32, 64, 128, 256], bins=100, range=[0, 256]).axvline(
x=summary["100%"], c="red"
)
plt.xlabel("Words number", fontsize='14')
plt.ylabel("Frequency", fontsize='14')
plt.savefig('distribution_num_words_per_review.png', dpi=300)
plt.show()
# Upload image to S3 bucket
sess.upload_data(path='distribution_num_words_per_review.png', bucket=bucket, key_prefix="images")
###Output
_____no_output_____
###Markdown
Upload the notebook into S3 bucket for grading purposes.**Note**: you may need to click on "Save" button before the upload.
###Code
!aws s3 cp ./C1_W1_Assignment.ipynb s3://$bucket/C1_W1_Assignment_Learner.ipynb
###Output
_____no_output_____ |
.ipynb_checkpoints/imagingscript_multiple-checkpoint.ipynb | ###Markdown
Carry out imaging in CASA given the parameters in the imagepars.npy file, which were fed in by the main.py routine.Needs to be run within the imaging folder in the presence of imagepars.npy
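For context, here is a purely hypothetical sketch (not part of the original pipeline) of how main.py might write imagepars.npy; every name, path and value below is an assumption for illustration only.

```python
import numpy as np

# Placeholder values -- assumptions for illustration, not real observation parameters.
sourcetag = 'HD12345'
workingdir = '/data/projects'
vis = [workingdir + '/HD12345/calibratedms/HD12345_obs1_cont.ms']
nvis = len(vis)
mosaic = False
phasecenter = ''
weighting = 'briggs'
robust = '0.5'      # kept as a string; it is concatenated into image names in the script below
uvtaper = []
interactive = True

np.save('./imaging/imagepars.npy',
        [sourcetag, workingdir, vis, nvis, mosaic, phasecenter,
         weighting, robust, uvtaper, interactive])
# Note: newer numpy versions may require allow_pickle=True when loading an object array like this.
```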
###Code
import os  # used below for os.path.exists / os.system (CASA's environment usually provides it, but be explicit)
import numpy as np
sourcetag,workingdir,vis,nvis,mosaic,phasecenter,weighting,robust,uvtaper,interactive = np.load('./imagepars.npy')
#Run this within imaging folder
concatvis=workingdir+'/'+sourcetag+'/calibratedms/'+sourcetag+'_calibratedvis_cont_concat.ms'
if not os.path.exists(concatvis):
concaten=False
imagesingles=True
else:
concaten=True
weightfacts=[1.0 for x in np.arange(len(vis))]
imageconcat=True
imagesingles=True
if concaten:
os.system('rm -r '+concatvis)
concat(vis=vis, concatvis=concatvis, visweightscale=weightfacts, copypointing=False)
#If needed, manually fix coordinates to match coordinates of first observation, otherwise mosaic won't work. Phase centers are all aligned already from fit
#fixplanets(vis=concatvis, field='0,1', direction='J2000 02h26m16.337489 +06d17m32.38948')
listobs(concatvis)
if imageconcat:
imagename=concatvis[16:-3]+'_'+weighting+robust
#clean parameters
imsize=[512,512]
cell=['0.05arcsec']
pblimit=1e-5
if mosaic:
gridder='mosaic'
else:
gridder='standard'
deconvolver='multiscale'
#Scales should be roughly [0, n where n*cell~expected syntesized beam size, 3n, 9n, etc.]
scales=[0,10,30,90]
niter=100000000000000
specmode='mfs'
#Remove image if it exists
os.system('rm -r '+imagename+'.*')
#Run iterative tclean with manual masking
tclean(vis=concatvis, interactive=interactive, imsize=imsize, cell=cell, weighting=weighting, niter=niter, specmode=specmode, gridder=gridder, deconvolver=deconvolver, scales=scales, imagename=imagename, uvtaper=uvtaper, robust=robust, pblimit=pblimit)
#Export image to FITS
exportfits(imagename=imagename+'.image', fitsimage=imagename+'.fits', overwrite=True)
#Export primary beam to FITS
exportfits(imagename=imagename+'.pb', fitsimage=imagename+'_pb.fits', overwrite=True)
#View result
viewer(imagename+'.image')
if imagesingles:
for i in np.arange(nvis):
imagename=vis[i][16:-3]+'_'+weighting+robust
#clean parameters
imsize=[1024,1024]
cell=['0.05arcsec']
pblimit=1e-5
gridder='standard'
deconvolver='multiscale'
#Scales should be roughly [0, n where n*cell~expected synthesized beam size, 3n, 9n, etc.]
scales=[0,10,30,90]
niter=10000000000000000000000000
specmode='mfs'
#Remove image if it exists
os.system('rm -r '+imagename+'.*')
#Run iterative tclean with manual masking
tclean(vis=vis[i], interactive=interactive, imsize=imsize, cell=cell, weighting=weighting, niter=niter, specmode=specmode, gridder=gridder, deconvolver=deconvolver, scales=scales, imagename=imagename, uvtaper=uvtaper, robust=robust)
#Export image to FITS
exportfits(imagename=imagename+'.image', fitsimage=imagename+'.fits', overwrite=True)
#Export primary beam to FITS
exportfits(imagename=imagename+'.pb', fitsimage=imagename+'_pb.fits', overwrite=True)
#View result
viewer(imagename+'.image')
#RMS X uJy for X" taper, beam X" x X" @Xdeg PA
###Output
_____no_output_____ |
Test_karl/material/session_7/lecture_7.ipynb | ###Markdown
Data structuring, part 3 The Pandas way*Andreas Bjerre-Nielsen* Conda- Seaborn issue - update package: `conda update seaborn` - In general we can install packages - `conda install xxx` Recap*Which datatypes beyond numeric does pandas handle natively?*- ...*What can we do to missing values and duplicates?*- ... Agenda1. [the split apply combine framework](Split-apply-combine)1. [joining datasets](Joining-data)1. [reshaping data](Reshaping-data) - Loading the software
###Code
import numpy as np
import pandas as pd
import seaborn as sns
###Output
_____no_output_____
###Markdown
Split-apply-combine Split-apply-combine (1)*What is the split-apply-combine framework?* A procedure to 1. **split** a DataFrame 2. **apply** certain functions (sorting, mean, other custom stuff) 3. **combine** it back into a DataFrame Split-apply-combine (2) How do we *split* observations by x and *apply* the mean of y? groupby (1)A powerful tool in DataFrames is the `groupby` method. Example:
###Code
tips = sns.load_dataset('tips')
split_var = 'sex'
apply_var = 'total_bill'
tips\
.groupby(split_var)\
[apply_var]\
.mean()
###Output
_____no_output_____
###Markdown
groupby (2)*Does it work for multiple variables, functions?*
###Code
split_vars = ['sex']
apply_vars = ['total_bill']
apply_fcts = ['mean']
tips\
.groupby(split_vars)\
[apply_vars]\
.agg(apply_fcts)
###Output
_____no_output_____
###Markdown
Note grouping with multiple variables uses a [MultiIndex](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.MultiIndex.html) which we do not cover. groupby (3)*What does the groupby method do?* - It implements *split-apply-combine* Can other functions be applied? - Yes: mean, std, min, max all work. *To note:* Using the .apply() method with your own homemade function works too (see the sketch below). groupby (4)*Can we use groupby in a loop?* Yes, we can iterate over a groupby object. Example:
###Code
results = {}
for group, group_df in tips.groupby('time'):
group_mean = group_df.total_bill.mean()
results[group] = group_mean
results
###Output
_____no_output_____
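###Markdown
As noted above, a homemade function can also be passed to `.apply()`. A minimal sketch on the same tips data (the range statistic is purely illustrative, not part of the original lecture):
###Code
# custom statistic via .apply(): range of total_bill within each sex group
tips.groupby(split_var)[apply_var].apply(lambda s: s.max() - s.min())
###Output
_____no_output_____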
###Markdown
ProTip: `groupby` is an iterable we can also use with `multiprocessing` for parallel computing. groupby (5)*How do we get our `groupby` output into the original dataframe?*Option 1: you merge it (not recommended)Option 2: you use `transform`.
###Code
mu_time = tips.groupby(split_var)[apply_var].transform('mean')
mu_time.head(4)
# tips.total_bill - mu_time
###Output
_____no_output_____
###Markdown
*Why is this useful?*- Joining data Until now we've worked with one DataFrame at a time. We will now learn to put them together. Concatenating DataFrames (1)Let's make some data to play with
###Code
df1 = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 'B'])
df2 = pd.DataFrame([[2, 3], [5, 6]], columns=['B', 'C'])
print(df1,'\n')
print(df2)
###Output
A B
0 1 2
1 3 4
B C
0 2 3
1 5 6
###Markdown
Concatenating DataFrames (2)Let's try to vertically put two DataFrames together:
###Code
dfs = [df1, df2]
print(pd.concat(dfs)) # vertically stacking dataframes
###Output
A B C
0 1.0 2 NaN
1 3.0 4 NaN
0 NaN 2 3.0
1 NaN 5 6.0
###Markdown
The `concat` function creates one big DataFrame from two or more dataframes. Columns that do not overlap are filled with NaN. Concatenating DataFrames (3)*How might we do this horizontally?*
###Code
df3 = pd.DataFrame([[7, 8], [9, 10]], columns=['C', 'D'], index = [1,2])
print(df3)
print(pd.concat([df1, df3], axis=1)) # put together horizontally - axis=1
###Output
C D
1 7 8
2 9 10
A B C D
0 1.0 2.0 NaN NaN
1 3.0 4.0 7.0 8.0
2 NaN NaN 9.0 10.0
###Markdown
Merging DataFrames (1) We can merge DataFrames which share common identifiers, row by row. Example:
###Code
print(pd.merge(df1, df2, how='outer'))
# print(pd.concat([df1, df2]))
###Output
_____no_output_____
###Markdown
`merge` is useful when you have two or more datasets about the same entities, e.g. data about individuals where you merge by social security number. Merging DataFrames (2)Merging can be one of four types.- inner merge [default]: observations exist in both dataframes - left (right) merge: observations exist in left (right) dataframe- outer merge: observations exist either in left or in right dataframe Merging DataFrames (3)*What happens if we do a left merge of `df1` and `df2`?*
###Code
print(pd.merge(df1, df2, how='left'))
###Output
_____no_output_____
###Markdown
Merging DataFrames (4)With `join` we can also merge on indices.- Note horizontal `concat` performs an outer join. Reshaping data Stacking data A DataFrame can be collapsed into a Series with the **stack** command:
###Code
df = pd.DataFrame([[1,2],[3,4]],columns=['EU','US'],index=[2000,2010])
print(df, '\n')
stacked = df.stack() # going from wide to long format
print(stacked) # .reset_index()
###Output
EU US
2000 1 2
2010 3 4
2000 EU 1
US 2
2010 EU 3
US 4
dtype: int64
###Markdown
Note: The stacked DataFrame is in long/tidy format, the original is wide. To wide format Likewise, we can transform a long DataFrame back to wide format with the **unstack** command:
###Code
print(stacked.unstack(level=1))
###Output
_____no_output_____ |
notebooks/digits_recognition/nn_train/MNIST_XNORNet.ipynb | ###Markdown
Train a binary XNORNet for deployment in an FPGA This tutorial will help you go through the procedure of training a one-layer XNorNet as a perceptron on the MNIST dataset, and how to deploy the trained weights and images into magma. This tutorial has borrowed some code from https://github.com/BenBBear/MNIST-XNORNet.
###Code
import numpy as np
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_func import *
from mnist import read_data_sets
mnist = read_data_sets('MNIST_data')
###Output
Successfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting MNIST_data/train-images-idx3-ubyte.gz
Successfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Successfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Successfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
###Markdown
Training Here we train an XNOR network following the description in the paper (https://arxiv.org/abs/1603.05279). Considering the limited amount of resources on the ice40, we resize the image from 28x28 to 16x16. We use only one fully-connected layer to conduct classification. Feel free to add conv layers if you have more resources.
###Code
# Build Computational Graph
sess = tf.InteractiveSession()
# Initialize placeholders for data & labels
x = tf.placeholder(tf.float32, shape=[None, 256])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
keep_prob = tf.placeholder(tf.float32)
# reshape to make image volumes
x_image = tf.reshape(x, [-1,1,1,256])
x_image_drop = tf.nn.dropout(x_image, keep_prob)
W_fc = weight_variable([1, 1, 256, 10])
BW_fc = binarize_weights(W_fc)
y_conv = tf.reshape(conv2d(x_image, BW_fc), [-1, 10])
# create train ops
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
# initialize all variables
sess.run(tf.global_variables_initializer())
# train loop
for i in range(10000):
batch = mnist.train.next_batch(50)
if i % 1000 == 0:
print("test accuracy %g"%accuracy.eval(feed_dict={
x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
if i % 100 == 0:
train_accuracy = accuracy.eval(feed_dict={
x:batch[0], y_: batch[1], keep_prob: 1.0})
print("step %d,r training accuracy %g"%(i, train_accuracy))
train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})
###Output
Tensor("Variable/read:0", shape=(1, 1, 256, 10), dtype=float32)
test accuracy 0.1119
step 0,r training accuracy 0.16
step 100,r training accuracy 0.24
step 200,r training accuracy 0.24
step 300,r training accuracy 0.18
step 400,r training accuracy 0.24
step 500,r training accuracy 0.42
step 600,r training accuracy 0.4
step 700,r training accuracy 0.5
step 800,r training accuracy 0.44
step 900,r training accuracy 0.54
test accuracy 0.5274
step 1000,r training accuracy 0.52
step 1100,r training accuracy 0.46
step 1200,r training accuracy 0.56
step 1300,r training accuracy 0.56
step 1400,r training accuracy 0.58
step 1500,r training accuracy 0.6
step 1600,r training accuracy 0.54
step 1700,r training accuracy 0.56
step 1800,r training accuracy 0.58
step 1900,r training accuracy 0.66
test accuracy 0.6486
step 2000,r training accuracy 0.58
step 2100,r training accuracy 0.72
step 2200,r training accuracy 0.64
step 2300,r training accuracy 0.62
step 2400,r training accuracy 0.66
step 2500,r training accuracy 0.52
step 2600,r training accuracy 0.7
step 2700,r training accuracy 0.62
step 2800,r training accuracy 0.74
step 2900,r training accuracy 0.78
test accuracy 0.7056
step 3000,r training accuracy 0.74
step 3100,r training accuracy 0.72
step 3200,r training accuracy 0.76
step 3300,r training accuracy 0.7
step 3400,r training accuracy 0.64
step 3500,r training accuracy 0.76
step 3600,r training accuracy 0.74
step 3700,r training accuracy 0.66
step 3800,r training accuracy 0.68
step 3900,r training accuracy 0.76
test accuracy 0.7174
step 4000,r training accuracy 0.74
step 4100,r training accuracy 0.62
step 4200,r training accuracy 0.66
step 4300,r training accuracy 0.66
step 4400,r training accuracy 0.78
step 4500,r training accuracy 0.7
step 4600,r training accuracy 0.66
step 4700,r training accuracy 0.7
step 4800,r training accuracy 0.72
step 4900,r training accuracy 0.72
test accuracy 0.7304
step 5000,r training accuracy 0.68
step 5100,r training accuracy 0.72
step 5200,r training accuracy 0.68
step 5300,r training accuracy 0.64
step 5400,r training accuracy 0.7
step 5500,r training accuracy 0.6
step 5600,r training accuracy 0.76
step 5700,r training accuracy 0.7
step 5800,r training accuracy 0.7
step 5900,r training accuracy 0.7
test accuracy 0.73
step 6000,r training accuracy 0.76
step 6100,r training accuracy 0.68
step 6200,r training accuracy 0.68
step 6300,r training accuracy 0.7
step 6400,r training accuracy 0.78
step 6500,r training accuracy 0.64
step 6600,r training accuracy 0.7
step 6700,r training accuracy 0.72
step 6800,r training accuracy 0.76
step 6900,r training accuracy 0.7
test accuracy 0.7376
step 7000,r training accuracy 0.68
step 7100,r training accuracy 0.62
step 7200,r training accuracy 0.76
step 7300,r training accuracy 0.74
step 7400,r training accuracy 0.74
step 7500,r training accuracy 0.72
step 7600,r training accuracy 0.64
step 7700,r training accuracy 0.74
step 7800,r training accuracy 0.8
step 7900,r training accuracy 0.64
test accuracy 0.7495
step 8000,r training accuracy 0.88
step 8100,r training accuracy 0.8
step 8200,r training accuracy 0.74
step 8300,r training accuracy 0.76
step 8400,r training accuracy 0.66
step 8500,r training accuracy 0.74
step 8600,r training accuracy 0.82
step 8700,r training accuracy 0.74
step 8800,r training accuracy 0.7
step 8900,r training accuracy 0.74
test accuracy 0.7463
step 9000,r training accuracy 0.72
step 9100,r training accuracy 0.82
step 9200,r training accuracy 0.72
step 9300,r training accuracy 0.82
step 9400,r training accuracy 0.8
step 9500,r training accuracy 0.78
step 9600,r training accuracy 0.84
step 9700,r training accuracy 0.64
step 9800,r training accuracy 0.76
step 9900,r training accuracy 0.8
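###Markdown
The `binarize_weights` helper imported from `tf_func` is not shown in this notebook. A minimal sketch of XNOR-Net-style binarization, consistent with how `alpha` and `BW` are recovered from the trained weights in the Save Weights cell below (this is an assumption about `tf_func`, not its actual code; the straight-through gradient estimator needed for training is omitted):
###Code
# per-output-channel scaling factor: mean absolute weight over the input dimensions
alpha_sketch = tf.reduce_mean(tf.abs(W_fc), axis=[0, 1, 2])
# binarized weights: sign(W) scaled by alpha (broadcast over the last axis)
BW_sketch = alpha_sketch * tf.sign(W_fc)
###Output
_____no_output_____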
###Markdown
Save Weights Here we save the weights and prepare 10 MNIST samples, with labels ranging from 0 to 9, in a pickle file, which can be used later to initialize the ROM in the FPGA.
###Code
import pickle
# trained binary weights
res = BW_fc.eval()
alpha = np.abs(res).sum(0).sum(0).sum(0) / res[:,:,:,0].size
BW = np.sign(res)
BW = np.squeeze(BW, axis=(0, 1))
BW = BW.T
BW[BW==-1] = 0
# mnist samples ranging from label 0 to 9
imgs = [mnist.test.images[3], mnist.test.images[2], mnist.test.images[208], mnist.test.images[811], mnist.test.images[1140],
mnist.test.images[102], mnist.test.images[814], mnist.test.images[223],mnist.test.images[128], mnist.test.images[214]]
imgs = np.vstack(imgs)
imgs[imgs==-1]=0
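# Pack each row of 256 binary weights into 16 uint16 words (16 bits per word, first weight as MSB) for the FPGA ROM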
weights_int16 = np.zeros((10, 16), dtype=np.uint16)
for index in range(10):
for i in range(16):
for j in range(15):
weights_int16[index, i] += BW[index, 16 * i + j]
weights_int16[index, i] = np.left_shift(weights_int16[index, i], 1)
weights_int16[index, i] += BW[index, 16 * i + 15]
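# Pack each 16x16 binary image column-wise into 16 uint16 words; bit and word order are flipped, presumably to match the FPGA ROM scan order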
imgs_int16 = np.zeros((10, 16), dtype=np.uint16)
for index in range(10):
for i in range(16):
for j in range(15):
imgs_int16[index, 15-i] += imgs[index, 16 * (15 - j) + i]
imgs_int16[index, 15-i] = np.left_shift(imgs_int16[index, 15-i], 1)
imgs_int16[index, 15-i] += imgs[index, 16 * 0 + i]
pickle.dump({'imgs':imgs, 'weights': BW, 'alpha':alpha,
'imgs_int16':imgs_int16, 'weights_int16':weights_int16}, open( "BNN.pkl", "wb" ))
###Output
_____no_output_____
###Markdown
Simulate on CPU The following code simulates the FPGA computation on the CPU.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def dis_img(imgs, index):
img = imgs[index, :]
img = np.reshape(img, [16, 16])
plt.imshow(img, cmap='gray')
plt.show()
for img_index in range(10):
res = []
for i in range(10):
kk = np.logical_not(np.logical_xor(imgs[img_index, :], BW[i, :].T))
pop_count = np.sum(kk)
res.append(pop_count)
plt.subplot(2, 5, img_index + 1)
img = np.reshape(imgs[img_index, :], [16, 16])
plt.imshow(img, cmap='gray')
plt.axis('off')
plt.title("Pred: " + str(np.argmax(res, axis=0)))
plt.show()
###Output
_____no_output_____ |
Mitchell_HelpYourNGO/transformation/helpyourngo_json_transformation.ipynb | ###Markdown
HelpYourNGO Before Glue Transformations helpyourngo.json__Data provided by:__ www.helpyourngo.com __Source:__ s3://daanmatchdatafiles/webscrape-fall2021/helpyourngo.json __Type:__ json __Last Modified:__ October 31, 2021, 14:58:41 (UTC-07:00) __Size:__ 1.6 MB helpyourngo.json named helpyourngo_df contains: List of NGOs indexed on helpyourngo.com* COLUMN NAME: Content * Issues * Transformations* name: NGO Name * Issues: Duplicate Names (e.g. Search NGO)* last_updated: Most recent year that this data was collected * address: Address * Issues: Escape chars* mobile: Phone Number * Issues: Some NGOs have multiple phone numbers in the same column, Numbers may have an extra leading 0, Country code might be duplicated, Formatting varies dramatically * email: Email * Issues: Some NGOs have multiple emails in the same column * website: Website * Issues: String representation of NA (e.g. 'NA', 'N.A.', 'N. A.', 'N.A', 'Under Construction') * Transformations: Convert string representations of NA to None * annual_expenditure: Annual Expenditure for the last_updated Year * Issues: Contains negative values * Transformations: Remove commas and convert from str to int * description: Description of the NGO * Issues: Has abnormal spacings and irregular characters (’ vs ') Imports
###Code
import boto3
import io
import string
import requests
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
import missingno as msno
sns.set(rc={'figure.figsize':(16,5)})
###Output
_____no_output_____
###Markdown
Load Data
###Code
# df = pd.read_json('helpyourngo.json', orient='values')
client = boto3.client('s3')
resource = boto3.resource('s3')
obj = client.get_object(Bucket='daanmatchdatafiles', Key='webscrape-fall2021/helpyourngo.json')
df = pd.read_json(io.BytesIO(obj['Body'].read()))
df_transformed = df.copy()
###Output
_____no_output_____
###Markdown
Column Transformations name
###Code
# No transformations for now
###Output
_____no_output_____
###Markdown
last_updated
###Code
# Convert last_updated to date
df_transformed["last_updated"] = pd.to_datetime(df["last_updated"], format='%Y', errors='coerce').dt.strftime('%Y-%m-%d')
df_transformed.head()
###Output
_____no_output_____
###Markdown
address
###Code
# No transformations for now
###Output
_____no_output_____
###Markdown
mobile
###Code
# Replace all non numeric characters and then take the last 10 chars
# last 10 chars won't capture leading 0 or the country code
temp = df["mobile"]
to_remove = [" ", "-", "/", "(", ")", ";", ",", "+"]
def format_mobile(m):
if m is None or len(m)<10:
return None
else:
return m[-10:]
for c in to_remove:
temp = temp.str.replace(c, "")
df_transformed["mobile"] = temp.apply(format_mobile)
df_transformed.head()
###Output
C:\Users\mitch\Anaconda3\lib\site-packages\ipykernel_launcher.py:14: FutureWarning: The default value of regex will change from True to False in a future version. In addition, single character regular expressions will *not* be treated as literal strings when regex=True.
###Markdown
email
###Code
# Replace existing delimiters with common delimiter and capture everything before that
temp = df.email
to_remove = [";", ","]
delimiter = " #CAPTURE THE STUFF BEFORE ME# "
def extract_first_email(email):
if not email or delimiter not in email:
return email
else: # delimiter is in email
return email[:email.find(delimiter)] # return stuff before delimiter
for c in to_remove:
temp = temp.str.replace(c, delimiter)
df_transformed["email"] = temp.apply(extract_first_email)
df_transformed.head()
###Output
_____no_output_____
###Markdown
Website
###Code
# If string resembles "NA", then convert it's value to None
temp = df.website
str_rep_NA = ["N.A.", "N. A.", "NA", "N.A", "Under Construction"]
for s in str_rep_NA:
temp[temp == s] = None
df_transformed["website"] = temp
df_transformed.head()
###Output
C:\Users\mitch\Anaconda3\lib\site-packages\ipykernel_launcher.py:6: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
###Markdown
annual_expenditure
###Code
# Convert to float
temp = df.annual_expenditure
remove_commas = temp.str.replace(',', '')
df_transformed["annual_expenditure"] = pd.to_numeric(remove_commas)
df_transformed.head()
###Output
_____no_output_____
###Markdown
description
###Code
# No transformations for now
###Output
_____no_output_____
###Markdown
Save Transformed Data
###Code
df_transformed.info()
df_transformed.to_json("helpyourngo_before_glue_transformation.json", lines=True, orient='records')
###Output
_____no_output_____ |
EDA_On_U.S._Gun_Violence_Records_2014-2021.ipynb | ###Markdown
First View of Data
###Code
data.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 3230 entries, 0 to 3229
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 incident_id 3230 non-null int64
1 incident_date 3230 non-null object
2 state 3230 non-null object
3 city_or_county 3230 non-null object
4 address 3225 non-null object
5 killed 3230 non-null int64
6 injured 3228 non-null float64
dtypes: float64(1), int64(2), object(4)
memory usage: 176.8+ KB
###Markdown
Feature Engineering There are not many NaN values, so we can simply drop the rows which contain NaN values.
###Code
data = data.dropna()
data.isnull().sum()
###Output
_____no_output_____
###Markdown
Previously ["injured"] column was in "float64" dtype so i converted tp "int"
###Code
data["injured"] = data["injured"].astype("int")
data.head()
###Output
_____no_output_____
###Markdown
EDA
###Code
injured_total = int(data.injured.sum())
killed_total = int(data.killed.sum())
print("Total Person Injured: {}".format(injured_total))
print("Total Person Killed: {}".format(killed_total))
top30_kill_rates = data.groupby("state")["killed"].count().sort_values(ascending=False).head(30)
plt.figure(figsize=(20,7))
sns.barplot(x=top30_kill_rates.index, y=top30_kill_rates.values)
plt.title('Comparision of Top 30 States with its Killing Rates')
plt.xticks(rotation=90)
plt.show()
top30_injured_rates = data.groupby("state")["injured"].count().sort_values(ascending=False).head(30)
plt.figure(figsize=(20,7))
sns.barplot(x=top30_injured_rates.index, y=top30_injured_rates.values)
plt.title('Comparision of Top 30 States with its Injury Rates')
plt.xticks(rotation=90)
plt.show()
top30_killed_rates = data.groupby("city_or_county")["killed"].count().sort_values(ascending=False).head(30)
plt.figure(figsize=(20,7))
sns.barplot(x=top30_killed_rates.index, y=top30_killed_rates.values)
plt.title('Comparision of Top 30 City/Country with its Killing Rates')
plt.xticks(rotation=90)
plt.show()
top30_injured_rates = data.groupby("city_or_county")["injured"].count().sort_values(ascending=False).head(30)
plt.figure(figsize=(20,7))
sns.barplot(x=top30_injured_rates.index, y=top30_injured_rates.values)
plt.title('Comparision of Top 30 City/Country with its Injured Rates')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____ |
03_assim_driver.ipynb | ###Markdown
run SM for CSO
###Code
!pwd
out_path = gdat_out_path+str(water_year)+'assim_run'+hoy+'/'
#assimilate CSO observations into SM
SMassim_ensemble(cso_gdf,var,out_path)
###Output
edges: [1774. 2062.4 2350.8 2639.2 2927.6 3216. ]
labels: [0 1 2 3 4]
/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev/code
###Markdown
run SM for SNOTEL
###Code
#assimilate SNOTEL observations into SM
SMassim_ensemble_snotel(cso_gdf,snotel_assim_sites,snotel_swe_assim,var,out_path)
###Output
edges: [1774. 2062.4 2350.8 2639.2 2927.6 3216. ]
labels: [0 1 2 3 4]
/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev/code
/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev
For a DATA ASSIMILATION RUN, MAX_OBS_DATES must be
defined in SNOWMODEL.INC to be greater than the
number of observation dates in the entire simulation
+ (plus) the number of years in the simulation. For
example, for a 6-year simulation with two observation
dates in each year, you would set max_obs_dates to be
at least = 18.
Checking for sufficient met forcing data to
complete the model simulation. This may
take a while, depending on how big your met
input file is.
You are running the large-domain Barnes oi scheme
This requires:
1) no missing data for the fields of interest
2) no missing stations during the simulation
3) met file must list stations in the same order
4) the number of nearest stations used is 9 or less
5) **** no error checking for this is done ****
Generating nearest-station index. Be patient.
In Assim Loop #1; WORKING ON MODEL TIME = 2018 9 1 0.0
ZEROING OUT THE SNOW ARRAYS
ZEROING OUT THE SNOW ARRAYS
ZEROING OUT THE SNOW ARRAYS
At line 425 of file ./dataassim_user.f (unit = 238, file = 'outputs/wo_assim/swed.gdat')
Fortran runtime error: Non-existing record number
mv: cannot stat ‘/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev/outputs/wi_assim/swed.gdat’: No such file or directory
/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev/code
/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev
For a DATA ASSIMILATION RUN, MAX_OBS_DATES must be
defined in SNOWMODEL.INC to be greater than the
number of observation dates in the entire simulation
+ (plus) the number of years in the simulation. For
example, for a 6-year simulation with two observation
dates in each year, you would set max_obs_dates to be
at least = 18.
Checking for sufficient met forcing data to
complete the model simulation. This may
take a while, depending on how big your met
input file is.
You are running the large-domain Barnes oi scheme
This requires:
1) no missing data for the fields of interest
2) no missing stations during the simulation
3) met file must list stations in the same order
4) the number of nearest stations used is 9 or less
5) **** no error checking for this is done ****
Generating nearest-station index. Be patient.
In Assim Loop #1; WORKING ON MODEL TIME = 2018 9 1 0.0
ZEROING OUT THE SNOW ARRAYS
ZEROING OUT THE SNOW ARRAYS
ZEROING OUT THE SNOW ARRAYS
At line 425 of file ./dataassim_user.f (unit = 238, file = 'outputs/wo_assim/swed.gdat')
Fortran runtime error: Non-existing record number
mv: cannot stat ‘/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev/outputs/wi_assim/swed.gdat’: No such file or directory
/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev/code
/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev
For a DATA ASSIMILATION RUN, MAX_OBS_DATES must be
defined in SNOWMODEL.INC to be greater than the
number of observation dates in the entire simulation
+ (plus) the number of years in the simulation. For
example, for a 6-year simulation with two observation
dates in each year, you would set max_obs_dates to be
at least = 18.
Checking for sufficient met forcing data to
complete the model simulation. This may
take a while, depending on how big your met
input file is.
You are running the large-domain Barnes oi scheme
This requires:
1) no missing data for the fields of interest
2) no missing stations during the simulation
3) met file must list stations in the same order
4) the number of nearest stations used is 9 or less
5) **** no error checking for this is done ****
Generating nearest-station index. Be patient.
In Assim Loop #1; WORKING ON MODEL TIME = 2018 9 1 0.0
ZEROING OUT THE SNOW ARRAYS
ZEROING OUT THE SNOW ARRAYS
ZEROING OUT THE SNOW ARRAYS
At line 425 of file ./dataassim_user.f (unit = 238, file = 'outputs/wo_assim/swed.gdat')
Fortran runtime error: Non-existing record number
mv: cannot stat ‘/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev/outputs/wi_assim/swed.gdat’: No such file or directory
/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev/code
/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev
For a DATA ASSIMILATION RUN, MAX_OBS_DATES must be
defined in SNOWMODEL.INC to be greater than the
number of observation dates in the entire simulation
+ (plus) the number of years in the simulation. For
example, for a 6-year simulation with two observation
dates in each year, you would set max_obs_dates to be
at least = 18.
Checking for sufficient met forcing data to
complete the model simulation. This may
take a while, depending on how big your met
input file is.
You are running the large-domain Barnes oi scheme
This requires:
1) no missing data for the fields of interest
2) no missing stations during the simulation
3) met file must list stations in the same order
4) the number of nearest stations used is 9 or less
5) **** no error checking for this is done ****
Generating nearest-station index. Be patient.
In Assim Loop #1; WORKING ON MODEL TIME = 2018 9 1 0.0
ZEROING OUT THE SNOW ARRAYS
ZEROING OUT THE SNOW ARRAYS
ZEROING OUT THE SNOW ARRAYS
At line 425 of file ./dataassim_user.f (unit = 238, file = 'outputs/wo_assim/swed.gdat')
Fortran runtime error: Non-existing record number
mv: cannot stat ‘/nfs/attic/dfh/Aragon2/WY_scratch/jan2021_snowmodel-dfhill_elev/outputs/wi_assim/swed.gdat’: No such file or directory
###Markdown
run SM for CSO & SNOTEL
###Code
#assimilate both CSO and SNOTEL observations into SM
SMassim_ensemble_both(snotel_swe_assim,snotel_assim_sites,cso_gdf,var,out_path)
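# Manual walk-through of the 'elev' and 'M' assimilation branches below: writes test_*.dat observation files without running SnowModel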
STswe = snotel_swe_assim
STmeta = snotel_assim_sites
CSOdata = cso_gdf
outFpath = 'test.dat'
var = 'elev'
edges = np.histogram_bin_edges(CSOdata.dem_elev,bins=5, range=(CSOdata.dem_elev.min(),CSOdata.dem_elev.max()))
print('edges:',edges)
labs = np.arange(0,len(edges)-1,1)
print('labels:',labs)
bins = pd.cut(STmeta['dem_elev'], edges,labels=labs)
STmeta['elev_bin']=bins
bins = pd.cut(CSOdata['dem_elev'], edges,labels=labs)
CSOdata['elev_bin']=bins
for lab in labs:
newST = STmeta[STmeta.elev_bin == lab]
newCSO = CSOdata[CSOdata.elev_bin == lab]
newSTswe = STswe[np.intersect1d(STswe.columns, newST.code.values)]
outFpath = 'test_'+str(lab)+'.dat'
num_obs = make_SMassim_file_both(newSTswe,newST,newCSO,outFpath)
num_obs
var = 'M'
newST = STmeta
mo = [11,12,1,2,3,4,5]#np.unique(STswe.index.month)
for m in mo:
newSTswe = STswe[STswe.index.month == m]
newCSO = CSOdata[CSOdata[var] == m]
outFpath = 'test_'+str(m)+'.dat'
num_obs = make_SMassim_file_both(newSTswe,newST,newCSO,outFpath)
num_obs
def make_SMassim_file_both(STswe,STmeta,CSOdata,outFpath):
f= open(outFpath,"w+")
#determine number of days with observations to assimilate
if STswe.shape[1]>0:
uq_day = np.unique(np.concatenate((STswe.index.date,CSOdata.dt.dt.date.values)))
f.write('{:02.0f}\n'.format(len(uq_day)))
else:
uq_day = np.unique(CSOdata.dt.dt.date.values)
f.write('{:02.0f}\n'.format(len(uq_day)))
# determine snotel stations
stn = list(STswe.columns)
# ids for CSO observations - outside of loop so each observation is unique
IDS = 500
#add assimilation observations to output file
for i in range(len(uq_day)):
SThoy = STswe[STswe.index.date == uq_day[i]]
CSOhoy = CSOdata[CSOdata.dt.dt.date.values == uq_day[i]]
d=uq_day[i].day
m=uq_day[i].month
y=uq_day[i].year
date = str(y)+' '+str(m)+' '+str(d)
stn_count = len(stn) + len(CSOhoy)
if stn_count > 0:
f.write(date+' \n')
f.write(str(stn_count)+' \n')
#go through snotel stations for that day
ids = 100
if len(SThoy) > 0:
for k in stn:
ids = ids + 1
x = STmeta.easting.values[STmeta.code.values == k][0]
y = STmeta.northing.values[STmeta.code.values == k][0]
swe = SThoy[k].values[0]
f.write('{:3.0f}\t'.format(ids)+'{:10.0f}\t'.format(x)+'{:10.0f}\t'.format(y)+'{:3.2f}\n'.format(swe))
#go through cso obs for that day
if len(CSOhoy) > 0:
for c in range(len(CSOhoy)):
IDS = IDS + 1
x= CSOhoy.x[CSOhoy.index[c]]
y=CSOhoy.y[CSOhoy.index[c]]
swe=CSOhoy.swe[CSOhoy.index[c]]
f.write('{:3.0f}\t'.format(IDS)+'{:10.0f}\t'.format(x)+'{:10.0f}\t'.format(y)+'{:3.2f}\n'.format(swe))
f.close()
return len(uq_day)
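# Variant of the function above: observation ids restart at 100 each day (CSO points continue the same counter) and the date header is written for every date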
def make_SMassim_file_both(STswe,STmeta,CSOdata,outFpath):
f= open(outFpath,"w+")
uq_day = np.unique(np.concatenate((STswe.index.date,CSOdata.dt.dt.date.values)))
f.write('{:02.0f}\n'.format(len(uq_day)))
stn = list(STswe.columns)
for i in range(len(uq_day)):
SThoy = STswe[STswe.index.date == uq_day[i]]
CSOhoy = CSOdata[CSOdata.dt.dt.date.values == uq_day[i]]
d=uq_day[i].day
m=uq_day[i].month
y=uq_day[i].year
date = str(y)+' '+str(m)+' '+str(d)
stn_count = len(stn) + len(CSOhoy)
f.write(date+' \n')
f.write(str(stn_count)+' \n')
#go through snotel stations for that day
ids = 100
if len(SThoy) > 0:
for k in stn:
ids = ids + 1
x = STmeta.easting.values[STmeta.code.values == k][0]
y = STmeta.northing.values[STmeta.code.values == k][0]
swe = SThoy[k].values[0]
f.write('{:3.0f}\t'.format(ids)+'{:10.0f}\t'.format(x)+'{:10.0f}\t'.format(y)+'{:3.2f}\n'.format(swe))
#go through cso obs for that day
if len(CSOhoy) > 0:
for c in range(len(CSOhoy)):
ids = ids + 1
x= CSOhoy.x[CSOhoy.index[c]]
y=CSOhoy.y[CSOhoy.index[c]]
swe=CSOhoy.swe[CSOhoy.index[c]]
f.write('{:3.0f}\t'.format(ids)+'{:10.0f}\t'.format(x)+'{:10.0f}\t'.format(y)+'{:3.2f}\n'.format(swe))
f.close()
return len(uq_day)
def SMassim_ensemble_both(STswe,STmeta,CSOdata,var,path):
'''
STmeta: this is the geodataframe containing all snotel stations
STswe: this is a dataframe containing all snotel swe
CSOdata: this is the geodataframe containing all CSO data
var: this is the landscape characteristic that will be made into an assimilation ensemble
'all': assimilate all inputs to SM
'elev': assimilate each of n elevation bands.
Default = breaks elevation range into 5 bands
'slope': assimilate each of n slope bands.
Default = breaks slope range into 5 bands
'tc': assimilate each of n terrain complexity score bands.
Default = breaks tc score range into 5 bands
'delta_day': sets a minimum number of days between assimilated observations.
-> only 1 observation is selected each day
'M': assimilate data from each month
'lc': assimilate data from each land cover class
'aspect': assimilate data from each aspect N, E, S, W
path: path to put all output SM .gdat files
'''
#create directory with initiation date for ensemble if it doesn't exist
!mkdir -p $path
outFpath = SMpath+'swe_assim/swe_obs_test.dat'
if var == 'all':
newST = STmeta
newSTswe = STswe
newCSO = CSOdata
num_obs = make_SMassim_file_both(newSTswe,newST,newCSO,outFpath)
#edit .inc file
replace_line(incFile, 30, ' parameter (max_obs_dates='+str(num_obs+1)+')\n')
#compile SM
%cd $codepath
! ./compile_snowmodel.script
#run snowmodel
%cd $SMpath
! ./snowmodel
#move swed.gdat file
oSWEpath = SMpath + 'outputs/wi_assim/swed.gdat'
nSWEpath = path + '/snotel_all_swed.gdat'
!mv $oSWEpath $nSWEpath
elif var == 'elev':
edges = np.histogram_bin_edges(CSOdata.dem_elev,bins=5, range=(CSOdata.dem_elev.min(),CSOdata.dem_elev.max()))
print('edges:',edges)
labs = np.arange(0,len(edges)-1,1)
print('labels:',labs)
bins = pd.cut(STmeta['dem_elev'], edges,labels=labs)
STmeta['elev_bin']=bins
bins = pd.cut(CSOdata['dem_elev'], edges,labels=labs)
CSOdata['elev_bin']=bins
for lab in labs:
newST = STmeta[STmeta.elev_bin == lab]
newCSO = CSOdata[CSOdata.elev_bin == lab]
newSTswe = STswe[np.intersect1d(STswe.columns, newST.code.values)]
num_obs = make_SMassim_file_both(newSTswe,newST,newCSO,outFpath)
#edit .inc file
replace_line(incFile, 30, ' parameter (max_obs_dates='+str(num_obs+1)+')\n')
#compile SM
%cd $codepath
! ./compile_snowmodel.script
#run snowmodel
%cd $SMpath
! ./snowmodel
#move swed.gdat file
oSWEpath = SMpath + 'outputs/wi_assim/swed.gdat'
nSWEpath = path + '/snotel_elev_'+str(lab)+'_swed.gdat'
!mv $oSWEpath $nSWEpath
elif var == 'slope':
edges = np.histogram_bin_edges(CSOdata.slope,bins=5, range=(CSOdata.slope.min(),CSOdata.slope.max()))
print('edges:',edges)
labs = np.arange(0,len(edges)-1,1)
print('labels:',labs)
bins = pd.cut(STmeta['slope'], edges,labels=labs)
STmeta['slope_bin']=bins
bins = pd.cut(CSOdata['slope'], edges,labels=labs)
CSOdata['slope_bin']=bins
for lab in labs:
newST = STmeta[STmeta.slope_bin == lab]
newCSO = CSOdata[CSOdata.slope_bin == lab]
newSTswe = STswe[np.intersect1d(STswe.columns, newST.code.values)]
num_obs = make_SMassim_file_both(newSTswe,newST,newCSO,outFpath)
#edit .inc file
replace_line(incFile, 30, ' parameter (max_obs_dates='+str(num_obs+1)+')\n')
#compile SM
%cd $codepath
! ./compile_snowmodel.script
#run snowmodel
%cd $SMpath
! ./snowmodel
#move swed.gdat file
oSWEpath = SMpath + 'outputs/wi_assim/swed.gdat'
nSWEpath = path + '/snotel_slope_'+str(lab)+'_swed.gdat'
!mv $oSWEpath $nSWEpath
elif var == 'tc':
edges = np.histogram_bin_edges(CSOdata.tc,bins=5, range=(CSOdata.tc.min(),CSOdata.tc.max()))
print('edges:',edges)
labs = np.arange(0,len(edges)-1,1)
print('labels:',labs)
bins = pd.cut(STmeta['tc'], edges,labels=labs)
STmeta['tc_bin']=bins
bins = pd.cut(CSOdata['tc'], edges,labels=labs)
CSOdata['tc_bin']=bins
for lab in labs:
newST = STmeta[STmeta.tc_bin == lab]
newCSO = CSOdata[CSOdata.tc_bin == lab]
newSTswe = STswe[np.intersect1d(STswe.columns, newST.code.values)]
num_obs = make_SMassim_file_both(newSTswe,newST,newCSO,outFpath)
#edit .inc file
replace_line(incFile, 30, ' parameter (max_obs_dates='+str(num_obs+1)+')\n')
#compile SM
%cd $codepath
! ./compile_snowmodel.script
#run snowmodel
%cd $SMpath
! ./snowmodel
#move swed.gdat file
oSWEpath = SMpath + 'outputs/wi_assim/swed.gdat'
nSWEpath = path + '/snotel_tc_'+str(lab)+'_swed.gdat'
!mv $oSWEpath $nSWEpath
elif var == 'delta_day':
import datetime
CSOdata = CSOdata.sort_values(by='dt',ascending=True)
CSOdata = CSOdata.reset_index(drop=True)
newST = STmeta
Delta = [3,5,7,10]
for dels in Delta:
idx = [0]
st = CSOdata.dt[0]
for i in range(1,len(CSOdata)-1):
date = CSOdata.dt.iloc[i]
gap = (date - st).days
if gap<=dels:
continue
else:
idx.append(i)
st = date
newCSO = CSOdata[CSOdata.index.isin(idx)]
newSTswe = STswe.iloc[::dels,:]
num_obs = make_SMassim_file_both(newSTswe,newST,newCSO,outFpath)
#edit .inc file
replace_line(incFile, 30, ' parameter (max_obs_dates='+str(num_obs+1)+')\n')
#compile SM
%cd $codepath
! ./compile_snowmodel.script
#run snowmodel
%cd $SMpath
! ./snowmodel
#move swed.gdat file
oSWEpath = SMpath + 'outputs/wi_assim/swed.gdat'
nSWEpath = path + '/snotel_day_delta'+str(dels)+'_swed.gdat'
!mv $oSWEpath $nSWEpath
elif var == 'M':
newST = STmeta
mo = [11,12,1,2,3,4,5]#np.unique(STswe.index.month)
for m in mo:
newSTswe = STswe[STswe.index.month == m]
newCSO = CSOdata[CSOdata[var] == m]
num_obs = make_SMassim_file_both(newSTswe,newST,newCSO,outFpath)
#edit .inc file
replace_line(incFile, 30, ' parameter (max_obs_dates='+str(num_obs+1)+')\n')
#compile SM
%cd $codepath
! ./compile_snowmodel.script
#run snowmodel
%cd $SMpath
! ./snowmodel
#move swed.gdat file
oSWEpath = SMpath + 'outputs/wi_assim/swed.gdat'
nSWEpath = path + '/snotel_M_'+str(m)+'_swed.gdat'
!mv $oSWEpath $nSWEpath
else: #works for 'M', 'lc', 'aspect'
uq = np.unique(np.concatenate((STmeta[var].values,CSOdata[var].values)))
for lab in uq:
newST = STmeta[STmeta[var] == lab]
newCSO = CSOdata[CSOdata[var] == lab]
newSTswe = STswe[np.intersect1d(STswe.columns, newST.code.values)]
num_obs = make_SMassim_file_both(newSTswe,newST,newCSO,outFpath)
#edit .inc file
replace_line(incFile, 30, ' parameter (max_obs_dates='+str(num_obs+1)+')\n')
#compile SM
%cd $codepath
! ./compile_snowmodel.script
#run snowmodel
%cd $SMpath
! ./snowmodel
#move swed.gdat file
oSWEpath = SMpath + 'outputs/wi_assim/swed.gdat'
nSWEpath = path + '/snotel_'+var+'_'+str(lab)+'_swed.gdat'
!mv $oSWEpath $nSWEpath
###Output
_____no_output_____ |
Video-Upscaler-with-Real-ESRGAN-DEMO.ipynb | ###Markdown
1. Important Notes* If you are using the free Colab tier , it works best for short length videos* Just follow the instructions in the notebook and you should be fine!*Colab Notebook prepared by Geeve George*
###Code
# Clone Real-ESRGAN and enter the Real-ESRGAN
%cd ~
!git clone https://github.com/xinntao/Real-ESRGAN.git
%cd Real-ESRGAN
# Set up the environment
!pip install ffmpeg-python
!pip install pydub
!pip install basicsr
!pip install facexlib
!pip install gfpgan
!pip install -r requirements.txt
!python setup.py develop
# Download the pre-trained model
!wget https://github.com/xinntao/Real-ESRGAN/releases/download/v0.1.0/RealESRGAN_x4plus.pth -P experiments/pretrained_models
import cv2
import ffmpeg
import moviepy.editor
from pydub import AudioSegment
import numpy as np
import glob
from os.path import isfile, join
from pathlib import Path
import subprocess
from IPython.display import clear_output
###Output
Imageio: 'ffmpeg-linux64-v3.3.1' was not found on your computer; downloading it now.
Try 1. Download from https://github.com/imageio/imageio-binaries/raw/master/ffmpeg/ffmpeg-linux64-v3.3.1 (43.8 MB)
Downloading: 8192/45929032 bytes (0.0%)2924544/45929032 bytes (6.4%)7028736/45929032 bytes (15.3%)11083776/45929032 bytes (24.1%)15130624/45929032 bytes (32.9%)19251200/45929032 bytes (41.9%)23388160/45929032 bytes (50.9%)27557888/45929032 bytes (60.0%)31637504/45929032 bytes (68.9%)35692544/45929032 bytes (77.7%)39714816/45929032 bytes (86.5%)43843584/45929032 bytes (95.5%)45929032/45929032 bytes (100.0%)
Done
File saved as /root/.imageio/ffmpeg/ffmpeg-linux64-v3.3.1.
###Markdown
2. This section is for creating some important folders (IMPORTANT : Please Read)* First Run this cell* Once you've run the cell , you can see that a folder named "videos" will be created inside the Real-ESRGAN folder* Just drag and drop any number of videos from your computer and place them inside the "videos" folder. (videos you want to upscale)
###Code
import os
from google.colab import files
import shutil
upload_folder = 'upload'
result_folder = 'results'
video_folder = 'videos'
wav_folder = 'wav'
video_result_folder = 'results_videos'
video_mp4_result_folder = 'results_mp4_videos'
video_mp4_result_with_audio_folder = 'results_mp4_videos_with_audio'
if os.path.isdir(upload_folder):
print(upload_folder+" exists")
else :
os.mkdir(upload_folder)
if os.path.isdir(video_folder):
print(video_folder+" exists")
else :
os.mkdir(video_folder)
if os.path.isdir(wav_folder):
print(wav_folder+" exists")
else :
os.mkdir(wav_folder)
if os.path.isdir(video_result_folder):
print(video_result_folder+" exists")
else :
os.mkdir(video_result_folder)
if os.path.isdir(video_mp4_result_folder):
print(video_mp4_result_folder+" exists")
else :
os.mkdir(video_mp4_result_folder)
if os.path.isdir(video_mp4_result_with_audio_folder):
print(video_mp4_result_with_audio_folder+" exists")
else :
os.mkdir(video_mp4_result_with_audio_folder)
if os.path.isdir(result_folder):
print(result_folder+" exists")
else :
os.mkdir(result_folder)
###Output
_____no_output_____
###Markdown
3. Inference (Please READ)* Then run this cell , it will take a good amount of time depending on the length of your video and how many videos you want to upscale* Once the upscaling process is complete , you can find your upscaled video results , under the "results_mp4_videos" folder.* By default the video is upscaled to a 25fps format, in case you want to change that , it can be changed within the cell below , change the "fps" variable. (Can be found towards the end part of the code section.)
###Code
fps = 24
# assign directory
directory = '/content/Real-ESRGAN/videos' #PATH_WITH_INPUT_VIDEOS
zee = 0
#deletes frames from previous video
for f in os.listdir(upload_folder):
os.remove(os.path.join(upload_folder, f))
#deletes upscaled frames from previous video
for f in os.listdir(result_folder):
os.remove(os.path.join(result_folder, f))
#clearing previous .avi files
for f in os.listdir(video_result_folder):
os.remove(os.path.join(video_result_folder, f))
#clearing .mp4 result files
for f in os.listdir(video_mp4_result_folder):
os.remove(os.path.join(video_mp4_result_folder, f))
def convert_frames_to_video(pathIn,pathOut,fps):
frame_array = []
files = [f for f in os.listdir(pathIn) if isfile(join(pathIn, f))]
#for sorting the file names properly
files.sort(key = lambda x: int(x[5:-4]))
size2 = (0,0)
for i in range(len(files)):
filename=pathIn + files[i]
#reading each files
img = cv2.imread(filename)
height, width, layers = img.shape
size = (width,height)
size2 = size
print(filename)
#inserting the frames into an image array
frame_array.append(img)
out = cv2.VideoWriter(pathOut,cv2.VideoWriter_fourcc(*'FMP4'), fps, size2)
for i in range(len(frame_array)):
# writing to a image array
out.write(frame_array[i])
out.release()
for filename in os.listdir(directory):
f = os.path.join(directory, filename)
original_file = filename
original_file_name = os.path.splitext(filename)[0]
# checking if it is a file
if os.path.isfile(f):
print("PROCESSING :"+str(f)+"\n")
# Read the video from specified path
#video to frames
cam = cv2.VideoCapture(str(f))
try:
# PATH TO STORE VIDEO FRAMES
if not os.path.exists('/content/Real-ESRGAN/upload'):
os.makedirs('/content/Real-ESRGAN/upload')
# if not created then raise error
except OSError:
print ('Error: Creating directory of data')
# frame
currentframe = 0
#clear all folders
#deletes upscaled frames from previous video
#for f in os.listdir(result_folder):
# os.remove(os.path.join(result_folder, f))
while(True):
# reading from frame
ret,frame = cam.read()
if ret:
# if video is still left continue creating images
name = '/content/Real-ESRGAN/upload/frame' + str(currentframe) + '.jpg'
# writing the extracted images
cv2.imwrite(name, frame)
# increasing counter so that it will
# show how many frames are created
currentframe += 1
else:
#deletes all the videos you uploaded for upscaling
#for f in os.listdir(video_folder):
# os.remove(os.path.join(video_folder, f))
break
# Release all space and windows once done
cam.release()
cv2.destroyAllWindows()
#apply super-resolution on all frames of a video
#scale factor is 4x (see --outscale below)
!python inference_realesrgan.py -n RealESRGAN_x4plus -i upload --outscale 4 --half --face_enhance
#after upscaling just delete the source frames
for f in os.listdir(upload_folder):
os.remove(os.path.join(upload_folder, f))
#rename all frames in "results" to remove the 'out' substring from the processing results
paths = (os.path.join(root, filename)
for root, _, filenames in os.walk('/content/Real-ESRGAN/results')
for filename in filenames)
for path in paths:
newname = path.replace('_out', '')
if newname != path:
os.rename(path, newname)
#convert super res frames to .avi
pathIn = '/content/Real-ESRGAN/results/'
filenameVid = original_file_name + "_enhanced"
pathOut = "/content/Real-ESRGAN/results_videos/"+filenameVid+".avi"
convert_frames_to_video(pathIn, pathOut, fps)
#after processing frames converted to .avi video , delete upscaled frames from previous video
for f in os.listdir(result_folder):
os.remove(os.path.join(result_folder, f))
#convert .avi to .mp4
src = '/content/Real-ESRGAN/results_videos/'
dst = '/content/Real-ESRGAN/results_mp4_videos/'
for root, dirs, filenames in os.walk(src, topdown=False):
#print(filenames)
for filename in filenames:
print('[INFO] 1',filename)
try:
_format = ''
if ".flv" in filename.lower():
_format=".flv"
if ".mp4" in filename.lower():
_format=".mp4"
if ".avi" in filename.lower():
_format=".avi"
if ".mov" in filename.lower():
_format=".mov"
inputfile = os.path.join(root, filename)
print('[INFO] 1',inputfile)
outputfile = os.path.join(dst, filename.lower().replace(_format, ".mp4"))
subprocess.call(['ffmpeg', '-i', inputfile, outputfile])
except:
print("An exception occurred")
audioIn = '/content/Real-ESRGAN/videos/' + original_file
pathAudio = "/content/Real-ESRGAN/wav/" + original_file_name + '.wav'
given_audio = AudioSegment.from_file(audioIn, format="mp4")
given_audio.export(pathAudio, format="wav")
mp4dir = '/content/Real-ESRGAN/results_mp4_videos/' + filenameVid + '.mp4'
mp4dir_with_audio = '/content/Real-ESRGAN/results_mp4_videos_with_audio/' + filenameVid + '_with_audio' + '.mp4'
video_clip = moviepy.editor.VideoFileClip(mp4dir)
audio_clip = moviepy.editor.AudioFileClip(pathAudio)
video_clip = video_clip.set_audio(audio_clip)
video_clip.write_videofile(mp4dir_with_audio, fps=fps, codec='libx264', audio_codec='aac', bitrate='8M')
clear_output(wait=True)
#clearing previous .avi files
for f in os.listdir(video_result_folder):
os.remove(os.path.join(video_result_folder, f))
#deletes frames from previous video
#for f in os.listdir(upload_folder):
# os.remove(os.path.join(upload_folder, f))
# if it is out of memory, try to use the `--tile` option
# We upsample the image with the scale factor X3.5
# Arguments
# -n, --model_name: Model names
# -i, --input: input folder or image
# --outscale: Output scale, can be arbitrary scale factore.
###Output
PROCESSING :/content/Real-ESRGAN/videos/20220224_01_test.mp4
Testing 0 frame0
Testing 1 frame1
Testing 2 frame10
Testing 3 frame100
Testing 4 frame101
Testing 5 frame102
Testing 6 frame103
Testing 7 frame104
Testing 8 frame105
Testing 9 frame106
Testing 10 frame107
Testing 11 frame108
Testing 12 frame109
Testing 13 frame11
Testing 14 frame110
Testing 15 frame111
Testing 16 frame112
Testing 17 frame113
Testing 18 frame114
Testing 19 frame115
Testing 20 frame116
Testing 21 frame117
Testing 22 frame118
Testing 23 frame119
Testing 24 frame12
Testing 25 frame120
Testing 26 frame121
Testing 27 frame122
Testing 28 frame123
Testing 29 frame124
Testing 30 frame125
Testing 31 frame126
Testing 32 frame127
Testing 33 frame128
Testing 34 frame129
Testing 35 frame13
Testing 36 frame130
Testing 37 frame131
Testing 38 frame132
Testing 39 frame133
Testing 40 frame134
Testing 41 frame135
Testing 42 frame136
Testing 43 frame137
Testing 44 frame138
Testing 45 frame139
Testing 46 frame14
Testing 47 frame140
Testing 48 frame141
Testing 49 frame142
Testing 50 frame143
Testing 51 frame144
Testing 52 frame145
Testing 53 frame146
Testing 54 frame147
Testing 55 frame148
Testing 56 frame149
Testing 57 frame15
Testing 58 frame150
Testing 59 frame151
Testing 60 frame152
Testing 61 frame153
Testing 62 frame154
Testing 63 frame155
Testing 64 frame156
Testing 65 frame157
Testing 66 frame158
Testing 67 frame159
Testing 68 frame16
Testing 69 frame160
Testing 70 frame161
Testing 71 frame162
Testing 72 frame163
Testing 73 frame164
Testing 74 frame165
Testing 75 frame166
Testing 76 frame167
Testing 77 frame168
Testing 78 frame169
Testing 79 frame17
Testing 80 frame170
Testing 81 frame171
Testing 82 frame172
Testing 83 frame173
Testing 84 frame174
Testing 85 frame175
Testing 86 frame176
Testing 87 frame177
Testing 88 frame178
Testing 89 frame179
Testing 90 frame18
Testing 91 frame180
Testing 92 frame181
Testing 93 frame182
Testing 94 frame183
Testing 95 frame184
Testing 96 frame185
Testing 97 frame186
Testing 98 frame187
Testing 99 frame188
Testing 100 frame189
Testing 101 frame19
Testing 102 frame190
Testing 103 frame191
Testing 104 frame192
Testing 105 frame193
Testing 106 frame194
Testing 107 frame195
Testing 108 frame196
Testing 109 frame197
Testing 110 frame198
Testing 111 frame199
Testing 112 frame2
Testing 113 frame20
Testing 114 frame200
Testing 115 frame201
Testing 116 frame202
Testing 117 frame203
Testing 118 frame204
Testing 119 frame205
Testing 120 frame206
Testing 121 frame207
Testing 122 frame208
Testing 123 frame209
Testing 124 frame21
Testing 125 frame210
Testing 126 frame211
Testing 127 frame212
Testing 128 frame213
Testing 129 frame214
Testing 130 frame215
Testing 131 frame216
Testing 132 frame217
Testing 133 frame218
Testing 134 frame219
Testing 135 frame22
Testing 136 frame220
Testing 137 frame221
Testing 138 frame222
Testing 139 frame223
Testing 140 frame224
Testing 141 frame225
Testing 142 frame226
Testing 143 frame227
Testing 144 frame228
Testing 145 frame229
Testing 146 frame23
Testing 147 frame230
Testing 148 frame231
Testing 149 frame232
Testing 150 frame233
Testing 151 frame234
Testing 152 frame235
Testing 153 frame236
Testing 154 frame237
Testing 155 frame238
Testing 156 frame239
Testing 157 frame24
Testing 158 frame240
Testing 159 frame241
Testing 160 frame242
Testing 161 frame243
Testing 162 frame25
Testing 163 frame26
Testing 164 frame27
Testing 165 frame28
Testing 166 frame29
Testing 167 frame3
Testing 168 frame30
Testing 169 frame31
Testing 170 frame32
Testing 171 frame33
Testing 172 frame34
Testing 173 frame35
Testing 174 frame36
Testing 175 frame37
Testing 176 frame38
Testing 177 frame39
Testing 178 frame4
Testing 179 frame40
Testing 180 frame41
Testing 181 frame42
Testing 182 frame43
Testing 183 frame44
Testing 184 frame45
Testing 185 frame46
Testing 186 frame47
Testing 187 frame48
Testing 188 frame49
Testing 189 frame5
Testing 190 frame50
Testing 191 frame51
Testing 192 frame52
Testing 193 frame53
Testing 194 frame54
Testing 195 frame55
Testing 196 frame56
Testing 197 frame57
Testing 198 frame58
Testing 199 frame59
Testing 200 frame6
Testing 201 frame60
Testing 202 frame61
Testing 203 frame62
Testing 204 frame63
Testing 205 frame64
Testing 206 frame65
Testing 207 frame66
Testing 208 frame67
Testing 209 frame68
Testing 210 frame69
Testing 211 frame7
Testing 212 frame70
Testing 213 frame71
Testing 214 frame72
Testing 215 frame73
Testing 216 frame74
Testing 217 frame75
Testing 218 frame76
Testing 219 frame77
Testing 220 frame78
Testing 221 frame79
Testing 222 frame8
Testing 223 frame80
Testing 224 frame81
Testing 225 frame82
Testing 226 frame83
Testing 227 frame84
Testing 228 frame85
Testing 229 frame86
Testing 230 frame87
Testing 231 frame88
Testing 232 frame89
Testing 233 frame9
Testing 234 frame90
Testing 235 frame91
Testing 236 frame92
Testing 237 frame93
Testing 238 frame94
Testing 239 frame95
Testing 240 frame96
Testing 241 frame97
Testing 242 frame98
Testing 243 frame99
/content/Real-ESRGAN/results/frame0.jpg
/content/Real-ESRGAN/results/frame1.jpg
/content/Real-ESRGAN/results/frame2.jpg
/content/Real-ESRGAN/results/frame3.jpg
/content/Real-ESRGAN/results/frame4.jpg
/content/Real-ESRGAN/results/frame5.jpg
/content/Real-ESRGAN/results/frame6.jpg
/content/Real-ESRGAN/results/frame7.jpg
/content/Real-ESRGAN/results/frame8.jpg
/content/Real-ESRGAN/results/frame9.jpg
/content/Real-ESRGAN/results/frame10.jpg
/content/Real-ESRGAN/results/frame11.jpg
/content/Real-ESRGAN/results/frame12.jpg
/content/Real-ESRGAN/results/frame13.jpg
/content/Real-ESRGAN/results/frame14.jpg
/content/Real-ESRGAN/results/frame15.jpg
/content/Real-ESRGAN/results/frame16.jpg
/content/Real-ESRGAN/results/frame17.jpg
/content/Real-ESRGAN/results/frame18.jpg
/content/Real-ESRGAN/results/frame19.jpg
/content/Real-ESRGAN/results/frame20.jpg
/content/Real-ESRGAN/results/frame21.jpg
/content/Real-ESRGAN/results/frame22.jpg
/content/Real-ESRGAN/results/frame23.jpg
/content/Real-ESRGAN/results/frame24.jpg
/content/Real-ESRGAN/results/frame25.jpg
/content/Real-ESRGAN/results/frame26.jpg
/content/Real-ESRGAN/results/frame27.jpg
/content/Real-ESRGAN/results/frame28.jpg
/content/Real-ESRGAN/results/frame29.jpg
/content/Real-ESRGAN/results/frame30.jpg
/content/Real-ESRGAN/results/frame31.jpg
/content/Real-ESRGAN/results/frame32.jpg
/content/Real-ESRGAN/results/frame33.jpg
/content/Real-ESRGAN/results/frame34.jpg
/content/Real-ESRGAN/results/frame35.jpg
/content/Real-ESRGAN/results/frame36.jpg
/content/Real-ESRGAN/results/frame37.jpg
/content/Real-ESRGAN/results/frame38.jpg
/content/Real-ESRGAN/results/frame39.jpg
/content/Real-ESRGAN/results/frame40.jpg
/content/Real-ESRGAN/results/frame41.jpg
/content/Real-ESRGAN/results/frame42.jpg
/content/Real-ESRGAN/results/frame43.jpg
/content/Real-ESRGAN/results/frame44.jpg
/content/Real-ESRGAN/results/frame45.jpg
/content/Real-ESRGAN/results/frame46.jpg
/content/Real-ESRGAN/results/frame47.jpg
/content/Real-ESRGAN/results/frame48.jpg
/content/Real-ESRGAN/results/frame49.jpg
/content/Real-ESRGAN/results/frame50.jpg
/content/Real-ESRGAN/results/frame51.jpg
/content/Real-ESRGAN/results/frame52.jpg
/content/Real-ESRGAN/results/frame53.jpg
/content/Real-ESRGAN/results/frame54.jpg
/content/Real-ESRGAN/results/frame55.jpg
/content/Real-ESRGAN/results/frame56.jpg
/content/Real-ESRGAN/results/frame57.jpg
/content/Real-ESRGAN/results/frame58.jpg
/content/Real-ESRGAN/results/frame59.jpg
/content/Real-ESRGAN/results/frame60.jpg
/content/Real-ESRGAN/results/frame61.jpg
/content/Real-ESRGAN/results/frame62.jpg
/content/Real-ESRGAN/results/frame63.jpg
/content/Real-ESRGAN/results/frame64.jpg
/content/Real-ESRGAN/results/frame65.jpg
/content/Real-ESRGAN/results/frame66.jpg
/content/Real-ESRGAN/results/frame67.jpg
/content/Real-ESRGAN/results/frame68.jpg
/content/Real-ESRGAN/results/frame69.jpg
/content/Real-ESRGAN/results/frame70.jpg
/content/Real-ESRGAN/results/frame71.jpg
/content/Real-ESRGAN/results/frame72.jpg
/content/Real-ESRGAN/results/frame73.jpg
/content/Real-ESRGAN/results/frame74.jpg
/content/Real-ESRGAN/results/frame75.jpg
/content/Real-ESRGAN/results/frame76.jpg
/content/Real-ESRGAN/results/frame77.jpg
/content/Real-ESRGAN/results/frame78.jpg
/content/Real-ESRGAN/results/frame79.jpg
/content/Real-ESRGAN/results/frame80.jpg
/content/Real-ESRGAN/results/frame81.jpg
/content/Real-ESRGAN/results/frame82.jpg
/content/Real-ESRGAN/results/frame83.jpg
/content/Real-ESRGAN/results/frame84.jpg
/content/Real-ESRGAN/results/frame85.jpg
/content/Real-ESRGAN/results/frame86.jpg
/content/Real-ESRGAN/results/frame87.jpg
/content/Real-ESRGAN/results/frame88.jpg
/content/Real-ESRGAN/results/frame89.jpg
/content/Real-ESRGAN/results/frame90.jpg
/content/Real-ESRGAN/results/frame91.jpg
/content/Real-ESRGAN/results/frame92.jpg
/content/Real-ESRGAN/results/frame93.jpg
/content/Real-ESRGAN/results/frame94.jpg
/content/Real-ESRGAN/results/frame95.jpg
/content/Real-ESRGAN/results/frame96.jpg
/content/Real-ESRGAN/results/frame97.jpg
/content/Real-ESRGAN/results/frame98.jpg
/content/Real-ESRGAN/results/frame99.jpg
/content/Real-ESRGAN/results/frame100.jpg
/content/Real-ESRGAN/results/frame101.jpg
/content/Real-ESRGAN/results/frame102.jpg
/content/Real-ESRGAN/results/frame103.jpg
/content/Real-ESRGAN/results/frame104.jpg
/content/Real-ESRGAN/results/frame105.jpg
/content/Real-ESRGAN/results/frame106.jpg
/content/Real-ESRGAN/results/frame107.jpg
/content/Real-ESRGAN/results/frame108.jpg
/content/Real-ESRGAN/results/frame109.jpg
/content/Real-ESRGAN/results/frame110.jpg
/content/Real-ESRGAN/results/frame111.jpg
/content/Real-ESRGAN/results/frame112.jpg
/content/Real-ESRGAN/results/frame113.jpg
/content/Real-ESRGAN/results/frame114.jpg
/content/Real-ESRGAN/results/frame115.jpg
/content/Real-ESRGAN/results/frame116.jpg
/content/Real-ESRGAN/results/frame117.jpg
/content/Real-ESRGAN/results/frame118.jpg
/content/Real-ESRGAN/results/frame119.jpg
/content/Real-ESRGAN/results/frame120.jpg
/content/Real-ESRGAN/results/frame121.jpg
/content/Real-ESRGAN/results/frame122.jpg
/content/Real-ESRGAN/results/frame123.jpg
/content/Real-ESRGAN/results/frame124.jpg
/content/Real-ESRGAN/results/frame125.jpg
/content/Real-ESRGAN/results/frame126.jpg
/content/Real-ESRGAN/results/frame127.jpg
/content/Real-ESRGAN/results/frame128.jpg
/content/Real-ESRGAN/results/frame129.jpg
/content/Real-ESRGAN/results/frame130.jpg
/content/Real-ESRGAN/results/frame131.jpg
/content/Real-ESRGAN/results/frame132.jpg
/content/Real-ESRGAN/results/frame133.jpg
/content/Real-ESRGAN/results/frame134.jpg
/content/Real-ESRGAN/results/frame135.jpg
/content/Real-ESRGAN/results/frame136.jpg
/content/Real-ESRGAN/results/frame137.jpg
/content/Real-ESRGAN/results/frame138.jpg
/content/Real-ESRGAN/results/frame139.jpg
/content/Real-ESRGAN/results/frame140.jpg
/content/Real-ESRGAN/results/frame141.jpg
/content/Real-ESRGAN/results/frame142.jpg
/content/Real-ESRGAN/results/frame143.jpg
/content/Real-ESRGAN/results/frame144.jpg
/content/Real-ESRGAN/results/frame145.jpg
/content/Real-ESRGAN/results/frame146.jpg
/content/Real-ESRGAN/results/frame147.jpg
/content/Real-ESRGAN/results/frame148.jpg
/content/Real-ESRGAN/results/frame149.jpg
/content/Real-ESRGAN/results/frame150.jpg
/content/Real-ESRGAN/results/frame151.jpg
/content/Real-ESRGAN/results/frame152.jpg
/content/Real-ESRGAN/results/frame153.jpg
/content/Real-ESRGAN/results/frame154.jpg
/content/Real-ESRGAN/results/frame155.jpg
/content/Real-ESRGAN/results/frame156.jpg
/content/Real-ESRGAN/results/frame157.jpg
/content/Real-ESRGAN/results/frame158.jpg
/content/Real-ESRGAN/results/frame159.jpg
/content/Real-ESRGAN/results/frame160.jpg
/content/Real-ESRGAN/results/frame161.jpg
/content/Real-ESRGAN/results/frame162.jpg
/content/Real-ESRGAN/results/frame163.jpg
/content/Real-ESRGAN/results/frame164.jpg
/content/Real-ESRGAN/results/frame165.jpg
/content/Real-ESRGAN/results/frame166.jpg
/content/Real-ESRGAN/results/frame167.jpg
/content/Real-ESRGAN/results/frame168.jpg
/content/Real-ESRGAN/results/frame169.jpg
/content/Real-ESRGAN/results/frame170.jpg
/content/Real-ESRGAN/results/frame171.jpg
/content/Real-ESRGAN/results/frame172.jpg
/content/Real-ESRGAN/results/frame173.jpg
/content/Real-ESRGAN/results/frame174.jpg
/content/Real-ESRGAN/results/frame175.jpg
/content/Real-ESRGAN/results/frame176.jpg
/content/Real-ESRGAN/results/frame177.jpg
/content/Real-ESRGAN/results/frame178.jpg
/content/Real-ESRGAN/results/frame179.jpg
/content/Real-ESRGAN/results/frame180.jpg
/content/Real-ESRGAN/results/frame181.jpg
/content/Real-ESRGAN/results/frame182.jpg
/content/Real-ESRGAN/results/frame183.jpg
/content/Real-ESRGAN/results/frame184.jpg
/content/Real-ESRGAN/results/frame185.jpg
/content/Real-ESRGAN/results/frame186.jpg
/content/Real-ESRGAN/results/frame187.jpg
/content/Real-ESRGAN/results/frame188.jpg
/content/Real-ESRGAN/results/frame189.jpg
/content/Real-ESRGAN/results/frame190.jpg
/content/Real-ESRGAN/results/frame191.jpg
/content/Real-ESRGAN/results/frame192.jpg
/content/Real-ESRGAN/results/frame193.jpg
/content/Real-ESRGAN/results/frame194.jpg
/content/Real-ESRGAN/results/frame195.jpg
/content/Real-ESRGAN/results/frame196.jpg
/content/Real-ESRGAN/results/frame197.jpg
/content/Real-ESRGAN/results/frame198.jpg
/content/Real-ESRGAN/results/frame199.jpg
/content/Real-ESRGAN/results/frame200.jpg
/content/Real-ESRGAN/results/frame201.jpg
/content/Real-ESRGAN/results/frame202.jpg
/content/Real-ESRGAN/results/frame203.jpg
/content/Real-ESRGAN/results/frame204.jpg
/content/Real-ESRGAN/results/frame205.jpg
/content/Real-ESRGAN/results/frame206.jpg
/content/Real-ESRGAN/results/frame207.jpg
/content/Real-ESRGAN/results/frame208.jpg
/content/Real-ESRGAN/results/frame209.jpg
/content/Real-ESRGAN/results/frame210.jpg
/content/Real-ESRGAN/results/frame211.jpg
/content/Real-ESRGAN/results/frame212.jpg
/content/Real-ESRGAN/results/frame213.jpg
/content/Real-ESRGAN/results/frame214.jpg
/content/Real-ESRGAN/results/frame215.jpg
/content/Real-ESRGAN/results/frame216.jpg
/content/Real-ESRGAN/results/frame217.jpg
/content/Real-ESRGAN/results/frame218.jpg
/content/Real-ESRGAN/results/frame219.jpg
/content/Real-ESRGAN/results/frame220.jpg
/content/Real-ESRGAN/results/frame221.jpg
/content/Real-ESRGAN/results/frame222.jpg
/content/Real-ESRGAN/results/frame223.jpg
/content/Real-ESRGAN/results/frame224.jpg
/content/Real-ESRGAN/results/frame225.jpg
/content/Real-ESRGAN/results/frame226.jpg
/content/Real-ESRGAN/results/frame227.jpg
/content/Real-ESRGAN/results/frame228.jpg
/content/Real-ESRGAN/results/frame229.jpg
/content/Real-ESRGAN/results/frame230.jpg
/content/Real-ESRGAN/results/frame231.jpg
/content/Real-ESRGAN/results/frame232.jpg
/content/Real-ESRGAN/results/frame233.jpg
/content/Real-ESRGAN/results/frame234.jpg
/content/Real-ESRGAN/results/frame235.jpg
/content/Real-ESRGAN/results/frame236.jpg
/content/Real-ESRGAN/results/frame237.jpg
/content/Real-ESRGAN/results/frame238.jpg
/content/Real-ESRGAN/results/frame239.jpg
/content/Real-ESRGAN/results/frame240.jpg
/content/Real-ESRGAN/results/frame241.jpg
/content/Real-ESRGAN/results/frame242.jpg
/content/Real-ESRGAN/results/frame243.jpg
[INFO] 1 20220224_01_test_enhanced.avi
[INFO] 1 /content/Real-ESRGAN/results_videos/20220224_01_test_enhanced.avi
[MoviePy] >>>> Building video /content/Real-ESRGAN/results_mp4_videos_with_audio/20220224_01_test_enhanced_with_audio.mp4
[MoviePy] Writing audio in 20220224_01_test_enhanced_with_audioTEMP_MPY_wvf_snd.mp4
###Markdown
4. Download Results
* All your upscaled .mp4 files will be inside the results_mp4_videos folder (inside the Real-ESRGAN folder)
* You can right click on the needed file and download it from there.

5. Run this cell after the batch of videos has been upscaled (WARNING: deletes processed data)
* IMPORTANT: Read the comments inside the code section, because this DELETES previous frames and videos
###Code
#deletes frames from previous video
for f in os.listdir(upload_folder):
os.remove(os.path.join(upload_folder, f))
#deletes wav file from previous video
for f in os.listdir(wav_folder):
os.remove(os.path.join(wav_folder, f))
#deletes upscaled frames from previous video
for f in os.listdir(result_folder):
os.remove(os.path.join(result_folder, f))
#deletes all the videos you uploaded for upscaling
for f in os.listdir(video_folder):
os.remove(os.path.join(video_folder, f))
#clearing previous .avi files
for f in os.listdir(video_result_folder):
os.remove(os.path.join(video_result_folder, f))
#clearing .mp4 result files
for f in os.listdir(video_mp4_result_folder):
os.remove(os.path.join(video_mp4_result_folder, f))
#clearing .mp4 result with audio files
for f in os.listdir(video_mp4_result_with_audio_folder):
os.remove(os.path.join(video_mp4_result_with_audio_folder, f))
###Output
_____no_output_____ |
rutin/Untitled-Copy2.ipynb | ###Markdown
Parse all data
###Code
#create timematrix - timeslice:activity list
output4=[]
for j in hkozdata:
timematrix={}
for i in hkozdata[j]:
activity=i[:i.find('-')-1]
timeslice=i[i.find('-')+2:]
if timeslice not in timematrix:timematrix[timeslice]=[]
timematrix[timeslice].append(actidict[activity])
#create correct timeslice order to start day at 04:00
    parseorder=np.roll(np.sort(list(timematrix.keys())),-2)
#create output list, with shared timeslots
for x in range(3): #create 3 randomized person-instances
output=[]
for k in range(len(parseorder)):
helper=timematrix[parseorder[k]]
np.random.shuffle(helper)
output.append(helper[:3]) #max 3 activities within 90 minutes, but create 3 randomized persons
#create output CSV list: activity, duration, activity, duration, ...
output2=[]
fixed=90 # survey 90 min timeslices are fixed
for k in range(len(output)):
for z in range(len(output[k])):
output2.append(output[k][z])
output2.append(fixed/(len(output[k])))
output4.append(str(output2)[1:-1].replace(' ',''))
savedata=pd.DataFrame(output4)
savedata.columns=['day']
savedata.to_csv('hkoz.csv',index=False)
###Output
_____no_output_____ |
ipynb/GDSC_data_preprocessing.ipynb | ###Markdown
get 1815 genes
###Code
data = scio.loadmat(data_folder + 'Demo.mat')
genes = [str(item[0]) for item in data['genenames'].reshape(-1)]
###Output
_____no_output_____
###Markdown
expression DataFrame
###Code
gdsc_exp_df = pd.read_csv(data_folder + 'Cell_line_RMA_proc_basalExp.txt', sep='\t', index_col=0)
gdsc_exp_df = gdsc_exp_df.dropna(how='any')
gdsc_exp_df = gdsc_exp_df.drop('GENE_title', axis=1)
gdsc_exp_df = gdsc_exp_df.rename(columns=lambda x:x[5:])
gdsc_exp_df.head()
gdsc_exp_df.tail()
gdsc_exp_df = gdsc_exp_df.loc[genes]
###Output
_____no_output_____
###Markdown
mutation DataFrame
###Code
data = pd.read_csv(data_folder + 'WES_variants.csv', sep='\t')
cell_line_id_nan = []
cell_line_tissue_dic = {}
for name, group in data[['COSMIC_ID', 'Cancer Type']].groupby('COSMIC_ID'):
name = str(name)
if group['Cancer Type'].nunique() == 1:
cell_line_tissue_dic[name] = group['Cancer Type'].unique()[0]
else:
cell_line_tissue_dic[name] = 'nan'
cell_line_id_nan.append(name)
gdsc_mut_df = pd.DataFrame()
data = pd.read_csv(data_folder + 'WES_variants.csv', sep='\t')
for index, row in data.iterrows():
if int(index)%10000 == 0: print(index)
gdsc_mut_df.loc[row['Gene'], str(row['COSMIC_ID'])] = 1
gdsc_mut_df = gdsc_mut_df.fillna(value=0)
gdsc_mut_df.head()
gdsc_mut_df.tail()
gdsc_mut_df = gdsc_mut_df.loc[genes]
###Output
_____no_output_____
###Markdown
compare expression and mutation
###Code
cell_line_id_set = set(gdsc_mut_df.columns) & set(gdsc_exp_df.columns)
print (len(cell_line_id_set))
###Output
968
###Markdown
Drug Response DataFrame
###Code
gdsc_drug_response_df = pd.read_csv(data_folder + 'v17_fitted_dose_response.csv', sep='\t')
gdsc_drug_response_df.head()
drug_id_arr = pd.unique(gdsc_drug_response_df.DRUG_ID)
found_and_unfound_cell_line = {}
gdsc_dataset = {}
for drug_id in drug_id_arr:
found_and_unfound_cell_line[drug_id] = ([],[])
gdsc_dataset[drug_id] = {
'exp': pd.DataFrame(columns=genes),
'mut': pd.DataFrame(columns=genes),
'auc': pd.Series()
}
for index, value in gdsc_drug_response_df.loc[gdsc_drug_response_df.DRUG_ID == drug_id].COSMIC_ID.items():
value = str(value)
if value in cell_line_id_set:
found_and_unfound_cell_line[drug_id][0].append(value)
gdsc_dataset[drug_id]['exp'].loc[value] = gdsc_exp_df[value]
gdsc_dataset[drug_id]['mut'].loc[value] = gdsc_mut_df[value]
gdsc_dataset[drug_id]['auc'].loc[value] = gdsc_drug_response_df.loc[
(gdsc_drug_response_df['DRUG_ID']==drug_id) & (gdsc_drug_response_df['COSMIC_ID']==int(value))].AUC.item()
else:
found_and_unfound_cell_line[drug_id][1].append(value)
for key in gdsc_dataset.keys():
exp = gdsc_dataset[key]['exp']
mut = gdsc_dataset[key]['mut']
gdsc_dataset[key]['mut'] = mut[mut.columns[mut.sum()>4]]
mut_feature = list(mut.columns[mut.sum()>4])
thre = exp.var().quantile(0.2)
gdsc_dataset[key]['exp'] = exp[exp.columns[exp.var()>thre]]
for key in gdsc_dataset.keys():
exp = gdsc_dataset[key]['exp'].to_numpy().astype(np.float32)
mut = gdsc_dataset[key]['mut'].to_numpy().astype(np.float32)
exp = (exp - exp.mean(axis=0))/exp.std(axis=0)
gdsc_dataset[key]['X'] = np.concatenate([exp, mut], axis=1)
tmp = gdsc_dataset[key]['auc'].to_numpy().reshape(-1,1).astype(np.float32)
gdsc_dataset[key]['y'] = (tmp-tmp.mean())/tmp.std()
for key in gdsc_dataset.keys():
idx = list(gdsc_dataset[key]['auc'].argsort())
for k,v in gdsc_dataset[key].items():
if k in ['exp','mut','auc']:
gdsc_dataset[key][k] = v.iloc[idx]
else:
gdsc_dataset[key][k] = v[idx]
with open('shuhantao/gdsc_dataset.pickle', 'wb') as f:
pickle.dump(gdsc_dataset, f)
for key in gdsc_dataset.keys():
with open(f'./gdsc/data/{key}.pickle', 'wb') as f:
        pickle.dump(gdsc_dataset[key], f)  # save each drug's dataset under its own id
###Output
_____no_output_____ |
Benchmarking-Notebook.ipynb | ###Markdown
HDI Reduction Benchmarking Notebook Week of 2.1.2018
###Code
# python 2/3 compatibility
from __future__ import print_function
# numerical python
import numpy as np
# file management tools
import glob
import os
# good module for timing tests
import time
# plotting stuff
import matplotlib.pyplot as plt
%matplotlib inline
# ability to read/write fits files
from astropy.io import fits
# fancy image combination technique
from astropy.stats import sigma_clipping
###Output
_____no_output_____
###Markdown
Step 0: Make Definitions
###Code
#
# change these if you'd like to run this
#
indir = '/Volumes/A341/ObservingRunData/20180115/'
reducedir = indir+'reduced/'
###Output
_____no_output_____
###Markdown
Three definitions:
1. find_files
2. read_image
3. arrange_quadrants
###Code
def find_files(directory,img_type,filternum=-1):
#
# definition to go through HDI headers and get the images desired of a particular type
# and filter
#
#
# limitations:
# - not set up to sort by specific targets
# - will not enforce one of the filter wheel slots being empty
#
# grab all HDI files from the specified directory
files = [infile for infile in glob.glob(directory+'c*t*fits') if not os.path.isdir(infile)]
out_files = []
for f in files:
#print(f)
phdr = fits.getheader(f,0) # the 0 is needed to get the correct extension
# filter by desired_type
# if biases, don't care about filter
if (img_type == phdr['OBSTYPE']) \
& (img_type == 'BIAS'):
out_files.append(f)
# if flats or objects, the filter matters
if (img_type == phdr['OBSTYPE']) \
& (img_type != 'BIAS')\
& ( (str(filternum) == phdr['FILTER1']) | (str(filternum) == phdr['FILTER2']) ):
out_files.append(f)
return out_files
def read_image(infile,mosaic=True):
'''
read_image
----------
routine to read in an image and either return
mosaic==True
single mosaicked, non-overscan frame
mosaic==False
dictionary of numbered quadrants for overscan operations
(can later be combined with arrange_quadrants)
inputs
----------
infile : (string) filename to be read in
mosaic : (boolean, default=True) if True, returns a single data frame,
if False, returns a dictionary of the four quadrants
outputs
----------
data_quad : (dictionary or array) if dictionary, keys are [0,1,2,3], each 2056x2048
corresponding to each quadrant. if array, single 4122x4096 frame
overscan_quad : (dictionary) keys are [0,1,2,3], each 2056x2048, corresponding to each quadrant
dependents
----------
arrange_quadrants : definition to place quadrants in the correct configuration, below
'''
ofile = fits.open(infile)
# retreive header for the purposes of sizing the array
phdr = fits.getheader(infile,0)
saxis_x = int(phdr['CNAXIS2'])
saxis_y = int(phdr['CNAXIS1'])+int(phdr['OVRSCAN1'])
# overscan array is solely the overscan region of each quadrant
overscan_quad = {}
# median_array is the entire array, overscan included
data_quad = {}
for ext in range(1,5):
overscan_quad[ext-1] = ofile[ext].data[0:saxis_x,phdr['CNAXIS1']:saxis_y]
data_quad[ext-1] = ofile[ext].data[0:saxis_x,0:phdr['CNAXIS1']]
if mosaic:
data_mosaic = arrange_quadrants(data_quad)
return data_mosaic,overscan_quad
else:
return data_quad,overscan_quad
def arrange_quadrants(quadrants):
'''
arrange_quadrants
-----------------
rearrange HDI quadrants to be in the proper configuration
can be done with or without overscan, in theory.
inputs
--------
quadrants : (dictionary) dictionary of the four quadrants, with keys [0,1,2,3]
outputs
--------
data_array : (matrix)
'''
saxis_x,saxis_y = quadrants[0].shape
data_array = np.zeros([2*saxis_x,2*saxis_y])
# reposition different segments
data_array[0:saxis_x ,0:saxis_y] = quadrants[0] # lower left
data_array[0:saxis_x ,saxis_y:2*saxis_y] = quadrants[1][:,::-1] # lower right
data_array[saxis_x:2*saxis_x,0:saxis_y] = quadrants[2][::-1,:] # upper left
data_array[saxis_x:2*saxis_x,saxis_y:2*saxis_y] = quadrants[3][::-1,::-1] # upper right
# include the left-right flip
return data_array[:,::-1]
###Output
_____no_output_____
###Markdown
Biases.
###Code
# check out the biases
bias_files = find_files(indir,'BIAS')
print('N_BIASES: ',len(bias_files))
# test reading files
# use this number to increase statistical certainty
nreads = 200
t1 = time.time()
for imgnum in range(0,nreads):
ofile = fits.open(bias_files[0])
for ext in range(1,5):
setval = ofile[ext].data
# naively, I assumed closing the file would save memory allocation, but it doesn't.
#ofile.close()
print('time elapsed per read:',np.round((time.time()-t1)/float(nreads),3),' seconds')
###Output
time elapsed per read: 0.079 seconds
###Markdown
Benchmarks:
1. Mike's Computer+Samsung EVO: time elapsed per read: 0.086 seconds
###Code
ltimes = []
for nfiles in range(2,len(bias_files)):
use_bias = bias_files[0:nfiles]
t1 = time.time()
data = np.zeros([len(use_bias),4112,4096])
for imgnum,img in enumerate(use_bias):
data[imgnum],ovrscn = read_image(img)
# check the levels in each bias quadrant:
#data,ovrscn = read_image(img,mosaic=False)
#print([np.median(ovrscn[x])-np.median(datareg[x]) for x in range(0,4)])
masterbias = np.median(data,axis=0)
print('time elapsed to median ({0} files):'.format(nfiles),np.round(time.time()-t1,2),' seconds')
ltimes.append(np.round(time.time()-t1,2))
phdr = fits.getheader(bias_files[0],0)
fits.writeto(reducedir+'masterbias.fits',masterbias,phdr,clobber=True)
print('time elapsed total:',np.round(time.time()-t1,2),' seconds')
###Output
time elapsed to median (2 files): 1.85 seconds
time elapsed to median (3 files): 2.21 seconds
time elapsed to median (4 files): 3.15 seconds
time elapsed to median (5 files): 3.48 seconds
time elapsed to median (6 files): 4.57 seconds
time elapsed to median (7 files): 4.79 seconds
time elapsed to median (8 files): 5.69 seconds
time elapsed to median (9 files): 6.0 seconds
time elapsed to median (10 files): 7.01 seconds
time elapsed to median (11 files): 7.31 seconds
time elapsed to median (12 files): 8.34 seconds
time elapsed to median (13 files): 8.75 seconds
time elapsed to median (14 files): 9.56 seconds
time elapsed to median (15 files): 9.94 seconds
time elapsed total: 15.09 seconds
###Markdown
Benchmark runs: time elapsed total: 19.47 seconds
Plus see benchmarking plot below. Gray is Mike's computer benchmark.
###Code
filenums = np.arange(2,len(bias_files),1)
benchtimes = [2.81,2.77,3.67,4.16,5.07,5.51,6.53,7.16,8.34, 9.61, 10.03,11.62,13.38,14.08]
plt.figure(figsize=(5,3))
plt.plot(filenums,benchtimes,color='gray')
plt.plot(filenums,np.array(ltimes),color='red')
plt.title('Median Combine',size=16)
plt.ylabel('Time [s]',size=16)
plt.xlabel('nfiles',size=16)
plt.tight_layout()
###Output
_____no_output_____
###Markdown
same thing as above, but with astropy sigma_clip. DO NOT DO THIS STEP UNLESS YOU REALLLLLLLLY WANT TO KNOW

    ltimes = []
    for nfiles in range(2,len(bias_files)):
        use_bias = bias_files[0:nfiles]
        t1 = time.time()
        data = np.zeros([len(use_bias),4112,4096])
        for imgnum,img in enumerate(use_bias):
            data[imgnum],ovrscn = read_image(img)
        clipped_arr = sigma_clipping.sigma_clip(data,sigma=3,axis=0)
        print('time elapsed to sigmaclip ({0} files):'.format(nfiles),np.round(time.time()-t1,2),' seconds')
        ltimes.append(np.round(time.time()-t1,2))
    phdr = fits.getheader(bias_files[0],0)
    flat_arr = np.ma.median(clipped_arr,axis=0)
    masterbias = flat_arr.filled(0.)
    print('time elapsed to median ({0} files):'.format(nfiles),np.round(time.time()-t1,2),' seconds')
    fits.writeto(reducedir+'masterbias.fits',masterbias,phdr,clobber=True)
    print('time elapsed total:',np.round(time.time()-t1,2),' seconds')
###Code
filenums = np.arange(2,len(bias_files),1)
benchtimes = [44.5,51.8,61.3,72.4,82.3,91.1,111.5,122.5,135.46,162.1,196.3,217.9,214.6,215.7]
plt.figure(figsize=(5,3))
plt.plot(filenums,benchtimes,color='gray')
# if checking against data from above, uncomment this
#plt.plot(filenums,np.array(ltimes),color='red')
plt.title('Sigma Clip Combine',size=16)
plt.ylabel('Time [s]',size=16)
plt.xlabel('nfiles',size=16)
plt.tight_layout()
# test writing files
# MUST HAVE MASTERBIAS DEFINED: to be run sequentially with above cells
nreads = 10
t1 = time.time()
phdr = fits.getheader(bias_files[0],0)
for imgnum in range(0,nreads):
fits.writeto(reducedir+'masterbias.fits',masterbias,phdr,clobber=True)
print('time elapsed per write:',np.round((time.time()-t1)/float(nreads),3),' seconds')
###Output
time elapsed per write: 8.879 seconds
|
docs/task-offloading-experiment.ipynb | ###Markdown
Start Now
###Code
%%time
all_data.clear()
start_time = time.time()
for i in range(1000):
r = get_ret(baseurl)
time.sleep(random.randint(1, 5) / 1000)
all_data.append((time.time() - start_time, r["server"], r["throughput"], r["time"]))
lst_local = []
lst_server1 = []
lst_server2 = []
for item in all_data:
if item[1] == "127.0.0.1":
lst_local.append(item)
elif item[1] == "192.168.56.2":
lst_server1.append(item)
elif item[1] == "192.168.56.3":
lst_server2.append(item)
def get_delay_on_device(lst):
delay = [item[3] for item in lst]
return delay
def convert_lst_to_time_count(lst):
new_lst = []
for i in range(len(lst)):
new_lst.append((lst[i][0], i))
return new_lst
def convert_lst_to_time_throughput(lst):
new_lst = []
for item in lst:
new_lst.append((item[0], item[2]))
return new_lst
# Per-server request counts at successive time points: [(time, count), (time, count), ...]
local_time_req = convert_lst_to_time_count(lst_local)
server1_time_req = convert_lst_to_time_count(lst_server1)
server2_time_req = convert_lst_to_time_count(lst_server2)
# Tasks offloaded to different servers: per-request processing time on each machine [req_time1, req_time2, ...]
local_delay = get_delay_on_device(lst_local)
server1_delay = get_delay_on_device(lst_server1)
server2_delay = get_delay_on_device(lst_server2)
import numpy as np
print(np.mean(local_delay), np.mean(server1_delay), np.mean(server2_delay))
import matplotlib.pyplot as plt
import seaborn as sns
sns.set()
# Split the list of (time, count) pairs [(time, count), (time, count), ..] into two separate lists: times and request counts
def generate_plot_time_req(lst):
time = [item[0] for item in lst]
count = [item[1] for item in lst]
return time, count
# time, request counts on local device
local_time, local_count = generate_plot_time_req(local_time_req)
plt.plot(local_time, local_count, color='r', label=f"local device: {np.mean(local_delay):.4f}s")
plt.ylabel("request counts", fontsize=16)
plt.xlabel("time", fontsize=16)
# time, request counts on server1
server1_time, server1_count = generate_plot_time_req(server1_time_req)
plt.plot(server1_time, server1_count, color='b', label=f"server1: {np.mean(server1_delay):.4f}s")
# time, request counts on server2
server2_time, server2_count = generate_plot_time_req(server2_time_req)
plt.plot(server2_time, server2_count, color='y', label=f"server2: {np.mean(server2_delay):.4f}s")
# time, throughput on local device
local_throughout_time = convert_lst_to_time_throughput(lst_local)
throughput_time, throughput_cnt = generate_plot_time_req(local_throughout_time)
plt.plot(throughput_time, throughput_cnt, color='g', label="throughput", linestyle="--", linewidth=1.5)
plt.title("Request counts / avg delay on different devices", fontsize=16)
plt.legend(loc='upper left',scatterpoints=1,ncol=1, fontsize=15,numpoints = 1)
plt.savefig("request_counts_comparation.png")
plt.show()
# request count on local device, throughput on local device
# use local_req_time, local_throughput_time
plt.plot(local_time, local_count, color='r', label="request counts", linestyle=":", linewidth=2.5)
plt.plot(throughput_time, throughput_cnt, color='g', label="throughput", linestyle="--", linewidth=2.5)
plt.ylabel("request counts / throughout", fontsize=16)
plt.xlabel("time", fontsize=16)
plt.legend(loc='upper left',scatterpoints=1,ncol=1, fontsize=15,numpoints = 1)
plt.title("Request counts and throughput on local device", fontsize=16)
plt.savefig("local_request_counts_throughput.png")
plt.show()
###Output
_____no_output_____ |
Notebooks/0_Introduction/N1_Linear_Classification.ipynb | ###Markdown
This notebook can be run on mybinder: [](https://mybinder.org/v2/git/https%3A%2F%2Fgricad-gitlab.univ-grenoble-alpes.fr%2Fai-courses%2Fautonomous_systems_ml/HEAD?filepath=notebooks%2F1_introduction)
###Code
# Import modules
%matplotlib inline
import matplotlib
import numpy as np
import matplotlib.pyplot as plt
# Select random seed
random_state = 0
###Output
_____no_output_____
###Markdown
We use scikit-learn to generate a toy 2D data set (two features $x_1$ and $x_2$) for binary classification (two classes):
- each sample $(x_1,x_2)$ in the dataset is plotted as a 2D point where the two features $x_1$ and $x_2$ are displayed along the abscissa and ordinate axes respectively
- the corresponding class label $y$ is displayed as a color mark (e.g., yellow or purple)
###Code
from sklearn.datasets import make_classification
#X are the features (aka inputs, ...), y the labels (aka responses, targets, output...)
X,y = make_classification(n_features=2, n_redundant=0, n_informative=1, n_samples=150,
random_state=random_state, n_clusters_per_class=1)
# make the class labels y_i as +1 or -1
y[y==0]=-1
# display the dataset
plt.scatter(X[:,0], X[:,1], c=y)
plt.grid(True)
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
#plt.savefig("2d_binary_classif.pdf")
###Output
_____no_output_____
###Markdown
Then, a linear model is used to learn the classification function/rule.
###Code
from sklearn import linear_model
# Train a linear model, namely RidgeClassifier,
# this includes standard linear regression as a particular case (alpha=0)
model = linear_model.RidgeClassifier(alpha=0)
model.fit(X,y)
# Plot the decision functions
XX, YY = np.meshgrid(np.linspace(X[:,0].min(), X[:,0].max(),200),
np.linspace(X[:,1].min(), X[:,1].max(),200))
XY = np.vstack([XX.flatten(), YY.flatten()]).T
yp = model.predict(XY)
plt.contour(XX,YY,yp.reshape(XX.shape),[0])
plt.scatter(X[:,0], X[:,1], c=y)
plt.grid("on")
plt.xlabel('$x_1$')
plt.ylabel('$x_2$')
# What are the parameter values of the linear boundary equation x_2=a x_1 + b?
a = -model.coef_[0][0]/model.coef_[0][1]
b = model.intercept_[0]
print('boundary equation x_2={} x_1 + {}'.format(a,b))
###Output
boundary equation x_2=-0.5596406428840415 x_1 + -0.42432031894620276
###Markdown
Exercise
Change the number of informative features from `n_informative=2` to `n_informative=1` in the `make_classification()` procedure, regenerate the data set and fit the classification rule. Now interpret the new decision boundary: are the two variables of equal importance in predicting the class of the data?
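A possible sketch for this exercise (the names `X1`, `y1`, `model1` are illustrative; it reuses the imports and `random_state` defined above):
###Code
# Regenerate the data with a single informative feature and refit the linear classifier
X1, y1 = make_classification(n_features=2, n_redundant=0, n_informative=1, n_samples=150,
                             random_state=random_state, n_clusters_per_class=1)
y1[y1 == 0] = -1
model1 = linear_model.RidgeClassifier(alpha=0)
model1.fit(X1, y1)
# The coefficient magnitudes show how strongly each feature drives the decision;
# with a single informative feature, one coefficient should clearly dominate.
print('coefficients:', model1.coef_, 'intercept:', model1.intercept_)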
###Code
#get the documentation for the sklearn RidgeClassifier object
linear_model.RidgeClassifier?
###Output
_____no_output_____ |
connectivity_analysis/bsc/dn_bs_13_node_profile_calculation.ipynb | ###Markdown
Node profile dissimilarity between conditions
The aim of the node profile dissimilarity analysis is to find network nodes (ROIs) with the highest variability of module assignment between task conditions. Highly variable nodes have different node profiles depending on the task condition. The **node profile** for node $i$ is a single row / column of the group-level agreement matrix. The $j$th element of the node profile vector reflects the probability that nodes $i$ and $j$ will be placed inside the same community in a randomly selected individual network. Correlations between node profile vectors from different conditions are calculated to assess similarity between node profiles. The average over all six condition pairs (rew+ – rew-, rew+ – pun+, ...) is calculated as the mean similarity. Since raw connectivity values are hard to interpret, a z-score is calculated across all ROIs' mean similarity. These values are stored in the `dissim` vector. Lower `dissim` values indicate ROIs with the most between-condition variability in node profile. Dissimilarity significance is calculated using a Monte Carlo procedure. First, individual module assignment vectors are randomly shuffled. Then the same procedure is applied to calculate the null distribution of dissimilarity: agreement is calculated for individual conditions, then for each ROI node profile vectors are correlated across conditions yielding dissimilarity values, and these values are averaged and z-scored. This procedure is repeated `n_reps` times. The entire procedure is applied to a single gamma, independently of the other gammas. Finally, dissimilarity p-values are FDR corrected to reveal (for each gamma) the set of ROIs with significantly variable node profile.

> **Analysis type**: Multiple γ (calculations)
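As a toy illustration of the node-profile correlation step (hypothetical 4-node agreement matrices, not the study data):
###Code
import numpy as np

# Two hypothetical 4x4 agreement matrices (condition A and condition B)
d_a = np.array([[4, 3, 1, 0],
                [3, 4, 2, 1],
                [1, 2, 4, 3],
                [0, 1, 3, 4]], dtype=float)
d_b = np.array([[4, 1, 3, 2],
                [1, 4, 0, 1],
                [3, 0, 4, 3],
                [2, 1, 3, 4]], dtype=float)

# Node profile of node i is the i-th row; similarity is the correlation of the two profiles
node_profile_similarity = [np.corrcoef(d_a[i], d_b[i])[0, 1] for i in range(4)]
print(node_profile_similarity)  # low values flag nodes whose module membership changes between conditions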
###Code
import json
from itertools import combinations
from os.path import join
import numpy as np
import pandas as pd
from bct.algorithms.clustering import agreement
from dn_utils.networks import zscore_vector
from dn_utils.path import path
from statsmodels.stats.multitest import fdrcorrection
from tqdm.notebook import tqdm
atlas = "combined_roi"
gamma_range = np.arange(0.5, 2.5, 0.5)
path_corrmats = join(path["bsc"], "corrmats")
path_corrmats_unthr = join(path_corrmats, atlas, "unthr")
m = {}
for gamma in gamma_range:
gamma_str = str(float(gamma)).replace(".", "_")
path_corrmats_unthr_gamma = join(path_corrmats_unthr, f"gamma_{gamma_str}")
m[gamma] = np.load(join(path_corrmats_unthr_gamma, "m_aggregated.npy"))
# Load subject exclusion
df_exclusion = pd.read_csv(join(path["nistats"], "exclusion/exclusion.csv"),
index_col=0)
ok_index = df_exclusion["ok_all"]
# Load ROI table
df_roi = pd.read_csv(join(path_corrmats, atlas, "roi_table_filtered.csv"))
# Meta information about corrmats dimensions
with open(join(path_corrmats, atlas, "corrmats_aggregated.json"), "r") as f:
corrmats_meta = json.loads(f.read())
n_subjects_ok = sum(ok_index)
n_conditions = len(corrmats_meta["dim2"])
n_perr_sign = len(corrmats_meta["dim3"])
n_roi = len(corrmats_meta["dim4"])
def corr_rowwise(arr1, arr2):
"""Calculate correlations between corresponding rows of two arrays."""
n = len(arr1)
return np.diag(np.corrcoef(arr1, arr2)[:n, n:])
def shuffle_along_axis(a, axis):
"""Shuffle array along specific axis."""
idx = np.random.rand(*a.shape).argsort(axis=axis)
return np.take_along_axis(a,idx,axis=axis)
def calculate_dissimiliarty(m):
"""..."""
# Condition dependent agreements
d_rew_inc = agreement(m[:, 0, 0].T)
d_rew_dec = agreement(m[:, 0, 1].T)
d_pun_inc = agreement(m[:, 1, 0].T)
d_pun_dec = agreement(m[:, 1, 1].T)
for d in [d_rew_inc, d_rew_dec, d_pun_inc, d_pun_dec]:
np.fill_diagonal(d, n_subjects_ok)
# All combinations
dissim = np.zeros((m.shape[-1]))
for d1, d2 in combinations([d_rew_inc, d_rew_dec, d_pun_inc, d_pun_dec], 2):
dissim = dissim + corr_rowwise(d1, d2)
dissim = dissim / 6
# Between prediction errors
dissim_perr = np.zeros((m.shape[-1]))
dissim_perr = dissim_perr + corr_rowwise(d_rew_inc, d_rew_dec)
dissim_perr = dissim_perr + corr_rowwise(d_pun_inc, d_pun_dec)
dissim_perr = dissim_perr / 2
# Between conditions
dissim_con = np.zeros((m.shape[-1]))
dissim_con = dissim_con + corr_rowwise(d_rew_inc, d_pun_inc)
dissim_con = dissim_con + corr_rowwise(d_rew_dec, d_pun_dec)
dissim_con = dissim_con / 2
return dissim, dissim_perr, dissim_con
n_nulls = 10_000
np.random.seed(0)
for gamma in gamma_range:
    print(f"γ = {gamma}")
gamma_str = str(float(gamma)).replace(".", "_")
path_corrmats_unthr_gamma = join(path_corrmats_unthr, f"gamma_{gamma_str}")
mt = m[gamma][ok_index]
# Real dissimilarity values
dissim, dissim_perr, dissim_con = calculate_dissimiliarty(mt)
dissim_zscore = zscore_vector(dissim)
dissim_perr_zscore = zscore_vector(dissim_perr)
dissim_con_zscore = zscore_vector(dissim_con)
# Monte-Carlo distribution of dissimilarity z-scores
dissim_null = np.zeros((n_nulls, n_roi))
dissim_perr_null = np.zeros((n_nulls, n_roi))
dissim_con_null = np.zeros((n_nulls, n_roi))
for rep in tqdm(range(n_nulls)):
m_null = shuffle_along_axis(mt, 3)
tmp_dissim, tmp_dissim_perr, tmp_dissim_con = calculate_dissimiliarty(m_null)
dissim_null[rep] = zscore_vector(tmp_dissim)
dissim_perr_null[rep] = zscore_vector(tmp_dissim_perr)
dissim_con_null[rep] = zscore_vector(tmp_dissim_con)
# Calculate significance
pval = np.mean(dissim_zscore > dissim_null, axis=0)
pval_perr = np.mean(dissim_perr_zscore > dissim_perr_null, axis=0)
pval_con = np.mean(dissim_con_zscore > dissim_con_null, axis=0)
# Save values
df_dissim = df_roi.copy()
df_dissim[f"dissim_{gamma_str}"] = dissim
df_dissim[f"dissim_perr_{gamma_str}"] = dissim_perr
df_dissim[f"dissim_con_{gamma_str}"] = dissim_con
df_dissim[f"dissim_zscore_{gamma_str}"] = dissim_zscore
df_dissim[f"dissim_perr_zscore_{gamma_str}"] = dissim_perr_zscore
df_dissim[f"dissim_con_zscore_{gamma_str}"] = dissim_con_zscore
df_dissim[f"pval_unc_{gamma_str}"] = pval
df_dissim[f"pval_perr_unc_{gamma_str}"] = pval_perr
df_dissim[f"pval_con_unc_{gamma_str}"] = pval_con
df_dissim[f"pval_fdr_{gamma_str}"] = fdrcorrection(pval)[1]
df_dissim[f"pval_perr_fdr_{gamma_str}"] = fdrcorrection(pval_perr)[1]
df_dissim[f"pval_con_fdr_{gamma_str}"] = fdrcorrection(pval_con)[1]
df_dissim.to_csv(join(path_corrmats_unthr_gamma,
"node_profile_dissimilarity.csv"))
###Output
_____no_output_____ |
caffe2_initialize.ipynb | ###Markdown
Caffe2 Introduction
References
* https://caffe2.ai/docs/intro-tutorial.html
###Code
from caffe2.python import workspace, model_helper
import numpy as np
###Output
_____no_output_____
###Markdown
Blobs and Workspace, Tensors
###Code
# Create random tensor of three dimensions
x = np.random.rand(4, 3, 2)
print(x)
print(x.shape)
workspace.FeedBlob("my_x", x)
x2 = workspace.FetchBlob("my_x")
print(x2)
###Output
[[[0.88997241 0.12022102]
[0.88188169 0.18473214]
[0.71762122 0.11558636]]
[[0.22579086 0.67422141]
[0.48671107 0.94477542]
[0.54582604 0.93478675]]
[[0.72404967 0.82139218]
[0.0241157 0.19129789]
[0.15488574 0.01253563]]
[[0.36795231 0.81898088]
[0.92400248 0.66840576]
[0.31662866 0.38012366]]]
(4, 3, 2)
[[[0.88997241 0.12022102]
[0.88188169 0.18473214]
[0.71762122 0.11558636]]
[[0.22579086 0.67422141]
[0.48671107 0.94477542]
[0.54582604 0.93478675]]
[[0.72404967 0.82139218]
[0.0241157 0.19129789]
[0.15488574 0.01253563]]
[[0.36795231 0.81898088]
[0.92400248 0.66840576]
[0.31662866 0.38012366]]]
###Markdown
Nets and Operators
1. Model definition
###Code
# Create the input data
data = np.random.rand(16, 100).astype(np.float32)
# Create labels for the data as integers [0, 9].
label = (np.random.rand(16) * 10).astype(np.int32)
workspace.FeedBlob("data", data)
workspace.FeedBlob("label", label)
# Create model using a model helper
m = model_helper.ModelHelper(name="my first net")
weight = m.param_init_net.XavierFill([], 'fc_w', shape=[10, 100])
bias = m.param_init_net.ConstantFill([], 'fc_b', shape=[10, ])
fc_1 = m.net.FC(["data", "fc_w", "fc_b"], "fc1")
pred = m.net.Sigmoid(fc_1, "pred")
softmax, loss = m.net.SoftmaxWithLoss([pred, "label"], ["softmax", "loss"])
print(m.net.Proto())
###Output
name: "my first net"
op {
input: "data"
input: "fc_w"
input: "fc_b"
output: "fc1"
name: ""
type: "FC"
}
op {
input: "fc1"
output: "pred"
name: ""
type: "Sigmoid"
}
op {
input: "pred"
input: "label"
output: "softmax"
output: "loss"
name: ""
type: "SoftmaxWithLoss"
}
external_input: "data"
external_input: "fc_w"
external_input: "fc_b"
external_input: "label"
###Markdown
2. Executing
###Code
workspace.RunNetOnce(m.param_init_net)
workspace.CreateNet(m.net)
# Run 100 x 10 iterations
for _ in range(100):
data = np.random.rand(16, 100).astype(np.float32)
label = (np.random.rand(16) * 10).astype(np.int32)
workspace.FeedBlob("data", data)
workspace.FeedBlob("label", label)
workspace.RunNet(m.name, 10) # run for 10 times
print(workspace.FetchBlob("softmax"))
print(workspace.FetchBlob("loss"))
###Output
[[0.09339145 0.10642346 0.10014153 0.10325584 0.09183178 0.09534732
0.08891211 0.10233209 0.12209567 0.09626896]
[0.09899452 0.11280565 0.10002844 0.09607407 0.09911832 0.10619459
0.07705664 0.11145403 0.10433369 0.09394009]
[0.08286455 0.10791922 0.10609834 0.09013259 0.09178647 0.09616897
0.08506706 0.11604338 0.12522957 0.09868987]
[0.09134731 0.11518338 0.09047218 0.08984502 0.09977093 0.10848781
0.07927989 0.10156581 0.12222464 0.10182317]
[0.09768469 0.09847265 0.09167515 0.09170575 0.096403 0.09395219
0.09331473 0.10700324 0.1209162 0.10887244]
[0.08799846 0.12094094 0.09772131 0.09030902 0.09551021 0.10199878
0.08268747 0.1148183 0.11389599 0.09411947]
[0.11328786 0.11041225 0.09950201 0.09428996 0.09200225 0.09499804
0.08375614 0.10167009 0.11757088 0.09251044]
[0.08725245 0.11221404 0.10196627 0.09240996 0.09664204 0.1015223
0.08091603 0.1116172 0.10908668 0.10637297]
[0.0866763 0.10937949 0.09433612 0.10313403 0.09211618 0.09972405
0.09426679 0.11585984 0.11554804 0.08895916]
[0.09681463 0.10001516 0.09986553 0.0932567 0.09213073 0.10670245
0.08585839 0.10303358 0.12837052 0.09395228]
[0.10431464 0.1151255 0.10348144 0.09908683 0.09095726 0.09803206
0.07790362 0.09662888 0.1088801 0.10558958]
[0.10328569 0.1099343 0.08903392 0.10324942 0.09314542 0.09214558
0.09500015 0.10476547 0.11269218 0.09674791]
[0.10121156 0.09672479 0.10021012 0.08958615 0.09762245 0.11022357
0.08450419 0.10700191 0.10968003 0.10323509]
[0.09693552 0.11217226 0.09108547 0.09153257 0.09372533 0.09961286
0.08712576 0.10366333 0.12288415 0.10126287]
[0.08191606 0.1115969 0.11029428 0.09250261 0.09585159 0.10668804
0.07584092 0.11041553 0.11572818 0.09916577]
[0.08774085 0.11619036 0.09769543 0.10462706 0.09060304 0.09722544
0.08245538 0.09907018 0.11782952 0.10656259]]
2.292003
###Markdown
3. Backward pass
###Code
m.AddGradientOperators([loss])
workspace.RunNetOnce(m.param_init_net)
workspace.CreateNet(m.net, overwrite=True)
print(m.net.Proto())
# Run 100 x 10 iterations
for _ in range(100):
data = np.random.rand(16, 100).astype(np.float32)
label = (np.random.rand(16) * 10).astype(np.int32)
workspace.FeedBlob("data", data)
workspace.FeedBlob("label", label)
workspace.RunNet(m.name, 10) # run for 10 times
print(workspace.FetchBlob("softmax"))
print(workspace.FetchBlob("loss"))
###Output
[[0.07764349 0.11906692 0.08852771 0.10767992 0.09314898 0.08110189
0.1064648 0.09220832 0.09623636 0.13792174]
[0.09315608 0.1179883 0.09621266 0.11301854 0.08626252 0.07878942
0.10712076 0.09544351 0.09666031 0.11534771]
[0.09202261 0.11013588 0.0967541 0.11250325 0.10506929 0.08122766
0.10346481 0.09218597 0.08264202 0.12399448]
[0.08944649 0.11206648 0.10214503 0.09155405 0.09619832 0.08243875
0.10476203 0.09395029 0.09811552 0.129323 ]
[0.08586247 0.11873516 0.09534384 0.10848465 0.09346242 0.08816012
0.10321909 0.09627131 0.08954283 0.12091807]
[0.07998072 0.11492892 0.09025536 0.10086226 0.0940085 0.08645912
0.11578054 0.08664259 0.09488203 0.1362001 ]
[0.08601896 0.12322535 0.10309216 0.10297704 0.08651943 0.08301044
0.08535339 0.0948774 0.09138822 0.14353763]
[0.08541438 0.11164442 0.10693711 0.10046652 0.08668761 0.09766892
0.09125784 0.10340477 0.0853864 0.13113196]
[0.08984394 0.11833447 0.09547494 0.09715635 0.09566005 0.08537985
0.09069771 0.09648992 0.09715558 0.13380738]
[0.08903304 0.12455085 0.09045957 0.10409139 0.08645114 0.085023
0.09624769 0.08588018 0.10242204 0.13584113]
[0.09060349 0.11964466 0.09496283 0.10266292 0.10300025 0.09036767
0.10111754 0.09321727 0.09000772 0.11441553]
[0.10135321 0.12626009 0.09712467 0.09984553 0.08265459 0.07946642
0.09442706 0.10157681 0.10003713 0.11725444]
[0.08811106 0.1307206 0.10086539 0.09968071 0.09406433 0.08389304
0.09897583 0.0911096 0.08911277 0.12346673]
[0.08397318 0.11631957 0.0879937 0.10743459 0.09764356 0.08494614
0.1092471 0.07589111 0.09786444 0.13868651]
[0.09601338 0.13371843 0.10221256 0.09570286 0.09054451 0.07850944
0.09719571 0.09023482 0.09109879 0.12476942]
[0.09882221 0.11274704 0.0988393 0.08951053 0.10886348 0.08350162
0.0921585 0.09019405 0.09174726 0.13361602]]
2.3365507
|
jupyter/d2l-java/chapter_mlp/weight-decay.ipynb | ###Markdown
Weight Decay:label:`sec_weight_decay`Now that we have characterized the problem of overfitting,we can introduce some standard techniques for regularizing models.Recall that we can always mitigate overfittingby going out and collecting more training data.That can be costly, time consuming,or entirely out of our control,making it impossible in the short run.For now, we can assume that we already haveas much high-quality data as our resources permitand focus on regularization techniques.Recall that in ourpolynomial curve-fitting example(:numref:`sec_model_selection`)we could limit our model's capacitysimply by tweaking the degree of the fitted polynomial.Indeed, limiting the number of features is a popular technique to avoid overfitting.However, simply tossing aside featurescan be too blunt an instrument for the job.Sticking with the polynomial curve-fittingexample, consider what might happenwith high-dimensional inputs.The natural extensions of polynomialsto multivariate data are called *monomials*, which are simply products of powers of variables.The degree of a monomial is the sum of the powers.For example, $x_1^2 x_2$, and $x_3 x_5^2$ are both monomials of degree $3$.Note that the number of terms with degree $d$blows up rapidly as $d$ grows larger.Given $k$ variables, the number of monomials of degree $d$ is ${k - 1 + d} \choose {k - 1}$.Even small changes in degree, say from $2$ to $3$,dramatically increase the complexity of our model.Thus we often need a more fine-grained toolfor adjusting function complexity. Squared Norm Regularization*Weight decay* (commonly called *L2* regularization),might be the most widely-used techniquefor regularizing parametric machine learning models.The technique is motivated by the basic intuitionthat among all functions $f$,the function $f = 0$ (assigning the value $0$ to all inputs) is in some sense the *simplest*,and that we can measure the complexity of a function by its distance from zero.But how precisely should we measurethe distance between a function and zero?There is no single right answer.In fact, entire branches of mathematics,including parts of functional analysis and the theory of Banach spaces,are devoted to answering this issue.One simple interpretation might be to measure the complexity of a linear function$f(\mathbf{x}) = \mathbf{w}^\top \mathbf{x}$by some norm of its weight vector, e.g., $|| \mathbf{w} ||^2$.The most common method for ensuring a small weight vectoris to add its norm as a penalty termto the problem of minimizing the loss.Thus we replace our original objective,*minimize the prediction loss on the training labels*,with new objective,*minimize the sum of the prediction loss and the penalty term*.Now, if our weight vector grows too large,our learning algorithm might *focus* on minimizing the weight norm $|| \mathbf{w} ||^2$versus minimizing the training error.That is exactly what we want.To illustrate things in code, let us revive our previous examplefrom :numref:`sec_linear_regression` for linear regression.There, our loss was given by$$l(\mathbf{w}, b) = \frac{1}{n}\sum_{i=1}^n \frac{1}{2}\left(\mathbf{w}^\top \mathbf{x}^{(i)} + b - y^{(i)}\right)^2.$$Recall that $\mathbf{x}^{(i)}$ are the observations,$y^{(i)}$ are labels, and $(\mathbf{w}, b)$are the weight and bias parameters respectively.To penalize the size of the weight vector,we must somehow add $|| \mathbf{w} ||^2$ to the loss function,but how should the model trade off the standard loss for this new additive penalty?In practice, we characterize this tradeoffvia the 
*regularization constant* $\lambda > 0$, a non-negative hyperparameter that we fit using validation data:$$l(\mathbf{w}, b) + \frac{\lambda}{2} \|\mathbf{w}\|^2.$$For $\lambda = 0$, we recover our original loss function.For $\lambda > 0$, we restrict the size of $|| \mathbf{w} ||$.The astute reader might wonder why we work with the squarednorm and not the standard norm (i.e., the Euclidean distance).We do this for computational convenience.By squaring the L2 norm, we remove the square root, leaving the sum of squares of each component of the weight vector.This makes the derivative of the penalty easy to compute(the sum of derivatives equals the derivative of the sum).Moreover, you might ask why we work with the L2 norm in the first place and not, say, the L1 norm.In fact, other choices are valid and popular throughout statistics.While L2-regularized linear models constitutethe classic *ridge regression* algorithm,L1-regularized linear regressionis a similarly fundamental model in statistics(popularly known as *lasso regression*).More generally, the $\ell_2$ is just one among an infinite class of norms call p-norms,many of which you might encounter in the future.In general, for some number $p$, the $\ell_p$ norm is defined as$$\|\mathbf{w}\|_p^p := \sum_{i=1}^d |w_i|^p.$$One reason to work with the L2 normis that it places and outsize penaltyon large components of the weight vector.This biases our learning algorithm towards models that distribute weight evenly across a larger number of features.In practice, this might make them more robustto measurement error in a single variable.By contrast, L1 penalties lead to modelsthat concentrate weight on a small set of features,which may be desirable for other reasons. The stochastic gradient descent updates for L2-regularized regression follow:$$\begin{aligned}\mathbf{w} & \leftarrow \left(1- \eta\lambda \right) \mathbf{w} - \frac{\eta}{|\mathcal{B}|} \sum_{i \in \mathcal{B}} \mathbf{x}^{(i)} \left(\mathbf{w}^\top \mathbf{x}^{(i)} + b - y^{(i)}\right),\end{aligned}$$As before, we update $\mathbf{w}$ based on the amount by which our estimate differs from the observation.However, we also shrink the size of $\mathbf{w}$ towards $0$.That is why the method is sometimes called "weight decay":given the penalty term alone,our optimization algorithm *decays*the weight at each step of training.In contrast to feature selection,weight decay offers us a continuous mechanismfor adjusting the complexity of $f$.Small values of $\lambda$ correspond to unconstrained $\mathbf{w}$,whereas large values of $\lambda$ constrain $\mathbf{w}$ considerably.Whether we include a corresponding bias penalty $b^2$ can vary across implementations, and may vary across layers of a neural network.Often, we do not regularize the bias termof a network's output layer. High-Dimensional Linear RegressionWe can illustrate the benefits of weight decay over feature selectionthrough a simple synthetic example.First, we generate some data as before$$y = 0.05 + \sum_{i = 1}^d 0.01 x_i + \epsilon \text{ where }\epsilon \sim \mathcal{N}(0, 0.01).$$choosing our label to be a linear function of our inputs,corrupted by Gaussian noise with zero mean and variance 0.01.To make the effects of overfitting pronounced,we can increase the dimensionality of our problem to $d = 200$and work with a small training set containing only 20 examples.We will now import the relevant libraries for showing weight decay concept in action.
###Code
%mavenRepo snapshots https://oss.sonatype.org/content/repositories/snapshots/
%maven ai.djl:api:0.6.0-SNAPSHOT
%maven org.slf4j:slf4j-api:1.7.26
%maven org.slf4j:slf4j-simple:1.7.26
%maven net.java.dev.jna:jna:5.3.0
%maven ai.djl.mxnet:mxnet-engine:0.6.0-SNAPSHOT
%maven ai.djl.mxnet:mxnet-native-auto:1.7.0-a
%%loadFromPOM
<dependency>
<groupId>tech.tablesaw</groupId>
<artifactId>tablesaw-jsplot</artifactId>
<version>0.30.4</version>
</dependency>
%load ../utils/plot-utils.ipynb
%load ../utils/DataPoints.java
%load ../utils/Training.java
import ai.djl.*;
import ai.djl.engine.*;
import ai.djl.ndarray.*;
import ai.djl.ndarray.types.Shape;
import ai.djl.nn.*;
import ai.djl.nn.core.Linear;
import ai.djl.training.DefaultTrainingConfig;
import ai.djl.training.GradientCollector;
import ai.djl.training.Trainer;
import ai.djl.training.dataset.ArrayDataset;
import ai.djl.training.dataset.Batch;
import ai.djl.training.evaluator.Accuracy;
import ai.djl.training.listener.TrainingListener;
import ai.djl.training.loss.L2Loss;
import ai.djl.training.loss.Loss;
import ai.djl.training.optimizer.Optimizer;
import ai.djl.training.optimizer.learningrate.LearningRateTracker;
import org.apache.commons.lang3.ArrayUtils;
import tech.tablesaw.api.*;
import tech.tablesaw.plotly.api.*;
import tech.tablesaw.plotly.components.*;
import tech.tablesaw.plotly.Plot;
import tech.tablesaw.plotly.components.Figure;
int nTrain = 20;
int nTest = 100;
int numInputs = 200;
int batchSize = 5;
float trueB = 0.05f;
NDManager manager = NDManager.newBaseManager();
NDArray trueW = manager.ones(new Shape(numInputs, 1));
trueW = trueW.mul(0.01);
public ArrayDataset loadArray(NDArray features, NDArray labels, int batchSize, boolean shuffle) {
return new ArrayDataset.Builder()
.setData(features) // set the features
.optLabels(labels) // set the labels
.setSampling(batchSize, shuffle) // set the batch size and random sampling
.build();
}
DataPoints trainData = DataPoints.syntheticData(manager, trueW, trueB, nTrain);
ArrayDataset trainIter = loadArray(trainData.getX(), trainData.getY(), batchSize, true);
DataPoints testData = DataPoints.syntheticData(manager, trueW, trueB, nTest);
ArrayDataset testIter = loadArray(testData.getX(), testData.getY(), batchSize, false);
###Output
_____no_output_____
###Markdown
Implementation from Scratch
Next, we will implement weight decay from scratch, simply by adding the squared $\ell_2$ penalty to the original target function.

Initializing Model Parameters
First, we will define a function to randomly initialize our model parameters and run `attachGradient()` on each to allocate memory for the gradients we will calculate.
###Code
public class InitParams{
private NDArray w;
private NDArray b;
private NDList l;
public NDArray getW(){
return this.w;
}
public NDArray getB(){
return this.b;
}
public InitParams(){
NDManager manager = NDManager.newBaseManager();
w = manager.randomNormal(0, 1.0f, new Shape(numInputs, 1), DataType.FLOAT32, Device.defaultDevice());
b = manager.zeros(new Shape(1));
w.attachGradient();
b.attachGradient();
}
}
###Output
_____no_output_____
###Markdown
Defining $\ell_2$ Norm Penalty
Perhaps the most convenient way to implement this penalty is to square all terms in place and sum them up. We divide by $2$ by convention (when we take the derivative of a quadratic function, the $2$ and $1/2$ cancel out, ensuring that the expression for the update looks nice and simple).
###Code
public NDArray l2Penalty(NDArray w){
return ((w.pow(2)).sum()).div(2);
}
Loss l2loss = Loss.l2Loss();
###Output
_____no_output_____
###Markdown
Defining the Train and Test Functions
The following code fits a model on the training set and evaluates it on the test set. The linear network and the squared loss have not changed since the previous chapter, so we will just import them via `Training.linreg()` and `Training.squaredLoss()`. The only change here is that our loss now includes the penalty term.
###Code
float[] trainLoss;
float[] testLoss;
float[] epochCount;
public void train(float lambd){
InitParams initParams = new InitParams();
NDList params = new NDList(initParams.getW(), initParams.getB());
int numEpochs = 100;
float lr = 0.003f;
trainLoss = new float[(numEpochs/5)];
testLoss = new float[(numEpochs/5)];
epochCount = new float[(numEpochs/5)];
for(int epoch = 1; epoch <= numEpochs; epoch++){
for(Batch batch : trainIter.getData(manager)){
NDArray X = batch.getData().head();
NDArray y = batch.getLabels().head();
// Attach Gradients
for(NDArray param : params) {
param.attachGradient();
}
NDArray w = params.get(0);
NDArray b = params.get(1);
try (GradientCollector gc = Engine.getInstance().newGradientCollector()) {
// Minibatch loss in X and y
NDArray l = Training.squaredLoss(Training.linreg(X, w, b), y).add(l2Penalty(w).mul(lambd));
gc.backward(l); // Compute gradient on l with respect to w and b
Training.sgd(params, lr, batchSize); // Update parameters using their gradient
}
batch.close();
}
if(epoch % 5 == 0){
NDArray testL = Training.squaredLoss(Training.linreg(testData.getX(), params.get(0), params.get(1)), testData.getY());
NDArray trainL = Training.squaredLoss(Training.linreg(trainData.getX(), params.get(0), params.get(1)), trainData.getY());
epochCount[epoch/5 - 1] = epoch;
trainLoss[epoch/5 -1] = trainL.mean().getFloat();
testLoss[epoch/5 -1] = testL.mean().getFloat();
}
}
System.out.println("l1 norm of w: " + params.get(0).abs().sum());
}
###Output
_____no_output_____
###Markdown
Training without Regularization
We now run this code with `lambd = 0`, disabling weight decay. Note that we overfit badly, decreasing the training error but not the test error---a textbook case of overfitting.
###Code
train(0f);
String[] lossLabel = new String[trainLoss.length + testLoss.length];
Arrays.fill(lossLabel, 0, testLoss.length, "test");
Arrays.fill(lossLabel, testLoss.length, trainLoss.length + testLoss.length, "train");
Table data = Table.create("Data").addColumns(
FloatColumn.create("epochCount", ArrayUtils.addAll(epochCount, epochCount)),
FloatColumn.create("loss", ArrayUtils.addAll(testLoss, trainLoss)),
StringColumn.create("lossLabel", lossLabel)
);
render(LinePlot.create("", data, "epochCount", "loss", "lossLabel"),"text/html");
###Output
_____no_output_____
###Markdown
Using Weight Decay
Below, we run with substantial weight decay. Note that the training error increases but the test error decreases. This is precisely the effect we expect from regularization. As an exercise, you might want to check that the $\ell_2$ norm of the weights $\mathbf{w}$ has actually decreased.
###Code
// calling training with weight decay lambda = 3.0
train(3f);
String[] lossLabel = new String[trainLoss.length + testLoss.length];
Arrays.fill(lossLabel, 0, testLoss.length, "test");
Arrays.fill(lossLabel, testLoss.length, trainLoss.length + testLoss.length, "train");
Table data = Table.create("Data").addColumns(
FloatColumn.create("epochCount", ArrayUtils.addAll(epochCount, epochCount)),
FloatColumn.create("loss", ArrayUtils.addAll(testLoss, trainLoss)),
StringColumn.create("lossLabel", lossLabel)
);
render(LinePlot.create("", data, "epochCount", "loss", "lossLabel"),"text/html");
###Output
_____no_output_____
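###Markdown
For the exercise above, the check could look like the sketch below. It uses a freshly sampled vector as a stand-in, since `params` is local to `train`; in practice you would return or log `params.get(0)` from `train`.
###Code
// Sketch: l2 norm of a weight vector (random stand-in for the trained w)
NDArray wCheck = manager.randomNormal(0, 1.0f, new Shape(numInputs, 1), DataType.FLOAT32, Device.defaultDevice());
NDArray l2Norm = wCheck.pow(2).sum().sqrt();
System.out.println("l2 norm of w: " + l2Norm);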
###Markdown
Concise Implementation
Because weight decay is ubiquitous in neural network optimization, DJL makes it especially convenient, integrating weight decay into the optimization algorithm itself for easy use in combination with any loss function. Moreover, this integration serves a computational benefit, allowing implementation tricks to add weight decay to the algorithm without any additional computational overhead, since the weight decay portion of the update depends only on the current value of each parameter, and the optimizer must touch each parameter once anyway. In the following code, we specify the weight decay hyperparameter directly through `wd` when instantiating our `Trainer`. By default, DJL decays both weights and biases simultaneously.
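Note that in the `train_djl` function below, `wd` is still folded into the loss by hand via `l2Penalty`. A sketch of passing it to the optimizer instead is shown here; it assumes the DJL builder method `optWeightDecays(float)` (an assumption about the API, not taken from this notebook):
###Code
// Assumed API: optWeightDecays(float) applies the weight decay inside the SGD update itself
LearningRateTracker lrtWd = LearningRateTracker.fixedLearningRate(0.003f);
Optimizer sgdWithWd = Optimizer.sgd()
        .setLearningRateTracker(lrtWd)
        .optWeightDecays(3f)   // weight decay strength (lambda); illustrative value
        .build();
DefaultTrainingConfig configWd = new DefaultTrainingConfig(Loss.l2Loss())
        .optOptimizer(sgdWithWd)
        .addTrainingListeners(TrainingListener.Defaults.logging());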
###Code
public void train_djl(float wd){
InitParams initParams = new InitParams();
NDList params = new NDList(initParams.getW(), initParams.getB());
int numEpochs = 100;
float lr = 0.003f;
trainLoss = new float[(numEpochs/5)];
testLoss = new float[(numEpochs/5)];
epochCount = new float[(numEpochs/5)];
LearningRateTracker lrt = LearningRateTracker.fixedLearningRate(lr);
Optimizer sgd = Optimizer.sgd().setLearningRateTracker(lrt).build();
DefaultTrainingConfig config = new DefaultTrainingConfig(l2loss)
.optOptimizer(sgd) // Optimizer (loss function)
.addEvaluator(new Accuracy()) // Model Accuracy
.addEvaluator(l2loss)
.addTrainingListeners(TrainingListener.Defaults.logging()); // Logging
Model model = Model.newInstance("mlp");
SequentialBlock net = new SequentialBlock();
Linear linearBlock = Linear.builder().optBias(true).setOutChannels(1).build();
net.add(linearBlock);
model.setBlock(net);
Trainer trainer = model.newTrainer(config);
trainer.initialize(new Shape(batchSize, 2));
for(int epoch = 1; epoch <= numEpochs; epoch++){
for(Batch batch : trainer.iterateDataset(trainIter)){
NDArray X = batch.getData().head();
NDArray y = batch.getLabels().head();
// Attach Gradients
for (NDArray param : params) {
param.attachGradient();
}
NDArray w = params.get(0);
NDArray b = params.get(1);
try (GradientCollector gc = Engine.getInstance().newGradientCollector()) {
// Minibatch loss in X and y
NDArray l = Training.squaredLoss(Training.linreg(X, w, b), y).add(l2Penalty(w).mul(wd));
gc.backward(l); // Compute gradient on l with respect to w and b
Training.sgd(params, lr, batchSize); // Update parameters using their gradient
}
batch.close();
}
if(epoch % 5 == 0){
NDArray testL = Training.squaredLoss(Training.linreg(testData.getX(), params.get(0), params.get(1)), testData.getY());
NDArray trainL = Training.squaredLoss(Training.linreg(trainData.getX(), params.get(0), params.get(1)), trainData.getY());
epochCount[epoch/5 - 1] = epoch;
trainLoss[epoch/5 -1] = trainL.mean().getFloat();
testLoss[epoch/5 -1] = testL.mean().getFloat();
}
}
System.out.println("l1 norm of w: " + params.get(0).abs().sum());
}
###Output
_____no_output_____
###Markdown
The plots look identical to those when we implemented weight decay from scratch. However, they run appreciably faster and are easier to implement, a benefit that will become more pronounced for large problems.
###Code
train_djl(0);
String[] lossLabel = new String[trainLoss.length + testLoss.length];
Arrays.fill(lossLabel, 0, testLoss.length, "test");
Arrays.fill(lossLabel, testLoss.length, trainLoss.length + testLoss.length, "train");
Table data = Table.create("Data").addColumns(
FloatColumn.create("epochCount", ArrayUtils.addAll(epochCount, epochCount)),
FloatColumn.create("loss", ArrayUtils.addAll(testLoss, trainLoss)),
StringColumn.create("lossLabel", lossLabel)
);
render(LinePlot.create("", data, "epochCount", "loss", "lossLabel"),"text/html");
###Output
_____no_output_____ |
Academy awards analysis using SQL.ipynb | ###Markdown
Exploration of Academy awards using SQL
###Code
import pandas as pd
df=pd.read_csv("academy_awards.csv",encoding='ISO-8859-1')
df.head()
df["Year"]=df["Year"].str[0:4]
df["Year"]=df["Year"].astype("int64")
later_than_2000=df[df["Year"]>2000]
award_categories=["Actor -- Leading Role", "Actor -- Supporting Role", "Actress -- Leading Role",
"Actress -- Supporting Role"]
nominations=later_than_2000[later_than_2000["Category"].isin(award_categories)]
replace_dict={"NO":0,"YES":1}
nominations["Won?"]=nominations["Won?"].map(replace_dict)
nominations["Won"]=nominations["Won?"]
list_drop=["Won?","Unnamed: 5", "Unnamed: 6","Unnamed: 7","Unnamed: 8","Unnamed: 9","Unnamed: 10"]
final_nominations=nominations.drop(list_drop,axis=1)
additional_info_one=final_nominations["Additional Info"].str.rstrip("'}")
additional_info_two=additional_info_one.str.split("{'")
movie_names=additional_info_two.str[0]
characters=additional_info_two.str[1]
final_nominations["Movie"]=movie_names
final_nominations["Character"]=characters
final_nominations=final_nominations.drop("Additional Info",axis=1)
import sqlite3
conn = sqlite3.connect("nominations.db")
final_nominations.to_sql("nominations", conn, index=False)
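###Output
_____no_output_____
###Markdown
With the table written to SQLite we can explore it with SQL. A minimal sketch, assuming the column names (`Year`, `Category`, `Nominee`, `Won`) produced by the DataFrame above:
###Code
# Sketch: read a few award winners back out of SQLite with a SQL query
winners = pd.read_sql("SELECT Year, Category, Nominee FROM nominations WHERE Won = 1 LIMIT 10;", conn)
winners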
###Output
_____no_output_____ |
sphinx/scikit-intro/source/plot-bar.ipynb | ###Markdown
Bar Plot
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn')
###Output
_____no_output_____
###Markdown
Basic
###Code
import numpy as np
import pandas as pd
np.random.seed(37)
s = pd.Series(np.random.randint(0, 11, size=10))
fig, ax = plt.subplots(figsize=(10, 3))
_ = s.plot(kind='bar', ax=ax)
_ = ax.set_title('Basic bar plot')
###Output
_____no_output_____
###Markdown
Coloring barsColoring bars is controlled by the `color` argument, which expects an array of [colors](https://matplotlib.org/2.0.2/api/colors_api.html).
###Code
s = pd.Series(np.random.randint(-10, 11, size=10))
fig, ax = plt.subplots(figsize=(10, 3))
_ = s.plot(kind='bar', ax=ax, color=(s > 0).map({True: 'b', False: 'r'}))
_ = ax.set_title('Bar plot, color bars')
###Output
_____no_output_____
###Markdown
Labeling barsLabeling or annotating bars with counts or percentages relies on access to each bar's position and height, either via the axes' `patches` (aka `rectangles`) or, as in the example below, directly from the plotted values.
###Code
s = pd.Series(np.random.randint(-5, 6, size=20))
fig, ax = plt.subplots(figsize=(10, 3))
_ = s.plot(kind='bar', ax=ax, color=(s > 0).map({True: 'b', False: 'r'}))
_ = ax.set_title('Bar plot, annotate bars')
for i, v in enumerate(s.values):
params = {
'x': i,
'y': v if v >= 0 else v -1.0,
's': v,
'horizontalalignment': 'center',
'verticalalignment': 'bottom',
'fontdict': {
'fontweight': 500,
'size': 12
}
}
_ = ax.text(**params)
# increase the y min and max space of the graph
y_min, y_max = ax.get_ylim()
_ = ax.set_ylim(y_min - 1.0, y_max + 1.0)
###Output
_____no_output_____
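###Markdown
The same annotation can also be driven from `ax.patches`: each bar is a `Rectangle` whose position and height can be read back, so the original values need not be kept around. A minimal sketch reusing the series `s` from the cell above:
###Code
fig, ax = plt.subplots(figsize=(10, 3))
_ = s.plot(kind='bar', ax=ax, color=(s > 0).map({True: 'b', False: 'r'}))
_ = ax.set_title('Bar plot, annotate bars via patches')

for rect in ax.patches:
    height = rect.get_height()
    y = height if height >= 0 else height - 1.0
    _ = ax.text(rect.get_x() + rect.get_width() / 2, y, f'{height:.0f}',
                horizontalalignment='center', verticalalignment='bottom')
###Output
_____no_output_____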
###Markdown
Hiding x-axis labelsSometimes, plotting every label on the x-axis will result in collision of labels.
###Code
s = pd.Series(np.random.randint(1, 11, size=100))
fig, ax = plt.subplots(figsize=(10, 3))
_ = s.plot(kind='bar', ax=ax)
_ = ax.set_title('Bar plot, label collision')
###Output
_____no_output_____
###Markdown
You may show only every `n-th` label by setting all other labels' visibility to `False`.
###Code
fig, ax = plt.subplots(figsize=(10, 3))
_ = s.plot(kind='bar', ax=ax)
_ = ax.set_title('Bar plot, show every n-th x-axis label')
n_th = 5
for index, label in enumerate(ax.xaxis.get_ticklabels()):
if index % n_th != 0:
label.set_visible(False)
###Output
_____no_output_____
###Markdown
Stacked bar
###Code
n = 25
labels = [f'y{i}' for i in range(n)]
columns = [f'x{i}' for i in range(n)]
df = pd.DataFrame(np.random.randint(0, 11, size=(n, n)), index=labels, columns=columns)
import seaborn as sns
fig, ax = plt.subplots(figsize=(10, 3))
colors = sns.color_palette('hls', df.shape[0])
prev = []
for color, label in zip(colors, labels):
if len(prev) == 0:
ax.bar(columns, df.loc[label], color=color, label=label)
else:
s = df.loc[prev].sum()
ax.bar(columns, df.loc[label], color=color, label=label, bottom=s)
prev.append(label)
_ = ax.legend(bbox_to_anchor=(1, 1), loc='upper left', ncol=5)
_ = ax.set_title('Stacked bar')
###Output
_____no_output_____
###Markdown
Stacked bar, normalizedA stacked bar plot normalized to percentages is achieved by transforming the column values into proportions: each column is divided by its sum, so every column then sums to 1.
###Code
fig, ax = plt.subplots(figsize=(10, 3))
colors = sns.color_palette('hls', df.shape[0])
p_df = df / df.sum()
prev = []
for color, label in zip(colors, labels):
if len(prev) == 0:
ax.bar(columns, p_df.loc[label], color=color, label=label)
else:
s = p_df.loc[prev].sum()
ax.bar(columns, p_df.loc[label], color=color, label=label, bottom=s)
prev.append(label)
_ = ax.legend(bbox_to_anchor=(1, 1), loc='upper left', ncol=5)
_ = ax.set_title('Stacked bar, normalized')
###Output
_____no_output_____
###Markdown
Bar, multiple series
###Code
n = 5
m = 10
labels = [f'y{i}' for i in range(n)]
columns = [f'x{i}' for i in range(m)]
df = pd.DataFrame(np.random.randint(0, 11, size=(n, m)), index=labels, columns=columns)
fig, ax = plt.subplots(figsize=(15, 3))
colors = sns.color_palette('hls', df.shape[0])
df.plot(kind='bar', color=colors, ax=ax)
_ = ax.legend(bbox_to_anchor=(0, -0.15), loc='upper left', ncol=5)
_ = ax.set_title('Stacked bar, series')
###Output
_____no_output_____ |
Untitled91.ipynb | ###Markdown
List
###Code
l=[1,2,3,4,5]
a=l.append(3)
print(l)
l=[1,2,3,6,8]
b=l.pop()
print(l)
l=[1,2,3,4,5]
c=l.insert(2,8)
print(l)
l=[1,2,3,4,56]
d=l.reverse()
print(l)
l=[1,5,7,8,9,1,2]
e=l.sort()
print(l)
###Output
[1, 1, 2, 5, 7, 8, 9]
###Markdown
Dictionary
###Code
d= {"1": "hi","2": "hello","3": 1964}
d.clear()
print(d)
d = {"name": "Ford","color": "green","year": 1964}
x = d.copy()
print(x)
car = {"brand": "Ford","model": "Mustang","year": 1964}
car.pop("model")
print(car)
car = {"brand": "Ford","model": "Mustang","year": 1964}
car.popitem()
print(car)
car = {"brand": "Ford","model": "Mustang","year": 1964}
car.update({"color": "White"})
print(car)
###Output
{'brand': 'Ford', 'model': 'Mustang', 'year': 1964, 'color': 'White'}
###Markdown
Set
###Code
s = {"apple", "banana", "cherry"}
s.add("orange")
print(s)
s = {"apple", "banana", "cherry"}
s.clear()
print(s)
s = {"apple", "banana", "cherry"}
b=s.copy()
print(b)
s = {"apple", "banana", "cherry"}
s.pop()
print(s)
s = {"apple", "banana", "cherry"}
s.remove("apple")
print(s)
###Output
{'cherry', 'banana'}
###Markdown
Tuple
###Code
t= (1, 3, 7, 8, 7, 5, 4, 6, 8, 5)
x =t.count(5)
print(x)
t= (1, 3, 7, 8, 7, 5, 4, 6, 8, 5)
x =t.index(5)
print(x)
###Output
5
###Markdown
Strings
###Code
s = "hello, and welcome to my world."
x = s.capitalize()
print (x)
s = "hello, and welcome to my world."
x = s.upper()
print (x)
s = "hello, and welcome to my world."
x = s.lower()
print (x)
s = "hello, and welcome to my world."
x = s.find("hello")
print (x)
s = "hello, and welcome to my world."
x = s.replace("hello","hi")
print (x)
###Output
hi, and welcome to my world.
|
.ipynb_checkpoints/evaluate1-checkpoint.ipynb | ###Markdown
Evaluation results from the full notebook. First run: mean pixel accuracy 0.93686196970385172, mean accuracy 0.88282679428816835, mean IoU 0.82205035370678359, mean frequency weighted IU 0.89157518604189323. Second run: mean pixel accuracy 0.89442467504693557, mean accuracy 0.73916047278027552, mean IoU 0.6501638034247158, mean frequency weighted IU 0.82468454034802852.
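The helper `pixel_accuracy` (and the lists `gt_img_list` / `pred_img_list`) are defined earlier in the full notebook and are missing from this checkpoint fragment. The sketch below shows what such a metric typically looks like for binary masks; it is an assumption, not the notebook's own definition.
###Code
import numpy as np

def pixel_accuracy(pred_mask, gt_mask):
    """Fraction of pixels where the predicted binary mask matches the ground truth.
    Sketch only; the original notebook's implementation may differ."""
    pred_mask = np.asarray(pred_mask)
    gt_mask = np.asarray(gt_mask)
    return float((pred_mask == gt_mask).sum()) / gt_mask.size
###Output
_____no_output_____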
###Code
import cv2
import matplotlib.pyplot as plt

mean_pixel_acc  # computed earlier in the full notebook

# Load one ground-truth / prediction pair as grayscale (the image lists are defined earlier in the notebook)
gt_image = cv2.imread(gt_img_list[11], 0)
plt.imshow(gt_image)
plt.pause(2)
prd_image = cv2.imread(pred_img_list[11], 0)
plt.imshow(prd_image)

# Blur and Otsu-threshold both images into binary {0, 1} masks
blur_gt = cv2.GaussianBlur(gt_image, (5, 5), 0)
(t, label_image) = cv2.threshold(blur_gt, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
blur_pr = cv2.GaussianBlur(prd_image, (5, 5), 0)
(t, pred_image) = cv2.threshold(blur_pr, 0, 1, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Pixel accuracy of the predicted mask against the ground-truth mask
pix_acc = pixel_accuracy(pred_image, label_image)
###Output
_____no_output_____ |
content/posts/Analyzing the Next Decade of Earth Close-Approaching Objects with nasapy.ipynb | ###Markdown
In this example, we will walk through a possible use case of the `nasapy` library by extracting the next 10 years of close-approaching objects to Earth identified by NASA's Jet Propulsion Laboratory's Small-Body Database.Before diving in, import the packages we will use to extract and analyze the data. The data analysis library [pandas](https://pandas.pydata.org/) will be used to wrangle the data, while [seaborn](https://seaborn.pydata.org/) is used for plotting the data. The magic command [`%matplotlib inline`](https://ipython.readthedocs.io/en/stable/interactive/plotting.htmlid1) is loaded to display the generated plots.
###Code
import nasapy
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
###Output
_____no_output_____
###Markdown
The `close_approach` method of the `nasapy` library allows one to access the JPL SBDB to extract data related to known meteoroids and asteroids within proximity to Earth. Setting the parameter `return_df=True` automatically coerces the returned JSON data into a pandas DataFrame. After extracting the data, we transform several of the variables into `float` type.
###Code
ca = nasapy.close_approach(date_min='2020-01-01', date_max='2029-12-31', return_df=True)
ca['dist'] = ca['dist'].astype(float)
ca['dist_min'] = ca['dist_min'].astype(float)
ca['dist_max'] = ca['dist_max'].astype(float)
###Output
_____no_output_____
###Markdown
The `dist` column of the returned data describes the nominal approach distance of the object in astronomical units (AU). An [astronomical unit](https://en.wikipedia.org/wiki/Astronomical_unit), or AU is roughly the distance of the Earth to the Sun, approximately 92,955,807 miles or 149,598,000 kilometers. Using the [`.describe`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.describe.html) method, we can display descriptive statistics that summarize the data.
###Code
ca['dist'].describe()
###Output
_____no_output_____
###Markdown
We see the mean approach distance in AUs is approximately 0.031, which we can transform into miles:
###Code
au_miles = 92955807.26743
ca['dist'].describe()['mean'] * au_miles
###Output
_____no_output_____
###Markdown
Thus the average distance of the approaching objects to Earth over the next decade is about 2.86 million miles, which is more than 10 times the distance from the Earth to the Moon (238,900 miles).What about the closest approaching object to Earth within the next ten years? Using the [`.loc`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.loc.html) method, we can find the object with the closest approaching distance.
###Code
ca.loc[ca['dist'] == ca['dist'].min()]
###Output
_____no_output_____
###Markdown
The closest approaching known object is expected to approach Earth near the end of the decade, on April 13, 2029, at a distance of 0.00023 AU. Transforming the astronomical units into miles, we can get a better sense of the approach distance of the object.
###Code
print('Distance: ' + str(au_miles * ca['dist'].min()))
print('Minimum Distance: ' + str(au_miles * ca['dist_min'].min()))
print('Maximum Distance: ' + str(au_miles * ca['dist_max'].min()))
###Output
Distance: 23440.92769543333
Minimum Distance: 644.2481158331191
Maximum Distance: 23874.510393069424
###Markdown
Oh my! It looks like this object will approach Earth relatively close, at about 23,000 miles, in a range of [644, 23874] miles. For comparison, the maximum distance is about 1/10 of the distance from the Earth to the Moon.Let's get a sense of the number of approaching objects to Earth by year over the next decade. First, we extract the year of the approach date using a combination of [`.apply`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.apply.html) and [`to_datetime`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.to_datetime.html) into a new column `approach_year`.
###Code
ca['approach_year'] = ca['cd'].apply(lambda x: pd.to_datetime(x).year)
###Output
_____no_output_____
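###Markdown
As a side note (not part of the original analysis), the same extraction can be done in a vectorized way without a Python-level lambda:
###Code
# Equivalent to the .apply version above
ca['approach_year'] = pd.to_datetime(ca['cd']).dt.year
###Output
_____no_output_____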
###Markdown
Using the [`.groupby`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html) method, we create a new DataFrame with the aggregated count of approaching objects for each year.
###Code
approaches_by_year = ca.groupby('approach_year').count().reset_index()
###Output
_____no_output_____
###Markdown
We can use seaborn's [barplot](https://seaborn.pydata.org/generated/seaborn.barplot.html) function to plot the count of approaching objects for each year.
###Code
plt.figure(figsize=(10, 6))
p = sns.barplot(x='approach_year', y='h', data=approaches_by_year)
plt.axhline(approaches_by_year['h'].mean(), color='r', linestyle='--')
p = p.set_xticklabels(approaches_by_year['approach_year'], rotation=45, ha='right', fontsize=12)
###Output
_____no_output_____
###Markdown
Interestingly, this year (2020) will have the most activity, and then it will somewhat decline over the next few years until the end of the decade. On average, there are a little less than 80 Earth approaching objects each year of the decade.As the last example, let's plot the distribution of the approaching object distances using seaborn's [`.kdeplot`](https://seaborn.pydata.org/generated/seaborn.kdeplot.html) which creates a kernel density plot. We can also add a mean line of the distances similar to how we did above.
###Code
plt.figure(figsize=(14, 6))
plt.axvline(ca['dist'].astype(float).mean(), color='r', linestyle='--')
sns.kdeplot(ca['dist'], shade=True)
###Output
_____no_output_____
###Markdown
As we noted above, the mean approach distance is a little more than 0.03, which we can see in the density plot above. Lastly, we can plot a normal distribution over the distribution of the distances using [numpy.random.normal](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.normal.html) to get a quick comparison of the actual distribution compared to a normal one.
###Code
plt.figure(figsize=(14, 6))
x = np.random.normal(size=len(ca['dist']),
scale=ca['dist'].std(),
loc=ca['dist'].mean())
plt.axvline(ca['dist'].mean(), color='r', linestyle='--')
sns.kdeplot(ca['dist'], shade=True)
sns.kdeplot(x, shade=True)
###Output
_____no_output_____ |
01.Python/Python_06_Probability.ipynb | ###Markdown
Peter Norvig, 12 Feb 2016 A Concrete Introduction to Probability (using Python)This notebook covers the basics of probability theory, with Python 3 implementations. (You should have some background in [probability](http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/pdf.html) and [Python](https://www.python.org/about/gettingstarted/).) In 1814, Pierre-Simon Laplace [wrote](https://en.wikipedia.org/wiki/Classical_definition_of_probability):>*Probability ... is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible ... when nothing leads us to expect that any one of these cases should occur more than any other.*Pierre-Simon Laplace1814Laplace really nailed it, way back then! If you want to untangle a probability problem, all you have to do is be methodical about defining exactly what the cases are, and then careful in counting the number of favorable and total cases. We'll start being methodical by defining some vocabulary:- **[Experiment](https://en.wikipedia.org/wiki/Experiment_(probability_theory%29):** An occurrence with an uncertain outcome that we can observe. *For example, rolling a die.*- **[Outcome](https://en.wikipedia.org/wiki/Outcome_(probability%29):** The result of an experiment; one particular state of the world. What Laplace calls a "case." *For example:* `4`.- **[Sample Space](https://en.wikipedia.org/wiki/Sample_space):** The set of all possible outcomes for the experiment. *For example,* `{1, 2, 3, 4, 5, 6}`.- **[Event](https://en.wikipedia.org/wiki/Event_(probability_theory%29):** A subset of possible outcomes that together have some property we are interested in. *For example, the event "even die roll" is the set of outcomes* `{2, 4, 6}`. - **[Probability](https://en.wikipedia.org/wiki/Probability_theory):** As Laplace said, the probability of an event with respect to a sample space is the number of favorable cases (outcomes from the sample space that are in the event) divided by the total number of cases in the sample space. (This assumes that all outcomes in the sample space are equally likely.) Since it is a ratio, probability will always be a number between 0 (representing an impossible event) and 1 (representing a certain event).*For example, the probability of an even die roll is 3/6 = 1/2.*This notebook will develop all these concepts; I also have a [second part](http://nbviewer.jupyter.org/url/norvig.com/ipython/ProbabilityParadox.ipynb) that covers paradoxes in Probability Theory. Code for `P` `P` is the traditional name for the Probability function:
###Code
from fractions import Fraction
def P(event, space):
"The probability of an event, given a sample space of equiprobable outcomes."
return Fraction(len(event & space),
len(space))
###Output
_____no_output_____
###Markdown
Read this as implementing Laplace's quote directly: *"Probability is thus simply a fraction whose numerator is the number of favorable cases and whose denominator is the number of all the cases possible."* Warm-up Problem: Die Roll What's the probability of rolling an even number with a single six-sided fair die? We can define the sample space `D` and the event `even`, and compute the probability:
###Code
D = {1, 2, 3, 4, 5, 6}
even = { 2, 4, 6}
P(even, D)
###Output
_____no_output_____
###Markdown
It is good to confirm what we already knew.You may ask: Why does the definition of `P` use `len(event & space)` rather than `len(event)`? Because I don't want to count outcomes that were specified in `event` but aren't actually in the sample space. Consider:
###Code
even = {2, 4, 6, 8, 10, 12}
P(even, D)
###Output
_____no_output_____
###Markdown
Here, `len(event)` and `len(space)` are both 6, so if just divided, then `P` would be 1, which is not right.The favorable cases are the *intersection* of the event and the space, which in Python is `(event & space)`.Also note that I use `Fraction` rather than regular division because I want exact answers like 1/3, not 0.3333333333333333. Urn ProblemsAround 1700, Jacob Bernoulli wrote about removing colored balls from an urn in his landmark treatise *[Ars Conjectandi](https://en.wikipedia.org/wiki/Ars_Conjectandi)*, and ever since then, explanations of probability have relied on [urn problems](https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8q=probability%20ball%20urn). (You'd think the urns would be empty by now.) Jacob Bernoulli1700For example, here is a three-part problem [adapted](http://mathforum.org/library/drmath/view/69151.html) from mathforum.org:> An urn contains 23 balls: 8 white, 6 blue, and 9 red. We select six balls at random (each possible selection is equally likely). What is the probability of each of these possible outcomes:> 1. all balls are red2. 3 are blue, 2 are white, and 1 is red3. exactly 4 balls are whiteSo, an outcome is a set of 6 balls, and the sample space is the set of all possible 6 ball combinations. We'll solve each of the 3 parts using our `P` function, and also using basic arithmetic; that is, *counting*. Counting is a bit tricky because:- We have multiple balls of the same color. - An outcome is a *set* of balls, where order doesn't matter, not a *sequence*, where order matters.To account for the first issue, I'll have 8 different white balls labelled `'W1'` through `'W8'`, rather than having eight balls all labelled `'W'`. That makes it clear that selecting `'W1'` is different from selecting `'W2'`.The second issue is handled automatically by the `P` function, but if I want to do calculations by hand, I will sometimes first count the number of *permutations* of balls, then get the number of *combinations* by dividing the number of permutations by *c*!, where *c* is the number of balls in a combination. For example, if I want to choose 2 white balls from the 8 available, there are 8 ways to choose a first white ball and 7 ways to choose a second, and therefore 8 × 7 = 56 permutations of two white balls. But there are only 56 / 2 = 28 combinations, because `(W1, W2)` is the same combination as `(W2, W1)`.We'll start by defining the contents of the urn:
###Code
def cross(A, B):
"The set of ways of concatenating one item from collection A with one from B."
return {a + b
for a in A for b in B}
urn = cross('W', '12345678') | cross('B', '123456') | cross('R', '123456789')
urn
len(urn)
###Output
_____no_output_____
###Markdown
Now we can define the sample space, `U6`, as the set of all 6-ball combinations. We use `itertools.combinations` to generate the combinations, and then join each combination into a string:
###Code
import itertools
def combos(items, n):
"All combinations of n items; each combo as a concatenated str."
return {' '.join(combo)
for combo in itertools.combinations(items, n)}
U6 = combos(urn, 6)
len(U6)
###Output
_____no_output_____
###Markdown
I don't want to print all 100,947 members of the sample space; let's just peek at a random sample of them:
###Code
import random
random.sample(U6, 10)
###Output
_____no_output_____
###Markdown
Is 100,947 really the right number of ways of choosing 6 out of 23 items, or "23 choose 6", as mathematicians [call it](https://en.wikipedia.org/wiki/Combination)? Well, we can choose any of 23 for the first item, any of 22 for the second, and so on down to 18 for the sixth. But we don't care about the ordering of the six items, so we divide the product by 6! (the number of permutations of 6 things) giving us:$$23 ~\mbox{choose}~ 6 = \frac{23 \cdot 22 \cdot 21 \cdot 20 \cdot 19 \cdot 18}{6!} = 100947$$Note that $23 \cdot 22 \cdot 21 \cdot 20 \cdot 19 \cdot 18 = 23! \;/\; 17!$, so, generalizing, we can write:$$n ~\mbox{choose}~ c = \frac{n!}{(n - c)! \cdot c!}$$And we can translate that to code and verify that 23 choose 6 is 100,947:
###Code
from math import factorial
def choose(n, c):
"Number of ways to choose c items from a list of n items."
return factorial(n) // (factorial(n - c) * factorial(c))
choose(23, 6)
###Output
_____no_output_____
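###Markdown
As a quick check of the earlier counting argument, `choose` also reproduces the 28 ways of picking 2 of the 8 white balls (8 × 7 permutations divided by 2!):
###Code
assert choose(8, 2) == (8 * 7) // factorial(2)
choose(8, 2)
###Output
_____no_output_____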
###Markdown
Now we're ready to answer the 3 problems: Urn Problem 1: what's the probability of selecting 6 red balls?
###Code
red6 = {s for s in U6 if s.count('R') == 6}
P(red6, U6)
###Output
_____no_output_____
###Markdown
Let's investigate a bit more. How many ways of getting 6 red balls are there?
###Code
len(red6)
###Output
_____no_output_____
###Markdown
Why are there 84 ways? Because there are 9 red balls in the urn, and we are asking how many ways we can choose 6 of them:
###Code
choose(9, 6)
###Output
_____no_output_____
###Markdown
So the probability of 6 red balls is then just 9 choose 6 divided by the size of the sample space:
###Code
P(red6, U6) == Fraction(choose(9, 6),
len(U6))
###Output
_____no_output_____
###Markdown
Urn Problem 2: what is the probability of 3 blue, 2 white, and 1 red?
###Code
b3w2r1 = {s for s in U6 if
s.count('B') == 3 and s.count('W') == 2 and s.count('R') == 1}
P(b3w2r1, U6)
###Output
_____no_output_____
###Markdown
We can get the same answer by counting how many ways we can choose 3 out of 6 blues, 2 out of 8 whites, and 1 out of 9 reds, and dividing by the number of possible selections:
###Code
P(b3w2r1, U6) == Fraction(choose(6, 3) * choose(8, 2) * choose(9, 1),
len(U6))
###Output
_____no_output_____
###Markdown
Here we don't need to divide by any factorials, because `choose` has already accounted for that. We can get the same answer by figuring: "there are 6 ways to pick the first blue, 5 ways to pick the second blue, and 4 ways to pick the third; then 8 ways to pick the first white and 7 to pick the second; then 9 ways to pick a red. But the order `'B1, B2, B3'` should count as the same as `'B2, B3, B1'` and all the other orderings; so divide by 3! to account for the permutations of blues, by 2! to account for the permutations of whites, and by 100947 to get a probability:
###Code
P(b3w2r1, U6) == Fraction((6 * 5 * 4) * (8 * 7) * 9,
factorial(3) * factorial(2) * len(U6))
###Output
_____no_output_____
###Markdown
Urn Problem 3: What is the probability of exactly 4 white balls?We can interpret this as choosing 4 out of the 8 white balls, and 2 out of the 15 non-white balls. Then we can solve it the same three ways:
###Code
w4 = {s for s in U6 if
s.count('W') == 4}
P(w4, U6)
P(w4, U6) == Fraction(choose(8, 4) * choose(15, 2),
len(U6))
P(w4, U6) == Fraction((8 * 7 * 6 * 5) * (15 * 14),
factorial(4) * factorial(2) * len(U6))
###Output
_____no_output_____
###Markdown
Revised Version of `P`, with more general eventsTo calculate the probability of an even die roll, I originally said even = {2, 4, 6} But that's inelegant—I had to explicitly enumerate all the even numbers from one to six. If I ever wanted to deal with a twelve or twenty-sided die, I would have to go back and change `even`. I would prefer to define `even` once and for all like this:
###Code
def even(n): return n % 2 == 0
###Output
_____no_output_____
###Markdown
Now in order to make `P(even, D)` work, I'll have to modify `P` to accept an event as either a *set* of outcomes (as before), or a *predicate* over outcomes—a function that returns true for an outcome that is in the event:
###Code
def P(event, space):
"""The probability of an event, given a sample space of equiprobable outcomes.
event can be either a set of outcomes, or a predicate (true for outcomes in the event)."""
if is_predicate(event):
event = such_that(event, space)
return Fraction(len(event & space), len(space))
is_predicate = callable
def such_that(predicate, collection):
"The subset of elements in the collection for which the predicate is true."
return {e for e in collection if predicate(e)}
###Output
_____no_output_____
###Markdown
Here we see how `such_that`, the new `even` predicate, and the new `P` work:
###Code
such_that(even, D)
P(even, D)
D12 = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}
such_that(even, D12)
P(even, D12)
###Output
_____no_output_____
###Markdown
Note: `such_that` is just like the built-in function `filter`, except `such_that` returns a set.We can now define more interesting events using predicates; for example we can determine the probability that the sum of a three-dice roll is prime (using a definition of `is_prime` that is efficient enough for small `n`):
###Code
D3 = {(d1, d2, d3) for d1 in D for d2 in D for d3 in D}
def prime_sum(outcome): return is_prime(sum(outcome))
def is_prime(n): return n > 1 and not any(n % i == 0 for i in range(2, n))
P(prime_sum, D3)
###Output
_____no_output_____
###Markdown
Card ProblemsConsider dealing a hand of five playing cards. We can define `deck` as a set of 52 cards, and `Hands` as the sample space of all combinations of 5 cards:
###Code
suits = 'SHDC'
ranks = 'A23456789TJQK'
deck = cross(ranks, suits)
len(deck)
Hands = combos(deck, 5)
assert len(Hands) == choose(52, 5)
random.sample(Hands, 5)
###Output
_____no_output_____
###Markdown
Now we can answer questions like the probability of being dealt a flush (5 cards of the same suit):
###Code
def flush(hand):
return any(hand.count(suit) == 5 for suit in suits)
P(flush, Hands)
###Output
_____no_output_____
###Markdown
Or the probability of four of a kind:
###Code
def four_kind(hand):
return any(hand.count(rank) == 4 for rank in ranks)
P(four_kind, Hands)
###Output
_____no_output_____
###Markdown
Fermat and Pascal: Gambling, Triangles, and the Birth of ProbabilityPierre de Fermat1654Blaise Pascal]1654Consider a gambling game consisting of tossing a coin. Player H wins the game if 10 heads come up, and T wins if 10 tails come up. If the game is interrupted when H has 8 heads and T has 7 tails, how should the pot of money (which happens to be 100 Francs) be split?In 1654, Blaise Pascal and Pierre de Fermat corresponded on this problem, with Fermat [writing](http://mathforum.org/isaac/problems/prob1.html):>Dearest Blaise,>As to the problem of how to divide the 100 Francs, I think I have found a solution that you will find to be fair. Seeing as I needed only two points to win the game, and you needed 3, I think we can establish that after four more tosses of the coin, the game would have been over. For, in those four tosses, if you did not get the necessary 3 points for your victory, this would imply that I had in fact gained the necessary 2 points for my victory. In a similar manner, if I had not achieved the necessary 2 points for my victory, this would imply that you had in fact achieved at least 3 points and had therefore won the game. Thus, I believe the following list of possible endings to the game is exhaustive. I have denoted 'heads' by an 'h', and tails by a 't.' I have starred the outcomes that indicate a win for myself. h h h h * h h h t * h h t h * h h t t * h t h h * h t h t * h t t h * h t t t t h h h * t h h t * t h t h * t h t t t t h h * t t h t t t t h t t t t>I think you will agree that all of these outcomes are equally likely. Thus I believe that we should divide the stakes by the ration 11:5 in my favor, that is, I should receive (11/16)*100 = 68.75 Francs, while you should receive 31.25 Francs.>I hope all is well in Paris,>Your friend and colleague,>PierrePascal agreed with this solution, and [replied](http://mathforum.org/isaac/problems/prob2.html) with a generalization that made use of his previous invention, Pascal's Triangle. There's even [a book](https://smile.amazon.com/Unfinished-Game-Pascal-Fermat-Seventeenth-Century/dp/0465018963?sa-no-redirect=1) about it.We can solve the problem with the tools we have:
###Code
def win_unfinished_game(Hneeds, Tneeds):
"The probability that H will win the unfinished game, given the number of points needed by H and T to win."
def Hwins(outcome): return outcome.count('h') >= Hneeds
return P(Hwins, continuations(Hneeds, Tneeds))
def continuations(Hneeds, Tneeds):
"All continuations of a game where H needs `Hneeds` points to win and T needs `Tneeds`."
rounds = ['ht' for _ in range(Hneeds + Tneeds - 1)]
return set(itertools.product(*rounds))
continuations(2, 3)
win_unfinished_game(2, 3)
###Output
_____no_output_____
###Markdown
Our answer agrees with Pascal and Fermat; we're in good company! Non-Equiprobable Outcomes: Probability DistributionsSo far, we have made the assumption that every outcome in a sample space is equally likely. In real life, we often get outcomes that are not equiprobable. For example, the probability of a child being a girl is not exactly 1/2, and the probability is slightly different for a second child. An [article](http://people.kzoo.edu/barth/math105/moreboys.pdf) gives the following counts for two-child families in Denmark, where `GB` means a family where the first child is a girl and the second a boy: GG: 121801 GB: 126840 BG: 127123 BB: 135138 We will introduce three more definitions:* [Frequency](https://en.wikipedia.org/wiki/Frequency_%28statistics%29): a number describing how often an outcome occurs. Can be a count like 121801, or a ratio like 0.515.* [Distribution](http://mathworld.wolfram.com/StatisticalDistribution.html): A mapping from outcome to frequency for each outcome in a sample space. * [Probability Distribution](https://en.wikipedia.org/wiki/Probability_distribution): A distribution that has been *normalized* so that the sum of the frequencies is 1.We define `ProbDist` to take the same kinds of arguments that `dict` does: either a mapping or an iterable of `(key, val)` pairs, and/or optional keyword arguments.
###Code
class ProbDist(dict):
"A Probability Distribution; an {outcome: probability} mapping."
def __init__(self, mapping=(), **kwargs):
self.update(mapping, **kwargs)
# Make probabilities sum to 1.0; assert no negative probabilities
total = sum(self.values())
for outcome in self:
self[outcome] = self[outcome] / total
assert self[outcome] >= 0
###Output
_____no_output_____
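###Markdown
A quick illustration of the constructor with made-up frequencies: a mapping and keyword arguments are interchangeable, and the frequencies are normalized to sum to 1.
###Code
ProbDist({'H': 3, 'T': 1}) == ProbDist(H=3, T=1) == {'H': 0.75, 'T': 0.25}
###Output
_____no_output_____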
###Markdown
We also need to modify the functions `P` and `such_that` to accept either a sample space or a probability distribution as the second argument.
###Code
def P(event, space):
"""The probability of an event, given a sample space of equiprobable outcomes.
event: a collection of outcomes, or a predicate that is true of outcomes in the event.
space: a set of outcomes or a probability distribution of {outcome: frequency} pairs."""
if is_predicate(event):
event = such_that(event, space)
if isinstance(space, ProbDist):
return sum(space[o] for o in space if o in event)
else:
return Fraction(len(event & space), len(space))
def such_that(predicate, space):
"""The outcomes in the sample pace for which the predicate is true.
If space is a set, return a subset {outcome,...};
if space is a ProbDist, return a ProbDist {outcome: frequency,...};
in both cases only with outcomes where predicate(element) is true."""
if isinstance(space, ProbDist):
return ProbDist({o:space[o] for o in space if predicate(o)})
else:
return {o for o in space if predicate(o)}
###Output
_____no_output_____
###Markdown
Here is the probability distribution for Danish two-child families:
###Code
DK = ProbDist(GG=121801, GB=126840,
BG=127123, BB=135138)
DK
###Output
_____no_output_____
###Markdown
And here are some predicates that will allow us to answer some questions:
###Code
def first_girl(outcome): return outcome[0] == 'G'
def first_boy(outcome): return outcome[0] == 'B'
def second_girl(outcome): return outcome[1] == 'G'
def second_boy(outcome): return outcome[1] == 'B'
def two_girls(outcome): return outcome == 'GG'
P(first_girl, DK)
P(second_girl, DK)
###Output
_____no_output_____
###Markdown
The above says that the probability of a girl is somewhere between 48% and 49%, but that it is slightly different between the first or second child.
###Code
P(second_girl, such_that(first_girl, DK)), P(second_girl, such_that(first_boy, DK))
P(second_boy, such_that(first_girl, DK)), P(second_boy, such_that(first_boy, DK))
###Output
_____no_output_____
###Markdown
The above says that the sex of the second child is more likely to be the same as the first child, by about 1/2 a percentage point. More Urn Problems: M&Ms and BayesHere's another urn problem (or "bag" problem) [from](http://allendowney.blogspot.com/2011/10/my-favorite-bayess-theorem-problems.html) prolific Python/Probability author [Allen Downey ](http://allendowney.blogspot.com/):> The blue M&M was introduced in 1995. Before then, the color mix in a bag of plain M&Ms was (30% Brown, 20% Yellow, 20% Red, 10% Green, 10% Orange, 10% Tan). Afterward it was (24% Blue , 20% Green, 16% Orange, 14% Yellow, 13% Red, 13% Brown). A friend of mine has two bags of M&Ms, and he tells me that one is from 1994 and one from 1996. He won't tell me which is which, but he gives me one M&M from each bag. One is yellow and one is green. What is the probability that the yellow M&M came from the 1994 bag?To solve this problem, we'll first represent probability distributions for each bag: `bag94` and `bag96`:
###Code
bag94 = ProbDist(brown=30, yellow=20, red=20, green=10, orange=10, tan=10)
bag96 = ProbDist(blue=24, green=20, orange=16, yellow=14, red=13, brown=13)
###Output
_____no_output_____
###Markdown
Next, define `MM` as the joint distribution—the sample space for picking one M&M from each bag. The outcome `'yellow green'` means that a yellow M&M was selected from the 1994 bag and a green one from the 1996 bag.
###Code
def joint(A, B, sep=''):
"""The joint distribution of two independent probability distributions.
Result is all entries of the form {a+sep+b: P(a)*P(b)}"""
return ProbDist({a + sep + b: A[a] * B[b]
for a in A
for b in B})
MM = joint(bag94, bag96, ' ')
MM
###Output
_____no_output_____
###Markdown
First we'll look at the "One is yellow and one is green" part:
###Code
def yellow_and_green(outcome): return 'yellow' in outcome and 'green' in outcome
such_that(yellow_and_green, MM)
###Output
_____no_output_____
###Markdown
Now we can answer the question: given that we got a yellow and a green (but don't know which comes from which bag), what is the probability that the yellow came from the 1994 bag?
###Code
def yellow94(outcome): return outcome.startswith('yellow')
P(yellow94, such_that(yellow_and_green, MM))
###Output
_____no_output_____
###Markdown
So there is a 74% chance that the yellow comes from the 1994 bag.Answering this question was straightforward: just like all the other probability problems, we simply create a sample space, and use `P` to pick out the probability of the event in question, given what we know about the outcome.But in a sense it is curious that we were able to solve this problem with the same methodology as the others: this problem comes from a section titled **My favorite Bayes's Theorem Problems**, so one would expect that we'd need to invoke Bayes Theorem to solve it. The computation above shows that that is not necessary. Rev. Thomas Bayes1701-1761Of course, we *could* solve it using Bayes Theorem. Why is Bayes Theorem recommended? Because we are asked about the probability of an event given the evidence, which is not immediately available; however the probability of the evidence given the event is. Before we see the colors of the M&Ms, there are two hypotheses, `A` and `B`, both with equal probability: A: first M&M from 94 bag, second from 96 bag B: first M&M from 96 bag, second from 94 bag P(A) = P(B) = 0.5 Then we get some evidence: E: first M&M yellow, second green We want to know the probability of hypothesis `A`, given the evidence: P(A | E) That's not easy to calculate (except by enumerating the sample space). But Bayes Theorem says: P(A | E) = P(E | A) * P(A) / P(E) The quantities on the right-hand-side are easier to calculate: P(E | A) = 0.20 * 0.20 = 0.04 P(E | B) = 0.10 * 0.14 = 0.014 P(A) = 0.5 P(B) = 0.5 P(E) = P(E | A) * P(A) + P(E | B) * P(B) = 0.04 * 0.5 + 0.014 * 0.5 = 0.027 And we can get a final answer: P(A | E) = P(E | A) * P(A) / P(E) = 0.04 * 0.5 / 0.027 = 0.7407407407 You have a choice: Bayes Theorem allows you to do less calculation at the cost of more algebra; that is a great trade-off if you are working with pencil and paper. Enumerating the state space allows you to do less algebra at the cost of more calculation; often a good trade-off if you have a computer. But regardless of the approach you use, it is important to understand Bayes theorem and how it works.There is one important question that Allen Downey does not address: *would you eat twenty-year-old M&Ms*? Newton's Answer to a Problem by PepysIsaac Newton1693Samuel Pepys1693[This paper](http://fermatslibrary.com/s/isaac-newton-as-a-probabilist) explains how Samuel Pepys wrote to Isaac Newton in 1693 to pose the problem:> Which of the following three propositions has the greatest chance of success? 1. Six fair dice are tossed independently and at least one "6" appears. 2. Twelve fair dice are tossed independently and at least two "6"s appear. 3. Eighteen fair dice are tossed independently and at least three "6"s appear. Newton was able to answer the question correctly (although his reasoning was not quite right); let's see how we can do. Since we're only interested in whether a die comes up as "6" or not, we can define a single die and the joint distribution over *n* dice as follows:
###Code
die = ProbDist({'6':1/6, '-':5/6})
def dice(n, die):
"Joint probability from tossing n dice."
if n == 1:
return die
else:
return joint(die, dice(n - 1, die))
dice(3, die)
###Output
_____no_output_____
###Markdown
Now we are ready to determine which proposition is more likely to have the required number of sixes:
###Code
def at_least(k, result): return lambda s: s.count(result) >= k
P(at_least(1, '6'), dice(6, die))
P(at_least(2, '6'), dice(12, die))
P(at_least(3, '6'), dice(18, die))
###Output
_____no_output_____
###Markdown
We reach the same conclusion Newton did, that the best chance is rolling six dice. SimulationSometimes it is inconvenient to explicitly define a sample space. Perhaps the sample space is infinite, or perhaps it is just very large and complicated, and we feel more confident in writing a program to *simulate* one pass through all the complications, rather than try to *enumerate* the complete sample space. *Random sampling* from the simulationcan give an accurate estimate of the probability. Simulating Monopoly[Mr. Monopoly](https://en.wikipedia.org/wiki/Rich_Uncle_Pennybags)1940—Consider [problem 84](https://projecteuler.net/problem=84) from the excellent [Project Euler](https://projecteuler.net), which asks for the probability that a player in the game Monopoly ends a roll on each of the squares on the board. To answer this we need to take into account die rolls, chance and community chest cards, and going to jail (from the "go to jail" space, from a card, or from rolling doubles three times in a row). We do not need to take into account anything about buying or selling properties or exchanging money or winning or losing the game, because these don't change a player's location. We will assume that a player in jail will always pay to get out of jail immediately. A game of Monopoly can go on forever, so the sample space is infinite. But even if we limit the sample space to say, 1000 rolls, there are $21^{1000}$ such sequences of rolls (and even more possibilities when we consider drawing cards). So it is infeasible to explicitly represent the sample space.But it is fairly straightforward to implement a simulation and run it for, say, 400,000 rolls (so the average square will be landed on 10,000 times). Here is the code for a simulation:
###Code
from collections import Counter, deque
import random
# The board: a list of the names of the 40 squares
# As specified by https://projecteuler.net/problem=84
board = """GO A1 CC1 A2 T1 R1 B1 CH1 B2 B3
JAIL C1 U1 C2 C3 R2 D1 CC2 D2 D3
FP E1 CH2 E2 E3 R3 F1 F2 U2 F3
G2J G1 G2 CC3 G3 R4 CH3 H1 T2 H2""".split()
def monopoly(steps):
"""Simulate given number of steps of Monopoly game,
yielding the number of the current square after each step."""
goto(0) # start at GO
CC_deck = Deck('GO JAIL' + 14 * ' ?')
CH_deck = Deck('GO JAIL C1 E3 H2 R1 R R U -3' + 6 * ' ?')
doubles = 0
jail = board.index('JAIL')
for _ in range(steps):
d1, d2 = random.randint(1, 6), random.randint(1, 6)
goto(here + d1 + d2)
doubles = (doubles + 1) if (d1 == d2) else 0
if doubles == 3 or board[here] == 'G2J':
goto(jail)
elif board[here].startswith('CC'):
do_card(CC_deck)
elif board[here].startswith('CH'):
do_card(CH_deck)
yield here
def goto(square):
"Update the global variable 'here' to be square."
global here
here = square % len(board)
def Deck(names):
"Make a shuffled deck of cards, given a space-delimited string."
cards = names.split()
random.shuffle(cards)
return deque(cards)
def do_card(deck):
"Take the top card from deck and do what it says."
global here
card = deck[0] # The top card
deck.rotate(-1) # Move top card to bottom of deck
if card == 'R' or card == 'U':
while not board[here].startswith(card):
goto(here + 1) # Advance to next railroad or utility
elif card == '-3':
goto(here - 3) # Go back 3 spaces
elif card != '?':
goto(board.index(card))# Go to destination named on card
###Output
_____no_output_____
###Markdown
And the results:
###Code
results = list(monopoly(400000))
###Output
_____no_output_____
###Markdown
I'll show a histogram of the squares, with a dotted red line at the average:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.hist(results, bins=40)
avg = len(results) / 40
plt.plot([0, 39], [avg, avg], 'r--');
###Output
_____no_output_____
###Markdown
Another way to see the results:
###Code
ProbDist(Counter(board[i] for i in results))
###Output
_____no_output_____
###Markdown
There is one square far above average: `JAIL`, at a little over 6%. There are four squares far below average: the three chance squares, `CH1`, `CH2`, and `CH3`, at around 1% (because 10 of the 16 chance cards send the player away from the square), and the "Go to Jail" square, square number 30 on the plot, which has a frequency of 0 because you can't end a turn there. The other squares are around 2% to 3% each, which you would expect, because 100% / 40 = 2.5%. The Central Limit Theorem / Strength in Numbers TheoremSo far, we have talked of an *outcome* as being a single state of the world. But it can be useful to break that state of the world down into components. We call these components **random variables**. For example, when we consider an experiment in which we roll two dice and observe their sum, we could model the situation with two random variables, one for each die. (Our representation of outcomes has been doing that implicitly all along, when we concatenate two parts of a string, but the concept of a random variable makes it official.)The **Central Limit Theorem** states that if you have a collection of random variables and sum them up, then the larger the collection, the closer the sum will be to a *normal distribution* (also called a *Gaussian distribution* or a *bell-shaped curve*). The theorem applies in all but a few pathological cases. As an example, let's take 5 random variables reprsenting the per-game scores of 5 basketball players, and then sum them together to form the team score. Each random variable/player is represented as a function; calling the function returns a single sample from the distribution:
###Code
from random import gauss, triangular, choice, vonmisesvariate, uniform
def SC(): return posint(gauss(15.1, 3) + 3 * triangular(1, 4, 13)) # 30.1
def KT(): return posint(gauss(10.2, 3) + 3 * triangular(1, 3.5, 9)) # 22.1
def DG(): return posint(vonmisesvariate(30, 2) * 3.08) # 14.0
def HB(): return posint(gauss(6.7, 1.5) if choice((True, False)) else gauss(16.7, 2.5)) # 11.7
def OT(): return posint(triangular(5, 17, 25) + uniform(0, 30) + gauss(6, 3)) # 37.0
def posint(x): "Positive integer"; return max(0, int(round(x)))
###Output
_____no_output_____
###Markdown
And here is a function to sample a random variable *k* times, show a histogram of the results, and return the mean:
###Code
from statistics import mean
def repeated_hist(rv, bins=10, k=100000):
"Repeat rv() k times and make a histogram of the results."
samples = [rv() for _ in range(k)]
plt.hist(samples, bins=bins)
return mean(samples)
###Output
_____no_output_____
###Markdown
The two top-scoring players have scoring distributions that are slightly skewed from normal:
###Code
repeated_hist(SC, bins=range(60))
repeated_hist(KT, bins=range(60))
###Output
_____no_output_____
###Markdown
The next two players have bi-modal distributions; some games they score a lot, some games not:
###Code
repeated_hist(DG, bins=range(60))
repeated_hist(HB, bins=range(60))
###Output
_____no_output_____
###Markdown
The fifth "player" (actually the sum of all the other players on the team) looks like this:
###Code
repeated_hist(OT, bins=range(60))
###Output
_____no_output_____
###Markdown
Now we define the team score to be the sum of the five players, and look at the distribution:
###Code
def GSW(): return SC() + KT() + DG() + HB() + OT()
repeated_hist(GSW, bins=range(70, 160, 2))
###Output
_____no_output_____
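###Markdown
The same effect shows up with something as simple as dice (a quick sketch, separate from the basketball example): the sum of ten fair dice is already close to bell-shaped.
###Code
repeated_hist(lambda: sum(random.randint(1, 6) for _ in range(10)), bins=range(10, 61))
###Output
_____no_output_____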
###Markdown
Sure enough, this looks very much like a normal distribution. The Central Limit Theorem appears to hold in this case. But I have to say "Central Limit" is not a very evocative name, so I propose we re-name this as the **Strength in Numbers Theorem**, to indicate the fact that if you have a lot of numbers, you tend to get the expected result. ConclusionWe've had an interesting tour and met some giants of the field: Laplace, Bernoulli, Fermat, Pascal, Bayes, Newton, ... even Mr. Monopoly and The Count.The Count1972—The conclusion is: be explicit about what the problem says, and then methodical about defining the sample space, and finally be careful in counting the number of outcomes in the numerator and denominator. Easy as 1-2-3. Appendix: Continuous Sample SpacesEverything up to here has been about discrete, finite sample spaces, where we can *enumerate* all the possible outcomes. But I was asked about *continuous* sample spaces, such as the space of real numbers. The principles are the same: probability is still the ratio of the favorable cases to all the cases, but now instead of *counting* cases, we have to (in general) compute integrals to compare the sizes of cases. Here we will cover a simple example, which we first solve approximately by simulation, and then exactly by calculation. The Hot New Game Show Problem: SimulationOliver Roeder posed [this problem](http://fivethirtyeight.com/features/can-you-win-this-hot-new-game-show/) in the 538 *Riddler* blog:>Two players go on a hot new game show called *Higher Number Wins.* The two go into separate booths, and each presses a button, and a random number between zero and one appears on a screen. (At this point, neither knows the otherโs number, but they do know the numbers are chosen from a standard uniform distribution.) They can choose to keep that first number, or to press the button again to discard the first number and get a second random number, which they must keep. Then, they come out of their booths and see the final number for each player on the wall. The lavish grand prize โ a case full of gold bullion โ is awarded to the player who kept the higher number. Which number is the optimal cutoff for players to discard their first number and choose another? Put another way, within which range should they choose to keep the first number, and within which range should they reject it and try their luck with a second number?We'll use this notation:- **A**, **B**: the two players.- *A*, *B*: the cutoff values they choose: the lower bound of the range of first numbers they will accept.- *a*, *b*: the actual random numbers that appear on the screen.For example, if player **A** chooses a cutoff of *A* = 0.6, that means that **A** would accept any first number greater than 0.6, and reject any number below that cutoff. The question is: What cutoff, *A*, should player **A** choose to maximize the chance of winning, that is, maximize P(*a* > *b*)?First, simulate the number that a player with a given cutoff gets (note that `random.random()` returns a float sampled uniformly from the interval [0..1]):
###Code
def number(cutoff):
"Play the game with given cutoff, returning the first or second random number."
first = random.random()
return first if first > cutoff else random.random()
number(.5)
###Output
_____no_output_____
###Markdown
Now compare the numbers returned with a cutoff of *A* versus a cutoff of *B*, and repeat for a large number of trials; this gives us an estimate of the probability that cutoff *A* is better than cutoff *B*:
###Code
def Pwin(A, B, trials=30000):
"The probability that cutoff A wins against cutoff B."
Awins = sum(number(A) > number(B)
for _ in range(trials))
return Awins / trials
Pwin(.5, .6)
###Output
_____no_output_____
###Markdown
Now define a function, `top`, that considers a collection of possible cutoffs, estimate the probability for each cutoff playing against each other cutoff, and returns a list with the `N` top cutoffs (the ones that defeated the most number of opponent cutoffs), and the number of opponents they defeat:
###Code
def top(N, cutoffs):
"Return the N best cutoffs and the number of opponent cutoffs they beat."
winners = Counter(A if Pwin(A, B) > 0.5 else B
for (A, B) in itertools.combinations(cutoffs, 2))
return winners.most_common(N)
from numpy import arange
%time top(5, arange(0.50, 0.99, 0.01))
###Output
_____no_output_____
###Markdown
We get a good idea of the top cutoffs, but they are close to each other, so we can't quite be sure which is best, only that the best is somewhere around 0.60. We could get a better estimate by increasing the number of trials, but that would consume more time. The Hot New Game Show Problem: Exact CalculationMore promising is the possibility of making `Pwin(A, B)` an exact calculation. But before we get to `Pwin(A, B)`, let's solve a simpler problem: assume that both players **A** and **B** have chosen a cutoff, and have each received a number above the cutoff. What is the probability that **A** gets the higher number? We'll call this `Phigher(A, B)`. We can think of this as a two-dimensional sample space of points in the (*a*, *b*) plane, where *a* ranges from the cutoff *A* to 1 and *b* ranges from the cutoff B to 1. Here is a diagram of that two-dimensional sample space, with the cutoffs *A*=0.5 and *B*=0.6:The total area of the sample space is 0.5 × 0.4 = 0.20, and in general it is (1 - *A*) · (1 - *B*). What about the favorable cases, where **A** beats **B**? That corresponds to the shaded triangle below:The area of a triangle is 1/2 the base times the height, or in this case, 0.4² / 2 = 0.08, and in general, (1 - *B*)² / 2. So in general we have: Phigher(A, B) = favorable / total favorable = ((1 - B) ** 2) / 2 total = (1 - A) * (1 - B) Phigher(A, B) = (((1 - B) ** 2) / 2) / ((1 - A) * (1 - B)) Phigher(A, B) = (1 - B) / (2 * (1 - A)) And in this specific case we have: A = 0.5; B = 0.6 favorable = 0.4 ** 2 / 2 = 0.08 total = 0.5 * 0.4 = 0.20 Phigher(0.5, 0.6) = 0.08 / 0.20 = 0.4But note that this only works when the cutoff *A* ≤ *B*; when *A* > *B*, we need to reverse things. That gives us the code:
###Code
def Phigher(A, B):
"Probability that a sample from [A..1] is higher than one from [B..1]."
if A <= B:
return (1 - B) / (2 * (1 - A))
else:
return 1 - Phigher(B, A)
Phigher(0.5, 0.6)
###Output
_____no_output_____
###Markdown
We're now ready to tackle the full game. There are four cases to consider, depending on whether **A** and **B** get a first number that is above or below their cutoff choices:| first *a* | first *b* | P(*a*, *b*) | P(A wins | *a*, *b*) | Comment ||:-----:|:-----:| ----------- | ------------- | ------------ || *a* > *A* | *b* > *B* | (1 - *A*) · (1 - *B*) | Phigher(*A*, *B*) | Both above cutoff; both keep first numbers || *a* < *A* | *b* < *B* | *A* · *B* | Phigher(0, 0) | Both below cutoff, both get new numbers from [0..1] || *a* > *A* | *b* < *B* | (1 - *A*) · *B* | Phigher(*A*, 0) | **A** keeps number; **B** gets new number from [0..1] || *a* < *A* | *b* > *B* | *A* · (1 - *B*) | Phigher(0, *B*) | **A** gets new number from [0..1]; **B** keeps number |For example, the first row of this table says that the event of both first numbers being above their respective cutoffs has probability (1 - *A*) · (1 - *B*), and if this does occur, then the probability of **A** winning is Phigher(*A*, *B*).We're ready to replace the old simulation-based `Pwin` with a new calculation-based version:
###Code
def Pwin(A, B):
"With what probability does cutoff A win against cutoff B?"
return ((1-A) * (1-B) * Phigher(A, B) # both above cutoff
+ A * B * Phigher(0, 0) # both below cutoff
+ (1-A) * B * Phigher(A, 0) # A above, B below
+ A * (1-B) * Phigher(0, B)) # A below, B above
###Output
_____no_output_____
###Markdown
That was a lot of algebra. Let's define a few tests to check for obvious errors:
###Code
def test():
assert Phigher(0.5, 0.5) == Phigher(0.7, 0.7) == Phigher(0, 0) == 0.5
assert Pwin(0.5, 0.5) == Pwin(0.7, 0.7) == 0.5
assert Phigher(.6, .5) == 0.6
assert Phigher(.5, .6) == 0.4
return 'ok'
test()
###Output
_____no_output_____
###Markdown
Let's repeat the calculation with our new, exact `Pwin`:
###Code
top(5, arange(0.50, 0.99, 0.01))
###Output
_____no_output_____
###Markdown
It is good to see that the simulation and the exact calculation are in rough agreement; that gives me more confidence in both of them. We see here that 0.62 defeats all the other cutoffs, and 0.61 defeats all cutoffs except 0.62. The great thing about the exact calculation code is that it runs fast, regardless of how much accuracy we want. We can zero in on the range around 0.6:
###Code
top(10, arange(0.500, 0.700, 0.001))
###Output
_____no_output_____
###Markdown
This says 0.618 is best, better than 0.620. We can get even more accuracy:
###Code
top(5, arange(0.61700, 0.61900, 0.00001))
###Output
_____no_output_____
###Markdown
So 0.61803 is best. Does that number [look familiar](https://en.wikipedia.org/wiki/Golden_ratio)? Can you prove that it is what I think it is?To understand the strategic possibilities, it is helpful to draw a 3D plot of `Pwin(A, B)` for values of *A* and *B* between 0 and 1:
###Code
import numpy as np
from mpl_toolkits.mplot3d.axes3d import Axes3D
def map2(fn, A, B):
"Map fn to corresponding elements of 2D arrays A and B."
return [list(map(fn, Arow, Brow))
for (Arow, Brow) in zip(A, B)]
cutoffs = arange(0.00, 1.00, 0.02)
A, B = np.meshgrid(cutoffs, cutoffs)
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(1, 1, 1, projection='3d')
ax.set_xlabel('A')
ax.set_ylabel('B')
ax.set_zlabel('Pwin(A, B)')
ax.plot_surface(A, B, map2(Pwin, A, B));
###Output
_____no_output_____
###Markdown
What does this [Pringle of Probability](http://fivethirtyeight.com/features/should-you-shoot-free-throws-underhand/) show us? The highest win percentage for **A**, the peak of the surface, occurs when *A* is around 0.5 and *B* is 0 or 1. We can confirm that, finding the maximum `Pwin(A, B)` for many different cutoff values of `A` and `B`:
###Code
cutoffs = (set(arange(0.00, 1.00, 0.01)) |
set(arange(0.500, 0.700, 0.001)) |
set(arange(0.61700, 0.61900, 0.00001)))
max([Pwin(A, B), A, B]
for A in cutoffs for B in cutoffs)
###Output
_____no_output_____
###Markdown
So **A** could win 62.5% of the time if only **B** would choose a cutoff of 0. But, unfortunately for **A**, a rational player **B** is not going to do that. We can ask what happens if the game is changed so that player **A** has to declare a cutoff first, and then player **B** gets to respond with a cutoff, with full knowledge of **A**'s choice. In other words, what cutoff should **A** choose to maximize `Pwin(A, B)`, given that **B** is going to take that knowledge and pick a cutoff that minimizes `Pwin(A, B)`?
###Code
max(min([Pwin(A, B), A, B] for B in cutoffs)
for A in cutoffs)
###Output
_____no_output_____
###Markdown
And what if we run it the other way around, where **B** chooses a cutoff first, and then **A** responds?
###Code
min(max([Pwin(A, B), A, B] for A in cutoffs)
for B in cutoffs)
###Output
_____no_output_____ |
bac_a_sable.ipynb | ###Markdown
Bac à Sable (sandbox)
Experimentation with various features
###Code
from numpy.lib.stride_tricks import as_strided
import numpy as np
import time
from scipy.signal import convolve, convolve2d
###Output
_____no_output_____
###Markdown
Detection of capture configurations:
###Code
k_diags = np.array([0.25 * np.array([[-1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[0, 0, 0, -1]]),
0.25 * np.array([[0, 0, 0, -1],
[0, 0, 1, 0],
[0, 1, 0, 0],
[-1, 0, 0, 0]])])
k_lines = [0.25 * np.array([[-1, 1, 1, -1]]), 0.25 * np.array([[-1],[1],[1],[-1]])]
#k_free_threes = [np.ones((1,6)), np.ones((6,1)), np.identity(6), np.rot90(np.identity(6))]
k_free_threes = [np.array([1, 2, 2, 2, 2, 1]),
np.array([[1], [2], [2], [2], [2], [1]]),
np.array([[1, 0, 0, 0, 0, 0],
[0, 2, 0, 0, 0, 0],
[0, 0, 2, 0, 0, 0],
[0, 0, 0, 2, 0, 0],
[0, 0, 0, 0, 2, 0],
[0, 0, 0, 0, 0, 1]]),
np.array([[0, 0, 0, 0, 0, 1],
[0, 0, 0, 0, 2, 0],
[0, 0, 0, 2, 0, 0],
[0, 0, 2, 0, 0, 0],
[0, 2, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0]])]
arr = np.array([1, 2])            # a simple (y, x) coordinate pair
y, x = arr
print(f"value of x = {x}, value of y = {y}")
arr = np.random.randint(0, 2, size=(10, 10))   # a 10x10 toy board
extend_arr = np.zeros((16, 16))
extend_arr[3:-3, 3:-3] = arr      # embed the board with a 3-cell border
extend_arr
c1 = [0,0]
c2 = [9,0]
c3 = [0,9]
c4 = [9,9]
c5 = [5,3]
c6 = [6,8]
view1 = extend_arr[c1[0]:c1[0]+3+1, c1[1]+3]
view2 = extend_arr[c1[0]+3:c1[0]+2*3+1, c1[1]+3]
view3 = extend_arr[c1[0]+3, c1[1]:c1[1]+3+1]
view4 = extend_arr[c1[0]+3, c1[1]+3:c1[1]+2*3+1]
print(view1) # colonne
print(view2) # colonne
print(view3) # ligne
print(view4) # ligne
conv_c = np.multiply(view1, k_lines[0])
conv_cc = np.multiply(view2, k_lines[0])
conv_l = np.multiply(view3, k_lines[0])
conv_ll = np.multiply(view4, k_lines[0])
print(conv_c)
print(conv_cc)
print(conv_l)
print(conv_ll)
res_c = np.sum(conv_c)
res_cc = np.sum(conv_cc)
res_l = np.sum(conv_l)
res_ll = np.sum(conv_ll)
print(res_c)
print(res_cc)
print(res_l)
print(res_ll)
view5 = extend_arr[c1[0]:c1[0] + 3 + 1, c1[1]:c1[1]+3+1]
view6 = extend_arr[c1[0]:c1[0] + 3 + 1, c1[1]+3:c1[1] + 2 * 3 + 1]
view7 = extend_arr[c1[0] + 3:c1[0] + 2 * 3 + 1, c1[1]:c1[1] + 3 + 1]
view8 = extend_arr[c1[0] + 3:c1[0] + 2 * 3 + 1, c1[1] + 3:c1[1] + 2 * 3 + 1]
print(view5)
print(view6)
print(view7)
print(view8)
# `extend_grid`, `yx` and `k_capture` are undefined at this point in the original
# cell; they are set here (hypothetically) from the objects built above so the probes run.
k_capture = np.array([[-1, 1, 1, -1]])
extend_grid, yx = extend_arr, np.array([2, 3])
np.sum(np.multiply(extend_grid[yx[0] : yx[0] + 3 + 1, yx[1] : yx[1] + 3 + 1], k_capture))
np.sum(np.multiply(extend_grid[yx[0] : yx[0] + 3 + 1, yx[1] + 3 : yx[1] + 2 * 3 + 1], k_capture))
np.sum(np.multiply(extend_grid[yx[0] + 3:yx[0] + 2 * 3 + 1, yx[1] : yx[1] + 3 + 1], k_capture))
np.sum(np.multiply(extend_grid[yx[0] + 3:yx[0] + 2 * 3 + 1, yx[1] + 3 : yx[1] + 2 * 3 + 1], k_capture))
sub_shape = k_diags[0].shape
print(sub_shape)
sub_arr = arr
view_shape = tuple(np.subtract(sub_arr.shape, sub_shape) + 1) + sub_shape
np.subtract(sub_arr.shape, sub_shape) + 1
view_shape
sub_arr.strides * 2
sub_arr = np.arange(1,122).reshape(-1,11)
sub_arr
sub_arr[3:8, 3:8]
def subviews_nxp(board:np.array, np:tuple, axis:int=0, b_diag:bool=False):
if axis == 0 and not b_diag:
d = board.shape[0] - np[0] + 1
elif axis == 1 and not b_diag:
d = board.shape[1] - np[1] + 1
elif b_diag:
d = board.shape[0] - max(np) + 1
sub_views_shape = (d, np[0], np[1])
sub_views_strides = (board.strides[1] + b_diag * board.strides[0], board.strides[0], board.strides[1])
sub_views = as_strided(board, sub_views_shape, sub_views_strides)
return sub_views
res = subviews_nxp(sub_arr[3:8, 3:8], (1,4), axis = 0, b_diag=True)
res
k_capture_l = np.array([[[1, 1, 1, 1]]])
convolve(res, k_capture_l, "valid")
arr_view = as_strided(sub_arr, (9, 4, 4), [80,80,8])
sub_arr.strides * 2
sub_arr.strides[0]
arr_view
def strided4D(arr,arr2,s):
strided = np.lib.stride_tricks.as_strided
s0,s1 = arr.strides
m1,n1 = arr.shape
m2,n2 = arr2.shape
out_shp = (1+(m1-m2)//s, m2, 1+(n1-n2)//s, n2)
return strided(arr, shape=out_shp, strides=(s*s0,s*s1,s0,s1))
test_arr = np.arange(1, 26).reshape(5, 5)   # 5x5 test board
test_arr
k = np.identity(3)
strided4D(test_arr, k, 2)
strided4D(test_arr, k, 1)
def _subboard_4_Conv2D(grid, k_shape:tuple, stride:tuple) -> np.array:
""" Generates the sub view of the grid to be multiply with the kernel.
First the shape of the sub_grid array is calculated, it depends on
the grid shape and the kernel shape.
The sub_grid array shape will be (n_x, n_y, k_x, k_y) with:
* n_x: number of application of the kernel along row (with stride of 1)
* n_y: number of application of the kernel along column (with stride of 1)
* k_x, k_y: the shape of the kernel
In this way sub_grid is a numpy array of n_x/n_y rows/columns of (k_x x k_y)
sub view of the grid.
Args:
-----
k_shape ([tuple[int]]): shape of the kernel
    stride ([tuple(int)]): the grid strides repeated twice (grid.strides * 2); the first
        two view axes step by one grid row/column, the last two by one element.
"""
view_shape = tuple(np.subtract(grid.shape, k_shape) + 1) + k_shape
sub_grid = as_strided(grid, view_shape, stride * 2)
return sub_grid
def _my_conv2D(grid, kernel:np.array) -> np.array:
""" Retrieves the sub_grid from the function _subboard_4_Conv2D and performs
the convolution (array multiplication + einstein sum along the 3rd and 4th
dimensions).
Args:
-----
* kernel ([np.array]): the kernel to use for convolution.
"""
sub_grid = _subboard_4_Conv2D(grid, k_shape=kernel.shape, stride=grid.strides)
res_conv = np.multiply(sub_grid, kernel)
convolved = np.einsum('ijkl->ij', res_conv)
return convolved.astype('int8')
def check_board(grid):
    """ Detects captured pairs on the board and removes them (sets the two
    captured stones to 0), by convolving the grid with the capture kernels
    along the diagonals, rows and columns, for both colors.
    """
    ## Checking if white pair captured
    # Checking the diagonal:
    conv_diag1 = _my_conv2D(grid, k_diags[0])
    conv_diag2 = _my_conv2D(grid, k_diags[1])
    # Checking vertical and horizontal
    conv_lin1 = _my_conv2D(grid, k_lines[0])
    conv_lin2 = _my_conv2D(grid, k_lines[1])
coord_cd1 = np.argwhere(conv_diag1 == 1)
coord_cd2 = np.argwhere(conv_diag2 == 1)
coord_cl1 = np.argwhere(conv_lin1 == 1)
coord_cl2 = np.argwhere(conv_lin2 == 1)
if coord_cd1.shape[0] != 0:
for coord in coord_cd1:
print("[check_board] - conv_diag1")
grid[coord[0] + 1][coord[1] + 1] = 0
grid[coord[0] + 2][coord[1] + 2] = 0
if coord_cd2.shape[0] != 0:
for coord in coord_cd2:
print("[check_board] - conv_diag2")
grid[coord[0] + 1][coord[1] + 2] = 0
grid[coord[0] + 2][coord[1] + 1] = 0
if coord_cl1.shape[0] != 0:
for coord in coord_cl1:
print("[check_board] - conv_lin1")
grid[coord[0]][coord[1] + 1] = 0
grid[coord[0]][coord[1] + 2] = 0
if coord_cl2.shape[0] != 0:
for coord in coord_cl2:
print("[check_board] - conv_lin2")
grid[coord[0] + 1][coord[1]] = 0
grid[coord[0] + 2][coord[1]] = 0
## Checking if black pair captured
# Checking the diagonal:
    conv_diag1 = _my_conv2D(grid, -1 * k_diags[0])
    conv_diag2 = _my_conv2D(grid, -1 * k_diags[1])
    # Checking vertical and horizontal
    conv_lin1 = _my_conv2D(grid, -1 * k_lines[0])
    conv_lin2 = _my_conv2D(grid, -1 * k_lines[1])
print("<<<== CONV_LIN1 ==>>>")
print(conv_lin1)
print("<<<== CONV_LIN2 ==>>>")
print(conv_lin2)
print("<<<== ========= ==>>>")
coord_cd1 = np.argwhere(conv_diag1 == 1)
coord_cd2 = np.argwhere(conv_diag2 == 1)
coord_cl1 = np.argwhere(conv_lin1 == 1)
coord_cl2 = np.argwhere(conv_lin2 == 1)
if coord_cd1.shape[0] != 0:
for coord in coord_cd1:
print("[check_board] - conv_diag1 -1")
grid[coord[0] + 1][coord[1] + 1] = 0
grid[coord[0] + 2][coord[1] + 2] = 0
if coord_cd2.shape[0] != 0:
for coord in coord_cd2:
print("[check_board] - conv_diag2 -1")
grid[coord[0] + 1][coord[1] + 2] = 0
grid[coord[0] + 2][coord[1] + 1] = 0
if coord_cl1.shape[0] != 0:
for coord in coord_cl1:
print("[check_board] - conv_lin1 -1")
grid[coord[0]][coord[1] + 1] = 0
grid[coord[0]][coord[1] + 2] = 0
if coord_cl2.shape[0] != 0:
for coord in coord_cl2:
print("[check_board] - conv_lin2 -1")
grid[coord[0] + 1][coord[1]] = 0
grid[coord[0] + 2][coord[1]] = 0
def issimplefreethree_position(yx, grid):
    """ Checks whether placing a stone at position yx on grid creates at least
    one free three, by convolving the free-three kernels along the row, the
    column and the two diagonals passing through yx.
    Args:
        yx (sequence of two ints): (row, column) of the candidate position.
        grid (np.array): the board.
    Returns:
        (True, res, count) if at least one free three is detected,
        (False, res) otherwise, where res holds the convolution scores.
    """
tmp = np.zeros((18,18))
tmp[yx[0] + 4, yx[1] + 4] = 1
tmp[4:-4, 4:-4] += grid + 1
r_start, r_end = yx[0], yx[0] + 8
c_start, c_end = yx[1], yx[1] + 8
print(tmp)
    # Convolution along the row
view_l = tmp[yx[0] + 4, yx[1]:yx[1] + 9]
#print(">=== view_l ===<\n", view_l)
res_l = [np.sum(np.multiply(view_l[i:i+6], k_free_threes[0])) for i in range(4)]
#print(">=== res_l ===<\n", res_l)
    # Convolution along the column
view_c = tmp[yx[0]:yx[0] + 9, yx[1] + 4]
#print(">=== view_c ===<\n", view_c)
res_c = [np.sum(np.multiply(view_c[i:i+6], k_free_threes[1].flatten())) for i in range(4)]
#print(">=== res_c ===<\n", res_c)
    # Convolution along the descending (left-to-right) diagonal
view_d1 = [tmp[yx[0]+i:yx[0]+6+i, yx[1]+i:yx[1]+6+i] for i in range(4)]
#print(">=== view_d1 ===<\n", np.array(view_d1))
res_d1 = [np.sum(np.multiply(view_d1[i], k_free_threes[2])) for i in range(4)]
#print(">=== res_d1 ===<\n", res_d1)
    # Convolution along the ascending (left-to-right) diagonal
view_d2 = [tmp[yx[0] +3 - i:yx[0] + 9 -i, yx[1]+i:yx[1]+6+i] for i in range(4)]
#print(">=== view_d2 ===<\n", view_d2)
res_d2 = [np.sum(np.multiply(view_d2[i], k_free_threes[3])) for i in range(4)]
#print(">=== res_d2 ===<\n", res_d2)
res = [*res_l, *res_c, *res_d1, *res_d2]
if any([np.any(arr >= 16) for arr in res]):
count = np.count_nonzero(np.array(res) >=16)
return True, res, count
return False, res
grid1 = np.zeros((10,10))
grid2 = np.zeros((10,10))
grid3 = np.zeros((10,10))
grid4 = np.zeros((10,10))
grid5 = np.zeros((10,10))
grid6 = np.zeros((10,10))
grid7 = np.zeros((10,10))
grid1[0:2,2] = 1
grid2[3:5,2] = 1
grid3[5,0:2] = 1
grid4[6,3:5] = 1
#grid5[[3,4],[3,4]] = 1
grid5[[6, 6], [7, 8]] = 1
grid6[[5, 6], [5, 6]] = 1
grid7[[2, 5],[2, 5]] = 1
grid7
issimplefreethree_position([3,3], grid7)
###Output
[[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 1. 2. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 1. 1. 2. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 1. 1. 1. 1. 2. 1. 1. 1. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]]
###Markdown
Detection of double free threes
###Code
# kernels to check whether there is a double free three
S = 1 # value by which a stone position is multiplied
V = 1 # value by which an empty position is multiplied
s1_a = np.array([[V, 0, 0, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0, 0],
[0, 0, S, 0, 0, 0, 0],
[0, 0, V, S, S, S, V],
[0, 0, 0, 0, V, 0, 0]], dtype='int8')
s1_b = np.array([[0, V, 0, 0, 0],
[0, S, 0, 0, 0],
[0, S, 0, 0, 0],
[V, S, S, S, V],
[0, V, 0, 0, 0]], dtype='int8')
s1_c = np.array([[0, 0, 0, 0, V],
[0, 0, 0, S, 0],
[0, 0, S, 0, 0],
[V, S, S, S, V],
[V, 0, 0, 0, 0]], dtype='int8')
s1_d = np.array([[V, 0, 0, 0, 0, 0, V],
[0, S, 0, 0, 0, S, 0],
[0, 0, S, 0, S, 0, 0],
[0, 0, 0, S, 0, 0, 0],
[0, 0, V, 0, V, 0, 0]], dtype='int8')
s2_a = np.array([[V, 0, 0, 0, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0, 0, 0],
[0, 0, S, 0, 0, 0, 0, 0],
[0, 0, 0, V, 0, 0, 0, 0],
[0, 0, 0, V, S, S, S, V],
[0, 0, 0, 0, 0, V, 0, 0]], dtype='int8')
s2_b = np.array([[0, V, 0, 0, 0],
[0, S, 0, 0, 0],
[0, S, 0, 0, 0],
[0, V, 0, 0, 0],
[V, S, S, S, V],
[0, V, 0, 0, 0]], dtype='int8')
s2_c = np.array([[0, 0, 0, 0, 0, V],
[0, 0, 0, 0, S, 0],
[0, 0, 0, S, 0, 0],
[0, 0, V, 0, 0, 0],
[V, S, S, S, V, 0],
[V, 0, 0, 0, 0, 0]], dtype='int8')
s2_d = np.array([[V, 0, 0, 0, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0, 0, V],
[0, 0, S, 0, 0, 0, S, 0],
[0, 0, 0, V, 0, S, 0, 0],
[0, 0, 0, 0, S, 0, 0, 0],
[0, 0, 0, V, 0, V, 0, 0]], dtype='int8')
s3_a = np.array([[V, 0, 0, 0, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0, 0, 0],
[0, 0, V, 0, 0, 0, 0, 0],
[0, 0, 0, S, 0, 0, 0, 0],
[0, 0, 0, V, S, S, S, V],
[0, 0, 0, 0, 0, V, 0, 0]], dtype='int8')
s3_b = np.array([[0, V, 0, 0, 0],
[0, S, 0, 0, 0],
[0, V, 0, 0, 0],
[0, S, 0, 0, 0],
[V, S, S, S, V],
[0, V, 0, 0, 0]], dtype='int8')
s3_c = np.array([[0, 0, 0, 0, 0, V],
[0, 0, 0, 0, S, 0],
[0, 0, 0, V, 0, 0],
[0, 0, S, 0, 0, 0],
[V, S, S, S, V, 0],
[V, 0, 0, 0, 0, 0]], dtype='int8')
s3_d = np.array([[V, 0, 0, 0, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0, 0, V],
[0, 0, V, 0, 0, 0, S, 0],
[0, 0, 0, S, 0, S, 0, 0],
[0, 0, 0, 0, S, 0, 0, 0],
[0, 0, 0, V, 0, V, 0, 0]], dtype='int8')
s4_a = np.array([[V, 0, 0, 0, 0, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0, 0, 0, 0],
[0, 0, S, 0, 0, 0, 0, 0, 0],
[0, 0, 0, V, 0, 0, 0, 0, 0],
[0, 0, 0, V, S, V, S, S, V],
[0, 0, 0, 0, 0, V, 0, 0, 0]], dtype='int8')
s4_b = np.array([[0, V, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0],
[0, V, 0, 0, 0, 0],
[V, S, V, S, S, V],
[0, V, 0, 0, 0, 0]], dtype='int8')
s4_c = np.array([[0, 0, 0, 0, 0, V],
[0, 0, 0, 0, S, 0],
[0, 0, 0, S, 0, 0],
[0, 0, V, 0, 0, 0],
[V, S, V, S, S, V],
[V, 0, 0, 0, 0, 0]], dtype='int8')
s4_d = np.array([[V, 0, 0, 0, 0, 0, 0, 0, V],
[0, S, 0, 0, 0, 0, 0, S, 0],
[0, 0, S, 0, 0, 0, S, 0, 0],
[0, 0, 0, V, 0, V, 0, 0, 0],
[0, 0, 0, 0, S, 0, 0, 0, 0],
[0, 0, 0, V, 0, V, 0, 0, 0]], dtype='int8')
s5_a = np.array([[V, 0, 0, 0, 0, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0, 0, 0, 0],
[0, 0, S, 0, 0, 0, 0, 0, 0],
[0, 0, 0, V, 0, 0, 0, 0, 0],
[0, 0, 0, V, S, S, V, S, V],
[0, 0, 0, 0, 0, V, 0, 0, 0]], dtype='int8')
s5_b = np.array([[0, V, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0],
[0, V, 0, 0, 0, 0],
[V, S, S, V, S, V],
[0, V, 0, 0, 0, 0]], dtype='int8')
s5_c = np.array([[0, 0, 0, 0, 0, V],
[0, 0, 0, 0, S, 0],
[0, 0, 0, S, 0, 0],
[0, 0, V, 0, 0, 0],
[V, S, S, V, S, V],
[V, 0, 0, 0, 0, 0]], dtype='int8')
s5_d = np.array([[V, 0, 0, 0, 0, 0, 0, 0, V],
[0, S, 0, 0, 0, 0, 0, S, 0],
[0, 0, S, 0, 0, 0, V, 0, 0],
[0, 0, 0, V, 0, S, 0, 0, 0],
[0, 0, 0, 0, S, 0, 0, 0, 0],
[0, 0, 0, V, 0, V, 0, 0, 0]], dtype='int8')
s6_a = np.array([[V, 0, 0, 0, 0, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0, 0, 0, 0],
[0, 0, V, 0, 0, 0, 0, 0, 0],
[0, 0, 0, S, 0, 0, 0, 0, 0],
[0, 0, 0, V, S, S, V, S, V],
[0, 0, 0, 0, 0, V, 0, 0, 0]], dtype='int8')
s6_b = np.array([[0, V, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0],
[0, V, 0, 0, 0, 0],
[0, S, 0, 0, 0, 0],
[V, S, S, V, S, V],
[0, V, 0, 0, 0, 0]], dtype='int8')
s6_c = np.array([[0, 0, 0, 0, 0, V],
[0, 0, 0, 0, S, 0],
[0, 0, 0, V, 0, 0],
[0, 0, S, 0, 0, 0],
[V, S, S, V, S, V],
[V, 0, 0, 0, 0, 0]], dtype='int8')
s6_d = np.array([[V, 0, 0, 0, 0, 0, 0, 0, V],
[0, S, 0, 0, 0, 0, 0, S, 0],
[0, 0, V, 0, 0, 0, V, 0, 0],
[0, 0, 0, S, 0, S, 0, 0, 0],
[0, 0, 0, 0, S, 0, 0, 0, 0],
[0, 0, 0, V, 0, V, 0, 0, 0]], dtype='int8')
dct_S = {'S1': {'a': s1_a, 'b': s1_b, 'c': s1_c, 'd':s1_d},
'S2': {'a': s2_a, 'b': s2_b, 'c': s2_c, 'd':s2_d},
'S3': {'a': s3_a, 'b': s3_b, 'c': s3_c, 'd':s3_d},
'S4': {'a': s4_a, 'b': s4_b, 'c': s4_c, 'd':s4_d},
'S5': {'a': s5_a, 'b': s5_b, 'c': s5_c, 'd':s5_d},
'S6': {'a': s6_a, 'b': s6_b, 'c': s6_c, 'd':s6_d}}
def R_transformation(arr:np.array) -> list:
l_kernel = []
for k in [0, 1, 2, 3]:
l_kernel.append(np.rot90(arr, k))
return l_kernel
def mR_transformation(arr:np.array) -> list:
l_kernel = []
for k in [0, 1, 2, 3]:
l_kernel.append(np.rot90(np.flipud(arr), k))
return l_kernel
table_transf = {'S1': {'a': [True, True],
'b': [True, False],
'c': [True, True],
'd': [True, False]},
'S2': {'a': [True, True],
'b': [True, True],
'c': [True, True],
'd': [True, True]},
'S3': {'a': [True, True],
'b': [True, True],
'c': [True, True],
'd': [True, True]},
'S4': {'a': [True, True],
'b': [True, False],
'c': [True, True],
'd': [True, False]},
'S5': {'a': [True, True],
'b': [True, True],
'c': [True, True],
'd': [True, True]},
'S6': {'a': [True, True],
'b': [True, False],
'c': [True, True],
'd': [True, False]}}
full_kernels = []
for first_k in table_transf.keys():
for second_k in table_transf[first_k].keys():
sym1, sym2 = table_transf[first_k][second_k]
if sym1:
[full_kernels.append(transf) for transf in R_transformation(dct_S[first_k][second_k])]
if sym2:
[full_kernels.append(transf) for transf in mR_transformation(dct_S[first_k][second_k])]
def _subboard_4_Conv2D(grid, k_shape:tuple, stride:tuple) -> np.array:
""" Generates the sub view of the grid to be multiply with the kernel.
First the shape of the sub_grid array is calculated, it depends on
the grid shape and the kernel shape.
The sub_grid array shape will be (n_x, n_y, k_x, k_y) with:
* n_x: number of application of the kernel along row (with stride of 1)
* n_y: number of application of the kernel along column (with stride of 1)
* k_x, k_y: the shape of the kernel
In this way sub_grid is a numpy array of n_x/n_y rows/columns of (k_x x k_y)
sub view of the grid.
Args:
-----
k_shape ([tuple[int]]): shape of the kernel
    stride ([tuple(int)]): the grid strides repeated twice (grid.strides * 2); the first
        two view axes step by one grid row/column, the last two by one element.
"""
view_shape = tuple(np.subtract(grid.shape, k_shape) + 1) + k_shape
sub_grid = as_strided(grid, view_shape, stride * 2)
return sub_grid
def _my_conv2D(grid, kernel:np.array) -> np.array:
""" Retrieves the sub_grid from the function _subboard_4_Conv2D and performs
the convolution (array multiplication + einstein sum along the 3rd and 4th
dimensions).
Args:
-----
* kernel ([np.array]): the kernel to use for convolution.
"""
sub_grid = _subboard_4_Conv2D(grid, k_shape=kernel.shape, stride=grid.strides)
res_conv = np.multiply(sub_grid, kernel)
convolved = np.einsum('ijkl->ij', res_conv)
return convolved.astype('int8')
grid = np.zeros((19, 19), dtype='int8')
grid[[3, 4, 6, 6, 6], [3, 4, 6, 7, 8]] = 1
# np.vectorize cannot batch _my_conv2D over full_kernels here, because the
# kernels have different shapes; the explicit loop below applies them one by one.
# v_my_conv2D = np.vectorize(_my_conv2D, signature='(m,n),(p,q)->(r,s)')
# v_my_conv2D(grid, full_kernels)
res = _my_conv2D(grid, full_kernels[0])
print(grid.shape)
print(full_kernels[0].shape)
print(res.shape)
res = []
start = time.time()
for kernel in full_kernels:
    res.append(_my_conv2D(grid, kernel))
end = time.time()
print(end-start)
BLACK = 1
WHITE = -1
k_freethree = np.array([1, 2, 2, 2, 2, 1])
def get_line_idx(yx:np.array):
return (np.ones((1,9)) * yx[0]).astype('int8'), (np.arange(-4, 5) + yx[1]).astype('int8')
def get_col_idx(yx:np.array):
return (np.arange(-4, 5) + yx[0]).astype('int8'), (np.ones((1,9)) * yx[1]).astype('int8')
def get_diag1_idx(yx:np.array):
return (np.arange(-4, 5) + yx[0]).astype('int8'), (np.arange(-4, 5) + yx[1]).astype('int8')
def get_diag2_idx(yx:np.array):
return (np.arange(4, -5, -1) + yx[0]).astype('int8'), (np.arange(-4, 5) + yx[1]).astype('int8')
def isdoublefreethree_position(yx:np.array, grid:np.array, color:int) -> bool:
    """ Checks whether playing `color` at position yx on grid would create a
    double free three, by convolving the free-three kernel along the row, the
    column and the two diagonals through yx and counting the directions that
    reach the detection threshold.
    Args:
        yx (np.array): (row, column) of the candidate position.
        grid (np.array): the board.
        color (int): color of the stone to play (BLACK = 1, WHITE = -1).
    Returns:
        bool: True if more than one free three would be created.
    """
pad_width = 5
c = color
extend_grid = np.pad(grid + c, pad_width, "constant", constant_values = (0))
extend_grid[yx[0] + pad_width, yx[1] + pad_width] = 2 * c
print(extend_grid)
res = []
res.append(np.convolve(extend_grid[get_line_idx(yx + pad_width)].reshape(-1,), c * k_freethree, "valid"))
res.append(np.convolve(extend_grid[get_col_idx(yx + pad_width)].reshape(-1,), c * k_freethree, "valid"))
res.append(np.convolve(extend_grid[get_diag1_idx(yx + pad_width)], c * k_freethree, "valid"))
res.append(np.convolve(extend_grid[get_diag2_idx(yx + pad_width)], c * k_freethree, "valid"))
nb_free_three = 0
for r_conv in res:
if (r_conv >= 16).any():
nb_free_three += 1
if nb_free_three > 1:
return True
return False
grid8 = np.zeros((10,10))
grid8[[2, 3, 5, 5],[2, 3, 6, 7]] = 1
grid8
stone = 1
yx = np.array([5, 5])
isdoublefreethree_position(yx , grid8, stone)
grid9 = np.zeros((10,10))
grid9[[2, 3, 5, 5],[7, 6, 6, 7]] = 1
grid9
isdoublefreethree_position(np.array([4,5]) , grid9, stone)
grid10 = np.zeros((10,10))
grid10[[2, 3, 3, 2],[2, 3, 5, 6]] = 1
grid10[[1], [7]] = -1
grid10
isdoublefreethree_position(np.array([4,4]) , grid10, stone)
test = np.array([[1, 1, 0, 0, 0]])
kernel = np.array([[1, 1, 1, 1],
[1, 1, 1, 1],
[0, 1, 1, 1],
[0, 0, 1, 1],
[0, 0, 0, 1]])
np.dot(test, kernel)
test
res = np.array([[-1, -1, -1, -1, -1]])
for _ in range(10):
res = np.append(res, np.random.randint(low=0, high=10, size=(1,5)), axis=0)
res
res[1:2, 2:4]
(res / np.arange(1, 6)).astype('int8')
res = np.array([[-1, -1]])
for _ in range(10):
res = np.append(res, np.random.randint(low=0, high=3, size=(1,2)), axis=0)
res = np.delete(res, 0, 0)
res
np.unique(res, axis=0)
i = 0
while i < res.shape[0]:
print(f"res[{i}]: ", res[i])
mask = (res == res[i]).all(axis=1)
mask[0] = False
print("mask:", mask)
res = np.delete(res, mask, axis=0)
i += 1
res
res[1]
# `tmp` is local to issimplefreethree_position, so it is not available here;
# these probes are kept only as commented-out reminders.
# tmp
# tmp[2:,0]
###Output
_____no_output_____
###Markdown
Building the scoreboard
###Code
SIZE = 10
kernel = np.array([[1, 0, 1, 0, 1],
[0, 1, 1, 1, 0],
[1, 1, 1, 1, 1],
[0, 1, 1, 1, 0],
[1, 0, 1, 0, 1]])
scoreboard = np.zeros((SIZE,SIZE))
grid = np.zeros((SIZE,SIZE))
nb_white = np.random.randint(35,40)
nb_black = 0
while nb_black <= 0:
nb_black = nb_white + np.random.randint(-4, 4)
ii = jj = 0
while ii < nb_white:
x, y = list(np.random.randint(0, SIZE, size = 2))
grid[y, x] = 1
ii += 1
while jj < nb_black:
x, y = list(np.random.randint(0, SIZE, size = 2))
grid[y, x] = -1
jj += 1
possible_pos = np.argwhere(grid != 0)
curr_pos = np.array(possible_pos[np.random.randint(0, len(possible_pos) -1, size = 1)][0])
print(f"nb_white = {nb_white} -- nb_black = {nb_black} -- cur_pos = {curr_pos}")
grid
extend_grid = np.pad(grid, 4, "constant", constant_values = 0)
extend_scoreboard = np.pad(scoreboard, 2, "constant", constant_values = 0)
curr_pos
coordy = curr_pos[0] + 4
coordx = curr_pos[1] + 4
extend_grid[coordy-2:coordy+3, coordx-2:coordx+3]
#extend_scoreboard[curr_pos[0]:curr_pos[0]+5,curr_pos[1]:curr_pos[1]+5] = convolve2d(extend_grid[coordy-2:coordy+3, coordx-2:coordx+3], kernel, "same")
convolve2d(extend_grid[coordy-4:coordy+5, coordx-4:coordx+5], kernel, "valid")
extend_scoreboard
scoreboard = extend_scoreboard[2:-2, 2:-2]
scoreboard
###Output
_____no_output_____
###Markdown
Testing whether a sequence of stones is terminal:
###Code
BLACK = 1
WHITE = -1
k_capture = np.array([[-1, 1, 1, -1]])
last_color = BLACK
last_coord = np.array([4,5])
# %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% #
board = np.zeros((11,11))
board[2:7, 5] = BLACK
board[[4,5],[6,6]]= BLACK
board[[4,5],[3,7]] = WHITE
board
start_y = 0
start_x = 5
while (board[start_y, start_x] == 0):
start_y += 1
end_y = start_y + 5
end_x = start_x
print(f"({start_y};{start_x}) ({end_y};{end_x})")
ROI = board[start_y : end_y, start_x - 2 : end_x + 2 + 1]
ROI
res = convolve2d(ROI, last_color * k_capture, "valid")
res
np.any(res == 3)
dx = dy = 0
for i in range(5):
dx, dy = dx + 1, dy + 1
print(f"{dx}, {dy}")
# NumPy has no rot45; an explicit diagonal version of the capture kernel is used instead
np.diag(k_capture.ravel())
###Output
_____no_output_____ |
wildlife-insights/Taxonomy_table.ipynb | ###Markdown
Table of Contents
###Code
import pandas as pd
import numpy as np
taxonomies_prod = pd.read_csv('/Users/alicia/Downloads/taxonomies_production.csv')
taxonomies_staging = pd.read_csv('/Users/alicia/Downloads/taxonomies_staging.csv')
taxonomies_prod.info()
taxonomies_staging.info()
for column in list(taxonomies_prod.columns):
if column not in ['id', 'unique_identifier', 'common_name_english', 'scientific_name', 'authority']:
print(column)
result = set(taxonomies_prod[column].unique()) - set(taxonomies_staging[column].unique())
print(list(result))
for column in list(taxonomies_staging.columns):
print(column)
print(taxonomies_staging[column].value_counts(dropna=False))
for column in list(taxonomies_prod.columns):
print(column)
print(taxonomies_prod[column].value_counts(dropna=False))
taxonomies_prod.head()
taxonomies_staging.head()
def diff_pd(df1, df2, cols):
"""Identify differences between two pandas DataFrames"""
    diffCols = set(df1.columns.values) ^ set(df2.columns.values)
    assert not diffCols, \
        f"DataFrame column names are different: {diffCols}"
df1.sort_index(axis=1, inplace=True)
df2.sort_index(axis=1, inplace=True)
print("Staging data shape"+ str(df1.shape))
print("Prod data shape"+ str(df2.shape))
    if any(df1.dtypes != df2.dtypes):
        print("Data types are different, trying to convert")
        df2 = df2.astype(df1.dtypes)
if df1.equals(df2):
return None
else:
columns = set(df1.columns.values) - set(cols)
print(columns)
outputR = df1.merge(df2,how="right", on=['species', 'genus','family', 'order', 'taxon_level'], suffixes=('_staging', '_pro'), indicator=True)
print(outputR._merge.value_counts(dropna=False))
print('_________________________________')
print(outputR.shape)
print(outputR.info())
outputRM = outputR[['suborder_pro', 'genus', 'family', 'superfamily_pro', 'subspecies_pro', 'scientific_name_pro', 'species', 'subfamily_pro', 'class_pro', 'order', 'taxonomy_type_pro', 'authority_pro', 'iucn_category_id_pro', 'common_name_english_pro', 'taxon_level', 'unique_identifier_pro','id_staging']]
        outputRM = outputRM.drop_duplicates(keep=False)
print(outputRM.info())
return outputRM
test_output = diff_pd(taxonomies_staging, taxonomies_prod, ['unique_identifier','id'])
for column in list(test_output.columns):
print(column)
print(test_output[column].value_counts(dropna=True))
r = test_output['unique_identifier_pro'].value_counts(dropna=False)
f = list(r.where(r>1).dropna().index)
s = test_output[test_output['unique_identifier_pro'].isin(f)]
s[s.taxon_level != 'none'].info()
s.drop_duplicates(subset=['unique_identifier_pro'], keep='first', inplace=True)
s
test_output.drop_duplicates(subset=['unique_identifier_pro'], keep='first', inplace=True)
test_output.rename(columns={"suborder_pro": "suborder", "superfamily_pro": "superfamily", "subfamily_pro": "subfamily", "class_pro": "class", "authority_pro": "authority", "common_name_english_pro": "common_name_english", "iucn_category_id_pro": "iucn_category_id", "unique_identifier_pro": "unique_identifier","id_staging": "id","taxonomy_type_pro": "taxonomy_type","scientific_name_pro": "scientific_name","subspecies_pro": "subspecies"}, inplace=True)
test_output.info()
test_output.to_csv('/Users/alicia/Downloads/taxonomies_result.csv',index=False)
###Output
_____no_output_____ |
codici/.ipynb_checkpoints/loss1-checkpoint.ipynb | ###Markdown
###Code
%matplotlib inline
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import scipy as scipy
import scipy.special as sp
import pandas as pd
import urllib.request
colors = ["xkcd:dusty blue", "xkcd:dark peach", "xkcd:dark seafoam green",
"xkcd:dusty purple","xkcd:watermelon", "xkcd:dusky blue", "xkcd:amber",
"xkcd:purplish", "xkcd:dark teal", "xkcd:orange", "xkcd:slate"]
plt.style.use('ggplot')
plt.rcParams['font.family'] = 'sans-serif'
plt.rcParams['font.serif'] = 'Ubuntu'
plt.rcParams['font.monospace'] = 'Ubuntu Mono'
plt.rcParams['font.size'] = 10
plt.rcParams['axes.labelsize'] = 10
plt.rcParams['axes.labelweight'] = 'bold'
plt.rcParams['axes.titlesize'] = 10
plt.rcParams['xtick.labelsize'] = 8
plt.rcParams['ytick.labelsize'] = 8
plt.rcParams['legend.fontsize'] = 10
plt.rcParams['figure.titlesize'] = 12
plt.rcParams['image.cmap'] = 'jet'
plt.rcParams['image.interpolation'] = 'none'
plt.rcParams['figure.figsize'] = (16, 8)
plt.rcParams['lines.linewidth'] = 2
plt.rcParams['lines.markersize'] = 8
filepath = "../dataset/"
url = "https://tvml.github.io/ml2021/dataset/"
def get_file(filename,local):
if local:
return filepath+filename
else:
urllib.request.urlretrieve (url+filename, filename)
return filename
def plot_ds(data,m=None,q=None):
fig = plt.figure(figsize=(16,8))
minx, maxx = min(data.x1), max(data.x1)
deltax = .1*(maxx-minx)
x = np.linspace(minx-deltax,maxx+deltax,1000)
ax = fig.gca()
ax.scatter(data[data.t==0].x1, data[data.t==0].x2, s=40, edgecolor='k', alpha=.7)
ax.scatter(data[data.t==1].x1, data[data.t==1].x2, s=40, edgecolor='k', alpha=.7)
if m:
ax.plot(x, m*x+q, lw=2, color=colors[5])
plt.xlabel('$x_1$', fontsize=12)
plt.ylabel('$x_2$', fontsize=12)
plt.xticks(fontsize=10)
plt.yticks(fontsize=10)
plt.title('Dataset', fontsize=12)
plt.show()
def plot_all(cost_history, m, q, low, high, step):
idx = range(low,high,step)
ch = cost_history[idx]
th1 = m[idx]
th0 = q[idx]
fig = plt.figure(figsize=(18,6))
ax = fig.add_subplot(1,2,1)
minx, maxx, miny, maxy = 0, len(ch), ch.min(), ch.max()
deltay, deltax = .1*(maxy-miny), .1*(maxx-minx)
miny, maxy, minx, maxx = miny - deltay, maxy + deltay, minx - deltax, maxx + deltax
ax.plot(range(len(ch)), ch, alpha=1, color=colors[0], linewidth=2)
    plt.xlabel('iterations')
    plt.ylabel('cost')
plt.xlim(minx,maxx)
plt.ylim(miny,maxy)
ax.xaxis.set_major_formatter(mpl.ticker.FuncFormatter(lambda x, pos: '{:0.0f}'.format(x*step+low)))
plt.xticks(fontsize=8)
plt.yticks(fontsize=8)
ax = fig.add_subplot(1,2,2)
minx, maxx, miny, maxy = th0.min(), th0.max(), th1.min(), th1.max()
deltay, deltax = .1*(maxy-miny), .1*(maxx-minx)
miny, maxy = miny - deltay, maxy + deltay
miny, maxy, minx, maxx = miny - deltay, maxy + deltay, minx - deltax, maxx + deltax
ax.plot(th0, th1, alpha=1, color=colors[1], linewidth=2, zorder=1)
ax.scatter(th0[-1],th1[-1], color=colors[5], marker='o', s=40, zorder=2)
plt.xlabel(r'$m$')
plt.ylabel(r'$q$')
plt.xlim(minx,maxx)
plt.ylim(miny,maxy)
plt.xticks(fontsize=8)
plt.yticks(fontsize=8)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Risk and minimization

Given any algorithm that, for each input value $x$, produces a prediction $f(x)$, the quality of its predictions can be defined through a *loss function* $L(x_1, x_2)$, where $x_1$ is the value predicted by the model and $x_2$ is the correct value associated with $x$. The value $L(f(x),y)$ therefore measures how much it "costs" (under the cost model induced by the function itself) to predict, given $x$, the value $f(x)$ instead of the correct value $y$.

Since the cost clearly depends on the pair $x,y$, an overall assessment of the quality of the algorithm's predictions is obtained by taking the expected value of the loss as $x$ and $y$ vary, under a joint probability density $p(x,y)$. The distribution $p(x,y)$ gives the probability that the next point on which a prediction is required is $x$ and that the correct value to predict is $y$. Note that we do not assume that two different occurrences of the same $x$ are associated with the same value of $y$: we therefore do not assume a functional relation, however unknown, between $x$ and $y$, but only a probabilistic one, $p(y\mid x)$. This makes it possible to account for noise in the observations.

Hence, denoting by $D_x$ and $D_y$ the domains of $x$ and $y$, and assuming a distribution $p(x,y)$ that provides a statistical model of the setting in which predictions are to be made, the quality of a prediction algorithm computing the function $f(x)$ is given by the *risk*
$$\mathcal{R}(f)=\mathbb{E}_p[L(f(x),y)]=\int_{D_x}\int_{D_y} L(f(x),y)p(x,y)dxdy$$
The risk tells us how much we expect it to cost to predict $f(x)$, assuming that:
1. $x$ is drawn at random from the marginal distribution $$ p(x)=\int_{D_y} p(x,y)dy $$
2. the corresponding correct value to predict is drawn at random from the conditional distribution $$p(y\mid x)=\frac{p(x,y)}{p(x)}$$
3. the cost is given by the function $L(x_1,x_2)$
Nel caso specifico, se si valuta allo stesso modo "sgradevole" uscire con l'ombrello (per una previsione T) senza poi doverlo usare che bagnarsi per la pioggia non avendo preso l'ombrello (per una previsione F) allora la funzione di costo risulta $L_1(x_1,x_2)$, definita dalla tabella seguente| $x_1$/$x_2$ | T | F || :---------: | :--: | :--: || T | 0 | 1 || F | 1 | 0 |Se invece reputiamo molto piรน sgradevole bagnarci per non aver preso l'ombrello rispetto a prendere l'ombrello stesso inutilmente, allora la funzione di costo $L_2(x_1,x_2)$, potrร essere definita come| $x_1$/$x_2$ | T | F || :---------: | :--: | :--: || T | 0 | 1 || F | 25 | 0 |Se facciamo l'ipotesi che la distribuzione congiunta su $\{S,N,C\}\times\{T,F\}$ sia| $x$/$y$ | T | F || :-----: | :--: | :--: || S | .05 | .2 || N | .25 | .25 || C | .2 | .05 |e consideriamo due possibili funzioni predittive $f_1(x)$ e $f_2(x)$| $x$ | $f_1(x)$ | $f_2(x)$ || :--: | :--------------------: | :------: || S | F | F || N | F | T || C | T | T |possiamo verificare che nel caso in cui la funzione di costo sia $L_1$ allora il rischio nei due casi รจ $\mathcal{R}(f_1)=0.65$ e $\mathcal{R}(f_2)=0.4$ per cui $f_2$ รจ preferibile a $f_1$. Al contrario, se la funzione di costo รจ $L_2$, allora risulta $\mathcal{R}(f_1)=1.55$ e $\mathcal{R}(f_2)=7.55$, per cui, al contrario, $f_1$ รจ preferibile a $f_2$.Come si vede, quindi, la scelta tra $f_1(x)$ e $f_2(x)$ รจ dipendente dalla funzione di costo adottata e dalla distribuzione $p(x,y)$ che invece รจ data e, tra l'altro, sconosciuta. Quindi, una diversa distribuzione potrebbe portare a conclusioni diverse anche considerando una stessa funzione di costo: se ad esempio si fa riferimento alla funzione di costo $L_1$, allora la distribuzione congiunta| $x$/$y$ | T | F || :-----: | :--: | :--: || S | .05 | .05 || N | .05 | .4 || C | .05 | .4 |determina dei valori di rischio $\mathcal{R}(f_1)=0.6$ e $\mathcal{R}(f_2)=0.9$, rendendo ora $f_1$ preferibile a $f_2$. Rischio empiricoDato che la distribuzione reale $p(x,y)$ รจ sconosciuta per ipotesi (se cosรฌ non fosse potremmo sempre effettuare predizioni utilizzando la distribuzione condizionata reale $p(y\mid x)$) il calcolo del rischio reale รจ impossibile ed รจ necessario effettuare delle approssimazioni, sulla base dei dati disponibili. In particolare, possiamo applicare il metodo standard di utilizzando la media aritmetica su un campione come stimatore del valore atteso, e considerare il *rischio empirico* (empirical risk) calcolato effettuando l'operazione di media sul campione offerto dai dati disponibili nel training set $X=\{(x_1,y_1),\ldots,(x_n,y_n)\}$$$\overline{\mathcal{R}}(f; X)=\overline{L}(f(x), y; X)=\frac{1}{n}\sum_{i=1}^nL(f(x_i),y_i)$$La funzione utilizzata per le predizioni sarร allora quella che, nell'insieme di funzioni considerato, minimizza il rischio empirico$$f^*=\underset{f\in F}{\mathrm{argmin}}\;\overline{\mathcal{R}}(f;X)$$Si noti che, in effetti, il rischio empirico dipende sia dai dati in $X$ che dalla funzione $f$: in questo senso รจ una funzione rispetto a $X$ e un funzionale rispetto a $f$. La ricerca di $f^*$ comporta quindi una minimizzazione funzionale del rischio empirico. 
In generale, tale situazione viene semplificata limitando la ricerca all'interno di classi di funzioni definite da coefficienti: in questo modo, il rischio empirico puรฒ essere espresso come funzione dei coefficienti della funzione (oltre che di $X$) e la minimizzazione รจ una normale minimizzazione di funzione.Chiaramente, la speranza รจ che minimizzare il rischio empirico dia risultati simili a quelli che si otterrebbero minimizzando il rischio reale. Ciรฒ dipende, in generale, da quattro fattori:- La dimensione del training set $X$. Al crescere della quantitร di dati, $\overline{\mathcal{R}}(f; X)$ tende a $\mathcal{R}(f)$ per ogni funzione $f$- La distribuzione reale $p(x,y)$. Maggiore รจ la sua complessitร , maggiore รจ la quantitร di dati necessari per averne una buona approssimazione.- La funzione di costo $L$, che puรฒ creare problemi se assegna costi molto elevati in situazioni particolari e poco probabili- L'insieme $F$ delle funzioni considerate. Se la sua dimensione รจ elevata, e le funzioni hanno una struttura complessa, una maggior quantitร di dati risulta necessaria per avere una buona approssimazione.Al tempo stesso, considerare un insieme piccolo di funzioni semplici rende sรฌ la minimizzazione del rischio implicito su $F$ una buona approssimazione del minimo rischio reale su $F$ stesso, ma al tempo stesso comporta che tale minimo possa essere molto peggiore di quello ottenibile considerando classi piรน ampie di funzioni. Minimizzazione della funzione di rischio In generale, l'insieme $F$ delle funzioni รจ definito in modo parametrico $F=\{f(x;\theta)\}$ dove $\theta\in D_\theta$ รจ un coefficiente (tipicamente multidimensionale) che determina, all'interno della classe $F$ (definita tipicamente in termini ''strutturali'') la particolare funzione utilizzata. Un esempio tipico รจ offerto dalla *regressione lineare*, in cui si vuole prevedere il valore di un attributo $y$ con dominio $R$ sulla base dei valori di altri $m$ attributi $x_1,\ldots, x_m$ (che assumiamo per semplicitร in $R$ anch'essi): nella regressione lineare, l'insieme delle possibili funzioni $f:R^m\mapsto R$ รจ limitato alle sole funzioni lineari $f_\mathbf{w}(x)=w_0+w_1x_1+\ldots+w_mx_m$, e il parametro $\theta$ corrisponde al vettore $\mathbf{w}=(w_0,\ldots,w_m)$ dei coefficienti.In questo caso, il rischio empirico, fissata la famiglia $F$ di funzioni, puรฒ essere ora inteso come funzione di $\theta$$$\overline{\mathcal{R}}(\theta; X)=\overline{L}(f(x;\theta), y; X)=\frac{1}{n}\sum_{i=1}^nL(f(x_i;\theta),y_i)\hspace{2cm}f\in F$$e la minimizzazione del rischio empirico puรฒ essere effettuata rispetto a $\theta$$$\theta^*=\underset{\theta\in D_\theta}{\mathrm{argmin}}\;\overline{\mathcal{R}}(\theta;X)$$da cui deriva la funzione ottima (nella famiglia $F$) $f^*=f(x;\theta^*)$la minimizzazione della funzione di rischio avrร luogo nel dominio di definizione $D_\theta$ di $\theta$, e potrร essere effettuata in modi diversi, in dipendenza della situazione e di considerazioni di efficienza di calcolo e di qualitร delle soluzioni derivate. 
Ricerca analitica dell'ottimoSe il problema si pone in termini di minimizzazione *senza vincoli*, e quindi all'interno di $R^m$, un primo approccio รจ quello standard dell'analisi di funzioni, consistente nella ricerca di valori $\overline\theta$ di $\theta$ per i quali si annullano tutte le derivate parziali $\frac{\partial \overline{\mathcal{R}}(\theta; X)}{\partial \theta_i}$, tale cioรจ che, se indichiamo con $m$ la dimensione (numero delle componenti) di $\theta$, il sistema su $m$ incognite definito dalle $m$ equazioni$$\frac{\partial \overline{\mathcal{R}}(\theta; X)}{\partial \theta_i}\Bigr|_{\theta=\overline\theta}=0\hspace{2cm} i=1,\ldots,m$$risulta soddisfatto. La soluzione analitica di questo sistema risulta tipicamente ardua o impossibile, per cui vengono spesso adottate tecniche di tipo numerico. Gradient descentLa discesa del gradiente (*gradient descent*) รจ una delle tecniche di ottimizzazione piรน popolari, in particolare nel settore del Machine Learning e delle Reti Neurali. La tecnica consiste nel minimizzare una funzione obiettivo $J(\theta)$ definita sui parametri $\theta\in\mathbb{R}^d$ del modello mediante aggiornamenti successivi del valore di $\theta$ (a partire da un valore iniziale $\theta^{(0)}$) nella direzione opposta a quella del valore attuale del gradiente $J'(\theta)=\nabla J(\theta)$. Si ricorda, a tale proposito, che, data una funzione $f(x_1,x_2,\ldots,x_d)$, il gradiente $\nabla f$ di $f$ รจ il vettore $d$-dimensionale delle derivate di $f$ rispetto alle variabili $x_1,\ldots, x_d$: il vettore cioรจ tale che $[\nabla f]_i=\frac{\partial f}{\partial x_i}$. Un parametro $\eta$, detto *learning rate* determina la scala degli aggiornamenti effettuati, e quindi la dimensione dei passi effettuati nella direzione di un minimo locale.Possiamo interpretare la tecnica come il muoversi sulla superficie della funzione $J(\theta)$ seguendo sempre la direzione di massima pendenza verso il basso, fino a raggiungere un punto da cui รจ impossibile scendere ulteriormente. Varianti di discesa del gradienteIn molti casi, e sempre nell'ambito del ML, la funzione obiettivo corrisponde all'applicazione di una funzione di costo (*loss function*), predefinita e dipendente dal modello adottato, su un insieme dato di elementi di un dataset $X=(x_1,\ldots, x_n)$ (che nel caso di apprendimento supervisionato รจ un insieme di coppie $X=((x_1,t_1),\ldots,(x_n,t_n))$): rappresentiamo questa situazione con $J(\theta; X)$. Questo corrisponde all'approssimazione del *rischio*$$\mathcal{R}(\theta)=\int J(\theta,x)p(x)dx=E_{p}[\theta]$$In generale, la funzione di costo รจ definita in modo additivo rispetto agli elementi di $X$ (il costo relativo all'insieme $X$ รจ pari alla somma dei costi relativi ai suoi elementi), per cui il valore risulta $J(\theta;X)=\sum_{i=1}^nJ(\theta;x_i)$, o preferibilmente, per evitare una eccessiva dipendenza dal numero di elementi, come media$$J(\theta;X)=\frac{1}{n}\sum_{i=1}^nJ(\theta;x_i)$$Si noti che, per le proprietร dell'operazione di derivazione, da questa ipotesi deriva l'additivitร anche del gradiente, per cui$$J'(\theta; X)=\sum_{i=1}^nJ'(\theta;x_i)$$ o $$J'(\theta;X)=\frac{1}{n}\sum_{i=1}^nJ'(\theta;x_i)$$Possiamo allora identificare tre varianti del metodo, che differiscono tra loro per la quantitร di elementi di $X$ utilizzati, ad ogni passo, per calcolare il gradiente della funzione obiettivo. 
Una quantitร maggiore di dati utilizzati aumenta l'accuratezza dell'aggiornamento, ma anche il tempo necessario per effettuare l'aggiornamento stesso (in particolare, per valutare il gradiente per il valore attuale di $\theta$). Batch gradient descentIn questo caso, il gradiente รจ valutato, ogni volta, considerando tutti gli elementi nel training set $X$. Quindi si ha che al passo $k$-esimo viene eseguito l'aggiornamento$$\theta^{(k+1)}=\theta^{(k)}-\eta\sum_{i=1}^nJ'(\theta^{(k)};x_i)$$ o anche, per i singoli coefficienti$$\theta_j^{(k+1)}=\theta_j^{(k)}-\eta\sum_{i=1}^n\frac{\partial J(\theta;x_i)}{\partial\theta_j}\Bigr\vert_{\small\theta=\theta^{(k)}}$$Dato che si richiede quindi, ad ogni iterazione, la valutazione del gradiente (con il valore attuale $\theta^{(k)}$ di tutti i coefficienti) su tutti gli elementi di $X$, questa soluzione tende ad essere molto lenta, soprattutto in presenza di dataset di dimensioni molto estese, come nel caso di reti neurali complesse e deep learning. Inoltre, l'approccio diventa del tutto impraticabile se il dataset รจ talmente esteso da non entrare neanche in memoria.In termini di codice, il metodo batch gradient descent si presenta come:```pythonfor i in range(n_epochs): g = 0 for k in range(dataset_size): g = g+evaluate_gradient(loss_function, theta, X[k]) theta = theta-eta*g```Il ciclo viene eseguito un numero di volte pari al numero di epoche, dove per *epoca* si intende una iterazione su tutti glielementi di $X$. Di conseguenza, la valutazione di $\theta$ viene aggiornata un numero di volte pari al numero di epoche. Ilmetodo batch gradient descent converge certamente al minimo globale se la funzione $J(\theta)$ รจ convessa, mentre altrimenticonverge a un minimo locale. EsempioApplichiamo le considerazioni a un semplice problema di classificazione su un dataset bidimensionale, riportato graficamente di seguito.
###Code
data = pd.read_csv(get_file("testSet.txt", local=0), delim_whitespace=True, header=None, names=['x1','x2','t'])
plot_ds(data)
n = len(data)
nfeatures = len(data.columns)-1
X = np.array(data[['x1','x2']])
t = np.array(data['t']).reshape(-1,1)
X = np.column_stack((np.ones(n), X))
###Output
_____no_output_____
###Markdown
The classification method considered here is *logistic regression*, which determines a separating hyperplane (a line, in this case) by minimizing, with respect to the vector $\theta$ of the coefficients of the hyperplane equation (3 of them here), the empirical risk on the dataset associated with the *cross-entropy* loss, for which the cost of a single element $x=(x_1,\ldots,x_d)$ is $$ J(\theta, x)=-\left(t\log y + (1-t)\log (1-y)\right) $$ where the *target* $t$ is the 0/1 class label of the element and $y\in (0,1)$ is the value predicted by the model, defined as $$y = \sigma(x) = \frac{1}{1+e^{-\left(\sum_{i=1}^d\theta_ix_i+\theta_0\right)}}$$
###Code
def sigma(theta, X):
return sp.expit(np.dot(X, theta))
###Output
_____no_output_____
###Markdown
The empirical risk associated with the whole dataset can then be defined as the corresponding mean $$J(\theta, X)=-\frac{1}{n}\sum_{i=1}^n \left(t_i\log \sigma(x_i) + (1-t_i)\log (1-\sigma(x_i))\right)$$
###Code
def approx_zero(v):
eps = 1e-50
v[v<eps]=eps
return v
def cost(theta, X, t):
eps = 1e-50
v = sigma(theta,X)
v[v<eps]=eps
term1 = np.dot(np.log(v).T,t)
v = 1.0 - sigma(theta,X)
v[v<eps]=eps
term2 = np.dot(np.log(v).T,1-t)
return ((-term1 - term2) / len(X))[0]
###Output
_____no_output_____
###Markdown
The gradient of the loss is then\begin{align*}\frac{\partial J(\theta,x)}{\partial\theta_i}&=-(t-\sigma(x))x_i\hspace{1cm}i=1,\ldots,d\\\frac{\partial J(\theta,x)}{\partial\theta_0}&=-(t-\sigma(x))\end{align*}and the corresponding gradient of the empirical risk is\begin{align*}\frac{\partial J(\theta,X)}{\partial\theta_i}&=-\frac{1}{n}\sum_{j=1}^n (t_j-\sigma(x_j))x_{ji}\hspace{1cm}i=1,\ldots,d\\\frac{\partial J(\theta,X)}{\partial\theta_0}&=-\frac{1}{n}\sum_{j=1}^n(t_j-\sigma(x_j))\end{align*}
###Code
def gradient(theta, X, t):
return -np.dot(X.T, (t-sigma(theta, X))) / len(X)
###Output
_____no_output_____
###Markdown
Accordingly, one iteration of BGD corresponds to the updates\begin{align*}\theta_j^{(k+1)}&=\theta_j^{(k)}-\eta\frac{\partial J(\theta,X)}{\partial\theta_j}\Bigr\vert_{\small\theta=\theta^{(k)}}=\theta_j^{(k)}+\frac{\eta}{n}\sum_{i=1}^n (t_i-\sigma(x_i))x_{ij}\hspace{1cm}j=1,\ldots,d\\\theta_0^{(k+1)}&=\theta_0^{(k)}-\eta\frac{\partial J(\theta,X)}{\partial\theta_0}\Bigr\vert_{\small\theta=\theta^{(k)}}=\theta_0^{(k)}+\frac{\eta}{n}\sum_{i=1}^n(t_i-\sigma(x_i))\end{align*}
###Code
def batch_gd(X, t, eta = 0.1, epochs = 10000):
theta = np.zeros(nfeatures+1).reshape(-1,1)
theta_history = []
cost_history = []
for k in range(epochs):
theta = theta - eta * gradient(theta,X,t)
theta_history.append(theta)
cost_history.append(cost(theta, X, t))
theta_history = np.array(theta_history).reshape(-1,3)
cost_history = np.array(cost_history).reshape(-1,1)
m = -theta_history[:,1]/theta_history[:,2]
q = -theta_history[:,0]/theta_history[:,2]
return cost_history, theta_history, m, q
###Output
_____no_output_____
###Markdown
Applying the method to the dataset, after fixing a value for the parameter $\eta$ and for the number of epochs (where an epoch corresponds to applying the iteration to all the elements of the dataset), we obtain the sequences of the costs and of the slope and intercept of the separating line.
###Code
cost_history, theta_history, m, q = batch_gd(X, t, eta = 0.1, epochs = 100000)
###Output
_____no_output_____
###Markdown
The regular convergence of the method is evident in the following figure, which shows a typical behaviour of the cost function as a function of the number of iterations, together with the sequence of values taken by $\theta$, here viewed through the two parameters (slope and intercept) of the separating line.
###Code
low, high, step = 0, 5000, 10
plot_all(cost_history, m, q, low, high, step)
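# m_star and q_star are reference values for the slope and intercept of the
# separating line at convergence (presumably taken from a long run); dist
# measures the distance of each iterate's line from this reference, and the
# argmin below finds the first iteration that gets within 1e-2 of it.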
m_star = 0.62595499
q_star = 7.3662299
f = lambda i: np.sqrt((m_star-m[i])**2+(q_star-q[i])**2)
dist = np.array([f(i) for i in range(len(m))])
np.argmin(dist>1e-2)+1
###Output
_____no_output_____
###Markdown
Below, the resulting separating line:
###Code
plot_ds(data,m[-1],q[-1])
###Output
_____no_output_____
###Markdown
Stochastic gradient descent

In stochastic gradient descent, unlike the previous case, the gradient evaluated at each iteration refers to a single element $x_i$ of the training set. We therefore have
$$\theta^{(k+1)}=\theta^{(k)}-\eta J'(\theta^{(k)};x_i)$$
and, for the individual coefficients,
$$\theta_j^{(k+1)}=\theta_j^{(k)}-\eta\frac{\partial J(\theta;x_i)}{\partial\theta_j}\Bigr\vert_{\small\theta=\theta^{(k)}}$$
Batch gradient descent evaluates the gradient on all the elements at every iteration, including ones that are very similar to each other, thus performing a redundant set of operations. SGD avoids this by performing a single evaluation, and therefore runs faster.

At the same time, while with BGD the values of the cost function decrease regularly towards the local minimum, with SGD the behaviour is much more irregular: the cost fluctuates around an overall decreasing trend, with occasionally significant local increases. On the one hand this is not necessarily bad, since the local fluctuations may allow the algorithm to leave the neighbourhood of a local minimum and continue the search for new minima; on the other hand, the fluctuations make the final convergence to the minimum harder. The same oscillations also appear in the values of the coefficients.

Note, however, that if we look at the sequence of cost values at the end of each *epoch* (a sweep over all the elements of the dataset), the underlying decreasing trend emerges.

In code, stochastic gradient descent looks like:
```python
for i in range(n_epochs):
    np.random.shuffle(data)
    for k in range(dataset_size):
        g = evaluate_gradient(loss_function, theta, X[k])
        theta = theta - eta*g
```
In the case of logistic regression, the update at each iteration is therefore\begin{align*}\theta_j^{(k+1)}&=\theta_j^{(k)}+\eta(t_i-\sigma(x_i))x_{ij}\hspace{1cm}j=1,\ldots,d\\\theta_0^{(k+1)}&=\theta_0^{(k)}+\eta(t_i-\sigma(x_i))\end{align*}
###Code
def stochastic_gd(X, t, eta = 0.01, epochs = 1000):
theta = np.zeros(nfeatures+1).reshape(-1,1)
theta_history = []
cost_history = []
for j in range(epochs):
for i in range(n):
e = (t[i] - sigma(theta, X[i,:]))[0]
theta = theta + eta * e * X[i,:].reshape(-1,1)
theta_history.append(theta)
cost_history.append(cost(theta, X, t))
theta_history = np.array(theta_history).reshape(-1,3)
cost_history = np.array(cost_history).reshape(-1,1)
m = -theta_history[:,1]/theta_history[:,2]
q = -theta_history[:,0]/theta_history[:,2]
return cost_history, theta_history, m, q
###Output
_____no_output_____
###Markdown
To apply the method we still need to specify the value of $\eta$ and the number of epochs. Given the structure of the algorithm, the number of iterations is then the number of epochs multiplied by the size $n$ of the dataset.
###Code
cost_history, theta_history, m, q = stochastic_gd(X, t, eta = 0.01, epochs = 10000)
low, high, step = 0*n, 150*n, 30
plot_all(cost_history, m, q, low, high, step)
dist = np.array([f(i) for i in range(len(m))])
np.argmin(dist>1e-2)+1
plot_ds(data,m[-1],q[-1])
###Output
_____no_output_____
###Markdown
As can be seen from the following figure, if the cost and the coefficient values are considered only at the end of each epoch, their behaviour turns out to be smooth.
###Code
low, high, step = 0*n, 1000*n, n
plot_all(cost_history, m, q, low, high, step)
###Output
_____no_output_____
###Markdown
Mini-batch gradient descent

This approach sits between the two previous ones, generalizing the SGD idea of using one element per iteration to using different subsets of the dataset. At the beginning of each epoch the algorithm partitions the dataset into $\lceil n/s\rceil$ subsets (*mini-batches*) of fixed size $s$, and then performs $\lceil n/s\rceil$ iterations, in each of which $\theta$ is updated by evaluating the gradient on the $s$ elements of the current mini-batch.

Mini-batch gradient descent is the algorithm typically used for training neural networks, in particular deep ones.

If $X_i\subset X$ denotes the current mini-batch, the update at each iteration is
$$\theta^{(k+1)}=\theta^{(k)}-\eta\sum_{x\in X_i}J'(\theta^{(k)};x)$$
or, for the individual coefficients,
$$\theta_j^{(k+1)}=\theta_j^{(k)}-\eta\sum_{x\in X_i}\frac{\partial J(\theta;x)}{\partial\theta_j}\Bigr\vert_{\small\theta=\theta^{(k)}}$$
In this way the variance of the coefficient updates is reduced. Moreover, in practice one can exploit the very efficient implementations of mini-batch gradients available in recent *deep learning* libraries. Mini-batch sizes typically range between $50$ and $256$.
```python
for i in range(n_epochs):
    np.random.shuffle(data)
    for batch in get_batches(dataset, batch_size):
        g = 0
        for x in batch:
            g = g + evaluate_gradient(loss_function, theta, x)
        theta = theta - eta*g
```
The result is an oscillating behaviour of both the cost function and the estimated coefficients. Clearly, the smaller the mini-batch size, the stronger the oscillation, i.e. the closer we get to SGD.

The updates in the case of logistic regression follow immediately from the above\begin{align*}\theta_j^{(k+1)}&=\theta_j^{(k)}+\eta\sum_{x_i\in MB}( t_i-y_i)x_{ij}\hspace{1cm}j=1,\ldots,d\\\theta_0^{(k+1)}&=\theta_0^{(k)}+\eta\sum_{x_i\in MB}(t_i-y_i)\end{align*}
###Code
def mb_gd(X, t, eta = 0.01, epochs = 1000, minibatch_size = 5):
mb = int(np.ceil(float(n)/minibatch_size))
idx = np.arange(0,n)
np.random.shuffle(idx)
theta = np.zeros(nfeatures+1).reshape(-1,1)
theta_history = []
cost_history = []
cost_history_iter = []
for j in range(epochs):
for k in range(mb-1):
g = 0
for i in idx[k*minibatch_size:(k+1)*minibatch_size]:
e = (t[i] - sigma(theta, X[i,:]))[0]
g = g + e * X[i,:]
theta = theta + eta * g.reshape(-1,1)
theta_history.append(theta)
cost_history.append(cost(theta, X, t))
g = 0
        for i in idx[(mb - 1) * minibatch_size:n]:
e = (t[i] - sigma(theta, X[i,:]))[0]
g = g + e * X[i,:]
theta = theta + eta * g.reshape(-1,1)
theta_history.append(theta)
cost_history.append(cost(theta, X, t))
theta_history = np.array(theta_history).reshape(-1,3)
cost_history = np.array(cost_history).reshape(-1,1)
m = -theta_history[:,1]/theta_history[:,2]
q = -theta_history[:,0]/theta_history[:,2]
return cost_history, m, q
cost_history, m, q = mb_gd(X, t, eta = 0.01, epochs = 10000, minibatch_size = 5)
low, high, step = 0, 5000, 10
plot_all(cost_history, m, q, low, high, step)
dist = np.array([f(i) for i in range(len(m))])
np.argmin(dist>1e-2)+1
plot_ds(data,m[-1],q[-1])
###Output
_____no_output_____
###Markdown
Critical issues

The elementary gradient descent methods illustrated above do not, in general, guarantee fast convergence. Moreover, their use raises a number of issues:
- choosing the value of the learning rate $\eta$ can be difficult: a value that is too small may lead to exceedingly slow convergence, while a value that is too large may cause oscillations around the minimum, or even divergence
- to mitigate this, $\eta$ can be adjusted over time, for instance by decreasing it according to a predefined schedule, or whenever the decrease of the cost function between two successive epochs falls below a given threshold; both the schedules and the thresholds, however, must be fixed in advance and therefore cannot adapt to the characteristics of the dataset
- the same learning rate is applied to the update of all coefficients
- in many cases the cost function, especially when dealing with neural networks, is strongly non-convex, and therefore has many local minima and saddle points. The methods considered may have difficulty escaping such configurations, in particular saddle points, which are often surrounded by regions where the gradient is very small.

Momentum

The previous methods are inefficient in situations where the cost function varies very differently along different directions (for instance, valleys that descend slowly and have steep side walls). In such cases the previous algorithms move very slowly towards the minimum, while oscillating substantially in the transverse direction: this situation is illustrated on the left of the figure below.

The *momentum method* relies on a physical interpretation of the optimization process, in which gradient descent is seen as the motion of a body of mass $m=1$ sliding on the surface of the cost function $J(\theta)$, subject to a weight force $F(\theta)=-\nabla U(\theta)$, where $U(\theta)=\eta h(\theta)=\eta J(\theta)$ is the potential energy of the body at position $\theta$ (the physical constant $g$ appearing in the weight $F=-mgh$ is thus taken equal to $\eta$). In this model, the negative of the gradient, $-\eta J'(\theta)$, is therefore the force (and acceleration, since $a=\frac{F}{m}$) acting on the body at the point $\theta$.

In gradient descent, the displacement of the body at a given point $\theta$ is determined by the acceleration computed at that same point, and hence by the gradient $J'(\theta)$, since the update rule is $\theta^{(k+1)}=\theta^{(k)}-\eta J'(\theta^{(k)})$.

The momentum method refers instead to a model that is more consistent with the physical reality of a body moving on a surface under its weight, a model that involves the notion of velocity $v(\theta)$.
In this model, the displacement of the body from a given point $\theta$ is determined by the velocity computed at that same point, $\theta^{(k+1)}=\theta^{(k)}+v^{(k+1)}$, where the change in velocity is given by the acceleration, $v^{(k+1)}=v^{(k)}-\eta J'(\theta^{(k)})$.
As can be seen,
\begin{align*}v^{(k+1)}&=-\eta J'(\theta^{(k)})+v^{(k)}=-\eta J'(\theta^{(k)})-\eta J'(\theta^{(k-1)})+v^{(k-1)}=\cdots=-\eta\sum_{i=0}^kJ'(\theta^{(i)})+v^{(0)}\\\theta^{(k+1)}&=\theta^{(k)}+v^{(k+1)}=\theta^{(k)}-\eta\sum_{i=0}^kJ'(\theta^{(i)})+v^{(0)}\end{align*}
which corresponds to associating the displacement with the sum (the integral, in the physical setting) of all past accelerations. Referring to this model makes the algorithm tend, at each step, to preserve at least part of the direction of the previous step (since $v^{(k+1)}=-\eta J'(\theta^{(k)})+v^{(k)}$), rewarding directions that persist across a sequence of steps. This produces the behaviour shown on the right of the previous figure, where the inertia in the direction of the minimum limits the oscillations.
Note that this does not happen in plain gradient descent, where $v^{(k+1)}=-\eta J'(\theta^{(k)})$.
Mathematically, the inertia effect is obtained by subtracting from the (vector) velocity computed at the previous step the gradient evaluated at the corresponding position. The gradient is subtracted because, keeping the analogy with mechanics, a positive gradient tends to reduce the velocity.
The momentum method typically uses a second parameter $\gamma$, which determines the fraction of $v^{(k)}$ that survives in the definition of $v^{(k+1)}$ and plays the (physical) role of a friction coefficient. This gives the formulation:
\begin{align*}v^{(k+1)}&=\gamma v^{(k)} -\eta\sum_{i=1}^nJ'(\theta^{(k)};x_i)\\\theta^{(k+1)}&=\theta^{(k)}+v^{(k+1)}\end{align*}
At each step the momentum method first computes the current displacement vector from the displacement at the previous step and from the gradient at $\theta$: the relative contribution of the two terms is weighted by the pair of parameters $\gamma$ and $\eta$. The computed displacement is then applied to the current value of $\theta$ (the minus sign comes, as always, from the fact that we are looking for a local minimum).
If the gradient points in the same direction as the current velocity, that velocity is increased, so the update of $\theta$ becomes larger, growing as long as the direction of motion stays consistent with the gradient at the points that are visited.
```python
v = 0
for i in range(n_epochs):
    g = 0
    for k in range(dataset_size):
        g = g + evaluate_gradient(loss_function, theta, X[k])
    v = gamma*v - eta*g
    theta = theta + v
```
As can be seen, while $\theta^{(k)}=(\theta_1^{(k)},\ldots,\theta_d^{(k)})^T$ is the estimate of the optimal solution at step $k$, $v^{(k)}=(v_1^{(k)},\ldots,v_d^{(k)})^T$ is the update applied to that value to obtain $\theta^{(k+1)}$: we can therefore view $v$ as the velocity with which $\theta$ moves through the solution space.
As already shown above, we can write the update as follows, highlighting how it depends on the gradient computed at all previously visited positions, with an effect that decays exponentially with $\gamma$ as we go further back into the past.
Assuming $v^{(0)}=0$:
\begin{align*}\theta^{(k+1)}&=\theta^{(k)}+v^{(k+1)}= \theta^{(k)}+\gamma v^{(k)}-\eta\sum_{i=1}^nJ'(\theta^{(k)};x_i)=\theta^{(k)}+\gamma^2 v^{(k-1)}-\gamma\eta\sum_{i=1}^nJ'(\theta^{(k-1)};x_i) -\eta\sum_{i=1}^nJ'(\theta^{(k)};x_i)\\&=\theta^{(k)}+\gamma^2 v^{(k-1)}-\eta\left(\sum_{i=1}^nJ'(\theta^{(k)};x_i)+\gamma\sum_{i=1}^nJ'(\theta^{(k-1)};x_i)\right)=\cdots=\theta^{(k)}-\eta\left(\sum_{j=0}^k\gamma^j\sum_{i=1}^nJ'(\theta^{(k-j)};x_i)\right)\end{align*}
The updates in the case of logistic regression follow immediately:
\begin{align*}v_j^{(k+1)}&=\gamma v_j^{(k)}+\frac{\eta}{n}\sum_{i=1}^n( t_i-\sigma(x_i))x_{ij}\hspace{1cm}j=1,\ldots,d\\v_0^{(k+1)}&=\gamma v_0^{(k)}+\frac{\eta}{n}\sum_{i=1}^n(t_i-\sigma(x_i)) \\\theta_j^{(k+1)}&=\theta_j^{(k)}+v_j^{(k+1)}\hspace{1cm}j=0,\ldots,d\end{align*}
###Code
def momentum_gd(X,t, eta = 0.1, gamma = 0.97, epochs = 1000):
theta = np.zeros(nfeatures+1).reshape(-1,1)
v = np.zeros(nfeatures+1).reshape(-1,1)
theta_history = []
cost_history = []
for k in range(epochs):
v = gamma*v - eta * gradient(theta,X,t)
theta = theta + v
theta_history.append(theta)
cost_history.append(cost(theta, X, t))
theta_history = np.array(theta_history).reshape(-1,3)
cost_history = np.array(cost_history).reshape(-1,1)
m = -theta_history[:,1]/theta_history[:,2]
q = -theta_history[:,0]/theta_history[:,2]
return cost_history, m, q
cost_history, m, q = momentum_gd(X, t, eta = 0.1, gamma = 0.97, epochs = 10000)
low, high, step = 0, 5000, 10
plot_all(cost_history, m, q, low, high, step)
dist = np.array([f(i) for i in range(len(m))])
np.argmin(dist>1e-2)+1
plot_ds(data,m[-1],q[-1])
###Output
_____no_output_____
###Markdown
Nesterov accelerated gradient
In the momentum method, knowing $\theta^{(k)}$ and $v^{(k)}$ at step $k$ allows us, without computing the gradient, to obtain an approximate estimate $\tilde{\theta}^{(k+1)}=\theta^{(k)}+\gamma v^{(k)}$ of
$$\theta^{(k+1)}=\theta^{(k)}+v^{(k+1)}=\theta^{(k)}+\gamma v^{(k)}-\eta\sum_{i=1}^nJ'(\theta^{(k)};x_i)=\tilde{\theta}^{(k+1)}-\eta\sum_{i=1}^nJ'(\theta^{(k)};x_i)$$
The Nesterov method follows the same approach as the momentum method, with the difference that, at each step, the gradient is evaluated, with an approximate *look-ahead*, not at the currently visited point $\theta^{(k)}$ of the solution space but, roughly, at the next point $\theta^{(k+1)}$ (approximated by $\tilde{\theta}^{(k+1)}$). In this way, the changes of $v$ (and hence of $\theta$) are anticipated with respect to what happens in the momentum method.
\begin{align*}v^{(k+1)}&=\gamma v^{(k)} -\eta\sum_{i=1}^nJ'(\tilde{\theta}^{(k+1)};x_i)=\gamma v^{(k)} -\eta\sum_{i=1}^nJ'(\theta^{(k)}+\gamma v^{(k)};x_i)\\\theta^{(k+1)}&=\theta^{(k)}+v^{(k+1)}\end{align*}
```python
v = 0
for i in range(n_epochs):
    g = 0
    theta_approx = theta + gamma*v
    for k in range(dataset_size):
        g = g + evaluate_gradient(loss_function, theta_approx, X[k])
    v = gamma*v - eta*g
    theta = theta + v
```
###Code
def nesterov_gd(X,t, eta = 0.1, gamma = 0.97, epochs = 1000):
theta = np.zeros(nfeatures+1).reshape(-1,1)
v = np.zeros(nfeatures+1).reshape(-1,1)
theta_history = []
cost_history = []
for k in range(epochs):
v = gamma*v - eta * gradient(theta+gamma*v,X,t)
theta = theta + v
theta_history.append(theta)
cost_history.append(cost(theta, X, t))
theta_history = np.array(theta_history).reshape(-1,3)
cost_history = np.array(cost_history).reshape(-1,1)
m = -theta_history[:,1]/theta_history[:,2]
q = -theta_history[:,0]/theta_history[:,2]
return cost_history, m, q
cost_history, m, q = nesterov_gd(X, t, eta = 0.1, gamma = 0.97, epochs = 10000)
low, high, step = 0, 5000, 10
plot_all(cost_history, m, q, low, high, step)
dist = np.array([f(i) for i in range(len(m))])
np.argmin(dist>1e-2)+1
plot_ds(data,m[-1],q[-1])
###Output
_____no_output_____ |
A_star_route_planning/A_star_route_planning.ipynb | ###Markdown
A\* Route Planner Using Advanced Data Structures. We will implement A\* search to build a "Google-maps" style route planning algorithm. PathPlanner class: `__init__` - We initialize our path planner with a map, M, and typically a start and goal node. If either of these are `None`, the rest of the variables here are also set to None. - `closedSet` includes any explored/visited nodes. - `openSet` are any nodes on our frontier for potential future exploration. - `cameFrom` will hold the previous node that best reaches a given node. - `gScore` is the `g` in our `f = g + h` equation, or the actual cost to reach our current node. - `fScore` is the combination of `g` and `h`, i.e. the `gScore` plus a heuristic; the total cost to reach the goal. - `path` comes from the `run_search` function. `reconstruct_path` - This function just rebuilds the path after search is run, going from the goal node backwards using each node's `cameFrom` information. `_reset` - Resets *most* of our initialized variables for PathPlanner. This *does not* reset the map, start or goal variables. `run_search` - The method checks whether the map, goal and start have been added to the class. Then, it will also check if the other variables, other than `path`, are initialized (note that these only need to be re-run if the goal or start were not originally given when initializing the class). `is_open_empty` is used to check whether there are still nodes to explore. If we're at our goal, we reconstruct the path. If not, we move our current node from the frontier (`openSet`) into explored (`closedSet`). Then, we check out the neighbors of the current node, check out their costs, and plan our next move. The Map
###Code
from helpers import Map, load_map_10, load_map_40, show_map
import math
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Map Basics
###Code
map_10 = load_map_10()
show_map(map_10)
###Output
_____no_output_____
###Markdown
The map above shows a disconnected network of 10 intersections. The two intersections on the left are connected to each other but they are not connected to the rest of the road network. This map is quite literal in its expression of distance and connectivity. On the graph above, the edge between 2 nodes (intersections) represents a literal straight road, not just an abstract connection of 2 cities. These `Map` objects have two properties we will use to implement A\* search: `intersections` and `roads`. **Intersections** The `intersections` are represented as a dictionary. In this example, there are 10 intersections, each identified by an x,y coordinate. The coordinates are listed below. You can hover over each dot in the map above to see the intersection number.
###Code
map_10.intersections
###Output
_____no_output_____
###Markdown
**Roads** The `roads` property is a list where `roads[i]` contains a list of the intersections that intersection `i` connects to.
###Code
# this shows that intersection 0 connects to intersections 7, 6, and 5
map_10.roads[0]
start=5
coords=map_10.intersections[start]
x=coords[0]
y=coords[1]
print(x)
print(y)
# This shows the full connectivity of the map
map_10.roads
len(map_10.intersections)
# map_40 is a bigger map than map_10
map_40 = load_map_40()
show_map(map_40)
###Output
_____no_output_____
###Markdown
Advanced Visualizations. The map above shows a network of roads which spans 40 different intersections (labeled 0 through 39). The `show_map` function which generated this map also takes a few optional parameters which might be useful for visualizing the output of the search algorithm you will write.* `start` - The "start" node for the search algorithm.* `goal` - The "goal" node.* `path` - An array of integers which corresponds to a valid sequence of intersection visits on the map.
###Code
# run this code, note the effect of including the optional
# parameters in the function call.
show_map(map_40, start=5, goal=34, path=[5,16,37,12,34])
###Output
_____no_output_____
###Markdown
Pathplanner Class
###Code
import math
import heapq
class PathPlanner():
"""Construct a PathPlanner Object"""
def __init__(self, M, start=None, goal=None):
""" """
self.map = M
self.start= start
self.goal = goal
self.closedSet = self.create_closedSet() if goal != None and start != None else None
self.openSet = self.create_openSet() if goal != None and start != None else None
self.cameFrom = self.create_cameFrom() if goal != None and start != None else None
self.gScore = self.create_gScore() if goal != None and start != None else None
self.fScore = self.create_fScore() if goal != None and start != None else None
self.path = self.run_search() if self.map and self.start != None and self.goal != None else None
def reconstruct_path(self, current):
""" Reconstructs path after search """
total_path = [current]
while current in self.cameFrom.keys():
current = self.cameFrom[current]
total_path.append(current)
return total_path
def _reset(self):
"""Private method used to reset the closedSet, openSet, cameFrom, gScore, fScore, and path attributes"""
self.closedSet = None
self.openSet = None
self.cameFrom = None
self.gScore = None
self.fScore = None
self.path = self.run_search() if self.map and self.start and self.goal else None
def run_search(self):
""" """
        if self.map == None:
            raise ValueError("Must create map before running search. Try running PathPlanner.set_map(start_node)")
        if self.goal == None:
            raise ValueError("Must create goal node before running search. Try running PathPlanner.set_goal(start_node)")
        if self.start == None:
            raise ValueError("Must create start node before running search. Try running PathPlanner.set_start(start_node)")
self.closedSet = self.closedSet if self.closedSet != None else self.create_closedSet()
self.openSet = self.openSet if self.openSet != None else self.create_openSet()
self.cameFrom = self.cameFrom if self.cameFrom != None else self.create_cameFrom()
self.gScore = self.gScore if self.gScore != None else self.create_gScore()
self.fScore = self.fScore if self.fScore != None else self.create_fScore()
while not self.is_open_empty():
current = self.get_current_node()
if current == self.goal:
self.path = [x for x in reversed(self.reconstruct_path(current))]
return self.path
else:
self.openSet.remove(current)
self.closedSet.add(current)
for neighbor in self.get_neighbors(current):
if neighbor in self.closedSet:
continue # Ignore the neighbor which is already evaluated.
if not neighbor in self.openSet: # Discover a new node
self.openSet.add(neighbor)
heapq.heappush(self.openHeap, (self.get_tentative_gScore(current, neighbor) ,neighbor))
# The distance from start to a neighbor
#the "dist_between" function may vary as per the solution requirements.
if self.get_tentative_gScore(current, neighbor) >= self.get_gScore(neighbor):
continue # This is not a better path.
# This path is the best until now. Record it!
self.record_best_path_to(current, neighbor)
print("No Path Found")
self.path = None
return False
def create_closedSet(self):
""" Creates and returns a data structure suitable to hold the set of nodes already evaluated"""
return set()
def create_openSet(self):
""" Creates and returns a data structure suitable to hold the set of currently discovered nodes
that are not evaluated yet. Initially, only the start node is known."""
if self.start != None:
self.openHeap=[]
heapq.heappush(self.openHeap, (0, self.start))
return set([self.start])
raise(ValueError, "Must create start node before creating an open set. Try running PathPlanner.set_start(start_node)")
def create_cameFrom(self):
"""Creates and returns a data structure that shows which node can most efficiently be reached from another,
for each node."""
cameFrom = {}
return cameFrom
def create_gScore(self):
"""Creates and returns a data structure that holds the cost of getting from the start node to that node,
for each node. The cost of going from start to start is zero."""
g_scores=[ float("infinity") for _ in range(len(self.map.intersections))]
g_scores[self.start]=0.0
return g_scores
def create_fScore(self):
"""Creates and returns a data structure that holds the total cost of getting from the start node to the goal
by passing by that node, for each node. That value is partly known, partly heuristic.
For the first node, that value is completely heuristic."""
f_scores=[ float("infinity") for _ in range(len(self.map.intersections))]
f_scores[self.start]= 1000
return f_scores
def set_map(self, M):
"""Method used to set map attribute """
self._reset(self)
self.start = None
self.goal = None
self.map=M
def set_start(self, start):
"""Method used to set start attribute """
self._reset(self)
self.start=start
def set_goal(self, goal):
"""Method used to set goal attribute """
self._reset(self)
self.goal=goal
# TODO: Set goal value.
def is_open_empty(self):
"""returns True if the open set is empty. False otherwise. """
# TODO: Return True if the open set is empty. False otherwise.
if len(self.openSet):
return False
else:
return True
def get_current_node(self):
""" Returns the node in the open set with the lowest value of f(node)."""
#cost, node = heapq.heappop(self.openHeap)
return heapq.heappop(self.openHeap)[1]
# inefficient:
# node_scores=[]
# openList = list(self.openSet) #to enable indexing
# for node in openList:
# node_scores.append(self.calculate_fscore(node))
# min_index=node_scores.index(min(node_scores))
# print("self.openHeap",self.openHeap)
# print(node_scores)
# return openList[min_index]
def get_neighbors(self, node):
"""Returns the neighbors of a node"""
return self.map.roads[node]
def get_gScore(self, node):
"""Returns the g Score of a node"""
return self.gScore[node]
def distance(self, node_1, node_2):
""" Computes the Euclidean L2 Distance"""
node1_coords=self.map.intersections[node_1]
node2_coords=self.map.intersections[node_2]
return math.sqrt((node1_coords[0]-node2_coords[0])**2+ (node1_coords[1]-node2_coords[1])**2)
def get_tentative_gScore(self, current, neighbor):
"""Returns the tentative g Score of a node + distance from the current node to it's neighbors"""
return self.gScore[current]+self.distance(current, neighbor)
def heuristic_cost_estimate(self, node):
""" Returns the heuristic cost estimate of a node """
if self.goal != None:
return self.distance(node, self.goal)
raise(ValueError, "Must create goal node before.")
def calculate_fscore(self, node):
"""Calculate the f score of a node.F = G + H """
return self.get_gScore(node)+self.heuristic_cost_estimate(node)
def record_best_path_to(self, current, neighbor):
"""Record the best path to a node by updating cameFrom, gScore, and fScore """
self.cameFrom[neighbor]=current
self.gScore[neighbor]=self.get_tentative_gScore(current, neighbor)
self.fScore[neighbor]=self.get_gScore(neighbor)+ self.heuristic_cost_estimate(neighbor)
#Reference:https://en.wikipedia.org/wiki/A*_search_algorithm
###Output
_____no_output_____
###Markdown
Visualize: Let's visualize the results of the algorithm!
###Code
# Visualize the result of the above test! You can also change start and goal here to check other paths
start = 5
goal = 34
show_map(map_40, start=start, goal=goal, path=PathPlanner(map_40, start, goal).path)
from test import test
test(PathPlanner)
# Visualize the result of the above test! You can also change start and goal here to check other paths
start = 5
goal = 35
show_map(map_40, start=start, goal=goal, path=PathPlanner(map_40, start, goal).path)
###Output
_____no_output_____ |
notebooks/regressor/Tribuo Regressor Example.ipynb | ###Markdown
This demonstrates Tribuo regression for comparison with scikit-learn regression, although these resulting regression models were not used in the final comparisons
###Code
%jars ../../jars/tribuo-json-4.1.0-jar-with-dependencies.jar
%jars ../../jars/tribuo-regression-liblinear-4.1.0-jar-with-dependencies.jar
%jars ../../jars/tribuo-regression-sgd-4.1.0-jar-with-dependencies.jar
%jars ../../jars/tribuo-regression-xgboost-4.1.0-jar-with-dependencies.jar
%jars ../../jars/tribuo-regression-tree-4.1.0-jar-with-dependencies.jar
import java.nio.file.Paths;
import java.nio.file.Files;
import java.util.logging.Level;
import java.util.logging.Logger;
import org.tribuo.*;
import org.tribuo.data.csv.CSVLoader;
import org.tribuo.datasource.ListDataSource;
import org.tribuo.evaluation.TrainTestSplitter;
import org.tribuo.math.optimisers.*;
import org.tribuo.regression.*;
import org.tribuo.regression.evaluation.*;
import org.tribuo.regression.liblinear.LibLinearRegressionTrainer;
import org.tribuo.regression.sgd.RegressionObjective;
import org.tribuo.regression.sgd.linear.LinearSGDTrainer;
import org.tribuo.regression.sgd.objectives.SquaredLoss;
import org.tribuo.regression.rtree.CARTRegressionTrainer;
import org.tribuo.util.Util;
var regressionFactory = new RegressionFactory();
var csvLoader = new CSVLoader<>(regressionFactory);
// This dataset is generated in the notebook: scikit-learn Regressor Example - Data Cleanup
var oceanSource = csvLoader.loadDataSource(Paths.get("../../data/cleanedBottle.csv"), "temp");
var splitter = new TrainTestSplitter<>(oceanSource, 0.8f, 0L);
Dataset<Regressor> trainData = new MutableDataset<>(splitter.getTrain());
Dataset<Regressor> evalData = new MutableDataset<>(splitter.getTest());
//testData
System.out.println(String.format("Training data size = %d, number of features = %d",trainData.size(),trainData.getFeatureMap().size()));
System.out.println(String.format("Testing data size = %d, number of features = %d",evalData.size(),evalData.getFeatureMap().size()));
public Model<Regressor> train(String name, Trainer<Regressor> trainer, Dataset<Regressor> trainData) {
// Train the model
var startTime = System.currentTimeMillis();
Model<Regressor> model = trainer.train(trainData);
var endTime = System.currentTimeMillis();
System.out.println("Training " + name + " took " + Util.formatDuration(startTime,endTime));
// Evaluate the model on the training data
// This is a useful debugging tool to check the model actually learned something
RegressionEvaluator eval = new RegressionEvaluator();
var evaluation = eval.evaluate(model,trainData);
// We create a dimension here to aid pulling out the appropriate statistics.
// You can also produce the String directly by calling "evaluation.toString()"
var dimension = new Regressor("DIM-0",Double.NaN);
System.out.printf("Evaluation (train):%n RMSE %f%n MAE %f%n R^2 %f%n",
evaluation.rmse(dimension), evaluation.mae(dimension), evaluation.r2(dimension));
return model;
}
public void evaluate(Model<Regressor> model, Dataset<Regressor> testData) {
// Evaluate the model on the test data
RegressionEvaluator eval = new RegressionEvaluator();
var evaluation = eval.evaluate(model,testData);
// We create a dimension here to aid pulling out the appropriate statistics.
// You can also produce the String directly by calling "evaluation.toString()"
var dimension = new Regressor("DIM-0",Double.NaN);
System.out.printf("Evaluation (test):%n RMSE %f%n MAE %f%n R^2 %f%n",
evaluation.rmse(dimension), evaluation.mae(dimension), evaluation.r2(dimension));
}
var lrsgd = new LinearSGDTrainer(
new SquaredLoss(), // loss function
SGD.getLinearDecaySGD(0.01), // gradient descent algorithm
5, // number of training epochs
trainData.size()/4,// logging interval
1, // minibatch size
1L // RNG seed
);
var lrada = new LinearSGDTrainer(
new SquaredLoss(),
new AdaGrad(0.01),
5,
trainData.size()/4,
1,
1L
);
var lr = new LibLinearRegressionTrainer();
var cart = new CARTRegressionTrainer(6);
System.out.println(lr.toString());
System.out.println(lrsgd.toString());
System.out.println(lrada.toString());
System.out.println(cart.toString());
var lrModel = train("Linear Regression",lr,trainData);
evaluate(lrModel,evalData);
var lrsgdModel = train("Linear Regression (SGD)",lrsgd,trainData);
evaluate(lrsgdModel,evalData);
var lradaModel = train("Linear Regression (AdaGrad)",lrada,trainData);
evaluate(lradaModel,evalData);
// var cartModel = train("CART",cart,trainData);
// evaluate(cartModel,evalData);
###Output
_____no_output_____ |
notebooks/machinelearning.ipynb | ###Markdown
Recording the data. Training and classifying
###Code
import pyaudio
import wave
import time
for x in range(1,21):
    if x == 1:
time.sleep(3)
print("--------------------------------------------------")
print("Bitte drehen Sie jetzt nach rechts!")
print("--------------------------------------------------")
time.sleep(5)
    if x == 11:
print("--------------------------------------------------")
print("Bitte aufhรถren zu drehen!")
print("--------------------------------------------------")
time.sleep(3)
print("--------------------------------------------------")
print("Bitte drehen Sie jetzt nach links!")
print("--------------------------------------------------")
time.sleep(5)
if x in [1,10]:
o="rechts"
if x in [11,20]:
o="links"
#chunk = 1024 # Record in chunks of 1024 samples
chunk = 100
sample_format = pyaudio.paInt16 # 16 bits per sample
channels = 1
fs = 44100 # Record at 44100 samples per second
seconds = 10
filename = ("data_samples/training/none/lttl"+str(x)+".wav")
p = pyaudio.PyAudio() # Create an interface to PortAudio
print('Recording '+str(x))
stream = p.open(format=sample_format,
channels=channels,
rate=fs,
frames_per_buffer=chunk,
input=True)
frames = [] # Initialize array to store frames
# Store data in chunks for 3 seconds
for i in range(0, int(fs / chunk * seconds)):
data = stream.read(chunk)
frames.append(data)
# Stop and close the stream
stream.stop_stream()
stream.close()
# Terminate the PortAudio interface
p.terminate()
print('Finished recording ' + str(x))
# Save the recorded data as a WAV file
wf = wave.open(filename, 'wb')
wf.setnchannels(channels)
wf.setsampwidth(p.get_sample_size(sample_format))
wf.setframerate(fs)
wf.writeframes(b''.join(frames))
wf.close()
time.sleep(1)
    if x == 20:
print("--------------------------------------------------")
print("Bitte aufhรถren zu drehen!")
print("--------------------------------------------------")
continue
###Output
--------------------------------------------------
Bitte drehen Sie jetzt nach rechts!
--------------------------------------------------
Recording 1
Finished recording 1
Recording 2
Finished recording 2
Recording 3
Finished recording 3
Recording 4
Finished recording 4
Recording 5
Finished recording 5
Recording 6
Finished recording 6
Recording 7
Finished recording 7
Recording 8
Finished recording 8
Recording 9
Finished recording 9
Recording 10
Finished recording 10
--------------------------------------------------
Bitte aufhรถren zu drehen!
--------------------------------------------------
--------------------------------------------------
Bitte drehen Sie jetzt nach links!
--------------------------------------------------
Recording 11
Finished recording 11
Recording 12
Finished recording 12
Recording 13
Finished recording 13
Recording 14
Finished recording 14
Recording 15
Finished recording 15
Recording 16
Finished recording 16
Recording 17
Finished recording 17
Recording 18
Finished recording 18
Recording 19
Finished recording 19
Recording 20
Finished recording 20
--------------------------------------------------
Bitte aufhรถren zu drehen!
--------------------------------------------------
###Markdown
Training: Support Vector Machine
###Code
from pyAudioProcessing.run_classification import train_and_classify
# Training
#train_and_classify("data_samples/training", "train", ["gfcc,mfcc"], "svm", "svm_clf")
#train_and_classify("data_samples/training", "train", ["gfcc"], "svm", "svm_clf")
train_and_classify("data_samples/training", "train", ["gfcc,mfcc"], "svm", "svm_clf")
#train_and_classify("data_samples/training", "train", ["gfcc,mfcc"], "randomforest", "svm_clf")
#train_and_classify("data_samples/training", "train", ["gfcc"], "svm", "svm_clf")
#train_and_classify("data_samples/training", "train", ["gfcc,mfcc"], "svm", "svm_clf")
###Output
_____no_output_____
###Markdown
Test
###Code
import pyaudio
import wave
import time
#chunk = 1024 # Record in chunks of 1024 samples
chunk = 1024
sample_format = pyaudio.paInt16 # 16 bits per sample
channels = 1
fs = 44100 # Record at 44100 samples per second
seconds = 10
filename = ("data_samples/testing/test/test.wav")
for i in range(0,1):
p = pyaudio.PyAudio() # Create an interface to PortAudio
print("Recording Test")
stream = p.open(format=sample_format,
channels=channels,
rate=fs,
frames_per_buffer=chunk,
input=True)
frames = [] # Initialize array to store frames
# Store data in chunks for 3 seconds
for i in range(0, int(fs / chunk * seconds)):
data = stream.read(chunk)
frames.append(data)
# Stop and close the stream
stream.stop_stream()
stream.close()
# Terminate the PortAudio interface
p.terminate()
print("Finished recording Test")
# Save the recorded data as a WAV file
wf = wave.open(filename, 'wb')
wf.setnchannels(channels)
wf.setsampwidth(p.get_sample_size(sample_format))
wf.setframerate(fs)
wf.writeframes(b''.join(frames))
wf.close()
time.sleep(1)
# Classify data
from pyAudioProcessing.run_classification import train_and_classify
train_and_classify("data_samples/testing", "classify", ["gfcc,mfcc"], "svm", "svm_clf")
time.sleep(3)
import json
from pprint import pprint
with open('classifier_results.json') as f:
data = json.load(f)
Klasse1 =(data["data_samples/testing\\test"]["test.wav"]["classes"][0])
Wert1= (data["data_samples/testing\\test"]["test.wav"]["probabilities"][0])
Klasse2= (data["data_samples/testing\\test"]["test.wav"]["classes"][1])
Wert2= (data["data_samples/testing\\test"]["test.wav"]["probabilities"][1])
print("###############################Ergebnis#####################################################")
#print(str(Klasse1) + ":" + str(Wert1)) #links
#print(str(Klasse2) + ":" + str(Wert2)) #rechts
if Wert1>Wert2:
print("Die Drehrichtung ist "+str(Klasse1)+", mit dem Wahrscheinlickeit von "+str(Wert1))
if Wert1<Wert2:
print("Die Drehrichtung ist "+str(Klasse2)+", mit dem Wahrscheinlickeit von "+str(Wert2))
if Wert1==Wert2:
print("Fehler")
print("############################################################################################")
from pyAudioProcessing.run_classification import train_and_classify
import time
train_and_classify("data_samples/testing", "classify", ["gfcc,mfcc"], "svm", "svm_clf")
time.sleep(3)
import json
from pprint import pprint
with open('classifier_results.json') as f:
data = json.load(f)
Klasse1 =(data["data_samples/testing\\test"]["test.wav"]["classes"][0])
Wert1= (data["data_samples/testing\\test"]["test.wav"]["probabilities"][0])
Klasse2= (data["data_samples/testing\\test"]["test.wav"]["classes"][1])
Wert2= (data["data_samples/testing\\test"]["test.wav"]["probabilities"][1])
Klasse3=(data["data_samples/testing\\test"]["test.wav"]["classes"][2])
Wert3= (data["data_samples/testing\\test"]["test.wav"]["probabilities"][2])
print("###############################Ergebnis#####################################################")
#print(str(Klasse1) + ":" + str(Wert1)) #links
#print(str(Klasse2) + ":" + str(Wert2)) #rechts
if Wert1>Wert2 and Wert1>Wert3:
print("Die Drehrichtung ist "+str(Klasse1)+", mit dem Wahrscheinlickeit von "+str(Wert1))
if Wert1<Wert2 and Wert2>Wert3:
print("Die Drehrichtung ist "+str(Klasse2)+", mit dem Wahrscheinlickeit von "+str(Wert2))
if Wert3>Wert1 and Wert3>Wert2:
print("Die Drehrichtung ist "+str(Klasse3)+", mit dem Wahrscheinlickeit von "+str(Wert3))
if Wert1==Wert2:
print("Fehler")
print("############################################################################################")
###Output
_____no_output_____ |
6210 assignment1/.ipynb_checkpoints/species-checkpoint.ipynb | ###Markdown
Exclude differences
###Code
data['Category'].value_counts()
data['Record_Status'].value_counts()
data['Occurrence'].value_counts()
data['Nativeness'].value_counts()
data['Abundance'].value_counts()
data['Seasonality'].value_counts()
data.isnull().sum()
###Output
_____no_output_____
###Markdown
Identify missing data
###Code
data['Occurrence']=data['Occurrence'].fillna('Not confirmed')
data.isnull().sum()
data['Conservation_Status'].value_counts()
data['Conservation_Status']=data['Conservation_Status'].fillna('Under Review')
data.isnull().sum()
data['Nativeness']=data['Nativeness'].fillna('Under Review')
data.isnull().sum()
data['Seasonality']=data['Seasonality'].fillna('Unknown')
data.isnull().sum()
data['Family']=data['Family'].fillna('Unknown')
data.isnull().sum()
# drop rows with missing values
data.dropna(inplace=True)
data.isnull().sum()
data = pd.DataFrame(data)
data.to_csv('species1.csv',index = False, header =("Species_ID","Park_Name","Category","Order","Family","Scientific_Name","Common_Names","Record_Status","Occurrence","Nativeness","Abundance","Seasonality","Conservation_Status"))
print(data)
data.shape
data.info()
data.duplicated()
###Output
_____no_output_____ |
challenges/ibm-quantum/iqc-fall-2020/week-0/ex_0_ja.ipynb | ###Markdown
Week0: IBM Quantum Challenge - Learning the basics
This material is aimed at beginners: the goal is to first learn the basics of computation using quantum circuits. If you already understand this content, feel free to skip ahead. (This document was prepared based on [The atoms of computation](https://qiskit.org/textbook/ch-states/atoms-computation.html) from the Qiskit Textbook.)
What is quantum computation? A comparison with classical computation. Programming a quantum computer is now something anyone can do comfortably from home. But what should we build? What is a quantum program in the first place? What is a quantum computer? These questions can be answered by comparing with today's standard digital computers. In this article we first look at the basic principles of digital computers, and then, so that we can move smoothly on to quantum computing, we will carry out the corresponding computation on a quantum computer using the same kind of tools.
All information is bits. A bit is the smallest unit of information, representing information as either 0 or 1. For example, the decimal numbers we use every day are written as combinations of the ten digits 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, and the value in each position tells us how many of each power of 10 the number contains. As an example, look at the number **9213**. It can be decomposed as 9000 + 200 + 10 + 3, and counting how many of each power of 10 each digit contributes gives **9213** = **9 x 10^3 + 2 x 10^2 + 1 x 10^1 + 3 x 10^0**. Next, let's decompose the same number in base two. Counting how many of each power of 2 it contains, **9213 = 1 x 2^13 + 0 x 2^12 + 0 x 2^11 + 0 x 2^10 + 1 x 2^9 + 1 x 2^8 + 1 x 2^7 + 1 x 2^6 + 1 x 2^5 + 1 x 2^4 + 1 x 2^3 + 1 x 2^2 + 0 x 2^1 + 1 x 2^0**, so the number 9213 written in binary is **10001111111101**. Not only arbitrary numbers but also all sorts of characters and symbols can be written in binary 0s and 1s, as in this [table](https://www.ibm.com/support/knowledgecenter/en/ssw_aix_72/network/conversion_table.html). This correspondence table is widely adopted as an industry standard, and it was used when this article was delivered to you over the internet. Keep in mind that all the information handled by the computers we know (e.g. numbers, text, images, audio, etc.) is really just a collection of bits.
Trying an 8-bit computer on IBM Quantum Experience. IBM Quantum Experience is a tool for building and running quantum circuits. Because its GUI resembles a musical score, it is also nicknamed the Composer. Let's use this Circuit Composer to build an 8-bit computer.
1. (If you have not done so yet) [create an account](https://quantum-computing.ibm.com/) on the IBM Quantum Experience page.
2. Press the Start a new circuit button in the center to launch the Circuit Composer. (Alternatively, you can access the Composer from [here](https://quantum-computing.ibm.com/composer/new-experiment).)
3. You can build a circuit by dragging & dropping quantum gates directly onto the score.
4. You can also build the circuit by editing the Code editor on the right directly.
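As a quick aside (plain Python, independent of the Composer exercise), you can check the binary decomposition above yourself with the built-in `format` and `int` functions:
```python
# Binary representation of 9213 (14 bits)
print(format(9213, 'b'))          # -> 10001111111101

# And back again: read the bit string as a base-2 integer
print(int('10001111111101', 2))   # -> 9213
```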
###Code
from IPython.display import Image, display
Image('composer01.jpg')
###Output
_____no_output_____
###Markdown
Now let's prepare 8 quantum registers and 8 classical registers (i.e. qreg[8], creg[8]) and place a measurement operator (the gray, meter-like gate) on each qubit's wire, in order. (If you were not able to reproduce this, refer to [this circuit](https://quantum-computing.ibm.com/composer/new-experiment?initial=N4IgdghgtgpiBcICqYAuBLVAbGATABAMboBOhArpiADQgCOEAzlAiAPIAKAogHICKAQQDKAWXwAmAHQAGANwAdMOjCEs5XDHzz6MLOgBGARknLC2hWEV0SMAOb46AbQAcAXQuEb9wi-eLFsEzkNg6O0q74ALQAfERhfmCBjMGaToYRMXHpFkkpoeIZsT4FOTBBIU4AzIVx1aXlqY4ALDU%2BLfXJFY4ArK09CbldAGx9Ix15TgDsfdOyNCAajJ7oAA4YAPZgrCAAvkA).)
###Code
Image('composer02.jpg')
###Output
_____no_output_____
###Markdown
What is shown in this figure is what is called a "circuit". A circuit manipulates the state of the qubits, evolving in time from left to right along the wires, and we can use quantum circuits to build the same kind of logic circuits as the classical computers we are familiar with. Nothing much is happening in this example: 8 qubits are prepared and numbered 0 through 7, and a "measurement" operation is applied to each qubit; this "measurement" reads out a value of '0' or '1'. Qubits are always initialized in the 0 state, so when we do nothing but measure, as in the circuit above, every measurement result is of course '0'. Select 'Measurement Probabilities' from the dropdown in the Composer panel; a histogram like the one above is then displayed, and you can confirm that every qubit returns '0' (i.e. '00000000').
###Code
Image('histogram01.jpg',width="300", height="200")
###Output
_____no_output_____
###Markdown
To encode something other than '0', we use the NOT gate. The most basic operation in computation, the NOT gate flips '0' to '1' and '1' to '0'. To apply it in the Circuit Composer, select the dark blue icon displayed as $\oplus$ from the quantum gate palette and drag & drop it onto the score. (Please try it yourself first; if you get stuck with the operation, refer to [this circuit](https://quantum-computing.ibm.com/composer/new-experiment?initial=N4IgdghgtgpiBcICqYAuBLVAbGATABAMboBOhArpiADQgCOEAzlAiAPIAKAogHICKAQQDKAWXwAmAHQAGANwAdMOjCEs5XDHzz6MLOgBGARknLC2hWEV0SMAOb46AbQAcAXQuEb9wi-eLFAB4OjgDsfmCwTOQ2wdKu%2BAC0AHxEjnEWkYzRmk6G8cmpeRkwUTFO4vkpPhXFpTmOAMyVqU21WWWOACzNPt1t2cEArD2Ow-0dAGwjU%2BP1YYlVoe40IBqMnugADhgA9mCsIAC%2BQA).)
###Code
Image('composer03.jpg')
###Output
_____no_output_____
###Markdown
In the histogram from before, you can now confirm that the output is '10000000'. This circuit is the same thing as an 8-bit computer.
###Code
Image('histogram02.jpg',width="300", height="200")
###Output
_____no_output_____
###Markdown
Note once more that the bit we flipped here belongs to qubit number 7, and that in the output bit string it is the leftmost bit that has been flipped to '1' (i.e., '10000000'). In other words, in the Circuit Composer, qubits with larger numbers correspond to higher-order bit positions. Thanks to this, qubit 7 tells us how many 2^7 the number contains, qubit 6 how many 2^6, qubit 5 how many 2^5, and so on, so we can represent numbers with qubits. Here, by flipping qubit 7 to '1', we were able to represent 2^7 = 128 on this 8-bit computer. (Other sites and textbooks on quantum computation often present the opposite convention, but the notation above has advantages when representing integer values in binary with qubits, so while you are using the Circuit Composer remember that higher-numbered qubits correspond to higher-order bits.) Encoding an arbitrary number in a circuit. Now let's encode an arbitrary integer here. Look up in advance how that number is written in binary. (If the result contains '0b', drop those 2 characters and pad with '0' on the left so that the whole string is 8 digits long.) Below is the circuit for encoding '34' as the input value; qubit 5 and qubit 1 are flipped (i.e. 34 = 1 x 2^5 + 1 x 2^1). Please do try building the circuit yourself; if you have trouble with the operation, refer to [this circuit](https://quantum-computing.ibm.com/composer/new-experiment?initial=N4IgdghgtgpiBcICqYAuBLVAbGATABAMboBOhArpiADQgCOEAzlAiAPIAKAogHICKAQQDKAWXwAmAHQAGANwAdMOjCEs5XDHzz6MLOgBGARknLC2hWEV0SMAOb46AbQAcAXQuEb9wi-eLFAB4OjoZ%2BYEFOAKxhsEzkNsHSrvgAtAB8RI5JFrGM8ZpOoakZPqE5MHEJTuLJ6Zk15ZUFjgDMtSWtMRV5VY4ALO2ZA409zdHFmdEj%2BcEAbIM%2B89O9AOwLjmuyNCAajJ7oAA4YAPZgrCAAvkA).
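If you prefer code to drag & drop, the same '34' circuit can also be written as a small sketch with Qiskit (this assumes Qiskit is installed; it is not otherwise used in this notebook, which works entirely through the Composer):
```python
from qiskit import QuantumCircuit

qc = QuantumCircuit(8, 8)            # 8 qubits, 8 classical bits
qc.x(1)                              # flip qubit 1 (contributes 2^1)
qc.x(5)                              # flip qubit 5 (contributes 2^5)
qc.measure(range(8), range(8))       # measure every qubit into its classical bit
print(qc.draw())                     # text drawing of the circuit encoding 34
```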
###Code
Image('composer04.jpg',width="600", height="400")
###Output
_____no_output_____ |
files/notebooks/2019-10-29-openmm1.ipynb | ###Markdown
Molecular modelling software: OpenMM. [OpenMM](http://openmm.org/) is "A high performance toolkit for molecular simulation. Use it as a library, or as an application. We include extensive language bindings for Python, C, C++, and even Fortran. The code is open source and actively maintained on Github, licensed under MIT and LGPL. Part of the Omnia suite of tools for predictive biomolecular simulation." Here's their [GitHub repo](https://github.com/openmm/openmm), and [Conda link](https://anaconda.org/omnia/openmm), though I think they might be relocating their channel to conda-forge. Here's my opinion: OpenMM is a very powerful, flexible engine that has integration with a variety of other MD engines, supports a variety of molecular models, excellent GPU support, active open-source development, and is the underlying molecular dynamics engine for OpenForceField efforts, but very easy to port to other MD engines via ParmEd. There's also support for enhanced sampling and integration with deep learning libraries. If there was a 21st century, best-software-practices, open-source software for molecular modelling and simulation, OpenMM (or HOOMD) would likely be it. In reality, I'm not sure how many graduate students/academic labs opt to use OpenMM if the lab has historically used another MD engine. Also, this is a somewhat unfounded observation, but I'm curious if/how much the computer-aided drug design industry has adopted the use of OpenMM. More editorializing, but my graduate work never brought me into tight overlap with the OpenMM world/community, but it certainly seems like a vibrant community that is pushing the development and popularity of molecular modelling and simulation. The OpenMM Public API: I'm mainly summarizing and regurgitating the [OpenMM documentation](http://docs.openmm.org/latest/userguide/library.html#the-openmm-public-api). These are some important terms to know within the OpenMM API:* **System** - this object stores information about numbers of particles, particle masses, box information, constraints, and virtual site information. Note the lack of positions, bonding information, integrators, simulation run parameters. The **System** object also contains your **Forces**.* **Force** - **Force** objects describe how your particles interact with each other. This is where your force field gets implemented - outlining the molecular model forces in play, the treatment of long range interactions, and even your barostat. This is, broadly, what a **Force** object is, but there is much more in the details of specific **Force** objects, like an `openmm.HarmonicBondForce`. * Upon implementation, it's interesting to note that the "Container" is the **Force** object, and it contains the parameters and particles that obey this force. Sort of turning this concept upside-down, Parmed's atoms and bonds are the objects that contain the interaction parameters of that force.* **Integrator** - This is the integration algorithm by which you progress your particle's positions and simulation over time.* **Context** - this object stores information about your particle coordinates, velocities, and specially-defined/parametrized **Forces**. When you run an actual simulation or produce a trajectory, you will have to start from a **Context**.
**Contexts** contain information about integrators, which helps distinguish information about your molecular model of your **System** (forces, masses) from the things that will be used to run your simulation.* **State** - this is like a single frame/snapshot/checkpoint within your simulation. It's everything that was being calculated at that particular timestep. If you want to peer into your simulation, you will be looking at its **State**. If you want to report some information, you will be parsing information from the **State**. There are numerous tutorials on running OpenMM simulations, but I want to focus on building the OpenMM objects and everything before you need to think about **Integrators** or **States**, as this is key for building interoperability between molecular modelling software.
###Code
import simtk.unit as unit
import simtk.openmm as openmm
###Output
_____no_output_____
###Markdown
In this bare-bones model, we will just create an `OpenMM.System` object, and the only forces interacting in the system will be the `OpenMM.NonbondedForce`. After we add the `force` to the `system`, we are returned the index of the `force` - useful if you want to find it within our `system` via `system.getForces()`, which is a list of `force` objects. [Credit to the OpenMM documentation](http://docs.openmm.org/latest/userguide/library.html#running-a-simulation-using-the-openmm-public-api)
###Code
system = openmm.System() # Create the openmm System
nonbonded_force = openmm.NonbondedForce() # Create the Force object, specifically, a NonbondedForce object
print(system.addForce(nonbonded_force)) # Returns the index of the force we just added
print(system.getForces())
###Output
0
[<simtk.openmm.openmm.NonbondedForce; proxy of <Swig Object of type 'OpenMM::NonbondedForce *' at 0x107166510> >]
###Markdown
As a brief foray into python-C++ interfaces, these two objects have slightly different (python) addresses, but we will see that they refer to the same C++ object
###Code
print(system.getForce(0))
print(nonbonded_force)
###Output
<simtk.openmm.openmm.NonbondedForce; proxy of <Swig Object of type 'OpenMM::NonbondedForce *' at 0x1071664e0> >
<simtk.openmm.openmm.NonbondedForce; proxy of <Swig Object of type 'OpenMM::NonbondedForce *' at 0x103e46fc0> >
###Markdown
Next, we will start creating our particles and nonbonded interaction parameters. This code is contrived for the sake of example, but you can imagine there are more sophisticated and relevant ways to add positions, masses, or nonbonded parameters
###Code
import itertools as it
import numpy as np
positions = [] # Create a running list of positions
for x,y,z in it.product([0,1,2], repeat=3): # Looping through a 3-dimensional grid, 27 coordinates
# Add to our running list of positions
# Note that these are just ints, we will have to turn them into simtk.Quantity later
positions.append([x,y,z])
# Add the particle's mass to the System object
system.addParticle(39.95 * unit.amu)
# Add nonbonded parameters to our NonbondedForce object - charge, LJ sigma, LJ epsilon
nonbonded_force.addParticle(0*unit.elementary_charge,
0.3350 * unit.nanometer,
0.996 * unit.kilojoule_per_mole)
###Output
_____no_output_____
###Markdown
We can compare the two `force` objects from earlier - the `NonbondedForce` we created from code and the `NonbondedForce` that is returned when we access our `system`. Both refer to the same underlying `OpenMM.NonbondedForce` object and will reflect the same information. These are just two ways of accessing this object. The `system` also agrees with the number of particles we have added.
###Code
(system.getForce(0).getNumParticles(), nonbonded_force.getNumParticles(), system.getNumParticles())
###Output
_____no_output_____
###Markdown
The next object to deal with is the `OpenMM.Context`, which specifies positions. First we need to convert our list of coordinates into a more-tractable `numpy.ndarray` of coordinates, and then turn that into a `simtk.Quantity` of our coordinates. Additionally, the `OpenMM.Context` constructor requires an integrator (at this point we are trying to build our simulation), and then we can specify the positions within that context
###Code
np_positions = np.asarray(positions)
unit_positions = np_positions * unit.nanometer
type(np_positions), type(unit_positions)
integrator = openmm.VerletIntegrator(1.0) # 1 ps timestep
context = openmm.Context(system, integrator) # create context
context.setPositions(unit_positions) # specify positions within context
###Output
_____no_output_____
###Markdown
We can parse some information about our `context`, and this is done by getting the `state` of our `context`.Note how the time is 0.0 ps (we haven't run our simulation at all).But we can also parse the potential energy of our context - this is the potential energy given the positions we initialized and forces we specified.
###Code
print(context.getState().getTime())
print(context.getState(getEnergy=True).getPotentialEnergy())
###Output
0.0 ps
-0.3682566285133362 kJ/mol
###Markdown
What happens to our `state` after we've run for some amount of time? We will run for 10 time steps (or 10 ps since our timestep is 1 ps). We can see the the `time` reported by our `state` has changed, and so has the `potentialEnergy`
###Code
integrator.step(10) # Run for 10 timesteps
print(context.getState().getTime())
print(context.getState(getEnergy=True).getPotentialEnergy())
type(system), type(context), type(integrator), type(nonbonded_force)
###Output
_____no_output_____
###Markdown
This summarizes how `system`, `force`, `context`, `state`, and `integrator` objects interact with each other within the OpenMM API. Side note, observe where in the API these are stored - at the base level `openmm.XYZ`; this next section will move "up a level" to some objects and API that build off these base-level APIs. More practical OpenMM simulations: We just talked about some of the base-layer objects within OpenMM, but often people will "wrap" those base layer objects within an `OpenMM.Simulation` object, pass topological (bonding + box information) through an `openmm.Topology` object, attach `reporter` objects, and then run the simulation. The `Simulation` wraps the `topology`, `system`, `integrator`, and hardware platforms and implicitly creates the `Context`. The `Topology` contains information about the atoms, bonds, chains, and residues within your system, in addition to box information. Reporter objects are used to print/save various information about the trajectory.* [`OpenMM.Simulation` documentation](http://docs.openmm.org/latest/api-python/generated/simtk.openmm.app.simulation.Simulation.html)* [`OpenMM.Topology` documentation](http://docs.openmm.org/latest/api-python/generated/simtk.openmm.app.topology.Topology.html)* [OpenMM reporters](http://docs.openmm.org/latest/api-python/app.html#reporting-output) Here's some contrived code to quickly make an ethane molecule, atomtype, and parametrize according to OPLSAA
###Code
import mbuild as mb
import foyer
import parmed as pmd
from mbuild.examples import Ethane
cmpd = Ethane() # mbuild compound
ff = foyer.Forcefield(name='oplsaa') # foyer forcefield
structure = ff.apply(cmpd) # apply forcefield to compound to get a pmd.Structure
###Output
/Users/ayang41/Programs/foyer/foyer/validator.py:132: ValidationWarning: You have empty smart definition(s)
warn("You have empty smart definition(s)", ValidationWarning)
/Users/ayang41/Programs/foyer/foyer/forcefield.py:248: UserWarning: Parameters have not been assigned to all impropers. Total system impropers: 8, Parameterized impropers: 0. Note that if your system contains torsions of Ryckaert-Bellemans functional form, all of these torsions are processed as propers
warnings.warn(msg)
###Markdown
Now we have a `parmed.Structure` that has atomtypes and force field parameters. Conveniently, `parmed.Structure` can quickly create an `openmm.app.topology` object, and we can see some basic information like numbers of atoms and bonds.It's also worth observing that this is `openmm.app.topology`, within the "application layer", one level above the base layer
###Code
print(structure.topology) # the parmed structure can create the openmm topology
print(type(structure.topology))
[a for a in structure.topology.atoms()]
###Output
<Topology; 1 chains, 1 residues, 8 atoms, 7 bonds>
<class 'simtk.openmm.app.topology.Topology'>
###Markdown
We can now build out some other relevant features of running a simulation
###Code
system = structure.createSystem() # the parmed structure can create the openmm system
integrator = openmm.VerletIntegrator(1.0) # create another openmm integrator
###Output
_____no_output_____
###Markdown
Putting it all together, we make our `Simulation` object. Once again, note how this is within the `app` layer
###Code
simulation = openmm.app.Simulation(structure.topology, system, integrator)
type(simulation)
###Output
_____no_output_____
###Markdown
After creating the `Simulation` object, we have access to the `Context` related to the `System` and `Integrator`
###Code
simulation.context
###Output
_____no_output_____
###Markdown
Once again, we need to specify the positions. Fortunately, the `parmed.Structure` already uses `simtk.Quantity` for its positions.
###Code
simulation.context.setPositions(structure.positions)
###Output
_____no_output_____
###Markdown
Before running the simulation, we can get some `State` information related to this `Context`
###Code
simulation.context.getState().getTime()
###Output
_____no_output_____
###Markdown
We can now run this simulation and observe that the `State` changes
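As a side note before we step: if you want this kind of information logged automatically while the simulation runs, this is what the reporter objects mentioned earlier are for. A minimal sketch, assuming writing to stdout is acceptable:
```python
import sys
from simtk.openmm.app import StateDataReporter

# Report the step number, potential energy, and temperature every 5 steps
simulation.reporters.append(
    StateDataReporter(sys.stdout, 5, step=True, potentialEnergy=True, temperature=True)
)
```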
###Code
simulation.step(10)
simulation.context.getState().getTime()
###Output
_____no_output_____ |
Hackerearth-Predict-the-genetic-disorders/5_genetic_testing_catBoost.ipynb | ###Markdown
###Code
import pandas as pd
import numpy as np
import io
import gc
import time
from pprint import pprint
from datetime import date
# settings
import warnings
warnings.filterwarnings("ignore")
gc.enable()
!pip3 install imbalanced-learn > /dev/null
!pip3 install catboost > /dev/null
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
# connect to google drive
from google.colab import drive
drive.mount('/content/drive')
gDrivePath = '/content/drive/MyDrive/Datasets/Hackerearth_genetic_testing/dataset/'
df_train = pd.read_csv(gDrivePath+'train_preprocessed.csv')
df_test = pd.read_csv(gDrivePath+'test_preprocessed.csv')
# df_train = pd.read_csv('train_preprocessed.csv')
# df_test = pd.read_csv('test_preprocessed.csv')
df_train.sample(3)
df_train['GeneticDisorder-DisorderSubclass'] = df_train['Genetic Disorder'] + '<->' + df_train['Disorder Subclass']
df_train.sample(3)
# df_train[['Col1','Col2']] = df_train['GeneticDisorder-DisorderSubclass'].str.split("<->",expand=True)
# df_train.sample(3)
###Output
_____no_output_____
###Markdown
Checking if the dataset is balanced/imbalanced
###Code
target_count = df_train['GeneticDisorder-DisorderSubclass'].value_counts()
target_count
###Output
_____no_output_____
###Markdown
Splitting Data into train-cv
###Code
target_labels = df_train['GeneticDisorder-DisorderSubclass'].values
df_train.drop(['Genetic Disorder','Disorder Subclass', 'GeneticDisorder-DisorderSubclass'], axis=1, inplace=True)
df_test.drop(['Genetic Disorder','Disorder Subclass', 'GeneticDisorder-DisorderSubclass'], axis=1, inplace=True, errors='ignore')
# classification split for genetic_disorder_labels
from sklearn.model_selection import train_test_split
X_train, X_cv, y_train, y_cv = train_test_split(df_train, target_labels, test_size=0.1, random_state=50)
###Output
_____no_output_____
###Markdown
Over Sampling using SMOTE
###Code
# https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/
from imblearn.over_sampling import SMOTE
smote_overSampling = SMOTE()
X_train,y_train = smote_overSampling.fit_resample(X_train,y_train)
unique, counts = np.unique(y_train, return_counts=True)
dict(zip(unique, counts))
###Output
_____no_output_____
###Markdown
Scaling data
###Code
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_cv_scaled = scaler.transform(X_cv)
X_test_scaled = scaler.transform(df_test)
# X_train_scaled
###Output
_____no_output_____
###Markdown
Hyperparameter tuning Hyperparameter tuning Catboost
###Code
%%time
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from catboost import CatBoostClassifier
CBC = CatBoostClassifier(loss_function='MultiClass', use_best_model=True, task_type="GPU")
# Best Params: {'random_strength': 0.5, 'max_ctr_complexity': 3, 'learning_rate': 0.1, 'l2_leaf_reg': 5, 'iterations': 500, 'depth': 10, 'border_count': 200, 'bagging_temperature': 0.03}
parameters = {
'depth':[3,1,2,6,4,5,7,8,9,10],
'iterations':[250,100,500,1000],
'learning_rate':[0.03,0.001,0.01,0.1,0.2,0.3],
'l2_leaf_reg':[3,1,5,10,100],
'border_count':[32,5,10,20,50,100,200],
'bagging_temperature':[0.03,0.09,0.25,0.75],
'random_strength':[0.2,0.5,0.8],
'max_ctr_complexity':[1,2,3,4,5]
}
Grid_CBC = RandomizedSearchCV(estimator=CBC, param_distributions=parameters, verbose=1, n_iter=200, scoring='f1_weighted', cv=5)
Grid_CBC.fit(X_train_scaled, y_train, eval_set=(X_cv_scaled, y_cv))
print("Best Params:", Grid_CBC.best_params_)
###Output
Best Params: {'random_strength': 0.5, 'max_ctr_complexity': 3, 'learning_rate': 0.1, 'l2_leaf_reg': 5, 'iterations': 500, 'depth': 10, 'border_count': 200, 'bagging_temperature': 0.03}
###Markdown
Create predictions
###Code
predictions_test = Grid_CBC.predict(X_test_scaled)
predictions_test
# for catboost
predictions_tests = []
for x in predictions_test:
predictions_tests.append(x[0])
###Output
_____no_output_____
###Markdown
Create Submission file
###Code
predictions_genetic_disorder_test = []
predictions_disorder_subclass_test = []
for myString in predictions_tests:
genetic_disorder, disorder_subclass = myString.split('<->')
predictions_genetic_disorder_test.append(genetic_disorder)
predictions_disorder_subclass_test.append(disorder_subclass)
print("length genetic_disorder list:", len(predictions_genetic_disorder_test))
print("length disorder_subclass list:", len(predictions_disorder_subclass_test))
read = pd.read_csv(gDrivePath+'test.csv')
read.shape
submission = pd.DataFrame({
"Patient Id": read["Patient Id"],
"Genetic Disorder": predictions_genetic_disorder_test,
"Disorder Subclass": predictions_disorder_subclass_test,
})
submission.head()
submission.to_csv('submission.csv', index=False)
submission.to_csv(gDrivePath+'submission.csv', index=False)
###Output
_____no_output_____ |
auto_signif.ipynb | ###Markdown
Auto Significance Testing
###Code
Things To Improve
-- Hypothesis test - categorical, sample average against the population mean
-- 1 tail, 2 tail
x- More thorough visualization/exploration of the data
x- Pull more of the functions apart
-- More automation
-- Independence of observations, no autocorrelation
-- ANOVA/MANOVA
-- Implement this logic flow, with assumption checking at each and every step, https://python.plainenglish.io/statistical-tests-with-python-880251e9b572
-- Statistical tests focused on A/A tests
x- Transformations to normality
-- https://www.intro2r.info/unit3/which-test.html
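# Hedged sketch for one of the to-do items above: a one-sample t-test of a
# sample mean against a known population mean (the sample values and the
# population mean below are illustrative assumptions only).
from scipy.stats import ttest_1samp
example_sample = [0, 4, 5, 5, 5, 5, 9, 2, 2, 2, 7]
t_stat_1s, p_val_1s = ttest_1samp(example_sample, popmean=4.0)
print('One-sample t-test: statistic = %.3f, p = %.3f' % (t_stat_1s, p_val_1s))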
#Helper functions
from statsmodels.stats import weightstats as stests
from scipy.stats import shapiro,ttest_ind,skew,kurtosis,probplot
from scipy import stats
from matplotlib import pyplot as plt
import numpy as np
import scipy as scipy
# identify outliers using simple interquartile ranges
def iqr_outlier(x,threshold):
import numpy as np
sorted(x)
Q1,Q3 = np.percentile(x,[25,75])
IQR = Q3 - Q1
lower_range = Q1 - (threshold * IQR)
upper_range = Q3 + (threshold * IQR)
output = [i for i in x if i < lower_range or i > upper_range]
if len(output) > 0:
        print(len(output),' outlier/s detected (Using IQR). This can cause accuracy issues with normality transformations')
else:
print('Sample is outlier free (Using IQR)')
print()
return output
# test normality of sample
def norm_test(x,alpha):
#Plot the data
plt.style.use('ggplot')
plt.hist(x,bins='auto')
plt.title('Sample Histogram')
plt.ylabel('Frequency')
plt.show()
fig = plt.figure()
ax1 = fig.add_subplot(211)
prob = stats.probplot(x, dist=stats.norm, plot=ax1)
#ax1.set_xlabel('')
    ax1.set_title('Probability plot against normal distribution')
plt.show()
#Skewness & Kurtosis test
print('---Sample distribution----------------------')
kurt = kurtosis(x)
skewness = skew(x)
print( 'Kurtosis (Normal dist. = 0): {}'.format(kurt))
print( 'Skewness (Normal dist. = 0): {}'.format(skewness))
print()
#Normality test
stat_x, p_x = shapiro(x)
print('Normality statistics =%.3f, p=%.3f' % (stat_x, p_x))
print('Gaussian (fail to reject H0)') if p_x > alpha else print('non-Gaussian (reject H0)')
print()
if p_x > alpha:
dist = 'gaussian'
else:
dist = 'non-gaussian'
return dist
#Equality of variance test
def var_test(x,y,alpha):
print('---Variance equality test -----------------------')
x_var = np.array(x)
y_var = np.array(y)
f = np.var(x_var, ddof=1)/np.var(y_var, ddof=1) #calculate F test statistic
dfn = x_var.size-1 #define degrees of freedom numerator
dfd = y_var.size-1 #define degrees of freedom denominator
f_test_p_value = 1-scipy.stats.f.cdf(f, dfn, dfd) #find p-value of F test statistic
if f_test_p_value < alpha:
variance = 'unequal'
else:
variance = 'equal'
print('f test p-value =', f_test_p_value)
print('Variance <> equal') if f_test_p_value < alpha else print('Variance = equal')
print()
return variance
#Statistical tests
def two_sample_ztest(x,y,alpha,x_normality,y_normality):
print('--Z Test----------------------------------------------')
print('Sample sizes might be too small to run a z test. x =%.f, y =%.f' % (len(x),len(y))) if len(x) < 30 or len(y) < 30 else print('Sample large enough for z test. x =%.f, y =%.f' % (len(x),len(y)))
    print('Some assumptions have been violated for this test, see above') if x_normality == 'non-gaussian' or y_normality == 'non-gaussian' else print('Assumptions are met, but it is always better to check yourself')
ztest ,pval_z = stests.ztest(x, y, value=0,alternative='two-sided')
print('Reject H0, significant difference exists, p value = %.3f' % (pval_z)) if pval_z < alpha else print('Fail to reject H0, no statistical difference between samples, p value = %.3f' % (pval_z))
print()
def two_sample_ttest(x,y,alpha,x_normality,y_normality,variance):
print('--T Test---------------------------------------------')
print('T Test with unequal variance') if variance == 'unequal' else print('T Test with equal variance')
    print('Some assumptions have been violated for this test, see above') if x_normality == 'non-gaussian' or y_normality == 'non-gaussian' else print('Assumptions are met, but it is always better to check yourself')
if variance == 'unequal':
t, pval_t = ttest_ind(x, y, equal_var=False)
else:
t, pval_t = ttest_ind(x, y, equal_var=True)
print('T statistic =',t)
print('Reject H0, significant difference exists, p value = %.3f' % (pval_t)) if pval_t < alpha else print('Fail to reject H0, no statistical difference between samples, p value = %.3f' % (pval_t))
print()
def two_sample_mwutest(x,y,alpha):
print('--Mann-Whitney U Test (Nonparametric)-----------------')
print('Sample sizes might be too small to run a mwu test. x =%.f, y =%.f' % (len(x),len(y))) if len(x) < 20 or len(y) < 20 else print('Sample large enough for mwu test. x =%.f, y =%.f' % (len(x),len(y)))
stat_mw, p_mw = scipy.stats.mannwhitneyu(x, y)
print('MWU statistics = %.3f, p=%.3f' % (stat_mw, p_mw))
print('Reject H0, significant difference exists, p value = %.3f' % (p_mw)) if p_mw < alpha else print('Fail to reject H0, no statistical difference between samples, p value = %.3f' % (p_mw))
#YJ normality transformation
def power_transform(x,y):
skew_x = skew(x)
skew_y = skew(y)
pt_x_output, pt_x_lambda = stats.yeojohnson(x)
pt_y_output, pt_y_lambda = stats.yeojohnson(y)
if skew_x > skew_y:
x_transform = stats.yeojohnson(x,pt_x_lambda)
y_transform = stats.yeojohnson(y,pt_x_lambda)
else:
x_transform = stats.yeojohnson(x,pt_y_lambda)
y_transform = stats.yeojohnson(y,pt_y_lambda)
return x_transform, y_transform
#Output general info of sign. test
def sign_cont_two_sample(x,y,alpha,transform=None):
if transform == 1:
x,y = power_transform(x,y)
print('--1st Sample------------------------------------------')
x_norm = norm_test(x,alpha)
iqr_outlier(x,1.5)
print('--2nd Sample------------------------------------------')
y_norm = norm_test(y,alpha)
iqr_outlier(y,1.5)
print('--Variance test---------------------------------------')
var_output = var_test(x,y,alpha)
#Statistical Tests
two_sample_ztest(x,y,alpha,x_norm,y_norm)
    two_sample_ttest(x,y,alpha,x_norm,y_norm,var_output)
two_sample_mwutest(x,y,alpha)
x = [0,4,5,5,5,5,9,2,2,2,7]
y = [1,2,3,4,5,6,7,4,2,2,5,7,1]
alpha = 0.05
sign_cont_two_sample(x,y,alpha,transform=0)
import statistics
x_transformed, y_transformed = power_transform(x, y)
print(statistics.mean(x_transformed))
print(statistics.mean(y_transformed))
#Test again for normality
power_transform(x,y)
###Output
_____no_output_____ |
article/src/scripts/Bengalese_Finches/behavior/BF-behav-figure.ipynb | ###Markdown
Load source data for panel with model eval results.
###Code
EVAL_CSV_FNAME = 'eval-across-days.csv'
eval_across_days_csv = RESULTS_ROOT / EVAL_CSV_FNAME
eval_df = pd.read_csv(eval_across_days_csv)
# note we convert segment error rate to %
eval_df.avg_segment_error_rate = eval_df.avg_segment_error_rate * 100
df_minsegdur_majvote = eval_df[eval_df.cleanup == 'min_segment_dur_majority_vote']
###Output
_____no_output_____
###Markdown
Load source data for panels with transition probabilities.
###Code
TRANS_PROBS_CSV_FNAME = 'transition-probabilities.csv'
probs_csv = RESULTS_ROOT / TRANS_PROBS_CSV_FNAME
df = pd.read_csv(probs_csv)
JSON_FNAME = 'transition-probabilities-x-y-plot.json'
xyerr_json = RESULTS_ROOT / JSON_FNAME
with xyerr_json.open('r') as fp:
animal_xyerr = json.load(fp)
sns.set_style('white')
sns.set_context('paper')
FIG_ROOT = pyprojroot.here() / 'doc' / 'article' / 'figures' / 'mainfig_bf_behavior'
FIGSIZE = (8, 4)
DPI = 300
fig, ax = plt.subplots(figsize=FIGSIZE, dpi=DPI)
#gs = fig.add_gridspec(ncols=2, nrows=2)
# top_panel_ax = fig.add_subplot(gs[0, :])
# bottom_left_panel_ax = fig.add_subplot(gs[1, 0])
# bottom_right_panel_ax = fig.add_subplot(gs[1, 1])
# top panel
g = sns.lineplot(data=df_minsegdur_majvote,
x='day_int', y='avg_segment_error_rate',
hue='animal_id',
style='animal_id',
ci='sd',
ax=ax
)
g.legend_.set_title('Animal ID')
ax.set(xticks=[1, 2, 3, 4])
ax.set_ylim([0., 10])
ax.set_ylabel('Syllable error rate (%)')
ax.set_xlabel('Day');
FIG_STEM = 'syllable-error-rate-across-days'
for ext in ('eps', 'svg'):
fig.savefig(FIG_ROOT / f'{FIG_STEM}.{ext}')
FIGSIZE = (8, 4)
DPI = 300
fig, ax_arr = plt.subplots(1, 2,
figsize=FIGSIZE,
dpi=DPI,
constrained_layout=True)
fig.set_constrained_layout_pads(w_pad=4 / 72, h_pad=4 / 72, hspace=0, wspace=0.1)
left_panel_ax, right_panel_ax = ax_arr[0], ax_arr[1]
df_gr41rd51 = df[df.animal_id == 'gr41rd51'].copy()
df_gr41rd51['Source'] = df_gr41rd51.source.map({'ground_truth': 'Ground truth', 'model': 'Model predictions'})
g = sns.lineplot(
data=df_gr41rd51,
x='day',
y='prob',
hue='transition',
style='Source',
ci='sd',
ax=left_panel_ax
)
handles, labels = left_panel_ax.get_legend_handles_labels()
source_handles, source_labels = handles[3:], labels[3:] # just show "source" in legend
g.legend(
source_handles,
source_labels,
loc='center right'
)
left_panel_ax.annotate('p(e-f)',
xy=(1.0, 0.79),
xycoords='data',
textcoords='data',
verticalalignment='bottom',
horizontalalignment='right',
transform=ax.transAxes,
color=handles[1].get_color(),
fontsize=15)
left_panel_ax.annotate('p(e-i)',
xy=(1.0, 0.18),
xycoords='data',
textcoords='data',
verticalalignment='bottom',
horizontalalignment='right',
transform=ax.transAxes,
color=handles[2].get_color(),
fontsize=15)
left_panel_ax.set_ylabel('Transition\nprobability')
left_panel_ax.set_xlabel('Day')
left_panel_ax.set_ylim([0., 1.])
xticklabels = left_panel_ax.get_xticklabels()
left_panel_ax.set_xticklabels(
list(range(1, len(xticklabels) + 1))
)
MARKERS = ['o', 'v', '^', '<', '>', '8', 's', 'p', '*', 'h', 'H', 'D', 'd', 'P', 'X']
for (animal_id_trans_tup, xyerr_dict), marker in zip(animal_xyerr.items(), MARKERS):
x, y, yerr = xyerr_dict['x'], xyerr_dict['y'], xyerr_dict['yerr'],
right_panel_ax.errorbar(x, y, yerr=yerr, fmt=marker, markersize=8)
# right_panel_ax.set_aspect('equal')
right_panel_ax.set_xlim([0., 1.0])
right_panel_ax.set_ylim([0., 1.0])
lims = [
np.min([right_panel_ax.get_xlim(), right_panel_ax.get_ylim()]), # min of both axes
np.max([right_panel_ax.get_xlim(), right_panel_ax.get_ylim()]), # max of both axes
]
right_panel_ax.plot(lims, lims, 'k--', alpha=0.75, zorder=0)
right_panel_ax.set_xlabel('Ground\ntruth');
right_panel_ax.set_ylabel('Model predictions');
FIG_STEM = 'transition-probabilities-across-days'
for ext in ('eps', 'svg'):
fig.savefig(FIG_ROOT / f'{FIG_STEM}.{ext}')
###Output
<ipython-input-9-06cbe1dd1642>:60: UserWarning: FixedFormatter should only be used together with FixedLocator
left_panel_ax.set_xticklabels(
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
The PostScript backend does not support transparency; partially transparent artists will be rendered opaque.
###Markdown
"single figure" version (in case we want it back)
###Code
# FIGSIZE = (10, 10)
# DPI = 300
# fig = plt.figure(constrained_layout=True, figsize=FIGSIZE, dpi=DPI)
# gs = fig.add_gridspec(ncols=2, nrows=2)
# top_panel_ax = fig.add_subplot(gs[0, :])
# bottom_left_panel_ax = fig.add_subplot(gs[1, 0])
# bottom_right_panel_ax = fig.add_subplot(gs[1, 1])
# # top panel
# g = sns.lineplot(data=df_minsegdur_majvote,
# x='day_int', y='avg_segment_error_rate',
# hue='animal_id',
# style='animal_id',
# ci='sd',
# ax=top_panel_ax
# )
# top_panel_ax.set(xticks=[1, 2, 3, 4])
# top_panel_ax.set_ylim([0., 10])
# top_panel_ax.set_ylabel('Syllable error rate (%)')
# top_panel_ax.set_xlabel('Day');
# df_gr41rd51 = df[df.animal_id == 'gr41rd51']
# g = sns.lineplot(
# data=df_gr41rd51,
# x='day',
# y='prob',
# hue='transition',
# style='source',
# ci='sd',
# ax=bottom_left_panel_ax
# )
# bottom_left_panel_ax.set_ylabel('Transition\nprobability')
# bottom_left_panel_ax.set_xlabel('Day')
# bottom_left_panel_ax.set_ylim([0., 1.])
# xticklabels = bottom_left_panel_ax.get_xticklabels()
# bottom_left_panel_ax.set_xticklabels(
# list(range(1, len(xticklabels) + 1))
# )
# MARKERS = ['o', 'v', '^', '<', '>', '8', 's', 'p', '*', 'h', 'H', 'D', 'd', 'P', 'X']
# for (animal_id_trans_tup, xyerr_dict), marker in zip(animal_xyerr.items(), MARKERS):
# x, y, yerr = xyerr_dict['x'], xyerr_dict['y'], xyerr_dict['yerr'],
# bottom_right_panel_ax.errorbar(x, y, yerr=yerr, fmt=marker, markersize=8)
# lims = [
# np.min([bottom_right_panel_ax.get_xlim(), bottom_right_panel_ax.get_ylim()]), # min of both axes
# np.max([bottom_right_panel_ax.get_xlim(), bottom_right_panel_ax.get_ylim()]), # max of both axes
# ]
# bottom_right_panel_ax.plot(lims, lims, 'k--', alpha=0.75, zorder=0)
# bottom_right_panel_ax.set_aspect('equal')
# bottom_right_panel_ax.set_xlim(lims)
# bottom_right_panel_ax.set_ylim(lims)
# bottom_right_panel_ax.set_xlabel('Ground\ntruth')
# bottom_right_panel_ax.set_ylabel('Predicted')
# #FIG_STEM = 'transition-probabilities-representative-case-across-days'
# # for ext in ('eps', 'pdf', 'svg'):
# # g.fig.savefig(FIG_ROOT / f'{FIG_STEM}.{ext}')
###Output
_____no_output_____ |
example-notebooks/utils/plot_get_roi_vertices.ipynb | ###Markdown
Get Vertices for an ROI

In this example we show how to get the vertices that are inside an ROI that was defined in the SVG ROI file (see :doc:`/rois.rst`).
###Code
import cortex
# get vertices for fusiform face area FFA in subject S1
roi_verts = cortex.get_roi_verts('S1', 'FFA')
# roi_verts is a dictionary (in this case with only one entry)
ffa_verts = roi_verts['FFA']
# this includes indices from both hemispheres
# let's create an empty Vertex object and fill FFA
ffa_map = cortex.Vertex.empty('S1', cmap='plasma')
ffa_map.data[ffa_verts] = 1.0
cortex.quickshow(ffa_map)
###Output
_____no_output_____ |
examples/dirac_time_nonuniform.ipynb | ###Markdown
Overview

This notebook explains how to reconstruct a signal consisting of a $\tau$-periodic stream of Diracs at unknown locations. We will apply the following standard FRI-based workflow.

1. Generate signal

We generate the FRI signal which we will then try to reconstruct:

$ \displaystyle x = \sum_{k' \in \mathbb{Z}} \sum_{k=1}^{K} \alpha_k \delta(t - t_k - k' \tau ) $   (1)

*CODE: Inspect the signal and make sure you understand its parameters.*
###Code
np.random.seed(7)
K = 5 # number of Diracs
TAU = 1 # period of the Dirac stream
# amplitudes of the Diracs
ak = np.sign(np.random.randn(K)) * (1 + (np.random.rand(K) - 0.5) / 1.)
# locations of the Diracs
tk = np.random.rand(K) * TAU
# plot the signal.
plot_dirac(tk, ak)
###Output
_____no_output_____
###Markdown
2. Simulate measurements

We also simulate measurements by constructing a non-uniformly sampled, low-pass filtered signal, and add measurement noise

$$y_{\ell} = \sum_{k=1}^K \alpha_k \varphi(t_{\ell}' - t_k),$$

for $\ell=1,\cdots,L$. Here $\varphi(t)=\frac{\sin(\pi B t)}{B\tau\sin(\pi t / \tau)}$ and $B$ is the bandwidth of the ideal lowpass filter.

*CODE: Do you understand the values of $M$, $B$ and $L$ in the code snippet below?*

*CODE: Generate the signal $y_\ell$ and add noise.*
###Code
np.random.seed(3)
def generate_noisy_signal(signal, SNR=None, sigma_noise=None):
''' Add noise to given signal.
If SNR is given: generate signal such that resulting SNR equals SNR.
If sigma_noise is given: add Gaussian noise of standard deviation sigma_noise.
:param SNR: desired SNR (in dB).
:param sigma_noise: desired standard deviation.
:return: noisy signal, noise vector
'''
if SNR is not None:
noise = np.random.randn(len(signal))
noise = noise / linalg.norm(noise) * linalg.norm(signal) * 10 ** (-SNR / 20.)
elif sigma_noise is not None:
noise = np.random.normal(scale=sigma_noise, loc=0, size=signal.shape)
return signal + noise, noise
def generate_time_samples(tau, L):
''' Generate L randomly distributed time instances. '''
# number of time domain samples
Tmax = tau / L # the average sampling step size (had we used a uniform sampling setup)
# generate the random sampling time instances
t_samp = np.arange(0, L, dtype=float) * Tmax
t_samp += np.sign(np.random.randn(L)) * np.random.rand(L) * Tmax / 2.
# round t_samp to [0, tau)
t_samp -= np.floor(t_samp / tau) * tau
return t_samp
def phi(t):
''' Dirichlet kernel evaluated at t. '''
numerator = np.sin(np.pi * B * t)
denominator = B * TAU * np.sin(np.pi * t / TAU)
idx = np.abs(denominator) < 1e-12
numerator[idx] = np.cos(np.pi * B * t[idx])
denominator[idx] = np.cos(np.pi * t[idx] / TAU)
return numerator / denominator
M = K # number of Fourier samples (at least K)
B = (2. * M + 1.) / TAU # bandwidth of the sampling filter
L = (2 * M + 1) # number of time samples
# measured signal
t_samp = generate_time_samples(TAU, L)
#################### CALCULATE SAMPLES y_l #####################
## sampled version
y_ell_samp = ...
## continuous version
y_ell_continuous = ...
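# --- Hedged sketch (one possible implementation, kept under separate names so
# the placeholders above remain yours to fill in): y_l = sum_k a_k * phi(t_l - t_k).
# phi() above broadcasts over numpy arrays, so the whole sum can be vectorised.
y_ell_samp_sketch = phi(t_samp[:, np.newaxis] - tk[np.newaxis, :]).dot(ak)
# a "continuous" version is the same sum evaluated on a dense grid; the grid
# name below is an assumption, the plotting cell further down uses its own grid
t_grid_sketch = np.linspace(0, TAU, 2000, endpoint=False)
y_ell_cont_sketch = phi(t_grid_sketch[:, np.newaxis] - tk[np.newaxis, :]).dot(ak)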
## generate noisy signal
# SNR level
P = np.inf # i.e., noiseless
# P = 10 # SNR = 10dB
y_ell, noise = generate_noisy_signal(y_ell_samp, SNR=P)
# plot the signal
plt.figure()
plt.plot(t_continuous, y_ell_continuous, color='grey', label='continuous')
plt.plot(t_samp, y_ell_samp, linestyle='', marker='+', label='samples')
plt.plot(t_samp, y_ell, linestyle='', marker='x', label='noisy samples')
plot_dirac(tk, ak, label='ground truth', ax=plt.gca())
plt.xlabel('t')
plt.legend()
###Output
_____no_output_____
###Markdown
3. Find standard form

Since the signal is FRI, we know that we can find a signal of the standard form with a 1-to-1 relation to the original signal:

$ \displaystyle\hat{x}_m = \sum_{k=1}^{K} \beta_k u_k^m $   (2)

*PEN AND PAPER: Find the values of $\beta_k$ and $u_k$.*

*CODE: Implement the standard form below.*

Since the above holds, we know that the signal can be annihilated by a filter $h$.

*OPTIONAL: Show that for this simple example, this filter is given by*

$$ H(z) = h_0 \prod_{k=1}^K (1 - u_k z^{-1}) $$
###Code
def get_standard_form(ak, tk):
'''
:param ak: vector of Dirac amplitudes
:param tk: vector of Dirac locations
:return: vector of standard form coefficients
'''
x_hat = ...
return x_hat
x_hat = get_standard_form(ak, tk)
###Output
_____no_output_____
###Markdown
4. Find and implement $ G $

Once the signal is in the form of Eq. (2), we need to identify how it is related to the measurements $y$.

*PEN AND PAPER: find the expression of the matrix $G$ such that $ G \hat{x} = y $.*

*CODE: implement G below.*
###Code
def get_G(t_samp):
'''
Compute G such that y=Gx
:param t_samp: vector of sampling times.
:return: matrix G
'''
G = ...
return G
G = get_G(t_samp)
## generate noiseless signal
y_ell_test = np.real(np.dot(G, x_hat))
assert np.isclose(y_ell_samp, y_ell_test).all()
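# --- Hedged sketch (not necessarily the intended convention; shown with
# "_sketch" names so the exercise cells above stay untouched). One consistent
# choice is beta_k = a_k/TAU and u_k = exp(-1j*2*pi*t_k/TAU) for m = -M..M,
# with G[l, m] = exp(+1j*2*pi*m*t_samp[l]/TAU)/B, since phi() is a scaled
# Dirichlet kernel with B*TAU = 2M+1 terms.
m_grid_sketch = np.arange(-M, M + 1)
x_hat_sketch = np.exp(-2j * np.pi * np.outer(m_grid_sketch, tk) / TAU).dot(ak) / TAU
G_sketch = np.exp(2j * np.pi * np.outer(t_samp, m_grid_sketch) / TAU) / B
# consistency check of the sketch against the direct time-domain sum
assert np.allclose(np.real(G_sketch.dot(x_hat_sketch)),
                   phi(t_samp[:, np.newaxis] - tk[np.newaxis, :]).dot(ak))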
###Output
_____no_output_____
###Markdown
5. Solve optimization

Now we have all the ingredients to solve the optimization of the form:

find $ \hat{x}, h $ such that $ || y - G \hat{x} ||_2 \leq \varepsilon $ and $ \hat{x} * h = 0 $

*CODE: you do not have to implement this part, just inspect the obtained solution and make sure it is correct.*
###Code
# noise energy, in the noiseless case 1e-10 is considered as 0
noise_level = np.max([1e-10, linalg.norm(noise)])
max_ini = 100 # maximum number of random initialisations
xhat_recon, min_error, c_opt, ini = dirac_recon_time(G, y_ell, K, noise_level, max_ini)
#################### VERIFY IF xhat_recon IS CORRECT #####################
...
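# --- Hedged sketch of one simple check (assumes the x_hat you computed above
# from the ground-truth ak, tk is still in scope): in the noiseless case the
# recovered coefficients should be very close to it.
print('max |xhat_recon - x_hat| = {:.2e}'.format(
    np.max(np.abs(np.asarray(xhat_recon).flatten() - np.asarray(x_hat).flatten()))))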
print('Noise level: {:.2e}'.format(noise_level))
print('Minimum approximation error |a - Gb|_2: {:.2e}'.format(min_error))
###Output
_____no_output_____
###Markdown
6. Reconstruct original signal

Now that we have extracted the filter and $\hat{x}$, what would you do to find the signal's parameters?
###Code
def get_locations(c_opt):
'''
Get dirac locations from filter coefficients.
:param c_opt: vector of annihilating filter coefficients
:return: vector of dirac locations (between 0 and TAU)
'''
############### GET tk_recon FROM RECONSTRUCTED FILTER ##################
tk_recon = ...
return tk_recon
tk_recon = get_locations(c_opt)
# location estimation error
t_error = distance(tk_recon, tk)[0]
print('location error: {:.2e}'.format(t_error))
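# --- Hedged sketch (convention dependent): the annihilating filter's roots are
# the u_k, so their phases encode the locations. Depending on the sign
# convention used inside dirac_recon_time you may need +np.angle instead.
uk_sketch = np.roots(np.asarray(c_opt).flatten())
tk_sketch = np.sort(np.mod(-np.angle(uk_sketch) * TAU / (2 * np.pi), TAU))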
def get_amplitudes(tk_recon, t_samp, y_ell):
'''
Get dirac amplitudes.
:tk_recon: vector of dirac locations.
:t_samp: vector of sampling times
:y_ell: vector of measurements
:return: vector of amplitudes of diracs.
'''
############### CALCULATE AMPLITUDES ##################
ak_recon = ...
return ak_recon
ak_recon = get_amplitudes(tk_recon, t_samp, y_ell)
a_error = distance(ak_recon, ak)[0]
print('amplitude error: {:.2e}'.format(a_error))
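# --- Hedged sketch (one possible approach): with the locations fixed, the
# amplitudes solve the linear least-squares problem y_ell ~= Phi @ a, where
# Phi[l, k] = phi(t_samp[l] - tk_recon[k]).
Phi_sketch = phi(t_samp[:, np.newaxis] - np.asarray(tk_recon)[np.newaxis, :])
ak_sketch = np.linalg.lstsq(Phi_sketch, y_ell, rcond=None)[0]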
fig = plt.figure(num=1, figsize=(5.5, 2.5), dpi=90)
ax1 = plt.axes([0.125, 0.59, 0.85, 0.31])
t_error_pow = int(np.floor(np.log10(t_error)))
title = 'reconstructed vs. original signal'
plot_diracs(ax1, tk, ak, tk_recon, ak_recon, title)
###Output
_____no_output_____ |
notebooks/Tutorial/julia.ipynb | ###Markdown
Julia methods as actors

Installing necessary requirements

1) Install Julia itself. Start from here: 

2) Install `PyCall` which is a Python binding library for Julia (i.e. calling Python from Julia).

```julia
julia> Pkg.update()
julia> Pkg.add("PyCall")
```

3) Install the `pyjulia` Python package:

3a) Download it from github.com:

```bash
git clone [email protected]:JuliaLang/pyjulia.git
```

3b) Install the package:

```bash
cd pyjulia
pip install .      # Copy to site-packages
pip install -e .   # Makes link to current directory in site-packages
```

4) Try in Python:

```python
import julia
jl = julia.Julia()   # Takes a few seconds
jl.eval("2 + 2")     # Should immediately return "4"
```
###Code
%load_ext autoreload
%autoreload(2)
from wowp.actors.julia import JuliaMethod
from wowp.schedulers import LinearizedScheduler
import numpy as np
###Output
_____no_output_____
###Markdown
Simple calling
###Code
sqrt = JuliaMethod("sqrt", inports="a")
sqrt(4)
###Output
_____no_output_____
###Markdown
Calling on numpy arrays
###Code
sqrt = JuliaMethod("sqrt", inports="a")
array = np.random.rand(5, 5)
scheduler = LinearizedScheduler()
scheduler.put_value(sqrt.inports.a, array)
scheduler.execute()
sqrt.outports.result.pop()
###Output
_____no_output_____
###Markdown
Chain sqrt method to pass numpy arrays
###Code
sqrt = JuliaMethod("sqrt", inports="a")
sqrt2 = JuliaMethod("sqrt", inports="a")
sqrt.outports.result.connect(sqrt2.inports.a)
array = np.random.rand(5, 5)
scheduler = LinearizedScheduler()
scheduler.put_value(sqrt.inports.a, array)
scheduler.execute()
sqrt2.outports.result.pop()
###Output
_____no_output_____
###Markdown
Using method from a package
###Code
%%file ABCD.jl
module ABCD
VERSION < v"0.4-" && using Docile
export quad
@doc doc"""Fourth power of the argument.""" ->
function quad(a)
a ^ 4
end
end
quad = JuliaMethod(package_name="ABCD", method_name="quad", inports="a")
quad(4.0)
quad.name
###Output
_____no_output_____
###Markdown
Non-existent module or package
###Code
xxx = JuliaMethod(package_name="ABBD", method_name="x")
xxx()
xxx = JuliaMethod(package_name="ABCD", method_name="xx")
xxx()
###Output
_____no_output_____
###Markdown
Unicode identifiers

The `julia` package's page states that unicode identifiers are not valid. This is true for automatically imported methods, but not for `JuliaMethod`. Names like `πtimes!` are fine :-)
###Code
%%file UnicodePi.jl
module UnicodePi
VERSION < v"0.4-" && using Docile
export πtimes!
@doc doc"""Return pi times argument""" ->
function πtimes!(a)
    π * a
end
end
pi_times = JuliaMethod(package_name="UnicodePi", method_name="πtimes!", inports="x")
print(pi_times.name)
pi_times(4)
from wowp.tools.plotting import ipy_show
ipy_show(pi_times)
###Output
_____no_output_____ |
tutorial-examples/eric-fiala-wmlce-notebooks-master/4-tutorial-snap-ml-credit-risk.ipynb | ###Markdown
Credit Risk Analysis using IBM Snap ML Imports
###Code
from __future__ import print_function
import numpy as np
import pandas as pd
pd.options.display.max_columns = 999
import matplotlib.pyplot as plt
%matplotlib inline
import sklearn
from sklearn.model_selection import train_test_split, StratifiedKFold
from sklearn.preprocessing import MinMaxScaler, LabelEncoder, normalize
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score, accuracy_score, roc_curve, roc_auc_score
from scipy.stats import chi2_contingency,ttest_ind
from sklearn.utils import shuffle
import time
import warnings
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
Dataset Visualization
###Code
cust_pd_full = pd.read_csv('./credit_customer_data.csv')
rows=1000000
cust_pd = cust_pd_full.head(rows)
print("There are " + str(len(cust_pd)) + " observations in the customer history dataset.")
print("There are " + str(len(cust_pd.columns)) + " variables in the dataset.")
cust_pd.head()
###Output
There are 1000000 observations in the customer history dataset.
There are 19 variables in the dataset.
###Markdown
Data Preprocessing
###Code
# Split dataframe into Features (X) and Labels (y)
cust_pd_Y = cust_pd[['IS_DEFAULT']]
cust_pd_X = cust_pd.drop(['IS_DEFAULT'],axis=1)
print('cust_pd_X.shape=', cust_pd_X.shape, 'cust_pd_Y.shape=', cust_pd_Y.shape)
###Output
cust_pd_X.shape= (1000000, 18) cust_pd_Y.shape= (1000000, 1)
###Markdown
Transform Labels (y)
###Code
cust_pd_Y.head()
le = LabelEncoder()
cust_pd_Y["IS_DEFAULT"] = le.fit_transform(cust_pd_Y['IS_DEFAULT'])
cust_pd_Y.head()
###Output
_____no_output_____
###Markdown
Transform Features (X)
###Code
print('features X dataframe shape = ', cust_pd_X.shape)
cust_pd_X.head()
###Output
features X dataframe shape = (1000000, 18)
###Markdown
One-Hot Encoding of Categorical Features
###Code
categoricalColumns = ['CREDIT_HISTORY', 'TRANSACTION_CATEGORY', 'ACCOUNT_TYPE', 'ACCOUNT_AGE',
'STATE', 'IS_URBAN', 'IS_STATE_BORDER', 'HAS_CO_APPLICANT', 'HAS_GUARANTOR',
'OWN_REAL_ESTATE', 'OTHER_INSTALMENT_PLAN',
'OWN_RESIDENCE', 'RFM_SCORE', 'OWN_CAR', 'SHIP_INTERNATIONAL']
cust_pd_X = pd.get_dummies(cust_pd_X, columns=categoricalColumns)
cust_pd_X.head()
print('features X dataframe shape = ', cust_pd_X.shape)
###Output
features X dataframe shape = (1000000, 51)
###Markdown
Normalize Features
###Code
min_max_scaler = MinMaxScaler()
features = min_max_scaler.fit_transform(cust_pd_X)
features = normalize(features, axis=1, norm='l1')
cust_pd_X = pd.DataFrame(features,columns=cust_pd_X.columns)
cust_pd_X.head()
###Output
_____no_output_____
###Markdown
Generate Train and Test Datasets
###Code
labels = cust_pd_Y.values
features = cust_pd_X.values
labels = np.reshape(labels,(-1,1))
X_train,X_test,y_train,y_test = \
train_test_split(features, labels, test_size=0.3, random_state=42, stratify=labels)
print('X_train.shape=', X_train.shape, 'Y_train.shape=', y_train.shape)
print('X_test.shape=', X_test.shape, 'Y_test.shape=', y_test.shape)
###Output
X_train.shape= (700000, 51) Y_train.shape= (700000, 1)
X_test.shape= (300000, 51) Y_test.shape= (300000, 1)
###Markdown
Train a Logistic Regression Model using Scikit-Learn
###Code
# While we are importing from SnapML we are using the Scikit-learn 'liblinear' solver
# You could choose to import the model from Scikit-learn if you don't believe us!
from pai4sk.linear_model import LogisticRegression
sklearn_lr = LogisticRegression(solver='liblinear')
print(sklearn_lr)
# Train a logistic regression model using Scikit-Learn
t0 = time.time()
sklearn_lr.fit(X_train, y_train)
sklearn_time = time.time() - t0
print("[sklearn] Training time (s): {0:.2f}".format(sklearn_time))
# Evaluate accuracy on test set
sklearn_pred = sklearn_lr.predict(X_test)
print('[sklearn] Accuracy score : {0:.6f}'.format(accuracy_score(y_test, sklearn_pred)))
###Output
[sklearn] Training time (s): 5.29
[sklearn] Accuracy score : 0.957530
###Markdown
Train a Logistic Regression Model using Snap ML
###Code
from pai4sk import LogisticRegression
snapml_lr = LogisticRegression(use_gpu=True, device_ids=[0,1])
print(snapml_lr.get_params())
# Train a logistic regression model using Snap ML
t0 = time.time()
model = snapml_lr.fit(X_train, y_train)
snapml_time = time.time() - t0
print("[Snap ML] Training time (s): {0:.2f}".format(snapml_time))
# Evaluate accuracy on test set
snapml_pred = snapml_lr.predict(X_test)
print('[Snap ML] Accuracy score : {0:.6f}'.format(accuracy_score(y_test, snapml_pred)))
print('[Logistic Regression] Snap ML vs. sklearn speedup : {0:.2f}x '.format(sklearn_time/snapml_time))
###Output
[Snap ML] Training time (s): 0.62
[Snap ML] Accuracy score : 0.957513
[Logistic Regression] Snap ML vs. sklearn speedup : 8.50x
###Markdown
Train a Random Forest Model using Scikit-Learn
###Code
# Import the Random Forest model from the pai4sk package
from sklearn.ensemble import RandomForestClassifier
sklearn_rf = RandomForestClassifier(n_estimators=160, n_jobs=160, random_state=0)
# Training a random forest model using scikit-learn
t0 = time.time()
sklearn_rf.fit(X_train, y_train)
sklearn_time = time.time() - t0
print("[sklearn] Training time (s): {0:.5f}".format(sklearn_time))
# Evaluate accuracy on test set
sklearn_pred = sklearn_rf.predict(X_test)
print('[sklearn] Accuracy score : ', accuracy_score(y_test, sklearn_pred))
###Output
[sklearn] Training time (s): 17.29556
[sklearn] Accuracy score : 0.9774766666666667
###Markdown
Train a Random Forest Model using Snap ML
###Code
# Import the Random Forest model directly from the SnapML package
from pai4sk import RandomForestClassifier
snapml_rf = RandomForestClassifier(n_estimators=160, n_jobs=160, random_state=0)
# Training a random forest model using Snap ML
t0 = time.time()
snapml_rf.fit(X_train, y_train)
snapml_time = time.time()-t0
print("[Snap ML] Training time (s): {0:.5f}".format(snapml_time))
# Evaluate accuracy on test set
snapml_pred = snapml_rf.predict(X_test, num_threads=160)
print('[Snap ML] Accuracy score : {0:.6f}'.format(accuracy_score(y_test, snapml_pred)))
print('[Random Forest] Snap ML vs. sklearn speedup : {0:.2f}x '.format(sklearn_time/snapml_time))
###Output
[Snap ML] Training time (s): 14.31760
[Snap ML] Accuracy score : 0.979097
[Random Forest] Snap ML vs. sklearn speedup : 1.21x
|
webscraping_yahoo_finance_currency.ipynb | ###Markdown
Project Yahoo Finance Currency Tracker Akbar Azad
###Code
from bs4 import BeautifulSoup
import requests
import pandas as pd
import datetime
import time
import os
def extract_currency(url = 'https://finance.yahoo.com/currencies/'):
# Extract html from URL
r = requests.get(url)
data = r.text
soup = BeautifulSoup(data)
# Extract information by ID in multiples of 14 starting from 40 till 390
id_number = 40
currency_list = []
while id_number <= 390:
id_number_string = str(id_number)
currency_id = soup.find_all('tr', {'data-reactid':id_number_string})
#print(len(currency_id))
currency_info = [f for f in currency_id[0]]
currency_info_text = [f.text for f in currency_info]
currency_dict = {'Symbol': currency_info_text[0],
'Name': currency_info_text[1],
'Last_Price': currency_info_text[2],
'Change': currency_info_text[3],
'%_Change': currency_info_text[4]}
currency_list.append(currency_dict)
id_number += 14
currency_df = pd.DataFrame(currency_list)
currency_df.to_csv('C:\\Users\\65961\\OneDrive\\Desktop\\Data_Products\\' + 'webscraping_yahoo_finance_currency_' + datetime.datetime.now().strftime('%Y%m%d%H%M%S') + '.csv', index = False)
return currency_df
currency_df = extract_currency()
###Output
_____no_output_____ |
Test place.ipynb | ###Markdown
Testing PyDoA
###Code
from DoaProcessor import DoaProcessor
from matplotlib import pyplot as plt
d = DoaProcessor('./samples/sample.csv',4)
d.setDegreeForSpeaker(d.getHighestNdegrees(sep=60))
d.getPeakDegree(group='group-1',sigma=1.0)
d.plotDegreeDistribution(group='group-1')
d.getPeakDegree(1.5)
d.getHighestNdegrees(sep=50)
degree_counts = {104: 939, 315: 775, 194: 636, 300: 597, 30: 470, 45: 432, 14: 335, 120: 285, 210: 245, 284: 65, 358: 31, 88: 23, 225: 20, 241: 18, 61: 10, 178: 8, 135: 7, 82: 4, 331: 2, 151: 2, 268: 2, 337: 1}  # example degree histogram, kept under a separate name so it does not shadow the DoaProcessor instance d used below
d.getHighestNdegrees(sep=60)
d.setDegreeForSpeaker([30, 104, 194, 315])
a = d.assignUserLabel()
a.groupby('users').count()
d.getSpeakingTime(plot=True,time='min')
xlabels = []
for i in range(4):
xlabels.append('user-%d'%(i+1))
d.drawNetwork('group-1')
plt.title('Group-1 Network')
d.generateEdgeFile()
###Output
_____no_output_____ |
nbs/12_examples.glue-benchmark.ipynb | ###Markdown
GLUE Benchmark
###Code
from transformers import AutoModelForSequenceClassification
from fastai.text.all import *
from fastai.callback.wandb import *
from fasthugs.learner import TransLearner
from fasthugs.data import TransformersTextBlock, TextGetter, get_splits, PreprocCategoryBlock
from datasets import load_dataset, concatenate_datasets
###Output
_____no_output_____
###Markdown
Setup Let's define main settings for the run in one place:
###Code
ds_name = 'glue'
model_name = "distilroberta-base"
max_len = 512
bs = 32
val_bs = bs*2
lr = 3e-5
GLUE_TASKS = ["cola", "mnli", "mnli-mm", "mrpc", "qnli", "qqp", "rte", "sst2", "stsb", "wnli"]
def validate_task():
assert task in GLUE_TASKS
from fastai.metrics import MatthewsCorrCoef, F1Score, PearsonCorrCoef, SpearmanCorrCoef
glue_metrics = {
'cola':[MatthewsCorrCoef()],
'sst2':[accuracy],
'mrpc':[F1Score(), accuracy],
'stsb':[PearsonCorrCoef(), SpearmanCorrCoef()],
'qqp' :[F1Score(), accuracy],
'mnli':[accuracy],
'qnli':[accuracy],
'rte' :[accuracy],
'wnli':[accuracy],
}
###Output
_____no_output_____
###Markdown
CoLA
###Code
task = 'cola'
validate_task()
#hide-output
ds = load_dataset(ds_name, task)
ds.keys()
len(ds['train']), len(ds['validation'])
train_idx, valid_idx = get_splits(ds)
valid_idx
train_ds = concatenate_datasets([ds['train'], ds['validation']])
train_ds[0]
vocab = train_ds.features['label'].names
dblock = DataBlock(blocks = [TransformersTextBlock(pretrained_model_name=model_name), PreprocCategoryBlock(vocab)],
get_x=ItemGetter('sentence'),
get_y=ItemGetter('label'),
splitter=IndexSplitter(valid_idx))
%%time
dls = dblock.dataloaders(train_ds, bs=bs, val_bs=val_bs)
dls.show_batch(max_n=4)
import wandb
WANDB_NAME = f'{ds_name}-{task}-{model_name}'
GROUP = f'{ds_name}-{task}-{model_name}-{lr:.0e}'
NOTES = f'finetuning {model_name} with RAdam lr={lr:.0e}'
CONFIG = {}
TAGS =[model_name, ds_name, 'radam']
#hide_output
wandb.init(reinit=True, project="fasthugs", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
#hide_output
model = AutoModelForSequenceClassification.from_pretrained(model_name)
metrics = glue_metrics[task]
learn = TransLearner(dls, model, metrics=metrics).to_fp16()
cbs = [WandbCallback(log_preds=False, log_model=False), SaveModelCallback(monitor=metrics[0].name)]
learn.fit_one_cycle(4, lr, cbs=cbs)
learn.show_results()
###Output
_____no_output_____
###Markdown
SST
###Code
task = 'sst2'
validate_task()
ds = load_dataset(ds_name, task)
ds.keys()
len(ds['train']), len(ds['validation'])
train_idx, valid_idx = get_splits(ds)
valid_idx
train_ds = concatenate_datasets([ds['train'], ds['validation']])
train_ds[0]
vocab = train_ds.features['label'].names
dblock = DataBlock(blocks = [TransformersTextBlock(pretrained_model_name=model_name), PreprocCategoryBlock(vocab)],
get_x=ItemGetter('sentence'),
get_y=ItemGetter('label'),
splitter=IndexSplitter(valid_idx))
%%time
dls = dblock.dataloaders(train_ds, bs=bs, val_bs=val_bs)
dls.show_batch(max_n=4)
WANDB_NAME = f'{ds_name}-{task}-{model_name}'
GROUP = f'{ds_name}-{task}-{model_name}-{lr:.0e}'
NOTES = f'finetuning {model_name} with RAdam lr={lr:.0e}'
TAGS =[model_name, ds_name, 'radam']
#hide_output
wandb.init(reinit=True, project="fasthugs", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
###Output
_____no_output_____
###Markdown
Training
###Code
#hide_output
model = AutoModelForSequenceClassification.from_pretrained(model_name)
metrics = glue_metrics[task]
learn = TransLearner(dls, model, metrics=metrics).to_fp16()
cbs = [WandbCallback(log_preds=False, log_model=False), SaveModelCallback(monitor=metrics[0].__name__)]
learn.fit_one_cycle(4, lr, cbs=cbs)
learn.show_results()
###Output
_____no_output_____
###Markdown
Microsoft Research Paraphrase Corpus
###Code
task = 'mrpc'
validate_task()
ds = load_dataset(ds_name, task)
len(ds['train']), len(ds['validation'])
train_idx, valid_idx = get_splits(ds)
valid_idx
train_ds = concatenate_datasets([ds['train'], ds['validation']])
train_ds[0]
vocab = train_ds.features['label'].names
dblock = DataBlock(blocks = [TransformersTextBlock(pretrained_model_name=model_name), PreprocCategoryBlock(vocab)],
get_x=TextGetter('sentence1', 'sentence2'),
get_y=ItemGetter('label'),
splitter=IndexSplitter(valid_idx))
%%time
dls = dblock.dataloaders(train_ds, bs=bs, val_bs=val_bs)
dls.show_batch(max_n=4)
WANDB_NAME = f'{ds_name}-{task}-{model_name}'
GROUP = f'{ds_name}-{task}-{model_name}-{lr:.0e}'
NOTES = f'finetuning {model_name} with RAdam lr={lr:.0e}'
TAGS =[model_name, ds_name, 'radam']
#hide_output
wandb.init(reinit=True, project="fasthugs", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
###Output
_____no_output_____
###Markdown
Training
###Code
#hide_output
model = AutoModelForSequenceClassification.from_pretrained(model_name)
metrics = glue_metrics[task]
learn = TransLearner(dls, model, metrics=metrics).to_fp16()
cbs = [WandbCallback(log_preds=False, log_model=False), SaveModelCallback(monitor=metrics[0].name)]
learn.fit_one_cycle(4, lr, cbs=cbs)
learn.show_results()
###Output
_____no_output_____
###Markdown
Semantic Textual Similarity Benchmark
###Code
task = 'stsb'
validate_task()
ds = load_dataset(ds_name, task)
len(ds['train']), len(ds['validation'])
train_idx, valid_idx = get_splits(ds)
valid_idx
train_ds = concatenate_datasets([ds['train'], ds['validation']])
train_ds[0]
dblock = DataBlock(blocks = [TransformersTextBlock(pretrained_model_name=model_name), RegressionBlock(1)],
get_x=TextGetter('sentence1', 'sentence2'),
get_y=ItemGetter('label'),
splitter=IndexSplitter(valid_idx))
%%time
dls = dblock.dataloaders(train_ds, bs=bs, val_bs=val_bs)
dls.show_batch(max_n=4)
WANDB_NAME = f'{ds_name}-{task}-{model_name}'
GROUP = f'{ds_name}-{task}-{model_name}-{lr:.0e}'
NOTES = f'finetuning {model_name} with RAdam lr={lr:.0e}'
TAGS =[model_name, ds_name, 'radam']
#hide_output
wandb.init(reinit=True, project="fasthugs", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
#hide_output
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
metrics = glue_metrics[task]
learn = TransLearner(dls, model, metrics=metrics).to_fp16()
# cbs = [WandbCallback(log_preds=False, log_model=False), SaveModelCallback(monitor=metrics[0].name)]
cbs = []
learn.fit_one_cycle(4, lr, cbs=cbs)
learn.show_results()
###Output
_____no_output_____
###Markdown
Quora Question Pairs
###Code
task = 'qqp'
validate_task()
ds = load_dataset(ds_name, task)
len(ds['train']), len(ds['validation'])
train_idx, valid_idx = get_splits(ds)
valid_idx
train_ds = concatenate_datasets([ds['train'], ds['validation']])
train_ds[0]
dblock = DataBlock(blocks = [TransformersTextBlock(pretrained_model_name=model_name), PreprocCategoryBlock(vocab)],
get_x=TextGetter('question1', 'question2'),
get_y=ItemGetter('label'),
splitter=IndexSplitter(valid_idx))
%%time
dls = dblock.dataloaders(train_ds, bs=bs, val_bs=val_bs)
dls.show_batch(max_n=4)
WANDB_NAME = f'{ds_name}-{task}-{model_name}'
GROUP = f'{ds_name}-{task}-{model_name}-{lr:.0e}'
NOTES = f'finetuning {model_name} with RAdam lr={lr:.0e}'
TAGS =[model_name, ds_name, 'radam']
#hide_output
wandb.init(reinit=True, project="fasthugs", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
#hide_output
model = AutoModelForSequenceClassification.from_pretrained(model_name)
metrics = glue_metrics[task]
learn = TransLearner(dls, model, metrics=metrics).to_fp16()
# cbs = [WandbCallback(log_preds=False, log_model=False), SaveModelCallback(monitor=metrics[0].name)]
cbs = []
learn.fit_one_cycle(4, lr, cbs=cbs)
learn.show_results()
###Output
_____no_output_____
###Markdown
MultiNLI
###Code
task = 'mnli'
validate_task()
ds = load_dataset(ds_name, task)
ds.keys()
train_idx, valid_idx = get_splits(ds, valid='validation_matched')
valid_idx
train_ds = concatenate_datasets([ds['train'], ds['validation_matched']])
train_ds[0]
lens = train_ds.map(lambda s: {'len': len(s['premise'])+len(s['hypothesis'])}, remove_columns=train_ds.column_names, num_proc=4, keep_in_memory=True)
train_lens = lens.select(train_idx)['len']
valid_lens = lens.select(valid_idx)['len']
dblock = DataBlock(blocks = [TransformersTextBlock(pretrained_model_name=model_name),
CategoryBlock(vocab={0:'entailment', 1:'neutral', 2:'contradiction'})],
get_x=TextGetter('premise', 'hypothesis'),
get_y=ItemGetter('label'),
splitter=IndexSplitter(valid_idx))
%%time
dl_kwargs=[{'res':train_lens}, {'val_res':valid_lens}]
dls = dblock.dataloaders(train_ds, bs=bs, val_bs=val_bs, dl_kwargs=dl_kwargs, num_workers=4)
dls.show_batch(max_n=4)
WANDB_NAME = f'{ds_name}-{task}-{model_name}'
GROUP = f'{ds_name}-{task}-{model_name}-{lr:.0e}'
NOTES = f'finetuning {model_name} with RAdam lr={lr:.0e}'
TAGS =[model_name, ds_name, 'radam']
#hide_output
wandb.init(reinit=True, project="fasthugs", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
#hide_output
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
metrics = glue_metrics[task]
learn = TransLearner(dls, model, metrics=metrics).to_fp16()
# cbs = [WandbCallback(log_preds=False, log_model=False), SaveModelCallback(monitor=metrics[0].name)]
cbs = []
learn.fit_one_cycle(4, lr, cbs=cbs)
learn.show_results()
valid_mm_dl = dls.test_dl(ds['validation_mismatched'], with_labels=True)
learn.validate(dl=valid_mm_dl)
###Output
_____no_output_____
###Markdown
Question NLI
###Code
task = 'qnli'
validate_task()
ds = load_dataset(ds_name, task)
len(ds['train']), len(ds['validation'])
train_idx, valid_idx = get_splits(ds)
valid_idx
train_ds = concatenate_datasets([ds['train'], ds['validation']])
train_ds[0]
dblock = DataBlock(blocks = [TransformersTextBlock(pretrained_model_name=model_name), PreprocCategoryBlock(vocab)],
get_x=TextGetter('question', 'sentence'),
get_y=ItemGetter('label'),
splitter=IndexSplitter(valid_idx))
%%time
dls = dblock.dataloaders(train_ds, bs=bs, val_bs=val_bs)
dls.show_batch(max_n=4)
WANDB_NAME = f'{ds_name}-{task}-{model_name}'
GROUP = f'{ds_name}-{task}-{model_name}-{lr:.0e}'
NOTES = f'finetuning {model_name} with RAdam lr={lr:.0e}'
TAGS =[model_name, ds_name, 'radam']
#hide_output
wandb.init(reinit=True, project="fasthugs", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
#hide_output
model = AutoModelForSequenceClassification.from_pretrained(model_name)
metrics = glue_metrics[task]
learn = TransLearner(dls, model, metrics=metrics).to_fp16()
# cbs = [WandbCallback(log_preds=False, log_model=False), SaveModelCallback(monitor=metrics[0].name)]
cbs = []
learn.fit_one_cycle(4, lr, cbs=cbs)
learn.show_results()
###Output
_____no_output_____
###Markdown
Recognizing Textual Entailment
###Code
task = 'rte'
validate_task()
ds = load_dataset(ds_name, task)
len(ds['train']), len(ds['validation'])
train_idx, valid_idx = get_splits(ds)
valid_idx
train_ds = concatenate_datasets([ds['train'], ds['validation']])
train_ds[0]
dblock = DataBlock(blocks = [TransformersTextBlock(pretrained_model_name=model_name), PreprocCategoryBlock(vocab)],
get_x=TextGetter('sentence1', 'sentence2'),
get_y=ItemGetter('label'),
splitter=IndexSplitter(valid_idx))
%%time
dls = dblock.dataloaders(train_ds, bs=bs, val_bs=val_bs)
dls.show_batch(max_n=4)
WANDB_NAME = f'{ds_name}-{task}-{model_name}'
GROUP = f'{ds_name}-{task}-{model_name}-{lr:.0e}'
NOTES = f'finetuning {model_name} with RAdam lr={lr:.0e}'
TAGS =[model_name, ds_name, 'radam']
#hide_output
wandb.init(reinit=True, project="fasthugs", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
#hide_output
model = AutoModelForSequenceClassification.from_pretrained(model_name)
metrics = glue_metrics[task]
learn = TransLearner(dls, model, metrics=metrics).to_fp16()
# cbs = [WandbCallback(log_preds=False, log_model=False), SaveModelCallback(monitor=metrics[0].name)]
cbs = []
learn.fit_one_cycle(4, lr, cbs=cbs)
learn.show_results()
###Output
_____no_output_____
###Markdown
Winograd NLI
###Code
task = 'wnli'
validate_task()
ds = load_dataset(ds_name, task)
len(ds['train']), len(ds['validation'])
train_idx, valid_idx = get_splits(ds)
valid_idx
train_ds = concatenate_datasets([ds['train'], ds['validation']])
train_ds[0]
dblock = DataBlock(blocks = [TransformersTextBlock(pretrained_model_name=model_name), PreprocCategoryBlock(vocab)],
get_x=TextGetter('sentence1', 'sentence2'),
get_y=ItemGetter('label'),
splitter=IndexSplitter(valid_idx))
%%time
dls = dblock.dataloaders(train_ds, bs=bs, val_bs=val_bs)
dls.show_batch(max_n=4)
WANDB_NAME = f'{ds_name}-{task}-{model_name}'
GROUP = f'{ds_name}-{task}-{model_name}-{lr:.0e}'
NOTES = f'finetuning {model_name} with RAdam lr={lr:.0e}'
TAGS =[model_name, ds_name, 'radam']
#hide_output
wandb.init(reinit=True, project="fasthugs", entity="fastai_community",
name=WANDB_NAME, group=GROUP, notes=NOTES, tags=TAGS, config=CONFIG);
#hide_output
model = AutoModelForSequenceClassification.from_pretrained(model_name)
metrics = glue_metrics[task]
learn = TransLearner(dls, model, metrics=metrics).to_fp16()
# cbs = [WandbCallback(log_preds=False, log_model=False), SaveModelCallback(monitor=metrics[0].name)]
cbs = []
learn.fit_one_cycle(4, lr, cbs=cbs)
learn.show_results()
###Output
_____no_output_____ |
chapter2-insertion-sort.ipynb | ###Markdown
Loading Libraries

Let's import all the necessary packages first. You can safely ignore this section.
###Code
import java.util.Random;
import java.lang.*;
%maven org.knowm.xchart:xchart:3.5.2
import org.knowm.xchart.*;
###Output
_____no_output_____
###Markdown
Helper Methods

Let's code a few helper methods:
* random array generator
* array printer
* copyArray
* reverse

It is assumed that you are fully capable of coding similar methods by yourself. If you are new to Java (but have some experience with a different language), playing with these methods will help you get familiar with Java faster.
###Code
// random array generator
public int[] randomArr(int size) {
Random r = new Random();
int[] arr = new int[size];
for (int i = 0; i < size; i++) {
arr[i] = r.nextInt(1000) + 1;
}
return arr;
}
// array printer
public void printArr(int[] arr) {
for (int num : arr) {
System.out.print(num + " ");
}
System.out.println();
}
// array deep copy
public void copyArray(int[] from, int[] to) {
if (from.length != to.length) {
System.exit(0);
}
for (int i = 0; i < from.length; i++) {
to[i] = from[i];
}
}
// reverse an array
public void reverse(int[] arr) {
int s = 0, e = arr.length-1;
int temp;
while (s < e) {
temp = arr[s];
arr[s] = arr[e];
arr[e] = temp;
s++;
e--;
}
}
###Output
_____no_output_____
###Markdown
Insertion Sort Let's implement an insertion sort that only works with integers.
###Code
public void insertionSort(int[] arr) {
for (int i = 1; i < arr.length; i++) {
int key = arr[i];
int j = i - 1;
while (j >= 0 && arr[j] > key) {
arr[j+1] = arr[j];
j--;
}
arr[j+1] = key;
}
}
// sanity check
int[] arr = randomArr(5);
System.out.print("Given: ");
printArr(arr);
insertionSort(arr);
System.out.print("Sorted: ");
printArr(arr);
###Output
Given: 108 28 391 896 346
Sorted: 28 108 346 391 896
###Markdown
Let's modify insertion sort to keep track of its time complexity.
###Code
public int insertionSortTrack(int[] arr) {
int steps = 0;
for (int i = 1; i < arr.length; i++) {
int key = arr[i];
int j = i - 1;
while (j >= 0 && arr[j] > key) {
arr[j+1] = arr[j];
j--;
steps += 2;
}
arr[j+1] = key;
steps += 5;
}
return steps;
}
###Output
_____no_output_____
###Markdown
Now let's plot the time complexity of Insertion Sort, including its worst, best and average scenarios.
###Code
// predetermined size
int size = 30;
// storage of steps
int[] best = new int[size];
int[] normal = new int[size];
int[] worst = new int[size];
// populate storage
for (int i = 1; i < size; i++) {
// normal
int[] tempB = randomArr(i);
int[] tempN = new int[tempB.length];
copyArray(tempB, tempN);
// best
Arrays.sort(tempB);
// worst
int[] tempW = new int[tempB.length];
copyArray(tempB, tempW);
reverse(tempW);
best[i] = insertionSortTrack(tempB);
normal[i] = insertionSortTrack(tempN);
worst[i] = insertionSortTrack(tempW);
}
// size of input - convert int to double for plotting
double[] xData = new double[size];
for (int i = 1; i < xData.length; i++) {
xData[i] = i;
}
// best - convert int to double for plotting
double[] yDataB = new double[size];
for (int i = 0; i < yDataB.length; i++) {
yDataB[i] = best[i];
}
// normal - convert int to double for plotting
double[] yDataN = new double[size];
for (int i = 0; i < yDataN.length; i++) {
yDataN[i] = normal[i];
}
// worst - convert int to double for plotting
double[] yDataW = new double[size];
for (int i = 0; i < yDataW.length; i++) {
yDataW[i] = worst[i];
}
// plot it
XYChart chart = new XYChartBuilder().width(600).height(400).title("Insertion Sort").xAxisTitle("Input Size n").yAxisTitle("Running Time T(n)").build();
chart.addSeries("Best", xData, yDataB);
chart.addSeries("Normal", xData, yDataN);
chart.addSeries("Worst", xData, yDataW);
BitmapEncoder.getBufferedImage(chart);
###Output
_____no_output_____
###Markdown
Do It Yourself Practice - sort a linked list

Given a singly linked list with all elements as integers, sort it using insertion sort. A linked list node is defined for you. The solution that is covered in class can also be found below.

Please also indicate the best, average and worst time complexity of your own solution. You may want to lay out some basic reasoning for your answers (*maybe with a plot*).
###Code
public class ListNode {
int val;
ListNode next;
ListNode(int data) {
val = data;
}
}
public ListNode insertLSort(ListNode head) {
// your code goes here
// remove this line
    return new ListNode(-1);
}
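// Hedged sketch of one possible solution (the method and variable names here
// are illustrative only, not the official in-class solution): re-insert each
// node into an already-sorted list that hangs off a dummy head node.
public ListNode insertLSortSketch(ListNode head) {
    ListNode dummy = new ListNode(0);   // sorted part starts after this dummy
    ListNode curr = head;
    while (curr != null) {
        ListNode next = curr.next;      // remember the rest of the input list
        ListNode prev = dummy;
        // walk the sorted part until the insertion point is found
        while (prev.next != null && prev.next.val < curr.val) {
            prev = prev.next;
        }
        curr.next = prev.next;          // splice curr in after prev
        prev.next = curr;
        curr = next;
    }
    return dummy.next;                  // O(n^2) comparisons in the worst case
}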
###Output
_____no_output_____ |
2D_fiter_granite_texture_elimination.ipynb | ###Markdown
[Example of] image processing for detection of strains on granite stones
###Code
from __future__ import print_function
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
import cv2
import numpy as np
from matplotlib import pyplot as plt
plt.rcParams["figure.figsize"] = (15,10)
###Output
_____no_output_____
###Markdown
Experiment 1: Granite texture elimination using a simple 2D average filter.

Hypothesis/Assumption:
- Usually, granite texture is a high-frequency component, while strain is a low-frequency component. Hence, a low-pass filter should be able to separate them.

Objectives:
- investigate elimination/mitigation of granite texture using a 2D filter method.
- evaluate the mitigation performance of the simplest filter (average filter) against strain intensity.
- practice OpenCV and related image processing libraries in Python.

Diagram:

Related Theory
* Histogram Equalization...
* Average Filter...
* Binary Imaging...

Experimental Setup

Input image (test unit) parameters:
- 0%, 25%, 50%, 75%, 100% of strain intensity
###Code
file_name = ['grey-granite-background.jpg', 'grey-granite-kmutnb-25.jpg', 'grey-granite-kmutnb-50.jpg', 'grey-granite-kmutnb-75.jpg', 'grey-granite-kmutnb-100.jpg']
for i in range(5):
plt.subplot(2,3,i+1),plt.imshow(cv2.imread(file_name[i],2),cmap='gray'),plt.title(file_name[i])
###Output
_____no_output_____
###Markdown
Experimental parameters:
- Average Filter - size: 1..200
- Binarizer - threshold: 0..1

Procedures and Results:

1. Pre-processing: Histogram Equalization
###Code
def f(file):
img = cv2.imread(file,2)
equ = cv2.equalizeHist(img)
plt.subplot(131),plt.imshow(img,cmap='gray', vmin=0, vmax=255),plt.title(file)
plt.subplot(132),plt.imshow(equ,cmap='gray', vmin=0, vmax=255),plt.title('Histogram Equalized')
interact(f, file=file_name)
###Output
_____no_output_____
###Markdown
2. 2D Average filter
###Code
def img_filter(file, hist_eq, size=40):
img = cv2.imread(file,2)
if hist_eq:
equ = cv2.equalizeHist(img)
else:
equ = img
kernel = np.ones((size,size),np.float32)/(size*size)
dst = cv2.filter2D(equ,-1,kernel)
plt.subplot(131),plt.imshow(dst,cmap='gray', vmin=0, vmax=255),plt.title('Averaging')
interact(img_filter,file=file_name, hist_eq=True, size=(1,200,1));
###Output
_____no_output_____
###Markdown
3. Binary Imaging
###Code
def binarize(file,hist_eq,size=40,t=0.4):
img = cv2.imread(file,2)
if hist_eq:
equ = cv2.equalizeHist(img)
else:
equ = img
kernel = np.ones((size,size),np.float32)/(size*size)
dst = cv2.filter2D(equ,-1,kernel)
binary_img = (dst>=t*255)*1
plt.subplot(131),plt.imshow(img,cmap='gray', vmin=0, vmax=255),plt.title(file)
plt.subplot(132),plt.imshow(dst,cmap='gray', vmin=0, vmax=255),plt.title('Averaging')
plt.subplot(133),plt.imshow(binary_img),plt.title('Binary')
plt.show()
interact(binarize,file=file_name, hist_eq=True, size=(1,200,1), t=(0,1,0.01));
###Output
_____no_output_____ |
.ipynb_checkpoints/CandidateElimination-181070007-checkpoint.ipynb | ###Markdown
CANDIDATE ELIMINATION ALGORITHM
###Code
import numpy as np
import pandas as pd
data_link = './titanic.csv'
raw_tdf = pd.read_csv(data_link)
print(raw_tdf.head())
print(raw_tdf.columns)
#drop name column
raw_tdf=raw_tdf.drop(['Name', 'Age','Fare'] , axis=1)
raw_tdf
#mapping sex column
''' male->0 female->1'''
mapping_df_sex = {'male':0 , 'female':1}
raw_tdf['Sex']= raw_tdf['Sex'].map(mapping_df_sex)
print(raw_tdf)
training_data = raw_tdf.iloc[30:35 ,:]
print(training_data)
testing_data = raw_tdf.iloc[5:10 , :]
testing_data
testing_true_values = testing_data['Survived']
testing_true_values
testing_xis = testing_data.drop(['Survived'] , axis=True)
testing_xis
training_features = training_data.iloc[:,1:]
print(training_features)
training_target = training_data['Survived']
print(training_target)
def candidate_elimination(features , targets):
specific_h =None
for idx,val in enumerate(targets):
if val==1:
specific_h = features[idx]
break
general_h = [ ['?' for i in range(len(specific_h)) ] for j in range(len(specific_h)) ]
print('Specific_hypothesis',specific_h , end="\n\n")
print('General_hypothesis',general_h , end="\n\n")
#its training time
for idx , val in enumerate(features):
if targets[idx]==1:
            for i in range(len(specific_h)):
if specific_h[i]==val[i]:
#do nothing
pass
else :
#generalize
#find-s algo basically
specific_h[i]='?'
general_h[i][i]='?'
if targets[idx]==0 : #negative example found
for i in range(len(specific_h)):
if val[i]==specific_h[i]:
#generalize
general_h[i][i]='?'
else :
#specific update in general hypothesis
general_h[i][i]=specific_h[i]
return specific_h , general_h
def train(x,y):
features = np.array(x)
targets = np.array(y)
specific_h , general_h = candidate_elimination(features,targets)
quest_list = ['?' for i in range(len(general_h))]
indx = [i for i ,val in enumerate(general_h) if val==quest_list ]
for i in indx :
general_h.remove(quest_list)
return specific_h , general_h
specific_h , general_h = train(training_features,training_target)
print('After training \n\n\n')
print('Specific Hypothesis :\t',specific_h)
print('General Hypothesis :\t',general_h)
###Output
Specific_hypothesis [1 1 1 0]
General_hypothesis [['?', '?', '?', '?'], ['?', '?', '?', '?'], ['?', '?', '?', '?'], ['?', '?', '?', '?']]
After training
Specific Hypothesis : [1 1 1 0]
General Hypothesis : [['?', 1, '?', '?']]
###Markdown
Testing time
###Code
def cealgo_match(xi ,hypothesis) :
count=0
lhypo = len(hypothesis)
for i in range(lhypo):
if xi[i]==hypothesis[i]:
count+=1
return (count/lhypo)
def predict(testing_xi , true_labels , s_hypothesis):
score = 0
xlen = len(testing_xi)
testing_xi = np.array(testing_xi)
    true_labels = np.array(true_labels)
for i in range(xlen):
score+= cealgo_match(testing_xi[i] , s_hypothesis )
return score/xlen
accuracy = predict(testing_xis,testing_true_values , specific_h)
print('Accuracy = {}%'.format(accuracy*100))
###Output
Accuracy = 35.0%
|
tutorials/colab/Laptime_simulation.ipynb | ###Markdown
Introduction

Usually it is recommended to use lumos with the docker image it provides, but we can also run `lumos` with a conda environment. Google Colab provides free GPU and TPU VMs that one could use with a jupyter notebook style UI, and this is what we're going to use.

To set up the environment, we'll:
1) install conda on google colab using `condacolab`
2) clone the `lumos` git repo, and set up the conda environment (this will be replaced by pip install in the future)
3) run the laptime simulation example

Install Conda on Google Colab

`condacolab` simplifies the setup as much as possible, but there are some gotchas.

**⚠️ Read this before continuing!**

* The `condacolab` commands need to be run in the first Code cell!
* Once you run `condacolab.install()`, the Python kernel will be restarted. This is **normal and expected**. After that, you can continue running the cells below like normal.
* Do not use the `Run all` option. Run the `condacolab` cell _individually_ and wait for the kernel to restart. **Only then**, you can run all cells if you want.
* You can only use the `base` environment. Do not try to create new ones; instead update `base` with either:
  * `conda install `
  * `conda env update -n base -f environment.yml`
* If you want to use GPUs, make sure you are using such an instance before starting!
* If you get an error, please raise an issue [here](https://github.com/jaimergp/condacolab/issues).
###Code
!pip install -q condacolab
import condacolab
condacolab.install()
import condacolab
condacolab.check()
# Make sure cuda is available
!nvcc --version
!nvidia-smi
###Output
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Mon_Oct_12_20:09:46_PDT_2020
Cuda compilation tools, release 11.1, V11.1.105
Build cuda_11.1.TC455_06.29190527_0
Wed May 11 19:02:38 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla T4 Off | 00000000:00:04.0 Off | 0 |
| N/A 43C P8 11W / 70W | 0MiB / 15109MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
###Markdown
Setup `lumos` environment
###Code
# Clone repo and enter the right path
!git clone https://github.com/numagic/lumos.git
%cd lumos
!ls
# Colab by default uses python 3.7, which we can't change. Condacolab also only
# supports the base env, which we update.
# It may take 4-5 minutes to set up the conda environment, particularly
# with setting up the cuda toolkit for the GPU.
# !conda env update -n base -f environment.yml
# Or... we could directly install them, which seems faster than asking conda to
# solve for the environment. So we'll take this to make the colab experience
# better.
# TODO: make dependency automatic -> this would require conda as there are
# non-python dependencies
!conda install -c conda-forge cyipopt
!pip install casadi pyarrow pandas
# Install the GPU version of jax to use GPU (need correct cuda version)
!pip install jax[cuda11_cudnn82] -f https://storage.googleapis.com/jax-releases/jax_releases.html
# install lumos
!pip install numagic-lumos==0.0.2rc3
# test jax on GPU, if you see a spike in the GPU ram used -> yes, you're using GPU
# If you see warnings about GPU not found, then either the VM connected has no
# GPU or the support packages are not installed correctly
import jax
import jax.numpy as jnp
import numpy as np
a = np.random.randn(100, 100)
b = np.random.randn(100, 100)
c = jnp.dot(a, b)
print(jax.devices())
del a, b, c
###Output
[GpuDevice(id=0, process_index=0)]
###Markdown
Run laptime simulation example. Note that unfortunately Colab does not show the stdout printed to the terminal, so the user must use the menu 'Runtime' -> 'View runtime logs' to see the stdout output, such as that from IPOPT.
###Code
import logging
import sys
use_gpu_with_jax = True
is_cyclic = True
backend = "jax" # supports jax and casadi
def main():
import jax
import os
import cyipopt
cyipopt.set_logging_level(logging.WARN)
from lumos.models.composition import ModelMaker
from lumos.models.simple_vehicle_on_track import SimpleVehicleOnTrack
from lumos.models.tires.utils import create_params_from_tir_file
from lumos.simulations.laptime_simulation import LaptimeSimulation
TRACK_DIR = "data/tracks"
# Usually GPUs are designed to operate with float32 or even float16, and are
# much slower with doubles (float64). Here we stick with float64 to ensure
# we get the same 64-bit results as with the casadi backend.
if use_gpu_with_jax:
jax.config.update('jax_platform_name', 'gpu')
os.environ['JAX_PLATFORM_NAME'] = 'GPU'
jax.config.update("jax_enable_x64", True)
else:
# somehow jax doesn't see the cpu device on colab?!
jax.config.update('jax_platform_name', 'cpu')
os.environ['JAX_PLATFORM_NAME'] = 'CPU'
jax.config.update("jax_enable_x64", True)
track = "Catalunya"
track_file = os.path.join(TRACK_DIR, track + ".csv")
model_config = SimpleVehicleOnTrack.get_recursive_default_model_config()
# EXAMPLE: change tire model
# model_config.replace_subtree("vehicle.tire", ModelMaker.make_config("PerantoniTire"))
# EXAMPLE: change an aero model
# model_config.replace_subtree("vehicle.aero", ModelMaker.make_config("MLPAero"))
model = SimpleVehicleOnTrack(model_config=model_config)
params = model.get_recursive_default_params()
# Example of changing model parameters
# TODO: an issue here is that we need to instantiate the model first to get params
# but that's unavoidable because without the model, we don't even know the tree
# structure of all the submodels, let alone the default parameters.
# params.set_param("vehicle.vehicle_mass", 1700)
# Example: change tire parameters
sharpened_params = create_params_from_tir_file("data/tires/sharpened.tir")
# FIXME: here we're using private methods. We should probably add a method to change
# the parameters of an entire node in the ParameterTree
tire_params = params._get_subtree("vehicle.tire")
tire_params._data = sharpened_params
params.replace_subtree("vehicle.tire", tire_params)
final_outputs = {}
final_states = {}
ocp = LaptimeSimulation(
model_params=params,
model_config=model_config,
sim_config=LaptimeSimulation.get_sim_config(
num_intervals=2500,
hessian_approximation="exact",
is_cyclic=is_cyclic,
is_condensed=False,
backend=backend,
track=track_file,
transcription="LGR",
),
)
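    # A note on the sim_config arguments above (an interpretation of the lumos API based on the
    # argument names, not taken from the library docs): num_intervals sets the mesh resolution,
    # "LGR" presumably selects Legendre-Gauss-Radau collocation, and hessian_approximation="exact"
    # asks the NLP solver (IPOPT) to use exact second derivatives instead of a quasi-Newton update.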
x0 = ocp.get_init_guess()
print("starting the first solve!")
solution, info = ocp.solve(
x0,
max_iter=200,
print_level=5,
print_timing_statistics="yes",
print_info_string="yes",
dual_inf_tol=1e-3,
constr_viol_tol=1e-3,
)
total_time = ocp.dec_var_operator.get_var(
solution, group="states", name="time", stage=-1
)
print(info["status_msg"])
print(f"finished in {info['num_iter']} iterations")
print(f"Maneuver time {total_time:.3f} sec")
# # We can change the parameters and solve again
# ocp.modify_model_param("vehicle.vehicle_mass", 2100.0)
# print("starting the second solve!")
# solution, info = ocp.solve(
# solution,
# max_iter=200,
# print_level=5,
# print_timing_statistics="yes",
# print_info_string="yes",
# derivative_test="none",
# dual_inf_tol=1e-3,
# constr_viol_tol=1e-3,
# )
# total_time = ocp.dec_var_operator.get_var(
# solution, group="states", name="time", stage=-1
# )
# print(f"Maneuver time {total_time:.3f} sec")
# timing (note: this could be rather unstable due to the VM and resources available)
# with 2500 intervals, per model algebra NLP call (con + jac + hess)
# casadi: : 1.35sec
# JAX CPU : 3.25sec (and 34.8sec for 25000 intervals, linear scaling)
# JAX GPU K80 (64bit): 0.12sec (and 0.89sec for 25000 intervals, still sublinear scaling)
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
main()
###Output
INFO:numexpr.utils:NumExpr defaulting to 2 threads.
DEBUG:lumos.models.tires.utils:FILE_TYPE is not a numeric value and is discarded.
DEBUG:lumos.models.tires.utils:FILE_FORMAT is not a numeric value and is discarded.
DEBUG:lumos.models.tires.utils:LENGTH is not a numeric value and is discarded.
DEBUG:lumos.models.tires.utils:FORCE is not a numeric value and is discarded.
DEBUG:lumos.models.tires.utils:ANGLE is not a numeric value and is discarded.
DEBUG:lumos.models.tires.utils:MASS is not a numeric value and is discarded.
DEBUG:lumos.models.tires.utils:TIME is not a numeric value and is discarded.
DEBUG:lumos.models.tires.utils:TYRESIDE is not a numeric value and is discarded.
WARNING:lumos.simulations.laptime_simulation:states.time must be non-cyclic, automatically adding it to the list.
WARNING:lumos.simulations.laptime_simulation:inputs.track_heading must be non-cyclic, automatically adding it to the list.
INFO:lumos.models.tracks:left distance violation: 0
INFO:lumos.models.tracks:right distance violation: 0
WARNING:absl:Finished tracing + transforming apply_and_forward for jit in 0.7687845230102539 sec
WARNING:absl:Compiling apply_and_forward (140345740290672) for 95 args.
WARNING:absl:Finished XLA compilation of apply_and_forward in 1.1458117961883545 sec
INFO:lumos.optimal_control.scaled_mesh_ocp:Triggering jax JIT
INFO:lumos.optimal_control.nlp:time.objective: 0.000032
INFO:lumos.optimal_control.nlp:time.gradient: 0.000249
INFO:lumos.optimal_control.nlp:time.hessian: 0.000008
INFO:lumos.optimal_control.nlp:inputs_penalty.objective: 0.000692
INFO:lumos.optimal_control.nlp:inputs_penalty.gradient: 0.006202
INFO:lumos.optimal_control.nlp:inputs_penalty.hessian: 0.000246
|
Code/Historical Attempts/Simulator_RG_PC-Copy1.ipynb | ###Markdown
Influence Simulation for Poisson Random Graph. We consider the following undirected graph for simulating influence networks and cascades: erdos_renyi_graph (Poisson/Binomial degree distribution)
###Code
from SimulationHelper2 import *
from tqdm import tqdm
import numpy as np
import os
import matplotlib.pyplot as plt
%matplotlib inline
## Multiprocessing Package - Speed up simulation
from multiprocessing import cpu_count
from dask.distributed import Client, progress
import dask
client = Client(threads_per_worker=1)
client
###Output
P:\Programs\anaconda3\lib\site-packages\distributed\node.py:244: UserWarning: Port 8787 is already in use.
Perhaps you already have a cluster running?
Hosting the HTTP server on port 50035 instead
http_address["port"], self.http_server.port
###Markdown
Simulation of Influentials in a Poisson Random Graph. The following code is for simulating and measuring the cascade of influentials.
###Code
######################################################################
############################# Parameters #############################
######################################################################
N = 100
phi = 0.18
max_n_avg = 36
increment = 0.25
num_simulations = 50
groupsize = 4
n_avg = np.arange(1, max_n_avg, increment)
p = [avg/(N-1) for avg in n_avg]
n = len(p)
%%time
pool = []
x = []
for i in tqdm(range(num_simulations)):
for j in range(n):
pool.append(dask.delayed(run_sim_groups)(N, p[j], phi, groupsize))
results = dask.compute(pool)
%%time
for phi in [0.05, 0.10, 0.2, 0.25, 0.5]:
pool = []
x = []
for i in tqdm(range(num_simulations)):
for j in range(n):
pool.append(dask.delayed(run_simulation_RG_PC)(N, p[j], phi, groupsize))
results = dask.compute(pool)
tmp = np.array(results[0])
file_dir = "./Results"
file_name = "RG_N_PC{}phi{}avg{}sim{}inc{}size{}.npy".format(N,int(phi*100),max_n_avg,num_simulations, increment,groupsize)
file_path = os.path.join(file_dir, file_name)
if not os.path.exists(file_dir):
os.makedirs(file_dir)
np.save(file_path, tmp)
###Output
_____no_output_____
###Markdown
Post Processing. This code reformats the simulation output for plotting.
###Code
tmp = np.array(results[0])
dims = (num_simulations, n)
names = ["0-5", "5-10", "10-15", "15-20", "0-10", "0-15", "0-20", "Normal", "95-100"]
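# Interpretation of the raw results array: columns 0-8 of `tmp` hold the cascade sizes for the
# nine seeding groups listed in `names`, and columns 9-17 hold the corresponding average
# influence times; the reshapes below rely on that column ordering.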
s_05, s_10, s_15, s_20 = np.reshape(tmp[:,0], dims), np.reshape(tmp[:,1], dims), np.reshape(tmp[:,2], dims), np.reshape(tmp[:,3], dims)
s_010, s_015, s_020, s_n, s_95 = np.reshape(tmp[:,4], dims), np.reshape(tmp[:,5], dims), np.reshape(tmp[:,6], dims), np.reshape(tmp[:,7], dims), np.reshape(tmp[:,8], dims)
t_05, t_10, t_15, t_20 = np.reshape(tmp[:,9], dims), np.reshape(tmp[:,10], dims), np.reshape(tmp[:,11], dims), np.reshape(tmp[:,12], dims)
t_010, t_015, t_020, t_n, t_95 = np.reshape(tmp[:,13], dims), np.reshape(tmp[:,14], dims), np.reshape(tmp[:,15], dims), np.reshape(tmp[:,16], dims), np.reshape(tmp[:,17], dims)
# Number of Nodes of Network Influenced
S_05, S_10, S_15, S_20 = np.apply_along_axis(np.mean, 0, s_05), np.apply_along_axis(np.mean, 0, s_10), np.apply_along_axis(np.mean, 0, s_15), np.apply_along_axis(np.mean, 0, s_20)
S_010, S_015, S_020, S_n, S_95 = np.apply_along_axis(np.mean, 0, s_010), np.apply_along_axis(np.mean, 0, s_015), np.apply_along_axis(np.mean, 0, s_020), np.apply_along_axis(np.mean, 0, s_n), np.apply_along_axis(np.mean, 0, s_95)
# Proportion of Network Influenced
N_05, N_10, N_15, N_20 = [x/N for x in S_05], [x/N for x in S_10], [x/N for x in S_15], [x/N for x in S_20]
N_010, N_015, N_020, N_n, N_95 = [x/N for x in S_010], [x/N for x in S_015], [x/N for x in S_020], [x/N for x in S_n], [x/N for x in S_95]
# Averaged Time of Influenced Nodes
T_05, T_10, T_15, T_20 = np.apply_along_axis(np.mean, 0, t_05), np.apply_along_axis(np.mean, 0, t_10), np.apply_along_axis(np.mean, 0, t_15), np.apply_along_axis(np.mean, 0, t_20)
T_010, T_015, T_020, T_n, T_95 = np.apply_along_axis(np.mean, 0, t_010), np.apply_along_axis(np.mean, 0, t_015), np.apply_along_axis(np.mean, 0, t_020), np.apply_along_axis(np.mean, 0, t_n), np.apply_along_axis(np.mean, 0, t_95)
np.sum(np.isnan(tmp))
tmp.shape
###Output
_____no_output_____
###Markdown
Plotting. The plots below show: - Comparison of the (average) proportion of nodes influenced by influential/normal nodes as the average degree changes - Comparison of the (average) number of nodes influenced by influential/normal nodes as the average degree changes - Comparison of the (average) time of node influence by influential/normal nodes as the average degree changes
###Code
plt.plot(n_avg, N_05)
plt.plot(n_avg, N_10)
plt.plot(n_avg, N_15)
plt.plot(n_avg, N_20)
plt.plot(n_avg, N_010)
plt.plot(n_avg, N_015)
plt.plot(n_avg, N_020)
plt.plot(n_avg, N_n)
plt.plot(n_avg, N_95)
plt.ylabel("Average Number Influenced")
plt.xlabel("Average Degree")
plt.title("Number of Nodes Influenced")
plt.legend(names)
plt.plot(n_avg, S_05)
plt.plot(n_avg, S_10)
plt.plot(n_avg, S_15)
plt.plot(n_avg, S_20)
plt.plot(n_avg, S_010)
plt.plot(n_avg, S_015)
plt.plot(n_avg, S_020)
plt.plot(n_avg, S_n)
plt.plot(n_avg, S_95)
plt.ylabel("Average Number Influenced")
plt.xlabel("Average Degree")
plt.title("Percentage of Network Influenced")
plt.legend(names)
plt.plot(n_avg, T_05)
plt.plot(n_avg, T_10)
plt.plot(n_avg, T_15)
plt.plot(n_avg, T_20)
plt.plot(n_avg, T_010)
plt.plot(n_avg, T_015)
plt.plot(n_avg, T_020)
plt.plot(n_avg, T_n)
plt.plot(n_avg, T_95)
plt.ylabel("Average Time of Influenced")
plt.xlabel("Average Degree")
plt.title("Comparison of Average Influenced Time against Average Degree")
plt.legend(names)
###Output
_____no_output_____
###Markdown
Storing Simulation Results. Raw simulation results are stored as .npy files.
###Code
file_dir = "./Results"
file_name = "RG_N_PC{}phi{}avg{}sim{}inc{}size{}.npy".format(N,int(phi*100),max_n_avg,num_simulations, increment,groupsize)
file_path = os.path.join(file_dir, file_name)
if not os.path.exists(file_dir):
os.makedirs(file_dir)
np.save(file_path, tmp)
tmp = np.load(file_path)
file_dir = "./Results"
file_name = "RG_N_PC{}phi{}avg{}sim{}inc{}size{}.npy".format(N,int(phi*100),max_n_avg,num_simulations, increment,groupsize)
file_path = os.path.join(file_dir, file_name)
###Output
_____no_output_____ |
Jupyter-Create-SlideShow.ipynb | ###Markdown
Create a Jupyter Slideshow. In this notebook I will walk you through creating your own Jupyter slideshow. 1. Press the View menu. 2. Move to Cell Toolbar. 3. Select Slideshow. 4. At the top right corner you will see a Slide Type. Let's play with it. My Slideshow, by Laura, July 28 2020. About this slideshow: In this slideshow we will learn about sine and cosine waves using Python.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0, 10, 1000)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), '-b', label='Sine')
ax.plot(x, np.cos(x), '--r', label='Cosine')
ax.axis('equal')
ax.grid(True)
leg = ax.legend();
###Output
_____no_output_____
###Markdown
I don't want to show this slide....
###Code
# Nor this one
print("My notes")
x,y = 1,2
print(x + y)
###Output
My notes
3
|
Notebooks/ORF_MLP_108.ipynb | ###Markdown
ORF recognition by MLP. So far, no MLP has exceeded 50% accuracy on any ORF problem. Here, try a variety of things. RNA length 16, CDS length 8. No luck with 32 neurons or 64 neurons. Instead of sigmoid, tried tanh and relu. Instead of 4 layers, tried 1. RNA length 12, CDS length 6. 2 layers of 32 neurons, sigmoid. Even 512 neurons, rectangular or triangular, didn't work. Move INPUT_SHAPE from compile() to first layer parameter. This works: all PC='AC'*, all NC='GT'*. 100% accurate on one epoch with 2 layers of 12 neurons. Nothing works! Now suspect the data preparation is incorrect. Try trivializing the problem by always adding ATG or TAG.
###Code
import time
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
PC_SEQUENCES=32000 # how many protein-coding sequences
NC_SEQUENCES=32000 # how many non-coding sequences
PC_TESTS=1000
NC_TESTS=1000
RNA_LEN=32 # how long is each sequence
CDS_LEN=16 # min CDS len to be coding
ALPHABET=4 # how many different letters are possible
INPUT_SHAPE_2D = (RNA_LEN,ALPHABET,1) # Conv2D needs 3D inputs
INPUT_SHAPE = (None,RNA_LEN,ALPHABET) # MLP requires batch size None
FILTERS = 16 # how many different patterns the model looks for
CELLS = 16
NEURONS = 16
DROP_RATE = 0.4
WIDTH = 3 # how wide each pattern is, in bases
STRIDE_2D = (1,1) # For Conv2D how far in each direction
STRIDE = 1 # For Conv1D, how far between pattern matches, in bases
EPOCHS=50 # how many times to train on all the data
SPLITS=3 # SPLITS=3 means train on 2/3 and validate on 1/3
FOLDS=3 # train the model this many times (range 1 to SPLITS)
import sys
IN_COLAB = False
try:
from google.colab import drive
IN_COLAB = True
except:
pass
if IN_COLAB:
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
#drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
from RNA_describe import Random_Base_Oracle
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')
with open('RNA_prep.py', 'w') as f:
f.write(r.text)
from RNA_prep import prepare_inputs_len_x_alphabet
else:
print("CoLab not working. On my PC, use relative paths.")
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_describe import ORF_counter,Random_Base_Oracle
from SimTools.RNA_prep import prepare_inputs_len_x_alphabet
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
from os import listdir
import csv
from zipfile import ZipFile
import numpy as np
import pandas as pd
from scipy import stats # mode
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Conv1D,Conv2D
from keras.layers import GRU,LSTM
from keras.layers import Flatten,TimeDistributed
from keras.layers import MaxPooling1D,MaxPooling2D
from keras.losses import BinaryCrossentropy
# tf.keras.losses.BinaryCrossentropy
import matplotlib.pyplot as plt
from matplotlib import colors
mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1
np.set_printoptions(precision=2)
rbo=Random_Base_Oracle(RNA_LEN,True)
pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,10) # just testing
pc_all,nc_all = rbo.get_partitioned_sequences(CDS_LEN,PC_SEQUENCES+PC_TESTS)
print("Use",len(pc_all),"PC seqs")
print("Use",len(nc_all),"NC seqs")
# Make the problem super easy!
def trivialize_sequences(list_of_seq,option):
num_seq = len(list_of_seq)
for i in range(0,num_seq):
seq = list_of_seq[i]
if option==0:
list_of_seq[i] = 'TTTTTT'+seq[6:]
else:
list_of_seq[i] = 'AAAAAA'+seq[6:]
if False:
print("Trivialize...")
trivialize_sequences(pc_all,1)
print("Trivial PC:",pc_all[:5])
print("Trivial PC:",pc_all[-5:])
trivialize_sequences(nc_all,0)
print("Trivial NC:",nc_all[:5])
print("Trivial NC:",nc_all[-5:])
# Describe the sequences
def describe_sequences(list_of_seq):
oc = ORF_counter()
num_seq = len(list_of_seq)
rna_lens = np.zeros(num_seq)
orf_lens = np.zeros(num_seq)
for i in range(0,num_seq):
rna_len = len(list_of_seq[i])
rna_lens[i] = rna_len
oc.set_sequence(list_of_seq[i])
orf_len = oc.get_max_orf_len()
orf_lens[i] = orf_len
print ("Average RNA length:",rna_lens.mean())
print ("Average ORF length:",orf_lens.mean())
print("Simulated sequences prior to adjustment:")
print("PC seqs")
describe_sequences(pc_all)
print("NC seqs")
describe_sequences(nc_all)
pc_train=pc_all[:PC_SEQUENCES]
nc_train=nc_all[:NC_SEQUENCES]
pc_test=pc_all[PC_SEQUENCES:]
nc_test=nc_all[NC_SEQUENCES:]
# Use code from our SimTools library.
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles
print("Data ready.")
print(len(X),"sequences total")
print(len(X[0]),"bases/sequence")
print(len(X[0][0]),"dimensions/base")
#print(X[0])
print(type(X[0]))
print(X[0].shape)
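# Each sequence is one-hot encoded as an RNA_LEN x ALPHABET matrix, so X has shape
# (num_sequences, RNA_LEN, ALPHABET); the MLP below flattens each matrix into a vector of
# RNA_LEN*ALPHABET values before the Dense layers.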
def make_DNN():
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
dnn.add(Flatten())
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32,
input_shape=INPUT_SHAPE ))
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
#dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=np.float32))
dnn.compile(optimizer='adam',
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
#dnn.build()
#ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE)
#bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
#model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"])
return dnn
model = make_DNN()
print(model.summary())
from keras.callbacks import ModelCheckpoint
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS) # this does not shuffle
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
do_cross_validation(X,y)
from keras.models import load_model
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
###Output
_____no_output_____ |
pyha/cores/fft/bitreversal_fftshift_avgpool/bitreversal_fftshift_avgpool.ipynb | ###Markdown
Table of Contents Info
###Code
print(inspect.getdoc(BitreversalFFTshiftAVGPool))
print('\n\nMain interface\n' + inspect.getdoc(BitreversalFFTshiftAVGPool.main))
###Output
Bitreversal, FFTShift and AveragePooling
----------------------------------------
Fixes bitreversal, performs fftshift and applies average pooling, implemented with 2 BRAM blocks.
Internal accumulator may overflow, in which case it is saturated.
Args:
fft_size:
avg_freq_axis: Pooling in frequnecy domain, decimates the data rate and has major impact on resource usage.
Large decimations use LESS memory.
Example, if input is 1024 point fft and avg_freq_axis is 2, then output is 512 points.
avg_time_axis: Pooling in time domain, decimates the data rate.
TODO: this core should be unsigned...
Main interface
Args:
input (DataValid): 36 bits, type not restricted
Returns:
DataValid: Output type shifted right by the bit-growth.
###Markdown
Examples Fix bitreversal and fftshift
###Code
file = get_data_file('limem_ph3weak_40m')
input_signal = load_complex64_file(file)
fft_size = 512
_, _, spectro_out = signal.spectrogram(input_signal, 1, nperseg=fft_size, return_onesided=False, detrend=False,
noverlap=0, window='hann')
spectro_out = toggle_bit_reverse(spectro_out.T) # apply hardware impairments
spectro_out = spectro_out.flatten()
dut = BitreversalFFTshiftAVGPool(fft_size, avg_freq_axis=2, avg_time_axis=1)
sims = simulate(dut, spectro_out, trace=True, pipeline_flush='auto') # run simulations and gather trace
plot_trace()
plot_imshow(sims, rows=dut.FFT_SIZE // dut.AVG_FREQ_AXIS, transpose=True)
###Output
INFO:sim:Tracing is enabled, running "MODEL" and "HARDWARE" simulations
INFO:sim:Running "MODEL" simulation...
INFO:sim:OK!
INFO:sim:Running "HARDWARE" simulation...
###Markdown
Pool `avg_freq_axis = 8` and `avg_time_axis=16`
###Code
avg_time_axis = 16
dut = BitreversalFFTshiftAVGPool(fft_size, avg_freq_axis=8, avg_time_axis=avg_time_axis)
sims = simulate(dut, spectro_out, trace=True, pipeline_flush='auto') # run simulations and gather trace
plot_trace()
plot_imshow(sims, rows=dut.FFT_SIZE // dut.AVG_FREQ_AXIS, transpose=True)
###Output
INFO:sim:Tracing is enabled, running "MODEL" and "HARDWARE" simulations
INFO:sim:Running "MODEL" simulation...
INFO:sim:OK!
INFO:sim:Running "HARDWARE" simulation...
###Markdown
Conversion to VHDL and RTL/NETLIST simulations
###Code
# Pyha supports running 'RTL' (using GHDL) and 'NETLIST' (netlist after quartus_map) level simulations.
avg_time_axis = 4
fft_size = 512
input_signal = np.random.normal(size=fft_size * avg_time_axis) * 0.025
dut = BitreversalFFTshiftAVGPool(fft_size=fft_size, avg_freq_axis=8, avg_time_axis=avg_time_axis)
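# With fft_size=512, avg_freq_axis=8 and avg_time_axis=4 the core should emit 512/8 = 64
# frequency bins per output spectrum and one output spectrum for every 4 input spectra,
# following the pooling description in the docstring printed earlier in this notebook.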
sims = simulate(dut, input_signal, pipeline_flush='auto',
simulations=['MODEL', 'HARDWARE', 'RTL', 'NETLIST'],
conversion_path='/tmp/pyha_output')
assert hardware_sims_equal(sims)
###Output
INFO:sim:Running "MODEL" simulation...
INFO:sim:OK!
INFO:sim:Simulaton needs to support conversion to VHDL -> major slowdown
INFO:sim:Running "HARDWARE" simulation...
###Markdown
Synthesis: resource usage and Fmax
###Code
quartus = get_simulator_quartus() # reuse the work that was done during the simulation
print(quartus.get_resource_usage('fit'))
print(quartus.get_fmax())
###Output
INFO:synth:Running quartus_fit quartus_project...
INFO:synth:Running quartus_sta -t script.tcl...
|
gym_cartpole_v0/dqn/CartPole_v0_DQN.ipynb | ###Markdown
CartPole v0 Deep Q-Network. Solving the CartPole v0 problem using a DQN (Q-learning with a neural network and experience replay) in TensorFlow. See the [online example](https://gym.openai.com/evaluations/eval_CABwYtsAQ6C9F19XwGz2w) at Gym OpenAI.
###Code
import numpy as np
import random
import gym
from gym import wrappers
import matplotlib.pyplot as plt
import tensorflow as tf
env = gym.make('CartPole-v0')
env = wrappers.Monitor(env, 'CartPole-record',force=True)
###Output
[2017-08-26 17:31:51,543] Making new env: CartPole-v0
[2017-08-26 17:31:51,562] Clearing 24 monitor files from previous run (because force=True was provided)
###Markdown
Config
###Code
# Env variables
nb_actions = 2
# Training tuning
train_batch_size = 10
e = 0.5
e_decay = 0.99
y = .9
lr = 1.0
# Training loop
nb_max_episodes = 3000
episode_count = 0
total_steps = 0
update_freq = 100
test_freq = 100
log_freq = 100
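# `e` is the epsilon-greedy exploration probability (decayed by `e_decay` after each episode),
# `y` is the discount factor, and `lr` scales the temporal-difference update applied to the
# target Q-values in the training loop below.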
###Output
_____no_output_____
###Markdown
Training Model
###Code
n_input = 4
n_hidden_1 = 128
n_hidden_2 = 128
tf.reset_default_graph()
weights = {
'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_1, nb_actions]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([nb_actions]))
}
x = tf.placeholder(shape=[None,4],dtype=tf.float32)
layer_1 = tf.nn.tanh(tf.add(tf.matmul(x, weights['h1']), biases['b1']))
layer_2 = tf.nn.tanh(tf.add(tf.matmul(layer_1, weights['h2']), biases['b2']))
Qout = tf.add(tf.matmul(layer_2, weights['out']), biases['out'])
nextQ = tf.placeholder(shape=[None,nb_actions],dtype=tf.float32)
loss = tf.reduce_sum(tf.square(nextQ - Qout))
trainer = tf.train.AdamOptimizer(learning_rate=0.0001)
update_model = trainer.minimize(loss)
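# The network maps a 4-dimensional CartPole observation to one Q-value per action; `nextQ`
# carries the target Q-values and the squared error between `Qout` and `nextQ` is minimised
# with Adam.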
###Output
_____no_output_____
###Markdown
Experience Replay
###Code
class ExperienceBuffer():
def __init__(self, buffer_size = 10000):
self.buffer = []
self.buffer_size = buffer_size
def add(self,experience):
if len(self.buffer) + len(experience) >= self.buffer_size:
self.buffer = self.buffer[:1]
self.buffer.append(experience)
def sample(self,size):
return random.sample(self.buffer, size)
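# Minimal usage sketch of the replay buffer (the names below are illustrative only and are not
# reused elsewhere in this notebook):
demo_buffer = ExperienceBuffer(buffer_size=100)
demo_buffer.add((np.zeros(4), np.zeros(nb_actions))) # one (state, target Q-values) experience
demo_sample = demo_buffer.sample(1) # returns a list of (state, Q) tuples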
###Output
_____no_output_____
###Markdown
Test accuracy
###Code
def test_accurracy():
test_batch_size = 100
c = 0
nb_success = 0.0
max_time, min_time = 0.0, float('inf')
while c < test_batch_size:
j = 0
is_gameover = False
state = env.reset()
while not is_gameover:
j += 1
Q1 = sess.run(Qout, feed_dict={x: [state]})[0]
action = np.argmax(Q1)
next_state, reward, is_gameover, _ = env.step(action)
# env.render()
state = next_state
nb_success += j
max_time = max(j, max_time)
min_time = min(j, min_time)
c += 1
return nb_success / c, min_time, max_time
###Output
_____no_output_____
###Markdown
Training Loop
###Code
mean_times = []
rewards = []
minus_hist = []
plus_hist = []
init = tf.global_variables_initializer()
replay_buffer = ExperienceBuffer()
finished = False
with tf.Session() as sess:
sess.run(init)
while not finished:
state = env.reset()
is_gameover = False
reward_tot = 0
episode_step = 0
while not is_gameover:
episode_step += 1
# Eval current state
Q1 = sess.run(Qout, feed_dict={x: [state]})[0]
action_minus, action_plus = Q1
minus_hist.append(action_minus)
plus_hist.append(action_plus)
if random.random() < e:
action = random.randint(0, nb_actions-1)
else:
action = np.argmax(Q1)
# Play best move with chance of random
next_state, reward, is_gameover, _ = env.step(action)
if is_gameover:
reward = -100
# Eval next state
Q2 = sess.run(Qout, feed_dict={x: [next_state]})[0]
# Update first state evaluation
Q1[action] += lr * (reward + y*np.max(Q2) - Q1[action])
sess.run(update_model,feed_dict={x: [state], nextQ:[Q1]})
# Register experience
replay_buffer.add((state,Q1))
reward_tot += reward
total_steps += 1
state = next_state
if is_gameover:
break
e *= e_decay
if total_steps % (update_freq) == 0 and len(replay_buffer.buffer) > train_batch_size:
train_batch = replay_buffer.sample(train_batch_size)
train_x, train_Q = list(zip(*train_batch))
#Update the network with our target values.
sess.run(update_model, feed_dict={
x: train_x,
nextQ: train_Q
})
if episode_count % test_freq == 0 and episode_count != 0:
mean_time, min_time, max_time = test_accurracy()
mean_times.append(mean_time)
if episode_count % log_freq == 0:
print('step: %s, episode: %s, mean time: %s. min: %s. max: %s. random move probability: %s'
% (total_steps, episode_count, mean_time, min_time, max_time, e))
if mean_time > 196:
finished = True
episode_count += 1
rewards.append(reward_tot)
env.close()
###Output
[2017-08-26 17:32:01,306] Starting new video recorder writing to D:\workspace\rl-dojo\gym_cartpole_v0\qlearning_nn\CartPole-record\openaigym.video.0.41032.video000000.mp4
[2017-08-26 17:32:02,263] Starting new video recorder writing to D:\workspace\rl-dojo\gym_cartpole_v0\qlearning_nn\CartPole-record\openaigym.video.0.41032.video000001.mp4
[2017-08-26 17:32:03,709] Starting new video recorder writing to D:\workspace\rl-dojo\gym_cartpole_v0\qlearning_nn\CartPole-record\openaigym.video.0.41032.video000008.mp4
[2017-08-26 17:32:06,172] Starting new video recorder writing to D:\workspace\rl-dojo\gym_cartpole_v0\qlearning_nn\CartPole-record\openaigym.video.0.41032.video000027.mp4
[2017-08-26 17:32:09,057] Starting new video recorder writing to D:\workspace\rl-dojo\gym_cartpole_v0\qlearning_nn\CartPole-record\openaigym.video.0.41032.video000064.mp4
[2017-08-26 17:32:11,735] Starting new video recorder writing to D:\workspace\rl-dojo\gym_cartpole_v0\qlearning_nn\CartPole-record\openaigym.video.0.41032.video000125.mp4
###Markdown
Metrics
###Code
import matplotlib.pyplot as plt
plt.plot(mean_times)
plt.xlabel('steps')
plt.ylabel('mean time')
plt.show()
f, (ax1, ax2) = plt.subplots(1, 2, sharey=True)
ax1.plot(minus_hist)
ax1.set_title('Minus QValues')
ax2.plot(plus_hist)
ax2.set_title('Plus QValues')
plt.show()
###Output
_____no_output_____
###Markdown
Upload results on Gym OpenAI
###Code
gym.upload('CartPole-record', api_key='your_key')
###Output
[2017-08-26 17:33:59,265] [CartPole-v0] Uploading 801 episodes of training data
[2017-08-26 17:34:02,189] [CartPole-v0] Uploading videos of 10 training episodes (75956 bytes)
[2017-08-26 17:34:02,793] [CartPole-v0] Creating evaluation object from CartPole-record with learning curve and training video
[2017-08-26 17:34:03,145]
****************************************************
You successfully uploaded your evaluation on CartPole-v0 to
OpenAI Gym! You can find it at:
https://gym.openai.com/evaluations/eval_yii4hTCcRky6P5E2MIWihw
****************************************************
|
OOP/Assignment-Quiz/OOP-Python-Assignments-1.ipynb | ###Markdown
Python Assignment Program -1
###Code
''' The example below shows how different variables are assigned.
The same variables are printed '''
#!/usr/bin/python
x = 30 # a whole number
f = 3.1415926 # a floating point number
myName = "OOP using Python" # a string variable - Camel casing
print(x)
print(f)
print(myName)
combination = myName + " " + myName
print(combination)
sum = f + f
print(sum)
###Output
_____no_output_____
###Markdown
Program -2
###Code
# Problem 2 Python strings
x = "Welcome to OOP Course"
print(x)
print(x[0]) # indexing starts from 0
print(x[1])
s = x[0:3] # substring
print(s)
t = x[8:10] # indexing
print(t)
s = "My lucky number is %d, what is yours?" % 7 # Combine numbers and text
print(s)
s = "My lucky number is " + str(7) + ", what is yours?" # alternative method of combining numbers and text
print(s)
###Output
_____no_output_____
###Markdown
Program -3
###Code
# Problem 3 - Use string functions
s = "OOP Course is offered in IIIT DWD"
s = s.replace("OOP","CS")
index = s.find("IIIT DWD")
if "IIIT DWD" in s:
print("Institute found")
print(s,"\n", index)
###Output
_____no_output_____
###Markdown
Program 4
###Code
# Problem 4
# define strings
firstName = "OOP"
lastName = "Course"
words = ["How","are","you","learning ", "OOP " , "Course","?"]
sentence = ' '.join(words)
print(sentence)
###Output
_____no_output_____
###Markdown
Program 5
###Code
# Problem 5
s = "OOP course is easy to Learn"
words = s.split() # splits sentence into list of words
print(words, len(s))
x = list(words[0]) # splits word in character
print(x)
###Output
_____no_output_____
###Markdown
Program 6
###Code
# problem 6
import random
# Create a random floating point number and print it.
print(random.random())
# pick a random whole number between 0 and 10.
print(random.randrange(0,15))
# pick a random floating point number between 0 and 10.
print(random.uniform(0,10))
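# One more illustrative call (not part of the original assignment):
# random.choice(['red', 'green', 'blue']) returns a random element from a list.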
###Output
_____no_output_____
###Markdown
Program 7
###Code
# problem 7
#!/usr/bin/env python3
name = input('What is your name? ')
print('Welcome to OOP Course ' + name)
qualification = input('What is your Qualification? ')
print(name + "'s" + ' 90Educational Qualification is ' + qualification)
phoneNum = input('Give me your phone number? ')
print('Phone number is : ' + str(phoneNum))
###Output
_____no_output_____
###Markdown
Program 8
###Code
# problem 8 interactive program
#!/usr/bin/env python3
gender = input("Gender? ")
gender = gender.lower()
if gender == "male":
print(" cat is male")
elif gender == "female":
print("cat is female")
else:
print("Invalid input")
age = int(input("Age of your cat? "))
if age < 5:
print("cat age is", age)
else:
print("Your cat is adult.")
x = 3
if x == 2:
print('two')
elif x == 3:
print('three')
elif x == 4:
print('four')
else:
print('something else')
###Output
_____no_output_____
###Markdown
Program 9 - Loops
###Code
# Problem 9
#!/usr/bin/env python3
# The first loop will repeat the print function for every item of the list.
# The second loop will do a calculation on every element of the list num and print the result.
city = ['Tokyo','New York','Toronto','Hong Kong']
print('Cities loop:')
for x in city:
print('City: ' + x)
# problem 10
x = 3
while x < 10:
print(x)
x = x + 1
###Output
_____no_output_____
###Markdown
Program 10
###Code
# problem 10 - functions without parameters
def currentYear():
print('2018')
currentYear()
def f(x,y): # with parameters
return x*y
print(f(3,4))
result = f(3,4) # return to a variable
print(result)
print('\n') # newline
num = [1,2,3,4,5,6,7,8,9]
print('x^2 loop:')
for x in num:
y = x * x
print(str(x) + '*' + str(x) + '=' + str(y))
# Python lists
pythonList = [ "New York", "Los Angles", "Boston", "Denver" ]
print(pythonList) # prints all elements
print(pythonList[0]) # print first element
print(pythonList[-1])
###Output
_____no_output_____ |
examples/WH_05 Warehouse key variables exploration.ipynb | ###Markdown
Warehouse key variables exploration. *This notebook illustrates how to assess the inventory position of a storage system.* Use the virtual environment logproj.yml to run this notebook. Alessandro Tufano 2020. Import packages
###Code
# %% append functions path
import sys; sys.path.insert(0, '..') #add the above level with the package
import pandas as pd
import numpy as np
from IPython.display import display, HTML #display dataframe
#import utilities
from logproj.utilities import creaCartella
###Output
_____no_output_____
###Markdown
Set data fields
###Code
string_casestudy = 'TOY_DATA'
###Output
_____no_output_____
###Markdown
Import data
###Code
# %% import data
from logproj.data_generator_warehouse import generateWarehouseData
D_locations, D_SKUs, D_movements, D_inventory = generateWarehouseData()
#print locations dataframe
display(HTML(D_locations.head().to_html()))
#print SKUs master file dataframe
display(HTML(D_SKUs.head().to_html()))
#print SKUs master file dataframe
display(HTML(D_movements.head().to_html()))
#print SKUs master file dataframe
display(HTML(D_inventory.head().to_html()))
###Output
_____no_output_____
###Markdown
Create folder hierarchy
###Code
# %% create folder hierarchy
pathResults = 'C:\\Users\\aletu\\desktop'
_, root_path = creaCartella(pathResults,f"{string_casestudy}_results")
_, path_results = creaCartella(root_path,f"P8_warehouseAssessment")
###Output
Cartella TOY_DATA_results già esistente
Cartella P8_warehouseAssessment già esistente
###Markdown
Define a learning table for each picking list
###Code
# %% STUDY CORRELATIONS
_, path_current = creaCartella(path_results,f"Correlations")
from logproj.P8_performanceAssessment.wh_explore_metrics import buildLearningTablePickList
# extract learning table for each picking list
D_learning=buildLearningTablePickList(D_movements)
D_learning.to_excel(path_current+"\\learning table.xlsx")
#print the learning table
display(HTML(D_learning.head().to_html()))
###Output
_____no_output_____
###Markdown
Plot the histograms of the key variables
###Code
# %% histograms
from logproj.P8_performanceAssessment.wh_explore_metrics import histogramKeyVars
output_figures = histogramKeyVars(D_learning)
for key in output_figures.keys():
output_figures[key].savefig(path_current+f"\\{key}.png")
###Output
_____no_output_____
###Markdown
Plot the correlation matrices
###Code
from logproj.P8_performanceAssessment.wh_explore_metrics import exploreKeyVars
output_figures = exploreKeyVars(D_learning)
for key in output_figures.keys():
output_figures[key].savefig(path_current+f"\\{key}.png")
###Output
_____no_output_____ |
Notebooks/MethylTools/Probe_Annotations.ipynb | ###Markdown
Read in Probe Annotations * These are parsed out in the [Compile_Probe_Annotations](./Compile_Probe_Annotations.ipynb) notebook.
###Code
import pandas as pd
DATA_STORE = '/data_ssd/methylation_annotation_2.h5'
store = pd.HDFStore(DATA_STORE)
islands = pd.read_hdf(DATA_STORE, 'islands')
locations = pd.read_hdf(DATA_STORE, 'locations')
other = pd.read_hdf(DATA_STORE, 'other')
snps = pd.read_hdf(DATA_STORE, 'snps')
probe_annotations = pd.read_hdf(DATA_STORE, 'probe_annotations')
probe_to_island = store['probe_to_island']
island_to_gene = store['island_to_gene']
###Output
_____no_output_____
###Markdown
Auxiliary function to map a data-vector from probes onto CpG Islands
###Code
def map_to_islands(s):
'''
s is a Series of measurements on the probe level.
'''
on_island = s.groupby(island_to_gene.Islands_Name).mean().order()
v = pd.concat([island_to_gene, on_island], axis=1).set_index(0)[1]
islands_mapped_to_genes = v.groupby(level=0).mean().order()
return on_island, islands_mapped_to_genes
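# Example usage sketch: `beta_values` below is a hypothetical Series of probe-level beta values
# indexed by probe ID (it is not defined in this notebook):
#   on_island, by_gene = map_to_islands(beta_values)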
###Output
_____no_output_____
###Markdown
Helper for making CpG island plots
###Code
def island_plot_maker(df, split, islands, ann, colors=None):
'''
df: a DataFrame of probe beta values
islands: a DataFrame mapping probes to CpG islands and
annotations
ann: a DataFrame mapping probes to gene annotations
and genomic coordinates of probe
'''
if colors is None:
colors = colors_st
groups = split.dropna().unique()
assert len(groups) == 2
def f(region):
p = ti(islands.Islands_Name == region)
p3 = ann.ix[p].join(islands.ix[p]).sort('Genomic_Coordinate')
p = p3.index
in_island = ti(p3.Relation_to_Island == 'Island')
fig, ax = subplots(figsize=(10,4))
for i,g in enumerate(groups):
ax.scatter(p3.Genomic_Coordinate, df[ti(split == g)].ix[p].mean(1),
color=colors[i], label=g)
ax.axvspan(p3.Genomic_Coordinate.ix[in_island[0]] - 30,
p3.Genomic_Coordinate.ix[in_island[-1]] + 30,
alpha=.2, color=colors[2], label='Island')
ax.set_xlabel('Genomic Coordinate')
ax.set_ylabel('Beta Value')
ax.legend(loc='lower right', fancybox=True)
prettify_ax(ax)
return f
###Output
_____no_output_____
###Markdown
Create annotation probe sets
###Code
cpg_island = probe_to_island.Relation_to_Island == 'Island'
dhs_site = other.DHS == 'TRUE'
enhancer = other.Enhancer == 'TRUE'
gene_body = other.UCSC_RefGene_Group.str.contains('Body')
gene_tss = other.UCSC_RefGene_Group.str.contains('TSS')
promoter = other.Regulatory_Feature_Group.str.contains('Promoter_Associated')
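# Each of the variables above is a boolean Series indexed by probe ID, so for example
#   int(cpg_island.sum())
# would give the number of probes annotated as lying on a CpG island.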
###Output
_____no_output_____
###Markdown
PRC2 probe annotations are initialized in the [PRC2 Probes](./PRC2_Probes.ipynb) notebook.
###Code
p = '/cellar/users/agross/TCGA_Code/MethylTools/Data/PRC2_Binding/'
prc2_probes = pd.read_csv(p + 'mapped_to_methylation_probes.csv',
index_col=0)
prc2_probes = prc2_probes.sum(1)>2
probe_sets = {'PRC2': prc2_probes, 'CpG Island': cpg_island,
'DHS Site': dhs_site, 'Enhancer': enhancer,
'Gene Body': gene_body, 'TSS': gene_tss,
'Promoter': promoter}
###Output
_____no_output_____ |
eigenwords_demo.ipynb | ###Markdown
Eigenwords
###Code
import numpy as np
import pandas as pd
from eigenwords import EigenwordsOSCCA
###Output
_____no_output_____
###Markdown
training
###Code
%%time
model = EigenwordsOSCCA()
model.load_corpus('./data/text8_with_phrase', verbose=False)
###Output
CPU times: user 5min 35s, sys: 38.5 s, total: 6min 14s
Wall time: 3min 49s
###Markdown
most similar words
###Code
model.wv.most_similar('cat')
model.wv.most_similar(positive=['king','woman'], negative=['man'])
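# A couple of further queries one might try, assuming `model.wv` exposes the usual gensim
# KeyedVectors interface (an assumption suggested, but not guaranteed, by the calls above):
# model.wv.similarity('cat', 'dog')             # cosine similarity of two words
# model.wv.doesnt_match(['cat', 'dog', 'car'])  # odd-one-out query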
###Output
_____no_output_____
###Markdown
word similarity task
###Code
word_table = pd.read_csv('./data/combined.csv', names=['word1', 'word2', 'score'], skiprows=[0])
word_table.head(10)
%%time
model.evaluate_word_pairs(word_table) # pearson, spearman, oov-ratio
###Output
CPU times: user 62.7 ms, sys: 1.68 ms, total: 64.3 ms
Wall time: 61.9 ms
###Markdown
word analogy task
###Code
model.wv.evaluate_word_analogies('./data/questions-words.txt')
###Output
_____no_output_____ |
deeplearning1/nbs/mercedes/code/mercedes-benz-greener-manufacturing-decision-tree-CV.ipynb | ###Markdown
Importing necessary packages
###Code
import numpy as np # linear algebra
import pandas as pd # data processing, CSV file I/O (e.g. pd.read_csv)
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.tree import DecisionTreeRegressor
seed = 42
# read datasets
train = pd.read_csv('../input/train.csv')
test = pd.read_csv('../input/test.csv')
# process columns, apply LabelEncoder to categorical features
for c in train.columns:
if train[c].dtype == 'object':
lbl = LabelEncoder()
lbl.fit(list(train[c].values) + list(test[c].values))
train[c] = lbl.transform(list(train[c].values))
test[c] = lbl.transform(list(test[c].values))
# shape
print('Shape train: {}\nShape test: {}'.format(train.shape, test.shape))
from sklearn.decomposition import PCA, FastICA
n_comp = 100
# PCA
pca = PCA(n_components=n_comp, random_state=42)
pca2_results_train = pca.fit_transform(train.drop(["y"], axis=1))
pca2_results_test = pca.transform(test)
# ICA
ica = FastICA(n_components=n_comp, random_state=42)
ica2_results_train = ica.fit_transform(train.drop(["y"], axis=1))
ica2_results_test = ica.transform(test)
# Append decomposition components to datasets
#for i in range(1, n_comp+1):
# train['pca_' + str(i)] = pca2_results_train[:,i-1]
# test['pca_' + str(i)] = pca2_results_test[:, i-1]
# train['ica_' + str(i)] = ica2_results_train[:,i-1]
# test['ica_' + str(i)] = ica2_results_test[:, i-1]
#y_train = train["y"]
#y_mean = np.mean(y_train)
train_reduced = pd.concat([pd.DataFrame(pca2_results_train), pd.DataFrame(ica2_results_train)], axis = 1)
test_reduced = pd.concat([pd.DataFrame(pca2_results_test), pd.DataFrame(ica2_results_test)], axis = 1)
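# `train_reduced` / `test_reduced` stack the 100 PCA components and the 100 ICA components
# side by side, giving 200 derived features per observation.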
#X, y = train.drop('y', axis=1).values, train.y.values
#print(X.shape)
X, y = train_reduced, train.y.values
print(X.shape)
model = DecisionTreeRegressor(random_state=seed)
DTR_params = {
'max_depth': [4,8],
'min_samples_split': [2,4],
'min_samples_leaf': [1,2,4]
}
reg = GridSearchCV(model, DTR_params, cv = 5, verbose=1, n_jobs = -1)
reg.fit(X, y)
print(reg.best_score_)
print(reg.best_params_)
means = reg.cv_results_['mean_test_score']
stds = reg.cv_results_['std_test_score']
params = reg.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
# make predictions and save results
y_pred = reg.predict(test_reduced)
output = pd.DataFrame({'id': test['ID'].astype(np.int32), 'y': y_pred})
#output.to_csv('xgb_6.csv', index=False)
###Output
_____no_output_____
###Markdown
Trying base data with lasso
###Code
from sklearn.linear_model import Lasso
lasso_reg = Lasso(max_iter=6000)
lasso_params = {
'max_iter': [5000, 6000, 7000],
'alpha': [1.55, 1.57, 1.6],
'fit_intercept': [True,False],
'normalize': [True, False],
'precompute': [True, False],
'tol': [0.004, 0.0045, 0.005],
'selection': ['random', 'cyclic']
}
lasso_reg_cv = GridSearchCV(lasso_reg, lasso_params, cv = 5, verbose=1, n_jobs = -1)
lasso_reg_cv.fit(X, y)
print(lasso_reg_cv.best_score_)
print(lasso_reg_cv.best_params_)
means = lasso_reg_cv.cv_results_['mean_test_score']
stds = lasso_reg_cv.cv_results_['std_test_score']
params = lasso_reg_cv.cv_results_['params']
for mean, stdev, param in zip(means, stds, params):
print("%f (%f) with: %r" % (mean, stdev, param))
###Output
0.433292773306
{'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
0.433189 (0.063078) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
0.433257 (0.063102) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
0.433212 (0.063085) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
0.433214 (0.063089) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
0.433217 (0.063084) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
0.433218 (0.063088) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
0.433139 (0.063079) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
0.433080 (0.062919) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
0.433223 (0.063091) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
0.433224 (0.063084) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
0.432976 (0.063040) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
0.433293 (0.063058) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
0.433211 (0.063084) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
0.433205 (0.063165) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
0.433149 (0.063185) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
0.433214 (0.063089) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
0.433171 (0.063160) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
0.433236 (0.063098) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
0.433212 (0.063086) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.829003 (78.355360) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.828773 (78.355034) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.827688 (78.355649) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.819581 (78.342138) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.821921 (78.339676) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.830500 (78.353602) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.827104 (78.353077) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.827582 (78.355630) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.821992 (78.339884) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.821493 (78.353719) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.827540 (78.355647) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.827760 (78.355472) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.820705 (78.355286) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.825981 (78.351972) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.828006 (78.356348) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.827913 (78.356002) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.825318 (78.352413) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.823084 (78.348704) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.818997 (78.342266) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.824117 (78.353380) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.824922 (78.351841) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.820286 (78.359639) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.821843 (78.348872) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.826093 (78.356549) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.825770 (78.352948) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.814557 (78.350903) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.820093 (78.350017) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.827853 (78.356078) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.818661 (78.341341) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.827842 (78.355625) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.826951 (78.355909) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.825314 (78.354222) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.826876 (78.356848) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.55}
-149.826770 (78.355188) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.826901 (78.356766) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.823797 (78.354339) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.55}
-149.828308 (78.356266) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.55}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
0.431866 (0.063002) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
0.431731 (0.062821) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
0.431603 (0.063008) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
0.431977 (0.063309) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
0.431827 (0.063110) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
0.431783 (0.063177) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
0.432055 (0.062948) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
0.431947 (0.063057) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
0.432001 (0.063044) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
0.431711 (0.062987) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
0.431993 (0.062811) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
0.431973 (0.062805) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
0.431857 (0.063038) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
0.432000 (0.063029) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
0.431863 (0.062997) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
0.431852 (0.063008) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
0.431845 (0.063020) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
0.431414 (0.062947) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
0.431877 (0.063013) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.796408 (78.329155) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.797070 (78.326446) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.793895 (78.323150) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.794436 (78.333944) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.797251 (78.327744) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.795889 (78.328825) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.795502 (78.328452) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.798757 (78.331504) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.795984 (78.328676) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.798208 (78.326757) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.797985 (78.329932) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.798964 (78.331065) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.797117 (78.331205) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.798252 (78.330928) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.792103 (78.329758) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.795506 (78.328807) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.798513 (78.331800) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.792235 (78.327293) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.796986 (78.330438) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.793586 (78.325733) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.797004 (78.330452) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.797687 (78.332365) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.794545 (78.325980) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.799488 (78.330268) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.790102 (78.316884) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.797461 (78.328896) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.796117 (78.327413) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.790840 (78.321563) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.797476 (78.328919) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.797862 (78.330160) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.797159 (78.330287) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.794324 (78.322522) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.797491 (78.330279) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.57}
-149.796632 (78.332255) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.794075 (78.324447) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.795656 (78.328255) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.57}
-149.799112 (78.331563) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.57}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
0.429812 (0.063062) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
0.429936 (0.062944) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
0.429626 (0.062898) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
0.430128 (0.062592) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
0.429851 (0.062886) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
0.429704 (0.062863) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
0.429624 (0.062916) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
0.429934 (0.063007) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
0.429934 (0.062861) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
0.429885 (0.062914) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
0.429966 (0.062967) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
0.430040 (0.062958) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-0.008255 (0.011708) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
0.429934 (0.062926) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
0.430245 (0.062533) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
0.430133 (0.062560) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
0.429920 (0.062919) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
0.429881 (0.062911) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
0.429938 (0.063012) with: {'normalize': False, 'selection': 'random', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
0.429890 (0.062924) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': True, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.755654 (78.297667) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.757540 (78.297930) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.748923 (78.286303) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.755128 (78.294063) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.755794 (78.296223) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.757965 (78.296990) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.754508 (78.293488) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.757370 (78.298259) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.756404 (78.298671) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.757234 (78.296966) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.755221 (78.295775) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.756936 (78.298090) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 5000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.757414 (78.298020) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.757005 (78.297241) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.756633 (78.295977) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.755126 (78.295669) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.758056 (78.298085) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.759074 (78.295791) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.753080 (78.291327) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.759092 (78.295576) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.757304 (78.297453) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.756403 (78.298111) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.755296 (78.299044) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.755818 (78.296043) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 6000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.756969 (78.297600) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.760360 (78.299186) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.754958 (78.296410) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.757229 (78.297904) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.757249 (78.297841) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.752724 (78.291548) with: {'normalize': True, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': True, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.756598 (78.298437) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.755578 (78.294710) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.757015 (78.297074) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': True, 'tol': 0.005, 'alpha': 1.6}
-149.755950 (78.294631) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.756708 (78.297615) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.754903 (78.295398) with: {'normalize': False, 'selection': 'random', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.004, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.0045, 'alpha': 1.6}
-149.757772 (78.298118) with: {'normalize': False, 'selection': 'cyclic', 'fit_intercept': False, 'max_iter': 7000, 'precompute': False, 'tol': 0.005, 'alpha': 1.6}
|
notebooks/1. Missing Data in Time Series.ipynb | ###Markdown
Missing Data in Time Series Traffic Data, Again**Say we are contacted by a local transportation authority**They want to improve their traffic monitoring system Traffic Data, Again**They give us data from an occupancy sensor**Our data refers to real traffic in the Minnesota Twin Cities Area
###Code
nab.plot_series(data, labels, windows, figsize=figsize)
###Output
_____no_output_____
###Markdown
* They have pre-labeled an (easy) anomaly that they wish to detect* ...But that is _not the most striking aspect_ of this series Traffic Data, Again**There is a period, and _straight lines in the plot_**
###Code
nab.plot_series(data, labels, windows, figsize=figsize)
###Output
_____no_output_____
###Markdown
They are _artefacts_, due to _missing values_ in the time series Missing Values**We can make it clearer by explicitly plotting the sampling points**
###Code
nab.plot_series(data, labels, windows, show_sampling_points=True, figsize=figsize)
###Output
_____no_output_____
###Markdown
There is a large gap, plus scattered missing values here and there Missing Values in Time Series**Missing values in real-world time series are _very common_**They arise for a variety of reasons:* Malfunctioning sensors* Network problems* Lost data* Sensor maintenance/installation/removal* ...**...And can be very annoying to deal with*** They prevent the application of sliding windows* They complicate the detection of periods* ... Preparing the Ground Preparing the Ground**Before we can deal with missing values we need to tackle an issue**I.e. our main series has a _sparse index_* ...Meaning that index values are non-contiguous* ...And missing values are represented as gaps**If we want to fill the missing values...*** ...We need to decide _where_ the missing values are> **In other words, we need a _dense_ (temporal) index**With a dense index:* Missing values can be represented as NaN (Not a Number)* ...And can be filled by replacing NaN with a meaningful value Choosing a Sampling Frequency**First, we need to pick a frequency for the new index**We start by having a look at the typical sampling step in our series:
###Code
data.head()
###Output
_____no_output_____
###Markdown
* The interval between consecutive measurements seems to be 5 minutes long* ...But looking at just a few data points is not enough Choosing a Sampling Frequency**It is much better to compute the distance between consecutive index values**
###Code
delta = data.index[1:] - data.index[:-1]
delta[:3]
###Output
_____no_output_____
###Markdown
* The difference between two `datetime` objects is a `timedelta` object* They are all part of [the `datetime` module](https://docs.python.org/3/library/datetime.html)**Then we can check the _value counts_*** This can be done with the `value_counts` method. The method returns a series:* The index contains the values* The series data are the corresponding counts Choosing a Sampling Frequency**Let's have a look at our value counts**
###Code
vc = pd.Series(delta).value_counts()
vc.iloc[:10]
###Output
_____no_output_____
###Markdown
**By far the most common value is 5 minutes*** Some values are not multiples of 5 minutes (e.g. 4, 6, 11 minutes)* I.e. they are _out of alignment_ Resampling the Original Dataset**Therefore, first we need to _realign_ the original index**This is also called _resampling_ (or _binning_), and can be done in pandas with:```pythonDataFrame.resample(rule=None, ...)```* `rule` specifies the length of each individual interval (or "bin")* The method has many additional options to control its behavior**`resample` returns a `Resampler` object: we need to choose what to do with each bin**E.g. compute the mean, the stdev, or take the first value
###Code
ddata = data.resample('5min').mean()
ddata.head()
###Output
_____no_output_____
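###Markdown
Filling the Gaps (sketch)**With a dense 5-minute index, the gaps become explicit NaN entries**A minimal sketch of how they could be filled (the names `ddata_interp` and `ddata_ffill` are illustrative, and the choice of strategy is ours to make): interpolate across the gaps, or carry the last valid reading forward
###Code
# linear interpolation across the NaN gaps (one possible choice)
ddata_interp = ddata.interpolate(method='linear')
# alternative: forward-fill the last observed value into each gap
ddata_ffill = ddata.ffill()
###Output
_____no_output_____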
###Markdown
Inspecting the Resampled Dataset**Now we can inspect this new "dense" series**
###Code
nab.plot_series(ddata, labels, windows, show_markers=True, figsize=figsize)
###Output
_____no_output_____ |
04.gold_silver/09.commodity-forecasting-GRU.ipynb | ###Markdown
Commodity price forecasting using GRU
###Code
import pandas as pd
import numpy as np
import os
import time
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
# preprocessing methods
from sklearn.preprocessing import StandardScaler
# accuracy measures and data spliting
from sklearn.metrics import mean_squared_error
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
# deep learning libraries
from keras.models import Input, Model
from keras.models import Sequential
from keras.layers import SimpleRNN, LSTM, Dense, GRU
from keras.layers import Conv1D, MaxPooling1D
from keras import layers
from keras import losses
from keras import optimizers
from keras import metrics
from keras import callbacks
from keras import initializers
plt.style.use('seaborn')
plt.rcParams['figure.figsize'] = 15, 7
###Output
_____no_output_____
###Markdown
1. Data import and basic analysis
###Code
DATADIR = 'data/'
MODELDIR = '../checkpoints/commodity/nn/'
path = os.path.join(DATADIR, 'gold-silver.csv')
data = pd.read_csv(path, header=0, index_col=[0], infer_datetime_format=True, sep=';')
data.head()
data[['gold', 'silver']].plot();
plt.plot(np.log(data.gold), label='log(gold)')
plt.plot(np.log(data.silver), label='log(silver)')
plt.title('Commodity data', fontsize='14')
plt.show()
###Output
_____no_output_____
###Markdown
2. Data preparation
###Code
# function to prepare x and y variable
# for the univariate series
def prepare_data(df, steps=1):
temp = df.shift(-steps).copy()
y = temp[:-steps].copy()
X = df[:-steps].copy()
return X, y
gold_X, gold_y = prepare_data(np.log(data[['gold']]), steps=1)
silver_X, silver_y = prepare_data(np.log(data[['silver']]), steps=1)
len(gold_X), len(gold_y), len(silver_X), len(silver_y)
X = pd.concat([gold_X, silver_X], axis=1)
y = pd.concat([gold_y, silver_y], axis=1)
X.head()
y.head()
seed = 42
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.05,
random_state=seed, shuffle=False)
print('Training and test data shape:')
X_train.shape, y_train.shape, X_test.shape, y_test.shape
timesteps = 1
features = X_train.shape[1]
xavier = initializers.glorot_normal()
X_train = np.reshape(X_train.values, (X_train.shape[0], timesteps, features))
X_test = np.reshape(X_test.values, (X_test.shape[0], timesteps, features))
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
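###Markdown
A quick illustration of what `prepare_data` does: it pairs each value of the series with the value `steps` ahead, so the model learns a one-step-ahead mapping. This is only a sketch on a toy series (the names `toy`, `toy_X` and `toy_y` are illustrative):
###Code
# toy univariate series to show the shift-based supervised framing
toy = pd.DataFrame({'price': [1.0, 2.0, 3.0, 4.0]})
toy_X, toy_y = prepare_data(toy, steps=1)
# toy_X keeps the first three values, toy_y holds the same positions shifted one step ahead
print(toy_X.values.ravel(), toy_y.values.ravel())
###Output
_____no_output_____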
###Markdown
3. Model building
###Code
def model_evaluation(y_train, y_test, y_train_pred, y_test_pred):
y_train_inv, y_test_inv = np.exp(y_train), np.exp(y_test)
y_train_pred_inv, y_test_pred_inv = np.exp(y_train_pred), np.exp(y_test_pred)
# MAE and NRMSE calculation for gold
y_train_gold = y_train_inv.values[:, 0]
y_train_pred_gold = y_train_pred_inv[:, 0]
y_test_gold = y_test_inv.values[:, 0]
y_test_pred_gold = y_test_pred_inv[:, 0]
train_rmse_g = np.sqrt(mean_squared_error(y_train_gold, y_train_pred_gold))
train_mae_g = np.round(mean_absolute_error(y_train_gold, y_train_pred_gold), 3)
train_nrmse_g = np.round(train_rmse_g/np.std(y_train_gold), 3)
test_rmse_g = np.sqrt(mean_squared_error(y_test_gold, y_test_pred_gold))
test_mae_g = np.round(mean_absolute_error(y_test_gold, y_test_pred_gold), 3)
test_nrmse_g = np.round(test_rmse_g/np.std(y_test_gold), 3)
print('Training and test result for gold:')
print(f'Training MAE: {train_mae_g}')
print(f'Training NRMSE: {train_nrmse_g}')
print(f'Test MAE: {test_mae_g}')
print(f'Test NRMSE: {test_nrmse_g}')
print()
# MAE and NRMSE calculation for silver
y_train_silver = y_train_inv.values[:, 1]
y_train_pred_silver = y_train_pred_inv[:, 1]
y_test_silver = y_test_inv.values[:, 1]
y_test_pred_silver = y_test_pred_inv[:, 1]
train_rmse_s = np.sqrt(mean_squared_error(y_train_silver, y_train_pred_silver))
train_mae_s = np.round(mean_absolute_error(y_train_silver, y_train_pred_silver), 3)
train_nrmse_s = np.round(train_rmse_s/np.std(y_train_silver), 3)
test_rmse_s = np.sqrt(mean_squared_error(y_test_silver, y_test_pred_silver))
test_mae_s = np.round(mean_absolute_error(y_test_silver, y_test_pred_silver), 3)
test_nrmse_s = np.round(test_rmse_s/np.std(y_test_silver), 3)
print('Training and test result for silver:')
print(f'Training MAE: {train_mae_s}')
print(f'Training NRMSE: {train_nrmse_s}')
print(f'Test MAE: {test_mae_s}')
print(f'Test NRMSE: {test_nrmse_s}')
return y_train_pred_inv, y_test_pred_inv
def model_training(X_train, X_test, y_train, model, batch=4, name='m'):
start = time.time()
loss = losses.mean_squared_error
opt = optimizers.Adam()
metric = [metrics.mean_absolute_error]
model.compile(loss=loss, optimizer=opt, metrics=metric)
callbacks_list = [callbacks.ReduceLROnPlateau(monitor='loss', factor=0.2,
patience=5, min_lr=0.001)]
history = model.fit(X_train, y_train,
epochs=100,
batch_size=batch,
verbose=0,
shuffle=False,
callbacks=callbacks_list
)
# save the model and its weights to disk
if os.path.exists(MODELDIR):
pass
else:
os.makedirs(MODELDIR)
m_name = name + str('.h5')
w_name = name + str('_w.h5')
model.save(os.path.join(MODELDIR, m_name))
model.save_weights(os.path.join(MODELDIR, w_name))
# prediction
y_train_pred = model.predict(X_train)
y_test_pred = model.predict(X_test)
end = time.time()
time_taken = np.round((end-start), 3)
print(f'Time taken to complete the process: {time_taken} seconds')
return y_train_pred, y_test_pred, history
###Output
_____no_output_____
###Markdown
GRU - v1
###Code
model = Sequential()
model.add(GRU(3, input_shape = (timesteps, features), kernel_initializer=xavier,
activation='relu'))
model.add(Dense(2, kernel_initializer=xavier))
model.summary()
# training
y_train_pred, y_test_pred, history = model_training(X_train, X_test, y_train, model, batch=4, name='GRU-v1')
# evaluation
y_train_pred, y_test_pred = model_evaluation(y_train, y_test, y_train_pred, y_test_pred)
# plotting
plt.subplot(211)
plt.plot(np.exp(y_test.values[:, 0]), label='actual')
plt.plot(y_test_pred[:, 0], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for gold using GRU', fontsize=14)
plt.legend()
plt.subplot(212)
plt.plot(np.exp(y_test.values[:, 1]), label='actual')
plt.plot(y_test_pred[:, 1], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for silver using GRU', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru_1 (GRU) (None, 3) 54
_________________________________________________________________
dense_1 (Dense) (None, 2) 8
=================================================================
Total params: 62
Trainable params: 62
Non-trainable params: 0
_________________________________________________________________
Time taken to complete the process: 10.012 seconds
Training and test result for gold:
Training MAE: 467.21
Trainig NRMSE: 1.621
Test MAE: 1588.552
Test NRMSE: 29.89
Training and test result for silver:
Training MAE: 5.201
Trainig NRMSE: 1.028
Test MAE: 23.799
Test NRMSE: 9.184
###Markdown
GRU - v2
###Code
model = Sequential()
model.add(GRU(3, input_shape = (timesteps, features), kernel_initializer=xavier,
activation='relu', return_sequences=True))
model.add(GRU(3, input_shape = (timesteps, features), kernel_initializer=xavier,
activation='relu'))
model.add(Dense(2, kernel_initializer=xavier))
model.summary()
# training
y_train_pred, y_test_pred, history = model_training(X_train, X_test, y_train, model, batch=4, name='GRU-v2')
# evaluation
y_train_pred, y_test_pred = model_evaluation(y_train, y_test, y_train_pred, y_test_pred)
# plotting
plt.subplot(211)
plt.plot(np.exp(y_test.values[:, 0]), label='actual')
plt.plot(y_test_pred[:, 0], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for gold using GRU', fontsize=14)
plt.legend()
plt.subplot(212)
plt.plot(np.exp(y_test.values[:, 1]), label='actual')
plt.plot(y_test_pred[:, 1], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for silver using GRU', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru_2 (GRU) (None, 1, 3) 54
_________________________________________________________________
gru_3 (GRU) (None, 3) 63
_________________________________________________________________
dense_2 (Dense) (None, 2) 8
=================================================================
Total params: 125
Trainable params: 125
Non-trainable params: 0
_________________________________________________________________
Time taken to complete the process: 15.526 seconds
Training and test result for gold:
Training MAE: 466.563
Trainig NRMSE: 1.619
Test MAE: 1587.905
Test NRMSE: 29.878
Training and test result for silver:
Training MAE: 5.204
Trainig NRMSE: 1.028
Test MAE: 23.789
Test NRMSE: 9.18
###Markdown
GRU - v3 (Final Model)
###Code
model = Sequential()
model.add(GRU(5, input_shape = (timesteps, features), kernel_initializer=xavier,
activation='relu', return_sequences=True))
model.add(GRU(5, input_shape = (timesteps, features), kernel_initializer=xavier,
activation='relu'))
model.add(Dense(2, kernel_initializer=xavier))
model.summary()
# training
y_train_pred, y_test_pred, history = model_training(X_train, X_test, y_train, model, batch=4, name='GRU-v3')
# evaluation
y_train_pred, y_test_pred = model_evaluation(y_train, y_test, y_train_pred, y_test_pred)
# plotting
plt.subplot(211)
plt.plot(np.exp(y_test.values[:, 0]), label='actual')
plt.plot(y_test_pred[:, 0], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for gold using GRU', fontsize=14)
plt.legend()
plt.subplot(212)
plt.plot(np.exp(y_test.values[:, 1]), label='actual')
plt.plot(y_test_pred[:, 1], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for silver using GRU', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru_4 (GRU) (None, 1, 5) 120
_________________________________________________________________
gru_5 (GRU) (None, 5) 165
_________________________________________________________________
dense_3 (Dense) (None, 2) 12
=================================================================
Total params: 297
Trainable params: 297
Non-trainable params: 0
_________________________________________________________________
Time taken to complete the process: 15.475 seconds
Training and test result for gold:
Training MAE: 124.272
Trainig NRMSE: 0.359
Test MAE: 58.498
Test NRMSE: 1.619
Training and test result for silver:
Training MAE: 0.873
Trainig NRMSE: 0.194
Test MAE: 2.531
Test NRMSE: 1.085
###Markdown
GRU - v4
###Code
model = Sequential()
model.add(GRU(5, input_shape = (timesteps, features), kernel_initializer=xavier,
activation='relu'))
model.add(Dense(2, kernel_initializer=xavier))
model.summary()
# training
y_train_pred, y_test_pred, history = model_training(X_train, X_test, y_train, model, batch=4, name='GRU-v4')
# evaluation
y_train_pred, y_test_pred = model_evaluation(y_train, y_test, y_train_pred, y_test_pred)
# plotting
plt.subplot(211)
plt.plot(np.exp(y_test.values[:, 0]), label='actual')
plt.plot(y_test_pred[:, 0], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for gold using GRU', fontsize=14)
plt.legend()
plt.subplot(212)
plt.plot(np.exp(y_test.values[:, 1]), label='actual')
plt.plot(y_test_pred[:, 1], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for silver using GRU', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru_6 (GRU) (None, 5) 120
_________________________________________________________________
dense_4 (Dense) (None, 2) 12
=================================================================
Total params: 132
Trainable params: 132
Non-trainable params: 0
_________________________________________________________________
Time taken to complete the process: 12.284 seconds
Training and test result for gold:
Training MAE: 86.465
Trainig NRMSE: 0.468
Test MAE: 510.03
Test NRMSE: 9.94
Training and test result for silver:
Training MAE: 3.023
Trainig NRMSE: 0.654
Test MAE: 14.623
Test NRMSE: 5.687
###Markdown
GRU - v5
###Code
model = Sequential()
model.add(GRU(10, input_shape = (timesteps, features), kernel_initializer=xavier,
activation='relu'))
model.add(Dense(2, kernel_initializer=xavier))
model.summary()
# training
y_train_pred, y_test_pred, history = model_training(X_train, X_test, y_train, model, batch=4, name='GRU-v5')
# evaluation
y_train_pred, y_test_pred = model_evaluation(y_train, y_test, y_train_pred, y_test_pred)
# plotting
plt.subplot(211)
plt.plot(np.exp(y_test.values[:, 0]), label='actual')
plt.plot(y_test_pred[:, 0], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for gold using GRU', fontsize=14)
plt.legend()
plt.subplot(212)
plt.plot(np.exp(y_test.values[:, 1]), label='actual')
plt.plot(y_test_pred[:, 1], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for silver using GRU', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru_7 (GRU) (None, 10) 390
_________________________________________________________________
dense_5 (Dense) (None, 2) 22
=================================================================
Total params: 412
Trainable params: 412
Non-trainable params: 0
_________________________________________________________________
Time taken to complete the process: 10.703 seconds
Training and test result for gold:
Training MAE: 467.457
Trainig NRMSE: 1.621
Test MAE: 1588.799
Test NRMSE: 29.895
Training and test result for silver:
Training MAE: 5.206
Trainig NRMSE: 1.027
Test MAE: 23.784
Test NRMSE: 9.178
###Markdown
GRU - v6
###Code
model = Sequential()
model.add(GRU(7, input_shape = (timesteps, features), kernel_initializer=xavier,
activation='relu'))
model.add(Dense(2, kernel_initializer=xavier))
model.summary()
# training
y_train_pred, y_test_pred, history = model_training(X_train, X_test, y_train, model, batch=4, name='GRU-v6')
# evaluation
y_train_pred, y_test_pred = model_evaluation(y_train, y_test, y_train_pred, y_test_pred)
# plotting
plt.subplot(211)
plt.plot(np.exp(y_test.values[:, 0]), label='actual')
plt.plot(y_test_pred[:, 0], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for gold using GRU', fontsize=14)
plt.legend()
plt.subplot(212)
plt.plot(np.exp(y_test.values[:, 1]), label='actual')
plt.plot(y_test_pred[:, 1], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for silver using GRU', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru_8 (GRU) (None, 7) 210
_________________________________________________________________
dense_6 (Dense) (None, 2) 16
=================================================================
Total params: 226
Trainable params: 226
Non-trainable params: 0
_________________________________________________________________
Time taken to complete the process: 11.47 seconds
Training and test result for gold:
Training MAE: 35.947
Trainig NRMSE: 0.16
Test MAE: 75.018
Test NRMSE: 1.764
Training and test result for silver:
Training MAE: 1.092
Trainig NRMSE: 0.213
Test MAE: 2.694
Test NRMSE: 1.199
###Markdown
GRU - v7
###Code
model = Sequential()
model.add(GRU(5, input_shape = (timesteps, features), kernel_initializer=xavier,
activation='relu'))
model.add(Dense(2, kernel_initializer=xavier))
model.summary()
# training
y_train_pred, y_test_pred, history = model_training(X_train, X_test, y_train, model, batch=1, name='GRU-v7')
# evaluation
y_train_pred, y_test_pred = model_evaluation(y_train, y_test, y_train_pred, y_test_pred)
# plotting
plt.subplot(211)
plt.plot(np.exp(y_test.values[:, 0]), label='actual')
plt.plot(y_test_pred[:, 0], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for gold using GRU', fontsize=14)
plt.legend()
plt.subplot(212)
plt.plot(np.exp(y_test.values[:, 1]), label='actual')
plt.plot(y_test_pred[:, 1], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for silver using GRU', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru_9 (GRU) (None, 5) 120
_________________________________________________________________
dense_7 (Dense) (None, 2) 12
=================================================================
Total params: 132
Trainable params: 132
Non-trainable params: 0
_________________________________________________________________
Time taken to complete the process: 39.759 seconds
Training and test result for gold:
Training MAE: 266.405
Trainig NRMSE: 1.016
Test MAE: 1187.96
Test NRMSE: 22.362
Training and test result for silver:
Training MAE: 5.278
Trainig NRMSE: 1.021
Test MAE: 23.546
Test NRMSE: 9.088
###Markdown
GRU - v8
###Code
model = Sequential()
model.add(GRU(4, input_shape = (timesteps, features), kernel_initializer=xavier,
activation='relu'))
model.add(Dense(2, kernel_initializer=xavier))
model.summary()
# training
y_train_pred, y_test_pred, history = model_training(X_train, X_test, y_train, model, batch=2, name='GRU-v8')
# evaluation
y_train_pred, y_test_pred = model_evaluation(y_train, y_test, y_train_pred, y_test_pred)
# plotting
plt.subplot(211)
plt.plot(np.exp(y_test.values[:, 0]), label='actual')
plt.plot(y_test_pred[:, 0], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for gold using GRU', fontsize=14)
plt.legend()
plt.subplot(212)
plt.plot(np.exp(y_test.values[:, 1]), label='actual')
plt.plot(y_test_pred[:, 1], label='predicted')
plt.ylabel('$')
plt.xlabel('sample')
plt.title('Test prediction for silver using GRU', fontsize=14)
plt.legend()
plt.tight_layout()
plt.show()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
gru_10 (GRU) (None, 4) 84
_________________________________________________________________
dense_8 (Dense) (None, 2) 10
=================================================================
Total params: 94
Trainable params: 94
Non-trainable params: 0
_________________________________________________________________
Time taken to complete the process: 21.05 seconds
Training and test result for gold:
Training MAE: 46.121
Trainig NRMSE: 0.18
Test MAE: 102.868
Test NRMSE: 2.133
Training and test result for silver:
Training MAE: 1.141
Trainig NRMSE: 0.213
Test MAE: 2.979
Test NRMSE: 1.31
|
examples/ibtracs/.ipynb_checkpoints/histogram_TC_intensity-checkpoint.ipynb | ###Markdown
Read Atlantic or East Pacific (EPac) storm data from IBTrACS and plot histograms of TC intensity categories NCSU Tropical and Large Scale Dynamics, Anantha Aiyyer
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import xarray as xr
dataDir = "./"
filename = "IBTrACS.since1980.v04r00.nc"
# select basin
# for this code choose only NA or EP
basinWant = str.encode("EP")
# select year range
year1 = 1995
year2 = 2019
file = dataDir+filename
try:
ds = xr.open_dataset(file)
except:
print ("file not found. quitting code")
quit()
print ("Ibtracs file found and opened")
# subset the storms based on the basin and years
years = pd.to_datetime(ds.time[:,0].values).year
inds = np.where( (ds.basin[:,0] == basinWant) & (years>=year1) & (years<=year2))[0]
#The variable usa_sshs contains the storm category for the USA defined EP and NA basins
sshs = ds.usa_sshs[inds,:]
max_sshs = sshs.max(dim='date_time',skipna=True)
stormYears = pd.to_datetime(ds.time[inds,0].values).year
print(stormYears)
###Output
Int64Index([1995, 1995, 1995, 1995, 1995, 1995, 1995, 1995, 1995, 1995,
...
2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019, 2019],
dtype='int64', length=412)
###Markdown
Use pandas and seaborn for histogram
###Code
# Now create a pandas dataframe for easy histogram plot
# keep only categories above -2 (i.e. drop -2 and below)
df = pd.DataFrame({'SS': max_sshs.where(max_sshs > -2, drop=True)})
#print (df.SS.value_counts(dropna=False))
print(df.groupby(['SS'])['SS'].count())
# use seaborn for the plot
sns.histplot(data=df, x="SS", discrete=True)
plt.show()
###Output
SS
-1.0 39
0.0 176
1.0 69
2.0 32
3.0 33
4.0 38
5.0 15
Name: SS, dtype: int64
###Markdown
Some additional info - for example, names of category 4 storms
###Code
StormNames = ds.name[inds]
StormDates = ds.time[inds,0]
# list the names and years of category 4 storms (the filter selects usa_sshs == 4)
cat4N = StormNames.where(max_sshs == 4, drop=True).values
cat4Y = pd.to_datetime(StormDates.where(max_sshs == 4, drop=True).values).year
for y,n in zip(cat4Y,cat4N):
    print(y,n.decode())
###Output
1995 FELIX
1995 LUIS
1995 OPAL
1996 EDOUARD
1996 HORTENSE
1998 GEORGES
1999 CINDY
1999 BRET
1999 FLOYD
1999 GERT
1999 LENNY
2000 ISAAC
2000 KEITH
2001 IRIS
2001 MICHELLE
2002 LILI
2003 FABIAN
2004 CHARLEY
2004 FRANCES
2004 KARL
2005 DENNIS
2008 GUSTAV
2008 IKE
2008 OMAR
2008 PALOMA
2009 BILL
2010 DANIELLE
2010 EARL
2010 IGOR
2010 JULIA
2011 KATIA
2011 OPHELIA
2014 GONZALO
2015 JOAQUIN
2016 NICOLE
2017 HARVEY
2017 JOSE
2018 FLORENCE
|
.ipynb_checkpoints/d3-checkpoint.ipynb | ###Markdown
Here is the master pipeline Camera Calibration
###Code
import numpy as np
import cv2
import glob
import pickle
import matplotlib.pyplot as plt
#%matplotlib notebook
# Finding image and object points
def undistort(test_img):
# prepare object points (our ideal reference), like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)
# Stores mtx and dist coefficients in a pickle file to use later
nx=9 # Number of inner corners of our chessboard along x axis (or columns)
ny=6 # Number of inner corners of our chessboard along y axis (or rows)
objp = np.zeros((ny*nx,3), np.float32) #We have 9 corners on X axis and 6 corners on Y axis
objp[:,:2] = np.mgrid[0:nx, 0:ny].T.reshape(-1,2) # Gives us coordinate points in pairs as a list of 54 items. Its shape will be (54,2)
# Arrays to store object points and image points from all the images.
objpoints = [] # 3d points in real world space. These are the points for our ideal chessboard which we are using as a reference.
imgpoints = [] # 2d points in image plane. We'll extract these from the images given for calibrating the camera
# Make a list of calibration images
images = glob.glob('camera_cal/calibration*.jpg')
# Step through the list and search for chessboard corners
for idx, fname in enumerate(images):
calib_img = cv2.imread(fname)
gray = cv2.cvtColor(calib_img, cv2.COLOR_BGR2GRAY)
# Find the chessboard corners
# Grayscale conversion ensures an 8-bit image as input. The next function needs that kind of input only. Generally color images are 24-bit images. (Refer "Bits in images" in notes)
ret, corners = cv2.findChessboardCorners(gray, (nx,ny), None)
# If found, add object points, image points
if ret == True:
objpoints.append(objp) # These will be same for caliberation image. The same points will get appended every time this fires up
imgpoints.append(corners) # Corners
# Draw and display the identified corners (This step can be completely skipped)
cv2.drawChessboardCorners(calib_img, (nx,ny), corners, ret)
write_name = 'corners_found'+str(idx)+'.jpg'
cv2.imwrite('output_files/corners_found_for_calib/'+write_name, calib_img)
cv2.imshow(write_name, calib_img) # Comment these 3 lines out if you don't want the images to pop up during calibration
cv2.waitKey(500) #Delete after testing. These will be used to show you images one after the other
cv2.destroyAllWindows() #Delete this after testing
# Test undistortion on an image
test_img_size = (test_img.shape[1], test_img.shape[0]) # (x_axis_max)X(y_axis_max)
# Do camera calibration given object points and image points
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, test_img_size,None,None)
# Use the above obtained results to undistort
undist_img = cv2.undistort(test_img, mtx, dist, None, mtx)
cv2.imwrite('output_files/test_undist.jpg',undist_img)
# Save the camera calibration result for later use (we won't worry about rvecs / tvecs)
dist_pickle = {}
dist_pickle["mtx"] = mtx
dist_pickle["dist"] = dist
pickle.dump( dist_pickle, open( "output_files/calib_pickle_files/dist_pickle.p", "wb" ) )
"""Caution: When you use mtx and dist later, ensure that the image used has same dimensions
as the images used here for caliberation, other we'll have to make some changes in the code"""
#undist_img = cv2.cvtColor(dst, cv2.COLOR_BGR2RGB)
return undist_img
test_img= cv2.imread('camera_cal/calibration1.jpg') #Note: Your image will be in BGR format
output=undistort(test_img)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,10)) #Refer subplots in python libraries
ax1.imshow(test_img)
ax1.set_title('Original Image', fontsize=30)
ax2.imshow(output)
ax2.set_title('Undistorted Image', fontsize=30)
cv2.waitKey(500)
cv2.destroyAllWindows()
###Output
_____no_output_____
###Markdown
Main Pipeline:
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import math
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
def process_image(frame):
def cal_undistort(img):
# Reads mtx and dist matrices, performs image distortion correction and returns the undistorted image
import pickle
# Read in the saved matrices
my_dist_pickle = pickle.load( open( "output_files/calib_pickle_files/dist_pickle.p", "rb" ) )
mtx = my_dist_pickle["mtx"]
dist = my_dist_pickle["dist"]
undistorted_img = cv2.undistort(img, mtx, dist, None, mtx)
#undistorted_img = cv2.cvtColor(undistorted_img, cv2.COLOR_BGR2RGB) #Use if you use cv2 to import image. ax.imshow() needs RGB image
return undistorted_img
def yellow_threshold(img, sxbinary):
# Convert to HLS color space and separate the S channel & H channel
# Note: img is the undistorted image
hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
s_channel = hls[:,:,2]
h_channel = hls[:,:,0]
# Threshold color channel
s_thresh_min = 100
s_thresh_max = 255
#for 360 degrees, my desired values for yellow ranged between 35 and 50. Dividing this range by 2:
h_thresh_min = 10 # Taking a bit lower than required to ensure that yellow is captured
h_thresh_max = 25
s_binary = np.zeros_like(s_channel)
s_binary[(s_channel >= s_thresh_min) & (s_channel <= s_thresh_max)] = 1
h_binary = np.zeros_like(h_channel)
h_binary[(h_channel >= h_thresh_min) & (h_channel <= h_thresh_max)] = 1
# Combine the two binary thresholds
yellow_binary = np.zeros_like(s_binary)
yellow_binary[(((s_binary == 1) | (sxbinary == 1) ) & (h_binary ==1))] = 1
return yellow_binary
def xgrad_binary(img, thresh_min=30, thresh_max=100):
# Grayscale image
gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# Sobel x
sobelx = cv2.Sobel(gray, cv2.CV_64F, 1, 0) # Take the derivative in x
abs_sobelx = np.absolute(sobelx) # Absolute x derivative to accentuate lines away from horizontal
scaled_sobel = np.uint8(255*abs_sobelx/np.max(abs_sobelx))
# Threshold x gradient
#thresh_min = 30 # Given as default values to the parameters. These are good starting points
#thresh_max = 100 # The tweaked values are given as arguments to the function while calling it
sxbinary = np.zeros_like(scaled_sobel)
sxbinary[(scaled_sobel >= thresh_min) & (scaled_sobel <= thresh_max)] = 1
return sxbinary
def white_threshold(img, sxbinary, lower_white_thresh = 170):
# Isolating RGB channel (as we've used matplotlib to read the image)
# The order would have been BGR if we had used cv2 to read the image
r_channel = img[:,:,0]
g_channel = img[:,:,1]
b_channel = img[:,:,2]
# Threshold color channel
r_thresh_min = lower_white_thresh
r_thresh_max = 255
r_binary = np.zeros_like(r_channel)
r_binary[(r_channel >= r_thresh_min) & (r_channel <= r_thresh_max)] = 1
g_thresh_min = lower_white_thresh
g_thresh_max = 255
g_binary = np.zeros_like(g_channel)
g_binary[(g_channel >= g_thresh_min) & (g_channel <= g_thresh_max)] = 1
b_thresh_min = lower_white_thresh
b_thresh_max = 255
b_binary = np.zeros_like(b_channel)
b_binary[(b_channel >= b_thresh_min) & (b_channel <= b_thresh_max)] = 1
white_binary = np.zeros_like(r_channel)
white_binary[((r_binary ==1) & (g_binary ==1) & (b_binary ==1) & (sxbinary==1))] = 1
return white_binary
def thresh_img(img):
sxbinary = xgrad_binary(img, thresh_min=25, thresh_max=130)
yellow_binary = yellow_threshold(img, sxbinary) #(((s) | (sx)) & (h))
white_binary = white_threshold(img, sxbinary, lower_white_thresh = 150)
# Combine the two binary thresholds
combined_binary = np.zeros_like(sxbinary)
combined_binary[((yellow_binary == 1) | (white_binary == 1))] = 1
# We close by sending out a 3D image just as we took as input
# Because, to process the image, we were using binary images
out_img = np.dstack((combined_binary, combined_binary, combined_binary))*255
return out_img
def perspective_transform(img):
# Define calibration box in source (original) and destination (desired or warped) coordinates
img_size = (img.shape[1], img.shape[0])
"""Notice the format used for img_size. Yaha bhi ulta hai. x axis aur fir y axis chahiye.
Apne format mein rows(y axis) and columns (x axis) hain"""
# Four source coordinates
# Order of points: top left, top right, bottom right, bottom left
src = np.array(
[[435*img.shape[1]/960, 350*img.shape[0]/540],
[530*img.shape[1]/960, 350*img.shape[0]/540],
[885*img.shape[1]/960, img.shape[0]],
[220*img.shape[1]/960, img.shape[0]]], dtype='f')
# Next, we'll define a desired rectangle plane for the warped image.
# We'll choose 4 points where we want source points to end up
# This time we'll choose our points by eyeballing a rectangle
dst = np.array(
[[290*img.shape[1]/960, 0],
[740*img.shape[1]/960, 0],
[740*img.shape[1]/960, img.shape[0]],
[290*img.shape[1]/960, img.shape[0]]], dtype='f')
#Compute the perspective transform, M, given source and destination points:
M = cv2.getPerspectiveTransform(src, dst)
#Warp an image using the perspective transform, M; using linear interpolation
#Interpolating points is just filling in missing points as it warps an image
# The input image for this function can be a colored image too
warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_LINEAR)
return warped, src, dst
def rev_perspective_transform(img, src, dst):
img_size = (img.shape[1], img.shape[0])
#Compute the perspective transform, M, given source and destination points:
Minv = cv2.getPerspectiveTransform(dst, src)
#Warp an image using the perspective transform, M; using linear interpolation
#Interpolating points is just filling in missing points as it warps an image
# The input image for this function can be a colored image too
un_warped = cv2.warpPerspective(img, Minv, img_size, flags=cv2.INTER_LINEAR)
return un_warped, Minv
def draw_polygon(img1, img2, src, dst):
src = src.astype(int) #Very important step (Pixels cannot be in decimals)
dst = dst.astype(int)
cv2.polylines(img1, [src], True, (255,0,0), 3)
cv2.polylines(img2, [dst], True, (255,0,0), 3)
def histogram_bottom_peaks (warped_img):
# This will detect the bottom point of our lane lines
# Take a histogram of the bottom half of the image
bottom_half = warped_img[((2*warped_img.shape[0])//5):,:,0] # Collecting all pixels in roughly the bottom 3/5 of the image
histogram = np.sum(bottom_half, axis=0) # Summing them along y axis (or along columns)
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = np.int(histogram.shape[0]//2) # histogram is a 1D array, so only the 0th entry of its shape is populated
#print(np.shape(histogram)) #OUTPUT:(1280,)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
return leftx_base, rightx_base
def find_lane_pixels(warped_img):
leftx_base, rightx_base = histogram_bottom_peaks(warped_img)
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin. So width = 2*margin
margin = 90
# Set minimum number of pixels found to recenter window
minpix = 1000 #I've changed this from 50 as given in lectures
# Set height of windows - based on nwindows above and image shape
window_height = np.int(warped_img.shape[0]//nwindows)
# Identify the x and y positions of all nonzero pixels in the image
nonzero = warped_img.nonzero() # gives the pixel coordinates as 2 separate arrays
nonzeroy = np.array(nonzero[0]) # Y coordinates come as a 1D array, arranged in the order of the pixels
nonzerox = np.array(nonzero[1])
# Current positions to be updated later for each window in nwindows
leftx_current = leftx_base # set initially; updated at the end of each iteration of the for loop
rightx_current = rightx_base
# Create empty lists to receive left and right lane pixel indices
left_lane_inds = [] # We'll collect the indices of the lane pixels here.
# Indexing into the 'nonzerox' array with these indices gives the coordinates
right_lane_inds = []
# Step through the windows one by one
for window in range(nwindows):
# Identify window boundaries in x and y (and right and left)
win_y_low = warped_img.shape[0] - (window+1)*window_height
win_y_high = warped_img.shape[0] - window*window_height
"""### TO-DO: Find the four below boundaries of the window ###"""
win_xleft_low = leftx_current - margin
win_xleft_high = leftx_current + margin
win_xright_low = rightx_current - margin
win_xright_high = rightx_current + margin
"""
# Create an output image to draw on and visualize the result
out_img = np.copy(warped_img)
# Draw the windows on the visualization image
cv2.rectangle(out_img,(win_xleft_low,win_y_low),
(win_xleft_high,win_y_high),(0,255,0), 2)
cv2.rectangle(out_img,(win_xright_low,win_y_low),
(win_xright_high,win_y_high),(0,255,0), 2)
"""
### Identify the nonzero pixels in x and y within the window ###
# The full explanation of this is written on a separate page
good_left_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xleft_low) & (nonzerox < win_xleft_high)).nonzero()[0]
good_right_inds = ((nonzeroy >= win_y_low) & (nonzeroy < win_y_high) &
(nonzerox >= win_xright_low) & (nonzerox < win_xright_high)).nonzero()[0]
# Append these indices to the lists
left_lane_inds.append(good_left_inds)
right_lane_inds.append(good_right_inds)
# If you found > minpix pixels, recenter next window on the mean position of the pixels in your current window (re-centre)
if len(good_left_inds) > minpix:
leftx_current = np.int(np.mean(nonzerox[good_left_inds]))
if len(good_right_inds) > minpix:
rightx_current = np.int(np.mean(nonzerox[good_right_inds]))
# Concatenate the arrays of indices (previously was a list of lists of pixels)
try:
left_lane_inds = np.concatenate(left_lane_inds)
right_lane_inds = np.concatenate(right_lane_inds)
except ValueError:
# Avoids an error if the above is not implemented fully
pass
# Extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
"""return leftx, lefty, rightx, righty, out_img""" #agar rectangles bana rahe ho toh out_image rakhna
return leftx, lefty, rightx, righty
def fit_polynomial(warped_img, leftx, lefty, rightx, righty, fit_history, variance_history, rad_curv_history):
"""This will fit a parabola on each lane line, give back lane curve coordinates, radius of curvature """
#Fit a second order polynomial to each using `np.polyfit` ###
left_fit = np.polyfit(lefty,leftx,2)
right_fit = np.polyfit(righty,rightx,2)
# We'll plot x as a function of y
ploty = np.linspace(0, warped_img.shape[0]-1, warped_img.shape[0])
"""Primary coefficient detection: 1st level of curve fitting where frame naturally detects poins and fit's curve"""
#Steps: find a,b,c for our parabola: x=a(y^2)+b(y)+c
"""
1.a) If lane pixels found, fit a curve and get the coefficients for the left and right lane
1.b) If the number of pixels is insufficient and a curve couldn't be fit, use the curve from the previous frame if you have that data
(In case of lack of points in 1st frame, fit an arbitrary parabola with all coeff=1: Expected to improve later on)
2) Using the coefficients we'll fit a parabola. We'll improve it with the following techniques later on:
- Variance of lane pixels from parabola (to account for distance of curve points from the original pixels and attempt to minimise it)
- Shape and position of parabolas in the previous frame,
- Trends in radius of curvature,
- Frame mirroring (fine tuning one lane in the frame wrt to the other)
"""
try:
a1_new= left_fit[0]
b1_new= left_fit[1]
c1_new= left_fit[2]
a2_new= right_fit[0]
b2_new= right_fit[1]
c2_new= right_fit[2]
#Calculate the x-coordinates of the parabola. Here x is the dependendent variable and y is independent
left_fitx = a1_new*ploty**2 + b1_new*ploty + c1_new
right_fitx = a2_new*ploty**2 + b2_new*ploty + c2_new
status = True
except TypeError:
# Avoids an error if `left` and `right_fit` are still none or incorrect
print('The function failed to fit a line!')
if(len(lane.curve_fit)!=5): #If you don't have any values in the history
left_fitx = 1*ploty**2 + 1*ploty #This is a senseless curve. If it was the 1st frame, we need to do something
right_fitx = 1*ploty**2 + 1*ploty
else: #replicate lane from previous frame if you have history
left_fitx = fit_history[0][4][0]*ploty**2 + fit_history[0][4][1]*ploty + fit_history[0][4][2]
right_fitx = fit_history[1][4][0]*ploty**2 + fit_history[1][4][1]*ploty + fit_history[1][4][2]
lane.count=-1 #Restart your search in next frame. At the end of this frame, 1 gets added. Hence we'll get net 0.
status = False
"""VARIANCE: Average distance of lane pixels from our curve which we have fit"""
# Calculating variance for both lanes in the current frame.
# Even if current frame is the 1st frame, we still need the data for the further frames
# Hence it is calculated before the immediate next 'if' statement
left_sum = 0
for index in range(len(leftx)):
left_sum+= abs(leftx[index]-(a1_new*lefty[index]**2 + b1_new*lefty[index] + c1_new))
left_variance_new=left_sum/len(leftx)
right_sum=0
for index in range(len(rightx)):
right_sum+= abs(rightx[index]-(a2_new*righty[index]**2 + b2_new*righty[index] + c2_new))
right_variance_new=right_sum/len(rightx)
#If you have history for variance and curve coefficients
if((len(lane.curve_fit)==5)&(len(lane.variance)==5)):
left_variance_old = sum([(0.2*((5-index)**3)*element) for index,element in enumerate(variance_history[0])])/sum([0.2*((5-index)**3) for index in range(0,5)])
right_variance_old = sum([(0.2*((5-index)**3)*element) for index,element in enumerate(variance_history[1])])/sum([0.2*((5-index)**3) for index in range(0,5)])
# Finding weighted average for the previous elements data within fit_history
a1_old= sum([(0.2*(index+1)*element[0]) for index,element in enumerate(fit_history[0])])/sum([0.2*(index+1) for index in range(0,5)])
b1_old= sum([(0.2*(index+1)*element[1]) for index,element in enumerate(fit_history[0])])/sum([0.2*(index+1) for index in range(0,5)])
c1_old= sum([(0.2*(index+1)*element[2]) for index,element in enumerate(fit_history[0])])/sum([0.2*(index+1) for index in range(0,5)])
a2_old= sum([(0.2*(index+1)*element[0]) for index,element in enumerate(fit_history[1])])/sum([0.2*(index+1) for index in range(0,5)])
b2_old= sum([(0.2*(index+1)*element[1]) for index,element in enumerate(fit_history[1])])/sum([0.2*(index+1) for index in range(0,5)])
c2_old= sum([(0.2*(index+1)*element[2]) for index,element in enumerate(fit_history[1])])/sum([0.2*(index+1) for index in range(0,5)])
"""
a1_new = (a1_new*((left_variance_old)**2) + a1_old*((left_variance_new)**2))/(((left_variance_old)**2) + ((left_variance_new)**2))
b1_new = (b1_new*((left_variance_old)**2) + b1_old*((left_variance_new)**2))/(((left_variance_old)**2) + ((left_variance_new)**2))
c1_new = (c1_new*((left_variance_old)**2) + c1_old*((left_variance_new)**2))/(((left_variance_old)**2) + ((left_variance_new))**2)
a2_new = (a2_new*((right_variance_old)**2) + a2_old*((right_variance_new)**2))/(((right_variance_old)**2) + ((right_variance_new))**2)
b2_new = (b2_new*((right_variance_old)**2) + b2_old*((right_variance_new)**2))/(((right_variance_old)**2) + ((right_variance_new))**2)
c2_new = (c2_new*((right_variance_old)**2) + c2_old*((right_variance_new)**2))/(((right_variance_old)**2) + ((right_variance_new))**2)
"""
### Tracking the difference in curve fit coefficients over the frame
# from last to last frame -> last frame
del_a1_old = lane.coeff_diff[0][0]
del_b1_old = lane.coeff_diff[0][1]
del_c1_old = lane.coeff_diff[0][2]
del_a2_old = lane.coeff_diff[1][0]
del_b2_old = lane.coeff_diff[1][1]
del_c2_old = lane.coeff_diff[1][2]
# from last frame -> current frame
del_a1 = abs(a1_new - fit_history[0][4][0])
del_b1 = abs(b1_new - fit_history[0][4][1])
del_c1 = abs(c1_new - fit_history[0][4][2])
del_a2 = abs(a2_new - fit_history[1][4][0])
del_b2 = abs(b2_new - fit_history[1][4][1])
del_c2 = abs(c2_new - fit_history[1][4][2])
# Storing the new values so that the values can be used in the next frame
# As we are overwriting, the old values were called earlier & then the new values were found
lane.coeff_diff = [[del_a1, del_b1, del_c1], [del_a2, del_b2, del_c2]]
# now compute the delta for each coefficient and apply the weighting formula to every element
"""
a1_new = (a1_new*(del_a1_old) + a1_old*(del_a1))/((del_a1_old) + (del_a1))
b1_new = (b1_new*(del_b1_old) + b1_old*(del_b1))/((del_b1_old) + (del_b1))
c1_new = (c1_new*(del_c1_old) + c1_old*(del_c1))/((del_c1_old) + (del_c1))
a2_new = (a2_new*(del_a2_old) + a2_old*(del_a2))/((del_a2_old) + (del_a2))
b2_new = (b2_new*(del_b2_old) + b2_old*(del_b2))/((del_b2_old) + (del_b2))
c2_new = (c2_new*(del_c2_old) + c2_old*(del_c2))/((del_c2_old) + (del_c2))
"""
y_eval = np.max(ploty)
# Calculation of R_curve (radius of curvature)
left_curverad = (((1 + (2*a1_new*y_eval + b1_new)**2)**1.5) / (2*a1_new))
right_curverad = (((1 + (2*a2_new*y_eval + b2_new)**2)**1.5) / (2*a2_new))
if(len(lane.rad_curv)==5):
"""How to check series is decreasing or increasing"""
slope_avg=0
for i in range(0,4):
slope_avg += ((slope_avg*i) + (rad_curv_history[0][i+1] - rad_curv_history[0][i]))/(i+1)
# If this is not the point of inflection, and still the radius of curvature changes sign, discard the curve
# Left
if (((rad_curv_history[0][4]>0) & (left_curverad<0) & (slope_avg<=0)) | ((rad_curv_history[0][4]<0) & (left_curverad>0) & (slope_avg>=0))):
a1_new = fit_history[0][4][0]
b1_new = fit_history[0][4][1]
c1_new = fit_history[0][4][2]
# Right
if (((rad_curv_history[1][4]>0) & (right_curverad<0) & (slope_avg<=0)) | ((rad_curv_history[1][4]<0) & (right_curverad>0) & (slope_avg>=0))):
a2_new = fit_history[1][4][0]
b2_new = fit_history[1][4][1]
c2_new = fit_history[1][4][2]
"""FRAME MIRRORING: Fine tuning one lane wrt to the other same as they'll have similar curvature"""
#Steps:
"""
1) Weighted average of the coefficients related to curve shape (a,b) to make both parabolas a bit similar
2) Adjusting the 'c' coefficient using the lane centre of previous frame and lane width acc to current frame
"""
# We'll use the lane centre from the previous frame to fine tune c of the parabola. The first frame won't have a history, so
# Just for the 1st frame, we'll define it according to itself and use. Won't make any impact but will set a base for the next frames
if (lane.count==0):
lane.lane_bottom_centre = (((a2_new*(warped_img.shape[0]-1)**2 + b2_new*(warped_img.shape[0]-1) + c2_new) + (a1_new*(warped_img.shape[0]-1)**2 + b1_new*(warped_img.shape[0]-1) + c1_new))/2)
# We'll find lane width according to the latest curve coefficients till now
lane.lane_width = (((lane.lane_width*lane.count)+(a2_new*(warped_img.shape[0]-1)**2 + b2_new*(warped_img.shape[0]-1) + c2_new) - (a1_new*(warped_img.shape[0]-1)**2 + b1_new*(warped_img.shape[0]-1) + c1_new))/(lane.count+1))
a1 = 0.8*a1_new + 0.2*a2_new
b1 = 0.8*b1_new + 0.2*b2_new
a2 = 0.2*a1_new + 0.8*a2_new
b2 = 0.2*b1_new + 0.8*b2_new
#c1 = 0.8*c1_new + 0.2*c2_new
#c2 = 0.2*c1_new + 0.8*c2_new
# Taking the lane centre from the previous frame and finding "c" such that both lanes are equidistant from it.
c1_mirror = ((lane.lane_bottom_centre - (lane.lane_width/2))-(a1*(warped_img.shape[0]-1)**2 + b1*(warped_img.shape[0]-1)))
c2_mirror = ((lane.lane_bottom_centre + (lane.lane_width/2))-(a2*(warped_img.shape[0]-1)**2 + b2*(warped_img.shape[0]-1)))
c1= 0.7*c1_new + 0.3*c1_mirror
c2 = 0.7*c2_new + 0.3*c2_mirror
# Now we'll find the lane centre of this frame and overwrite the global variable so that the next frame can use this value
lane.lane_bottom_centre = (((a2*(warped_img.shape[0]-1)**2 + b2*(warped_img.shape[0]-1) + c2) + (a1*(warped_img.shape[0]-1)**2 + b1*(warped_img.shape[0]-1) + c1))/2)
#print("lane.lane_width",lane.lane_width)
#print("lane.lane_bottom_centre",lane.lane_bottom_centre)
left_curverad = (((1 + (2*a1*y_eval + b1)**2)**1.5) / (2*a1))
right_curverad = (((1 + (2*a2*y_eval + b2)**2)**1.5) / (2*a2))
left_fitx = a1*ploty**2 + b1*ploty + c1
right_fitx = a2*ploty**2 + b2*ploty + c2
return [[a1,b1,c1], [a2,b2,c2]], left_fitx, right_fitx, status, [left_variance_new, right_variance_new], [left_curverad,right_curverad], ploty
# out_img here has boxes drawn and the pixels are colored
#return [[a1_new,b1_new,c1_new], [a2_new,b2_new,c2_new]], left_fitx, right_fitx, status, [left_variance_new, right_variance_new], ploty
def color_pixels_and_curve(out_img, leftx, lefty, rightx, righty, left_fitx, right_fitx):
ploty = np.linspace(0, warped_img.shape[0]-1, warped_img.shape[0])
## Visualization ##
# Colors in the left and right lane regions
out_img[lefty, leftx] = [255, 0, 0]
out_img[righty, rightx] = [0, 0, 255]
# Converting the coordinates of our line into integer values as index of the image can't take decimals
left_fitx_int = left_fitx.astype(np.int32)
right_fitx_int = right_fitx.astype(np.int32)
ploty_int = ploty.astype(np.int32)
# Coloring the curve as yellow
out_img[ploty_int,left_fitx_int] = [255,255,0]
out_img[ploty_int,right_fitx_int] = [255,255,0]
# To thicken the curve, drawing more yellow lines
out_img[ploty_int,left_fitx_int+1] = [255,255,0]
out_img[ploty_int,right_fitx_int+1] = [255,255,0]
out_img[ploty_int,left_fitx_int-1] = [255,255,0]
out_img[ploty_int,right_fitx_int-1] = [255,255,0]
out_img[ploty_int,left_fitx_int+2] = [255,255,0]
out_img[ploty_int,right_fitx_int+2] = [255,255,0]
out_img[ploty_int,left_fitx_int-2] = [255,255,0]
out_img[ploty_int,right_fitx_int-2] = [255,255,0]
def search_around_poly(warped_img, left_fit, right_fit):
# HYPERPARAMETER
# Choosing the width of the margin around the previous polynomial to search
margin = 100
# Grab activated pixels
nonzero = warped_img.nonzero()
nonzeroy = np.array(nonzero[0])
nonzerox = np.array(nonzero[1])
### Setting the area of search based on activated x-values ###
### within the +/- margin of our polynomial function ###
left_lane_inds = ((nonzerox > (left_fit[0]*(nonzeroy**2) + left_fit[1]*nonzeroy +
left_fit[2] - margin)) & (nonzerox < (left_fit[0]*(nonzeroy**2) +
left_fit[1]*nonzeroy + left_fit[2] + margin)))
right_lane_inds = ((nonzerox > (right_fit[0]*(nonzeroy**2) + right_fit[1]*nonzeroy +
right_fit[2] - margin)) & (nonzerox < (right_fit[0]*(nonzeroy**2) +
right_fit[1]*nonzeroy + right_fit[2] + margin)))
# Again, extract left and right line pixel positions
leftx = nonzerox[left_lane_inds]
lefty = nonzeroy[left_lane_inds]
rightx = nonzerox[right_lane_inds]
righty = nonzeroy[right_lane_inds]
return leftx, lefty, rightx, righty
def modify_array(array, new_value):
if len(array)!=5:
for i in range(0,5):
array.append(new_value)
else:
dump_var=array[0]
array[0]=array[1]
array[1]=array[2]
array[2]=array[3]
array[3]=array[4]
array[4]=new_value
return array
def truncate(number, digits) -> float:
stepper = 10.0 ** digits
return math.trunc(stepper * number) / stepper
"""Main code begins here:"""
undist_img = cal_undistort(frame)
thresh_img = thresh_img(undist_img) # Note: Output here is not a binary image. It has been stacked already within the function
warped_img, src, dst = perspective_transform(thresh_img)
#draw_polygon(frame, warped_img, src, dst) #the first image is the original image that you import into the system
#print("starting count",lane.count)
# Making the curve coefficient, variance, radius of curvature history ready for our new frame.
left_fit_previous = [i[0] for i in lane.curve_fit]
right_fit_previous = [i[1] for i in lane.curve_fit]
fit_history=[left_fit_previous, right_fit_previous]
left_variance_previous = [i[0] for i in lane.variance]
right_variance_previous = [i[1] for i in lane.variance]
variance_history=[left_variance_previous, right_variance_previous]
left_rad_curv_prev = [i[0] for i in lane.rad_curv]
right_rad_curv_prev = [i[1] for i in lane.rad_curv]
rad_curv_history = [left_rad_curv_prev, right_rad_curv_prev]
#print(rad_curv_history)
# These variables related to history could have been defined in the condition lane.count>0 below
# Reason for defining above: We will want to get back to finding lane pixels from scratch
# if our frame is a bad frame or the lane pixels deviate too much from the previous frame.
# In that case, we set lane.count=0 and start searching from scratch
# but we DO have history data at that point which will be used in the fit_polynomial() function
if (lane.count == 0):
leftx, lefty, rightx, righty = find_lane_pixels(warped_img) # Find our lane pixels first
elif (lane.count > 0):
leftx, lefty, rightx, righty = search_around_poly(warped_img, left_fit_previous[4], right_fit_previous[4])
curve_fit_new, left_fitx, right_fitx, status, variance_new, rad_curv_new,ploty = fit_polynomial(warped_img, leftx, lefty, rightx, righty, fit_history, variance_history,rad_curv_history)
# Define conversions in x and y from pixels space to meters
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/650 # meters per pixel in x dimension
# Finding the fit for the curve whose constituent points (x and y) have been calibrated to real-world units
left_fit_cr = np.polyfit(ploty*ym_per_pix, left_fitx*xm_per_pix, 2)
right_fit_cr = np.polyfit(ploty*ym_per_pix, right_fitx*xm_per_pix, 2)
# Finding the correct radius of curvature in the real-world frame (in the metric system instead of pixel space)
# We'll choose the maximum y-value, corresponding to the bottom of the image (this is where we find roc)
y_eval = np.max(ploty)
left_curverad = (((1 + (2*left_fit_cr[0]*y_eval + left_fit_cr[1])**2)**1.5) / np.absolute(2*left_fit_cr[0]))
right_curverad = (((1 + (2*right_fit_cr[0]*y_eval + right_fit_cr[1])**2)**1.5) / np.absolute(2*right_fit_cr[0]))
avg_rad_curv = truncate(((left_curverad + right_curverad)/2),3)
offset = truncate((((lane.lane_bottom_centre - frame.shape[1]/2))*xm_per_pix),3)
#print("avg_rad_curv",avg_rad_curv)
#print("offset",offset)
lane.rad_curv = modify_array(lane.rad_curv, rad_curv_new)
lane.detected = status
lane.curve_fit = modify_array(lane.curve_fit, curve_fit_new)
lane.variance = modify_array(lane.variance, variance_new)
#print(lane.variance)
# Now we'll color the lane pixels and plot the identified curve over the image
#color_pixels_and_curve(warped_img, leftx, lefty, rightx, righty, left_fitx, right_fitx)
unwarped_img, Minv = rev_perspective_transform(warped_img, src, dst)
# Create an image to draw the lines on
color_warp = np.zeros_like(warped_img).astype(np.uint8)
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv, (frame.shape[1], frame.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(undist_img, 1, newwarp, 0.3, 0)
text1 = "Curvature radius: "+str(avg_rad_curv)+"m"
text2 = "Offset: "+str(offset)+"m"
cv2.putText(result, text1, (40, 60), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), thickness=2)
cv2.putText(result, text2, (40, 110), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), thickness=2)
lane.count = lane.count+1
#return warped_img
#return color_warp
return result
#return unwarped_img
#return undist_img
#return thresh_img
#return warped_img
"""# Remember to un-comment the color_pixels_and_curve() call above if you want the lane pixels and fitted curve drawn"""
###Output
_____no_output_____
###Markdown
Class has been created below
###Code
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
#Let's count the number of consecutive frames
self.count = 0
# was the line detected in the last iteration?
self.detected = False
#polynomial coefficients for the most recent fit
self.curve_fit = []
# Tracking variance for the right lane
self.variance = []
#difference in fit coefficients between last and new fits. Just store the difference in coefficients for the last frame
self.coeff_diff = [[0,0,0],[0,0,0]]
#Lane width measured at the start of reset
self.lane_width = 0
#Let's track the midpoint of the previous frame
self.lane_bottom_centre = 0
#radius of curvature of the line in some units
self.rad_curv = []
lane=Line()
import glob
test_images = glob.glob('test_images/*.jpg')
# Step through the list of test images and run the lane-finding pipeline on each
for idx, fname in enumerate(test_images):
img = cv2.imread(fname)
#print ("success"+str(idx))
write_name = 'output_files/img_results/undist_result '+str(idx+1)+'.jpg'
color_corrected_img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
lane.count = 0 # Necessary otherwise the images will start fixing the curve according to the history
output = process_image(color_corrected_img)
output_mod = cv2.cvtColor(output, cv2.COLOR_RGB2BGR)
cv2.imwrite(write_name,output_mod)
cv2.imshow(write_name, output_mod)
cv2.waitKey(500)
cv2.destroyAllWindows()
frame1= mpimg.imread("test_images/test (4).jpg")
"""
frame2= mpimg.imread("my_test_images/Highway_snaps/image (2).jpg")
frame3= mpimg.imread("my_test_images/Highway_snaps/image (3).jpg")
frame4= mpimg.imread("my_test_images/Highway_snaps/image (4).jpg")
frame5= mpimg.imread("my_test_images/Highway_snaps/image (5).jpg")
frame6= mpimg.imread("my_test_images/Highway_snaps/image (6).jpg")
frame7= mpimg.imread("my_test_images/Highway_snaps/image (7).jpg")
frame8= mpimg.imread("my_test_images/Highway_snaps/image (8).jpg")
frame9= mpimg.imread("my_test_images/Highway_snaps/image (9).jpg")
%matplotlib notebook
(process_image(frame1))
(process_image(frame2))
(process_image(frame3))
(process_image(frame4))
(process_image(frame5))
(process_image(frame6))
(process_image(frame7))
(process_image(frame8))
"""
plt.imshow(process_image(frame1))
###Output
_____no_output_____
###Markdown
Video test
###Code
# Define a class to receive the characteristics of each line detection
class Line():
def __init__(self):
#Let's count the number of consecutive frames
self.count = 0
# was the line detected in the last iteration?
self.detected = False
#polynomial coefficients for the most recent fit
self.curve_fit = []
# Tracking variance for the right lane
self.variance = []
#difference in fit coefficients between last and new fits. Just store the difference in coefficients for the last frame
self.coeff_diff = [[0,0,0],[0,0,0]]
#Lane width measured at the start of reset
self.lane_width = 0
#Let's track the midpoint of the previous frame
self.lane_bottom_centre = 0
#radius of curvature of the line in some units
self.rad_curv = []
lane=Line()
project_output = 'Project_Result_roc_sign_changed_no_other_filters.mp4'
clip1 = VideoFileClip("test_videos/project_video.mp4").subclip(19,26)
project_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!
%time project_clip.write_videofile(project_output, audio=False)
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(project_output))
###Output
_____no_output_____ |
notebooks/iterate-on-data.ipynb | ###Markdown
Bulk Labelling as a Notebook
This notebook contains a convenient pattern to cluster and label new text data. The end-goal is to discover intents that might be used in a virtual assistant setting. This can be especially useful in an early stage and is part of the "iterate on your data"-mindset. Note that this tactic won't generate "gold" labels but it should generate something useful to help you get started.
Dependencies
You'll need to install a few things to get started.
- [whatlies](https://rasahq.github.io/whatlies/)
- [human-learn](https://koaning.github.io/human-learn/)
- [ipywidgets](https://ipywidgets.readthedocs.io/en/stable/)
You can install all tools by running this line in an empty cell;
```python
%pip install "whatlies[all]" "human-learn" "ipywidgets"
```
Note that in order for the widgets to work you'll also need to run these commands *before* running jupyter.
```bash
jupyter nbextension enable --py widgetsnbextension
jupyter labextension install @jupyter-widgets/jupyterlab-manager
```
Next, you *should* run this notebook on port 8888. If you can't, be sure to read [this comment]() and set a flag;
```
export BOKEH_ALLOW_WS_ORIGIN=localhost:8889
python -m jupyter lab --port 8889 --allow-websocket-origin=localhost:8889
```
We use `whatlies` to fetch embeddings and to handle the dimensionality reduction. We use `human-learn` for the interactive labelling interface. Feel free to check the documentation of both packages to learn more.
Let's go
To get started we'll first import a few tools.
###Code
import pathlib
import numpy as np
import ipywidgets as widgets
import pandas as pd
from whatlies import EmbeddingSet
from whatlies.transformers import Pca, Umap
from hulearn.preprocessing import InteractivePreprocessor
from hulearn.experimental.interactive import InteractiveCharts
from whatlies.language import UniversalSentenceLanguage, LaBSELanguage
df = pd.read_csv("twcs.csv")
pattern = "(https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|www\.[a-zA-Z0-9][a-zA-Z0-9-]+[a-zA-Z0-9]\.[^\s]{2,}|https?:\/\/(?:www\.|(?!www))[a-zA-Z0-9]+\.[^\s]{2,}|www\.[a-zA-Z0-9]+\.[^\s]{2,})"
import pandas as pd
df_orig = (pd.read_csv("twcs.csv")
.loc[lambda d: d['text'].str.contains("XboxSupport")][['text']]
.assign(text=lambda d: d['text'].str.replace('@[A-Za-z0-9]+', ''))
.assign(text=lambda d: d['text'].str.replace(pattern, ''))
.drop_duplicates())
df_orig.to_csv("xbox-support.csv", index=False)
texts = list(df_orig['text'][:1000])
###Output
_____no_output_____
###Markdown
Next, we're going to pick the language model of interest.
###Code
# The language agnostic bert model works is a good starting option, also for Non-English use-cases.
# lang = LaBSELanguage()
# The universal sentence language works well if you're dealing with English sentences.
lang = UniversalSentenceLanguage()
%%time
# This is where we prepare all of the state
embset = lang[texts]
df = embset.transform(Umap(2)).to_dataframe().reset_index()
df.columns = ['text', 'd1', 'd2']
df['label'] = ''
# Here's the global state object
state = {}
state['df'] = df.copy()
state['chart'] = InteractiveCharts(df.loc[lambda d: d['label'] == ''], labels=['group'])
###Output
_____no_output_____
###Markdown
Showing Clusters
The idea is that we're embedding text embeddings in a two dimensional space. For more info on the details watch [the first tutorial](https://www.youtube.com/watch?v=YsMoGd7sYMQ&t=1s&ab_channel=Rasa). We'll be using Vincent's infamous [human-learn library](https://koaning.github.io/human-learn/guide/drawing-features/custom-features.html) to draw selections of 2D embeddings. Drawing can be a bit tricky though, so pay attention.
0. To start drawing, make sure the red ball icon is selected.
1. You'll want to double-click to start drawing.
2. You can then click points together to form a polygon.
3. Next you need to double-click to stop drawing.
This allows you to draw polygons that can be used in the code below to fetch the examples that you're interested in. Once you've drawn a polygon click "show examples" to see examples of your selections and use the textbox and "add label" button to add labels.
###Code
pd.set_option('display.max_colwidth', -1)
def show_draw_chart(b=None):
with out_chart:
out_chart.clear_output()
state['chart'].dataf = state['df'].loc[lambda d: d['label'] == '']
state['chart'].charts = []
state['chart'].add_chart(x='d1', y='d2', legend=False)
def show_examples(b=None):
with out_table:
out_table.clear_output()
tfm = InteractivePreprocessor(json_desc=state['chart'].data())
subset = state['df'].pipe(tfm.pandas_pipe).loc[lambda d: d['group'] != 0]
display(subset.sample(min(15, subset.shape[0]))[['text']])
def assign_label(b=None):
tfm = InteractivePreprocessor(json_desc=state['chart'].data())
idx = state['df'].pipe(tfm.pandas_pipe).loc[lambda d: d['group'] != 0].index
state['df'].iloc[idx, 3] = label_name.value
with out_counter:
out_counter.clear_output()
n_lab = state['df'].loc[lambda d: d['label'] != ''].shape[0]
print(f"{n_lab}/{state['df'].shape[0]} labelled")
def retrain_state(b=None):
keep = list(state['df'].loc[lambda d: d['label'] == '']['text'])
umap = Umap(2)
new_df = EmbeddingSet(*[e for e in embset if e.name in keep]).transform(umap).to_dataframe().reset_index()
new_df.columns = ['text', 'd1', 'd2']
new_df['label'] = ''
state['df'] = pd.concat([new_df, state['df'].loc[lambda d: d['label'] != '']])
show_draw_chart(b)
out_table = widgets.Output()
out_chart = widgets.Output()
out_counter = widgets.Output()
label_name = widgets.Text("label name")
btn_examples = widgets.Button(
description='Show Examples',
icon='eye'
)
btn_label = widgets.Button(
description='Add label',
icon='check'
)
btn_retrain = widgets.Button(
description='Retrain',
icon='coffee'
)
btn_redraw = widgets.Button(
description='Redraw',
icon='check'
)
btn_examples.on_click(show_examples)
btn_label.on_click(assign_label)
btn_redraw.on_click(show_draw_chart)
btn_retrain.on_click(retrain_state)
show_draw_chart()
display(widgets.VBox([widgets.HBox([btn_retrain, btn_examples, btn_redraw]),
widgets.HBox([out_chart, out_table])]),
label_name,
widgets.HBox([btn_label, out_counter]))
intent_words = {
"gratitude": ['thank'],
"technical_issue": ['overheat', 'fix', 'not switching off', 'noise'],
"anger": ['shit', 'suck', 'fuck', 'fvck', 'stupid'],
"return": ['return'],
"game_related": ['madden', 'fifa', 'kotor', 'creed', 'gears', 'scrabble',
'dlc', 'war', 'cod', 'halo', 'minecraft', 'wolfenstein',
'farcry', 'tomb raider', 'witcher', "dragon age", "mass effect",
"me1", "me2", "me3", "dragon age"],
"help": ["suggestions", "help"],
"digital-purchase": ["dlc", "code"]
}
def assign_simple_intents(dataf, **kwargs):
df_internal = dataf.assign(text=lambda d: d['text'].str.lower())
for intent, words in kwargs.items():
df_internal[intent] = False
for w in words:
df_internal[intent] = np.where(df_internal['text'].str.contains(w), True, df_internal[intent])
return df_internal
def keep_only_one_label(dataf):
return (dataf.loc[lambda d: d.drop(columns=['text']).sum(axis=1) == 1]
.melt(id_vars="text", value_vars=list(intent_words.keys()), var_name='label')
.loc[lambda d: d['value'] == True]
.drop(columns=['value']))
ml_df = (df_orig
.pipe(assign_simple_intents, **intent_words)
.pipe(keep_only_one_label))
df_orig.shape, ml_df.shape
import matplotlib.pylab as plt
from sklearn.pipeline import Pipeline, FeatureUnion
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
pipe = Pipeline([
('feats', FeatureUnion([
('cv1', CountVectorizer()),
])),
('mod', LogisticRegression(max_iter=1000))
])
ml_text = list(ml_df['text'])
probas = pipe.fit(ml_text, ml_df['label']).predict_proba(df_orig['text']).max(axis=1)
plt.hist(probas, 30);
###Output
_____no_output_____
###Markdown
Let's look at a couple of these examples. Can we find more words?
###Code
gratitude_examples = (df_orig
.loc[probas > 0.9]
.assign(pred = lambda d: pipe.predict(d['text']))
.loc[lambda d: d['pred'] == 'gratitude'])
gratitude_examples
###Output
_____no_output_____
###Markdown
For "gratitude" I think I've found some more keywords: - brilliant- great job - better- for sure
###Code
tech_examples = (df_orig
.loc[probas > 0.9]
.assign(pred = lambda d: pipe.predict(d['text']))
.loc[lambda d: d['pred'] == 'technical_issue'])
tech_examples
###Output
_____no_output_____
###Markdown
Same thing with "tech examples". More good words; - error code- patch - data corrupted - connection
###Code
purchase_examples = (df_orig
.loc[probas > 0.9]
.assign(pred = lambda d: pipe.predict(d['text']))
.loc[lambda d: d['pred'] == 'digital-purchase'])
purchase_examples
###Output
_____no_output_____
###Markdown
So what do we do now? Well ... we repeat until we think we've got a good recall.
###Code
intent_words = {
"gratitude": ['thank', 'brilliant', 'great job', 'better', 'for sure'],
"technical_issue": ['overheat', 'fix', 'not switching off', 'noise', 'error', 'patch', 'connection', 'data corrupted'],
"anger": ['shit', 'suck', 'fuck', 'fvck', 'stupid'], # 'fvck', 'stupid' were added
"return": ['return', 'policy', 'refund'], # 'policy' and 'refund' added
"game_related": ['madden', 'fifa', 'kotor', 'creed', 'gears', 'scrabble',
'dlc', 'war', 'cod', 'halo', 'minecraft', 'wolfenstein',
'farcry', 'tomb raider', 'witcher', "dragon age", "mass effect",
"me1", "me2", "me3", "dragon age"],
"help": ["suggestions", "help"],
"digital-purchase": ["dlc", "code"]
}
###Output
_____no_output_____
###Markdown
Note that for the video games I've been using s2v instead. Let's try clustering? It's no surprise that it doesn't work very well.
###Code
import hdbscan
import umap
mod = hdbscan.HDBSCAN(min_cluster_size=5)
%%time
X = lang.fit(texts).transform(texts)
mod.fit(umap.UMAP(n_components=10).fit(X).transform(X))
pd.Series(mod.labels_).value_counts()
mod.condensed_tree_.plot()
for t in np.array(texts)[mod.labels_ == 23]:
print(t)
###Output
Maybe push them into fixing broken achievements on Xbox One?
will be Xbox Play Anywhere title?
I agree is awesome! #XboxHelp
hey I've got a question about the Xbox One X Scorpio edition.
on the Xbox one x, and use the option HDR on my games ?
That's the Xbox one still use QR code's 🤔
Where is my patch for Xbox one
Confusing thing is that the Xbox s controller has no issues
why ain't Netflix or Hulu working on y Xbox
Xbox won't let me leave the party
I have that issue with my Xbox one original
Is there anyone who can send me a new Xbox One S?
Xbox Overheated Twice In The Past 4 Minutes.
I need help regarding in game keyboard freezing on Xbox one
I think something may be wrong with my Xbox maybe??
Is Xbox Live down?
Does anyone elses #XboxOneX sound like this?
you on xbox as well?
It works fine for me on Xbox One games but 360 games won't load my profile?
Also is it normal if my Xbox 360 E has a Xbxo 360 S hard drive?
100k happy thanksgivings Xbox
hiya any idea why my xbox keeps turning it self on
Daltaroo and Xbox one
Nope. To the internal hard drive on he Xbox
my Xbox one is bricked. 2 days of trying
lmao yeah if my Xbox worked gg
Mkl tL Xbox one
is the Xbox One S capable of 60 fps?
The Xbox 1 s
Is it the same as the Xb1 S?
#xbox who are you?
I just got a Xbox One S! Any tips or something?
Xbox One X instant on bug?
my Xbox one s isn't allowing 4K
can you please add ace combat 6 and hawx to the reverse compatibility list
will a normal xbox one look better on a 4k tv or only the xbox one x
|
Basic_Matrix_factorization.ipynb | ###Markdown
Basic Matrix Factorization
###Code
import torch
class MatrixFactorization(torch.nn.Module):
def __init__(self, n_users, n_items, n_factors=20):
super().__init__()
self.user_factors = torch.nn.Embedding(n_users, n_factors, sparse=True)
self.item_factors = torch.nn.Embedding(n_items, n_factors, sparse=True)
def forward(self, user, item):
return (self.user_factors(user)*self.item_factors(item)).sum(1)
def predict(self, user, item):
return self.forward(user, item)
import pandas as pd
query = """
SELECT *
FROM EVIC.ratings
"""
ratings = pd.read_gbq(query, project_id="spike-sandbox", use_bqstorage_api=True)
ratings_sample
sample_users = 300
ratings_sample = ratings[ratings.user_id.isin(ratings.user_id.unique()[0:sample_users])].copy()
ratings_matrix = pd.pivot_table(ratings_sample, index='user_id', columns='movie_id', values='rating')
ratings_matrix.fillna(0., inplace=True)
ratings_matrix = ratings_matrix.values
torch_ratings_matrix = torch.from_numpy(ratings_matrix)
n_users = ratings_matrix.shape[0]
n_items = ratings_matrix.shape[1]
items = range(0, n_items)
users = range(0, n_users)
n_users, n_items
import random
## Training loop
# Movielens dataset with ratings scaled between [0, 1] to help with convergence; on the test set, error (RMSE) of 0.66
import itertools
model = MatrixFactorization(n_users, n_items, n_factors=20)
loss_fn = torch.nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-6)
losses = []
epochs = range(0, 1)
combinations = list(itertools.product(users, items))
for epoch in epochs:
for j, (user, item) in enumerate(combinations):
# get user, item and rating data
rating = torch.FloatTensor([torch_ratings_matrix[user, item]])
user = torch.LongTensor([int(user)])
item = torch.LongTensor([int(item)])
# predict
prediction = model(user, item)
loss = loss_fn(prediction, rating)
losses += [loss.cpu().detach().tolist()]
# reset accumulated gradients, then backpropagate
optimizer.zero_grad()
loss.backward()
# update weights
optimizer.step()
if j%100_000 == 0:
print(f"{j} out of {len(combinations)}")
epoch += 1
from numba import njit
import numpy as np
import time
#Preallocate with expected number of larger than 0 dot products
larger_than_0_prods = 100_000_000
def filtered(user_embs, product_embs, min_cross_prod=0.):
"""
Calculates dot product between user_embs and product_embs
but keeps only the ones with dot product > min_cross_prod
"""
filtered_m = np.empty(larger_than_0_prods, dtype=np.float32)
filtered_user_prod_ids = np.empty((larger_than_0_prods, 2), dtype=np.int32)
larger_than_0_count = 0
for user_id in range(0, len(user_embs)):
for prod_id in range(0, len(product_embs)):
dot_prod = np.dot(product_embs[prod_id], user_embs[user_id])
if dot_prod > min_cross_prod:
filtered_m[larger_than_0_count] = dot_prod
filtered_user_prod_ids[larger_than_0_count][0] = user_id
filtered_user_prod_ids[larger_than_0_count][1] = prod_id
larger_than_0_count += 1
return filtered_m, filtered_user_prod_ids
from numba import jit
@jit(nopython=True, nogil=True)
def njit_filtered(user_embs, product_embs, min_cross_prod=5.5):
"""
Calculates dot product between user_embs and product_embs
but keeps only the ones with dot producto > min_cross_prod
"""
filtered_m = np.empty(larger_than_0_prods, dtype=np.float64)
filtered_user_prod_ids = np.empty((larger_than_0_prods, 2), dtype=np.int64)
larger_than_0_count = 0
for user_id in range(0, len(user_embs)):
for prod_id in range(0, len(product_embs)):
dot_prod = np.dot(product_embs[prod_id], user_embs[user_id])
if dot_prod > min_cross_prod:
filtered_m[larger_than_0_count] = dot_prod
filtered_user_prod_ids[larger_than_0_count][0] = user_id
filtered_user_prod_ids[larger_than_0_count][1] = prod_id
larger_than_0_count += 1
return filtered_m, filtered_user_prod_ids
n_users = 1_000_000
n_prods = 10_000
n_factors = 30
def convert_to_32(x):
return np.array(x, dtype=np.float32)
user_embs = np.random.randn(n_users, n_factors)
product_embs = np.random.randn(n_prods, n_factors)
start = time.time()
filtered_m, filtered_user_prod_ids = filtered(user_embs,
product_embs, min_cross_prod=5.5)
time1 = (time.time() - start)/60
print(f"{n_users * n_prods} combinations done in {time1} mins")
start = time.time()
filtered_m, filtered_user_prod_ids = njit_filtered(user_embs, product_embs,
min_cross_prod=35.5)
time2 = (time.time() - start)/60
print(f"{n_users * n_prods} combinations done in {time2} mins")
print(f"speedup: {time1/time2}")
%%cython
import numpy as np
cimport numpy as np
DTYPE = np.int
ctypedef np.int_t DTYPE_t
ctypedef np.float_t DTYPEF_t
def f(np.ndarray[DTYPEF_t, ndim=1] user_embs,
product_embs, float min_cross_prod):
"""
Cython draft of the same filtered dot-product routine (incomplete).
"""
cdef np.ndarray[DTYPEF_t, ndim=2] filtered_m = np.empty(non_zero_combinations, dtype=np.float32)
filtered_user_prod_ids = np.empty((non_zero_combinations, 2), dtype=np.int32)
non_zero_count = 0
for user_id in range(0, len(user_embs)):
for prod_id in range(0, len(product_embs)):
dot_prod = np.dot(product_embs[prod_id], user_embs[user_id])
if dot_prod > min_cross_prod:
filtered_m[non_zero_count] = dot_prod
filtered_user_prod_ids[non_zero_count] = [user_id, prod_id]
non_zero_count += 1
return filtered_m, filtered_user_prod_ids
len(product_embs)
import time
start = time.time()
filtered_m, filtered_user_prod_ids = f(user_embs, product_embs, min_cross_prod=0.)
print(f"{n_users * n_prods} combinations done in {(time.time() - start)/60} mins")
2+3
2+2
for epoch in range(epochs):
epoch_loss = train_one_epoch( model, training_data_generator, loss_fn, optimizer, epoch, device)
len(avers)
%matplotlib inline
import matplotlib.pyplot as plt
fig, axes = plt.subplots(3, 1, figsize=(10, 12))
axes[0].plot(losses)
axes[1].plot(pd.Series(losses).rolling(window=500).mean())
axes[2].plot(pd.Series(losses).rolling(window=5000).mean())
detached.tolist()
%matplotlib inline
losses[0]
###Output
_____no_output_____ |
.ipynb_checkpoints/1_szekelyhon_parser-checkpoint.ipynb | ###Markdown
Parse past X years
###Code
keyword='medve'
baseurl=u'https://szekelyhon.ro/kereses?op=search&src_words='
start='2020-01'
end='2020-06'
dates=[]
datelist = pd.date_range(start=pd.to_datetime(start), end=pd.to_datetime(end), freq='M').tolist()
for date in datelist:
dates.append(str(date)[:10])
dates[:5]
def extractor(time1, time2):
print('Parsing...',time1,'-',time2)
url=baseurl+keyword+'&src_time1='+time1+'&src_time2='+time2
html = urllib.request.urlopen(url).read()
# soup = bs.BeautifulSoup(html,'lxml')
soup = bs.BeautifulSoup(html,"html.parser")
return soup.findAll("div", {"class": "cikkocka2c"})
divs=[]
for i in range(len(dates)-1):
time1=dates[i]
time2=dates[i+1]
divs.append(extractor(time1,time2))
def date_hu_en(i):
date=i[6:-4]
if date=='augusztus': m='08'
elif date=='december': m='12'
elif date=='február': m='02'
elif date=='január': m='01'
elif date=='július': m='07'
elif date=='június': m='06'
elif date=='május': m='05'
elif date=='március': m='03'
elif date=='november': m='11'
elif date==u'október': m='10'
elif date==u'szeptember': m='09'
elif date==u'április': m='04'
else: return date
return i[:4]+'-'+m+'-'+i[-3:-1]
def find_all(s, ch):
return [i for i, letter in enumerate(s) if letter == ch]
from utils import text_processor
hirek=[]
tagset=set()
for i in range(len(dates)-1):
time2=dates[i+1]
divgroup=divs[i]
for div in divgroup:
icat=''
img=div.find('img')
if img !=None:
img=img['src']
#infer image category from image link
icats=find_all(img,'/')
if len(icats)>4:
icat=img[icats[3]+1:icats[4]]
tags=div.find("div", {"class": "tags_con1"})
if tags!=None:
tags=[j.text.strip() for j in tags.findAll('div')]
idiv=div.find("div", {"class": "catinner"})
if idiv!=None:
idiv=idiv.find('div')
content=div.find('p')
date=idiv.text[idiv.text.find('20'):idiv.text.find(',')]
title=div.find('h2').text
if content==None:
sdiv=str(div)[::-1]
content=sdiv[:sdiv.find('>a/<')].replace('\r','').replace('\t','').replace('\n','')[::-1][:-6]
else: content=content.text
content=content.replace('</div><div class="clear"></div></div><div class="clear"></div>','')
link=div.findAll('a')[-1]['href']
#infer category from link
cats=find_all(link,'/')
if len(cats)>3:
cat=link[cats[2]+1:cats[3]]
else: cat=''
#infer attack from plain text
relevant,severity,deaths=text_processor(title,content)
if tags!=None:
notags=[u'Húsvét',u'Film',u'Egészségügy',u'Külföld',u'Színház',u'Ünnep']
for notag in notags:
if notag in tags:
relevant=-1
break
if ((relevant>-1)&\
(cat not in ['sport','muvelodes','sms-e-mail-velemeny','tusvanyos'])&\
(title not in [u'Röviden'])):
if tags!=None:
tagset=tagset.union(set(tags))
if 'medve' in tags:
relevant=1
hirek.append({'date':date_hu_en(date),
'hudate':date,
'title':title,
'image':img,
'tags':repr(tags),
'content':content,
'link':link,
'category':cat,
'icategory':icat,
'relevant':relevant,
'severity':severity,
'deaths':deaths,
'duplicate':0
})
###Output
_____no_output_____
###Markdown
All bear-related news
###Code
df=pd.DataFrame().from_dict(hirek)
df['date']=pd.to_datetime(df['date'])
df=df.sort_values('date').drop_duplicates().reset_index(drop=True)
len(hirek)
###Output
_____no_output_____
###Markdown
Save to medve Excel. Manual curation
###Code
dm=df[[ 'date', 'hudate', 'link','image', 'category','icategory','tags','title',
'content']]
dc=df[['title','content','relevant', 'severity','deaths','duplicate']]
#save parsed data
dm.to_excel('data/szekelyhon_medve.xlsx')
#save data for curation
#1 if you dont have savedata yet
existing_savedata=False
if not existing_savedata:
dc.to_excel('data/szekelyhon_medve_curated.xlsx')
#2 if you already have savedata
else:
dc2=pd.read_excel('data/szekelyhon_medve_curated.xlsx')
dc2.combine_first(dc).to_excel('data/szekelyhon_medve_curated.xlsx')
###Output
_____no_output_____ |
statistics/bootstrap_method.ipynb | ###Markdown
Bootstrap method Inspired by [Statistics for Hackers](https://speakerdeck.com/jakevdp/statistics-for-hackers) by Jake VanderPlas
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import math
plt.style.use('ggplot')
draw_space = np.array([48, 24, 51, 12, 21, 41, 25, 23, 32, 61,
19, 24, 29, 21, 23, 13, 32, 18, 42, 18])
###Output
_____no_output_____
###Markdown
Mean:
###Code
draw_space.mean()
###Output
_____no_output_____
###Markdown
Standard error of the mean:
###Code
1/math.sqrt(20) * math.sqrt(((draw_space - draw_space.mean())**2).sum() * (1/19))
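# The same quantity written as the sample standard deviation over sqrt(n) -- a quick sanity check
draw_space.std(ddof=1) / math.sqrt(len(draw_space))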
###Output
_____no_output_____
###Markdown
Now let's calculate the mean and standard error of the mean from the bootstrap samples:
###Code
xbar = np.array([])
for i in range(10000):
xbar = np.append(xbar, np.random.choice(draw_space, size=len(draw_space), replace=True).mean())
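# The same bootstrap, vectorized: draw a (10000, n) matrix of resamples in one call
# and average across rows (equivalent to the loop above, just faster).
xbar_vectorized = np.random.choice(draw_space, size=(10000, len(draw_space)), replace=True).mean(axis=1)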
xbar.mean()
xbar.std()
###Output
_____no_output_____ |
debugspace/DemoForVSCode.ipynb | ###Markdown
Debugger DemoPlease set VSCode workspace directory to `JISDLab/`
###Code
var dbg = new Debugger("jisd.demo.HelloWorld", "-cp sample");
dbg.watch(20);
dbg.watch(22);
dbg.run(1000);
dbg.exit();
ArrayList<DebugResult> results = dbg.getResults();
results.forEach(res -> {
println("-----------------------------");
var loc = res.getLocation();
println(loc.getLineNumber());
println(loc.getVarName());
println(res.getLatestValue());
});
###Output
>> Debugger Info: Deferring breakpoint in jisd.demo.HelloWorld. It will be set after the class is loaded.
>> Debugger Info: Deferring breakpoint in jisd.demo.HelloWorld. It will be set after the class is loaded.
>> Debugger Info: Debugger started.
Hello, Bob
Hello, Alice
>> Debugger Info: VM exited.
-----------------------------
20
a
a=0
-----------------------------
20
args
args=instance of java.lang.String[0] (id=69)
-----------------------------
20
hello
hello=instance of jisd.demo.HelloWorld(id=71)
-----------------------------
20
me
me="Alice"
-----------------------------
22
a
a=1
-----------------------------
22
args
args=instance of java.lang.String[0] (id=69)
-----------------------------
22
hello
hello=instance of jisd.demo.HelloWorld(id=71)
-----------------------------
22
me
me="Alice"
###Markdown
Graph Demo
###Code
ArrayList<Double> x = new ArrayList<>();
ArrayList<Double> y = new ArrayList<>();
ArrayList<DebugResult> resA = dbg.getResults("a");
ArrayList<Double> valA = new ArrayList<>();
ArrayList<Double> lineA = new ArrayList<>();
int sizeA = resA.size();
for (int i = 0; i < sizeA; i++) {
DebugResult res = resA.get(i);
Location loc = res.getLocation();
double val = Double.parseDouble(res.getLatestValue().getValue());
double line = loc.getLineNumber();
valA.add(val);
lineA.add(line);
}
int resNextIndex = 0;
double val = valA.get(resNextIndex);
double lLine = lineA.get(resNextIndex++);
double rLine = lineA.get(resNextIndex++);
double xMin = 0.0;
double xMax = 50.0;
for (double i = lLine; i < xMax; i += 0.1) {
x.add(i);
if (i >= lLine && i < rLine) {
y.add(val);
} else if (i >= rLine) {
val = valA.get(resNextIndex-1);
lLine = rLine;
rLine = (resNextIndex < sizeA) ? lineA.get(resNextIndex++) : xMax;
y.add(val);
} else {
y.add(0.0);
}
}
XYChart chart = QuickChart.getChart("Sample", "x", "y", "a", x, y);
chart.getStyler().setXAxisMin(xMin);
chart.getStyler().setXAxisMax(xMax);
BitmapEncoder.getBufferedImage(chart);
###Output
_____no_output_____
###Markdown
Static Information Demo
###Code
var sif = new StaticInfoFactory("debugspace", "sample"); // set srcDir and binDir
ClassInfo ci = sif.createClass("jisd.demo.HelloWorld")
ci.fieldNames()
ci.methodNames()
var fi = ci.field("helloTo");
fi.name()
var mi = ci.method("main(java.lang.String[])");
mi.localNames()
var li = mi.local("a")
li.canSet()
###Output
_____no_output_____
###Markdown
Execute External Program Demo Use %exec magic
###Code
%exec pwd
###Output
/Users/saku/Workspace/2020/JISDLab
###Markdown
Use debug.Utility.exec()
###Code
var res = exec("pwd").get() // var res = exec("powershell -Command pwd").get()
res[0] // stdout
res[1] // stderr
res[2] // exit code (String)
###Output
_____no_output_____ |
Interactive_Norms.ipynb | ###Markdown
Subsurface Data Analytics Interactive Demonstration of Machine Learning Norms Michael Pyrcz, Associate Professor, University of Texas at Austin [Twitter](https://twitter.com/geostatsguy) | [GitHub](https://github.com/GeostatsGuy) | [Website](http://michaelpyrcz.com) | [GoogleScholar](https://scholar.google.com/citations?user=QVZ20eQAAAAJ&hl=en&oi=ao) | [Book](https://www.amazon.com/Geostatistical-Reservoir-Modeling-Michael-Pyrcz/dp/0199731446) | [YouTube](https://www.youtube.com/channel/UCLqEr-xV-ceHdXXXrTId5ig) | [LinkedIn](https://www.linkedin.com/in/michael-pyrcz-61a648a1) | [GeostatsPy](https://github.com/GeostatsGuy/GeostatsPy) PGE 383 Exercise: Interactive Demonstration of Machine Learning Norms
Here's a simple workflow demonstrating predictive machine learning model norms. We use a:
* linear regression model
* 1 predictor feature and 1 response feature
for a high-interpretability model / simple illustration.
Norms
Given a vector of errors over the $n_{train}$ training data,
\begin{equation}\Delta y \rightarrow \Delta y_i, \forall i = 1,\dots,n_{train}\end{equation}
we require a summarization of the error as a single value; this is a norm. A norm has the following properties:
* the norm of a vector maps the vector values to a summary measure $\rightarrow [0,\infty)$
Common norms include Manhattan, Euclidean and the general p-norm.
**Manhattan Norm** is defined as:
\begin{equation}||\Delta y||_1 = \sum_{i=1}^{n_{train}} |\Delta y_i| \end{equation}
**Euclidean Norm** is defined as:
\begin{equation}||\Delta y||_2 = \sqrt{ \sum_{i=1}^{n_{train}} \left( \Delta y_i \right)^2 }\end{equation}
**p-Norm** is defined as:
\begin{equation}||\Delta y||_p = \left( \sum_{i=1}^{n_{train}} |\Delta y_i|^p \right)^{\frac{1}{p}}\end{equation}
Workflow Goals
Learn the basics of machine learning training and tuning for model generalization while avoiding model overfit. This includes:
* Demonstrate model training and tuning by hand with an interactive exercise
* Demonstrate the role of data error in leading to model overfit with complicated models
Import Required Packages
We will also need some standard packages. These should have been installed with Anaconda 3.
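To make the norm formulas concrete, here is a minimal NumPy sketch (the error values are made up for illustration; `np.linalg.norm` with `ord` set to 1, 2 or p gives the same results):
```python
import numpy as np

error = np.array([0.5, -1.2, 0.3, 2.0])          # hypothetical training errors

manhattan = np.sum(np.abs(error))                # L1 norm
euclidean = np.sqrt(np.sum(error**2))            # L2 norm
p = 3
p_norm = np.sum(np.abs(error)**p)**(1.0/p)       # general p-norm

print(manhattan, euclidean, p_norm)
```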
###Code
%matplotlib inline
import sys # supress output to screen for interactive variogram modeling
import io
import numpy as np # arrays and matrix math
import pandas as pd # DataFrames
import matplotlib.pyplot as plt # plotting
from scipy.optimize import minimize # linear regression training by-hand with variable norms
from ipywidgets import interactive # widgets and interactivity
from ipywidgets import widgets
from ipywidgets import Layout
from ipywidgets import Label
from ipywidgets import VBox, HBox
###Output
_____no_output_____
###Markdown
Declare Functions
We have functions to perform linear regression for any norm. The code was modified from [N. Wouda](https://stackoverflow.com/questions/51883058/l1-norm-instead-of-l2-norm-for-cost-function-in-regression-model).
* I modified the original functions for a general p-norm linear regression method
###Code
def predict(X, params): # linear prediction
return X.dot(params)
def loss_function(params, X, y, p): # custom p-norm, linear regression cost function
return np.sum(np.power(np.abs(y - predict(X, params)),p))
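# Illustrative usage (a sketch only; the interactive dashboard below builds the real X and y):
# fit a Manhattan-norm (p = 1) linear regression with scipy's minimize, then predict with it.
X_demo = np.asarray([np.ones(5), np.linspace(0.0, 1.0, 5)]).T   # constant term + one predictor (made-up values)
y_demo = np.linspace(0.0, 1.0, 5)**2                            # made-up response values
fit_l1 = minimize(loss_function, [0.5, 0.5], args=(X_demo, y_demo, 1.0))
y_hat_demo = predict(X_demo, fit_l1.x)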
###Output
_____no_output_____
###Markdown
Interactive Dashboard
This code defines the interactive dashboard, prediction model and plots
###Code
# widgets and dashboard
l = widgets.Text(value=' Machine Learning Norms Demo, Prof. Michael Pyrcz, The University of Texas at Austin',layout=Layout(width='950px', height='30px'))
p_norm = widgets.FloatSlider(min=0.1, max = 10, value=1.0, step = 0.2, description = '$L^{p}$',orientation='horizontal', style = {'description_width': 'initial'}, continuous_update=False)
n = widgets.IntSlider(min=15, max = 80, value=30, step = 1, description = 'n',orientation='horizontal', style = {'description_width': 'initial'}, continuous_update=False)
std = widgets.FloatSlider(min=0.0, max = .95, value=0.00, step = 0.05, description = 'Error (St.Dev.)',orientation='horizontal',style = {'description_width': 'initial'}, continuous_update=False)
xn = widgets.FloatSlider(min=0, max = 1.0, value=0.5, step = 0.05, description = '$X_{n+1}$',orientation='horizontal',style = {'description_width': 'initial'}, continuous_update=False)
yn = widgets.FloatSlider(min=0, max = 1.0, value=0.5, step = 0.05, description = '$Y_{n+1}$',orientation='horizontal', style = {'description_width': 'initial'}, continuous_update=False)
ui1 = widgets.HBox([p_norm,n,std],)
ui2 = widgets.HBox([xn,yn],)
ui = widgets.VBox([l,ui1,ui2],)
def run_plot(p_norm,n,std,xn,yn): # make data, fit models and plot
np.random.seed(73073) # set random number seed for repeatable results
X_seq = np.linspace(0,100.0,1000) # make data and add noise
X_seq = np.asarray([np.ones((len(X_seq),)), X_seq]).T
X = np.random.rand(n)*0.5
y = X*X + 0.0 # fit a parabola
y = y + np.random.normal(loc = 0.0,scale=std,size=n) # add noise
X = np.asarray([np.ones((n,)), X]).T # concatenate a vector of 1's for the constant term
X = np.vstack([X,[1,xn]]); y = np.append(y,yn) # add the user specified data value to X and y
x0 = [0.5,0.5] # initial guess of model parameters
p = 2.0
output_l2 = minimize(loss_function, x0, args=(X, y, p)) # train the L2 norm linear regression model
p = 1.0
output_l1 = minimize(loss_function, x0, args=(X, y, p)) # train the L1 norm linear regression model
p = 3.0
output_l3 = minimize(loss_function, x0, args=(X, y, p)) # train the L3 norm linear regression model
p = p_norm
output_lcust = minimize(loss_function, x0, args=(X, y, p)) # train the p-norm linear regression model
y_hat_l1 = predict(X_seq, output_l1.x) # predict over the range of X for all models
y_hat_l2 = predict(X_seq, output_l2.x)
y_hat_l3 = predict(X_seq, output_l3.x)
y_hat_lcust = predict(X_seq, output_lcust.x)
plt.subplot(111) # plot the results
plt.scatter(X[:(n-1),1],y[:(n-1)],s=20,facecolor = 'yellow', edgecolor = 'black', alpha = 0.4)
plt.scatter(X[n,1],y[n],s=40,marker='^',facecolor = 'orange', edgecolor = 'black', alpha = 0.4)
plt.plot(X_seq[:,1],y_hat_l1,c = 'blue',alpha = 0.3,label = "L1 Norm")
plt.plot(X_seq[:,1],y_hat_l2,c = 'red',alpha = 0.3,label = "L2 Norm")
plt.plot(X_seq[:,1],y_hat_l3,c = 'green',alpha = 0.3,label = "L3 Norm")
plt.plot(X_seq[:,1],y_hat_lcust,c = 'black',alpha = 1.0,label = "L"+ str(p_norm) + " Norm")
plt.xlabel(r'Predictor Feature, $X_{1}$'); plt.ylabel(r'Response Feature, $y$'); plt.title('Linear Regression with Various Norms')
plt.xlim([0.0,1.0]); plt.ylim([0.0,1.0])
plt.legend(loc = 'lower right')
plt.subplots_adjust(left=0.0, bottom=0.0, right=1.5, top=1.6, wspace=0.9, hspace=0.3)
plt.show()
# connect the function to make the samples and plot to the widgets
interactive_plot = widgets.interactive_output(run_plot, {'p_norm':p_norm,'n':n,'std':std,'xn':xn,'yn':yn})
interactive_plot.clear_output(wait = True) # reduce flickering by delaying plot updating
###Output
_____no_output_____
###Markdown
Interactive Machine Learning Norms Demonstration Michael Pyrcz, Associate Professor, The University of Texas at Austin
Observe the impact of the choice of norm with a variable number of sample data, the data noise, and an outlier!
The Inputs
* **p-norm** - 1 = Manhattan norm, 2 = Euclidean norm, etc., **n** - number of data, **Error** - random error in standard deviations
* **$x_{n+1}$**, **$y_{n+1}$** - x and y location of an additional data value
###Code
display(ui, interactive_plot) # display the interactive plot
###Output
_____no_output_____ |
Activities/in_class_activity.ipynb | ###Markdown
For the given list of strings, return the common letters of the strings that start with the letter 'A'
- Implement this in a Filter + Reduce way -> The reason: imagine the fruit list is very large
###Code
## for example:
fruit = ["Apple", "Banana", "Pear", "Apricot", "Orange"]
# common letters for the strings that starts with letter 'A' are: 'A', 'p'
## Hint:
set("Apple")
set("Apricot")
set("Apple").intersection(set("Apricot"))
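# One possible solution (a sketch, not the official answer):
# filter keeps the fruits starting with 'A', map converts each to a set of letters,
# and reduce intersects those sets pairwise.
from functools import reduce
common_letters = reduce(lambda s1, s2: s1 & s2,
                        map(set, filter(lambda f: f.startswith('A'), fruit)))
common_letters  # {'A', 'p'} for the example list above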
###Output
_____no_output_____
###Markdown
Obtain the largest element of a list (without using max)
- Implement it using reduce
###Code
from functools import reduce  # needed in Python 3, where reduce is no longer a builtin
reduce(lambda x, y: x if x > y else y, [1, 5, 2, 10, 13, 2])
reduce(lambda x, y: max(x, y), [1, 5, 2, 10, 13, 2])
# the above reduce-based approaches differ computationally from the built-in max below if the input list is very large
max([1, 5, 2, 10, 13, 2])
###Output
_____no_output_____ |
pyspark-udacity/N02-spark_maps_and_lazy_evaluation.ipynb | ###Markdown
The code cell ran quite quickly. This is because of lazy evaluation. Spark does not actually execute the map step unless it needs to."RDD" in the output refers to resilient distributed dataset. RDDs are exactly what they say they are: fault-tolerant datasets distributed across a cluster. This is how Spark stores data.To get Spark to actually run the map step, you need to use an "action". One available action is the collect method. The collect() method takes the results from all of the clusters and "collects" them into a single list on the master node.
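For context, `distributed_song_log` and `convert_song_to_lowercase` were created earlier in the notebook; a minimal sketch of that setup (the song titles and the SparkContext configuration here are assumptions, not the notebook's actual values) looks like:
```python
from pyspark import SparkContext

sc = SparkContext(appName="maps_and_lazy_evaluation_example")

log_of_songs = ["Despacito", "All The Stars", "Havana"]   # made-up example titles

# distribute the list across the cluster as an RDD
distributed_song_log = sc.parallelize(log_of_songs)

def convert_song_to_lowercase(song):
    return song.lower()

# lazy: this returns a new RDD immediately; nothing is computed until an action is called
distributed_song_log.map(convert_song_to_lowercase)
```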
###Code
distributed_song_log.map(convert_song_to_lowercase).collect()
# Spark is not changing the original data set:
# Spark is merely making a copy. You can see this by running collect() on the original dataset.
distributed_song_log.collect()
###Output
_____no_output_____
###Markdown
You do not always have to write a custom function for the map step. You can also use anonymous (lambda) functions as well as built-in Python functions like string.lower().Anonymous functions are actually a Python feature for writing functional style programs.
###Code
distributed_song_log.map(lambda song: song.lower()).collect()
###Output
_____no_output_____ |
Practical_Statistics/Confidence_Intervals/Sampling_Distributions-Difference_in_Means.ipynb | ###Markdown
Confidence Interval - Difference In Means
Here you will look through the example from the last video, but you will also go a couple of steps further into what might actually be going on with this data.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(42)
full_data = pd.read_csv('coffee_dataset.csv')
sample_data = full_data.sample(200)
sample_data.height.mean()
sample_data.head()
sample_data['age'].unique()
###Output
_____no_output_____
###Markdown
`1.` For 10,000 iterations, bootstrap sample your sample data, compute the difference in the average heights for coffee and non-coffee drinkers. Build a 99% confidence interval using your sampling distribution. Use your interval to start answering the first quiz question below.
###Code
diff = []
for _ in range(10000):
bootsample_height = sample_data.sample(200, replace=True)
mean_coff = bootsample_height[bootsample_height['drinks_coffee'] == True]['height'].mean()
mean_nocoff = bootsample_height[bootsample_height['drinks_coffee'] == False]['height'].mean()
diff.append(mean_coff - mean_nocoff)
plt.hist(diff);
np.percentile(diff, 0.5), np.percentile(diff, 99.5)
###Output
_____no_output_____
###Markdown
`2.` For 10,000 iterations, bootstrap sample your sample data, compute the difference in the average heights for those older than 21 and those younger than 21. Build a 99% confidence interval using your sampling distribution. Use your interval to finish answering the first quiz question below.
###Code
diff_h_a = []
for _ in range(10000):
bootsample = sample_data.sample(200, replace=True)
mean_height_ov_21 = bootsample[bootsample['age'] == '>=21']['height'].mean()
mean_height_un_21 = bootsample[bootsample['age'] == '<21']['height'].mean()
diff_h_a.append(mean_height_ov_21 - mean_height_un_21)
plt.hist(diff_h_a);
np.percentile(diff_h_a, 0.5), np.percentile(diff_h_a, 99.5)
###Output
_____no_output_____
###Markdown
`3.` For 10,000 iterations bootstrap your sample data, compute the **difference** in the average height for coffee drinkers and the average height for non-coffee drinkers for individuals **under** 21 years old. Using your sampling distribution, build a 95% confidence interval. Use your interval to start answering question 2 below.
###Code
diff_h_a_c = []
for _ in range(10000):
bootsample = sample_data.sample(200, replace=True)
mean_height_un21_coff = bootsample[(bootsample['age'] == '<21') & (bootsample['drinks_coffee'] == True)]['height'].mean()
mean_height_un21_noncoff = bootsample[(bootsample['age'] == '<21') & (bootsample['drinks_coffee'] == False)]['height'].mean()
diff_h_a_c.append(mean_height_un21_coff - mean_height_un21_noncoff)
plt.hist(diff_h_a_c);
np.percentile(diff_h_a_c, 2.5), np.percentile(diff_h_a_c, 97.5)  # 95% confidence interval
###Output
_____no_output_____
###Markdown
`4.` For 10,000 iterations bootstrap your sample data, compute the **difference** in the average height for coffee drinkers and the average height for non-coffee drinkers for individuals **over** 21 years old. Using your sampling distribution, build a 95% confidence interval. Use your interval to finish answering the second quiz question below. As well as the following questions.
###Code
diff_h_c = []
for _ in range(10000):
bootsample = sample_data.sample(200, replace=True)
mean_height_ov21_coff = bootsample[(bootsample['age'] == '>=21') & (bootsample['drinks_coffee'] == True)]['height'].mean()
mean_height_ov21_noncoff = bootsample[(bootsample['age'] == '>=21') & (bootsample['drinks_coffee'] == False)]['height'].mean()
diff_h_c.append(mean_height_ov21_coff - mean_height_ov21_noncoff)
plt.hist(diff_h_c);
np.percentile(diff_h_c, 2.5), np.percentile(diff_h_c, 97.5)  # 95% confidence interval
###Output
_____no_output_____ |
python/itrdb_tree_ring_download.ipynb | ###Markdown
geojson file
###Code
root_path = "https://www1.ncdc.noaa.gov/pub/data/metadata/published/paleo/json"
feature_collection = {"type":"FeatureCollection",
"features": []}
files = []
with urllib.request.urlopen(root_path) as url:
html_doc = url.read()
soup = BeautifulSoup(html_doc, 'html.parser')
conts = soup.body.table.contents
for line in conts:
if line != '\n':
fname = line.get_text()
if "tree" in fname:
record_name = fname[:fname.index('.json')] + ".json"
full_path = root_path + "/" + record_name
with urllib.request.urlopen(full_path) as url:
json_doc_string = url.read()
json_doc = json.loads(json_doc_string)
orig_file = full_path
try:
study_id = json_doc["NOAAStudyId"]
study_code = json_doc["studyCode"]
resource = json_doc['onlineResourceLink']
doi = json_doc["doi"]
investigators = json_doc["investigators"]
site_coords = json_doc['site'][0]['geo']['geometry']['coordinates']
full_data = json_doc['site'][0]['paleoData']
site_name = json_doc['site'][0]['siteName']
common_species = json_doc['site'][0]['paleoData'][0]['species'][0]['commonName']
scientific_species = json_doc['site'][0]['paleoData'][0]['species'][0]['scientificName']
code_species = json_doc['site'][0]['paleoData'][0]['species'][0]['speciesCode']
earliest_date = json_doc['site'][0]['paleoData'][0]['earliestYear']
most_recent_date = json_doc['site'][0]['paleoData'][0]['mostRecentYear']
geojson = {"type": "Feature",
"geometry": {
"type": "Point",
"coordinates": [site_coords[1], site_coords[0]]
},
"properties": {
"orig_filename": orig_file,
"study_ID": study_id,
"doi": doi,
"investigators": investigators,
"lat": site_coords[0],
"lon": site_coords[1],
"site_name": site_name,
"species_name_com": common_species,
"species_name_sci": scientific_species,
"species_code": code_species,
"earliest_year": earliest_date,
"most_recent_year": most_recent_date,
"data": full_data,
"study_code": study_code,
"noaa_online_resource_page": resource
}
}
feature_collection["features"].append(geojson)
except IndexError:
print(full_path)
# do json encoding
with open('./itrdb.geojson', 'w') as outfile:
json.dump(feature_collection, outfile)
###Output
_____no_output_____
###Markdown
data txt file
###Code
root_path = "https://www1.ncdc.noaa.gov/pub/data/metadata/published/paleo/json"
feature_collection = {"type":"FeatureCollection",
"features": []}
files = []
with urllib.request.urlopen(root_path) as url:
html_doc = url.read()
soup = BeautifulSoup(html_doc, 'html.parser')
conts = soup.body.table.contents
output_data_table_name = "C:/Users/Jacob/Projects/itrdb/itrdb_chronology_data.txt"
data_table_file = open(output_data_table_name, "w")
data_table_file.write("site_id")
data_file_thresh = 7
var_thresh = 3
for i in range(data_file_thresh): # 7 is arbitrary (an assumption); I can't see there being more than 7 data files associated with a single site
num = "0" + str(i)
data_table_file.write(",u_"+num+", u_"+num+"_desc, u_"+num+"_keyword")
for v in range(var_thresh): # 3 is arbitrary, I can't see there being more than 3 variables
v_num = "0" + str(v)
data_table_file.write(", v_"+num+"_"+v_num+"_desc, v_"+num+"_"+v_num+"_meth, v_"+num+"_"+v_num+"_det, v_"+num+"_"+v_num+"_unit")
data_table_file.write("\n")
for line in conts:
if line != '\n':
fname = line.get_text()
if "tree" in fname:
record_name = fname[:fname.index('.json')] + ".json"
full_path = root_path + "/" + record_name
with urllib.request.urlopen(full_path) as url:
json_doc_string = url.read()
json_doc = json.loads(json_doc_string)
orig_file = full_path
try:
study_id = json_doc["NOAAStudyId"]
# getting paleoData
data_table_file.write(study_id)
whole_line = ""
n = 0
while n < len(json_doc['site'][0]['paleoData'][0]['dataFile']):
data_url = json_doc['site'][0]['paleoData'][0]['dataFile'][n]["fileUrl"]
data_desc = json_doc['site'][0]['paleoData'][0]['dataFile'][n]["urlDescription"]
keyword = json_doc['site'][0]['paleoData'][0]['dataFile'][n]["NOAAKeywords"][0].split(">")[-1]
file_str = "," + data_url + "," + data_desc + "," + keyword
var_str = ""
v = 0
while v < len(json_doc['site'][0]['paleoData'][0]['dataFile'][n]["variables"]):
var_desc = str(json_doc['site'][0]['paleoData'][0]['dataFile'][n]["variables"][v]["cvWhat"].split(">")[-1])
var_meth = str(json_doc['site'][0]['paleoData'][0]['dataFile'][n]["variables"][v]["cvMethod"])
var_det = str(json_doc['site'][0]['paleoData'][0]['dataFile'][n]["variables"][v]["cvDetail"])
var_unit = str(json_doc['site'][0]['paleoData'][0]['dataFile'][n]["variables"][v]["cvUnit"])
if var_desc == "null" or var_desc == None or var_desc == "None":
var_desc = ""
if var_meth == "null" or var_meth == None or var_meth == "None":
var_meth = ""
if var_det == "null" or var_det == None or var_det == "None":
var_det = ""
if var_unit == "null" or var_unit == None or var_unit == "None":
var_unit = ""
var_str += "," + var_desc + "," + var_meth + "," + var_det + "," + var_unit
v+=1
if v < var_thresh: # if didn't get enough variables to fill the row
extras_v = ", , , , "
extras_v*=(var_thresh - v)
var_str += extras_v
whole_line+=file_str + var_str
n+=1
if n < data_file_thresh:
extras = ", , , , , , , "
extras*=(data_file_thresh - n)
whole_line+=extras
data_table_file.write(whole_line + "\n")
except IndexError:
print(full_path)
###Output
_____no_output_____
###Markdown
geojson to shapefile
###Code
import json
from osgeo import ogr
from osgeo import osr
def geojson_to_shapefile(input_filename, output_filename):
contents = open(input_filename)
contents = contents.read()
data = json.loads(contents)
# set up the shapefile driver
driver = ogr.GetDriverByName("ESRI Shapefile")
# create the data source
data_source = driver.CreateDataSource(output_filename)
# create the spatial reference, WGS84
srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)
# create the layer
layer = data_source.CreateLayer("itrdb", srs, ogr.wkbPoint)
studyId = ogr.FieldDefn("studyID", ogr.OFTString)
studyId.SetWidth(20)
layer.CreateField(studyId)
f = ogr.FieldDefn("filename", ogr.OFTString)
f.SetWidth(100)
layer.CreateField(f)
d = ogr.FieldDefn("doi", ogr.OFTString)
d.SetWidth(100)
layer.CreateField(d)
i = ogr.FieldDefn("invstgtrs", ogr.OFTString)
i.SetWidth(150)
layer.CreateField(i)
layer.CreateField(ogr.FieldDefn("lat", ogr.OFTReal))
layer.CreateField(ogr.FieldDefn("lon", ogr.OFTReal))
sn = ogr.FieldDefn("sitename", ogr.OFTString)
sn.SetWidth(150)
layer.CreateField(sn)
spc = ogr.FieldDefn("sppCom", ogr.OFTString)
spc.SetWidth(150)
layer.CreateField(spc)
sps = ogr.FieldDefn("sppSci", ogr.OFTString)
sps.SetWidth(150)
layer.CreateField(sps)
scode = ogr.FieldDefn("sppCode", ogr.OFTString)
scode.SetWidth(10)
layer.CreateField(scode)
layer.CreateField(ogr.FieldDefn("earliest", ogr.OFTInteger))
layer.CreateField(ogr.FieldDefn("mostRecent", ogr.OFTInteger))
studycode = ogr.FieldDefn("studyCode", ogr.OFTString)
studycode.SetWidth(20)
layer.CreateField(studycode)
noaap = ogr.FieldDefn("noaaPage", ogr.OFTString)
noaap.SetWidth(150)
layer.CreateField(noaap)
n = 0
while n < len(data['features']):
# create the feature
feature = ogr.Feature(layer.GetLayerDefn())
# Set the attributes using the values from the delimited text file
feature.SetField("studyID", data['features'][n]['properties']["study_ID"])
feature.SetField("filename", data['features'][n]['properties']["orig_filename"])
feature.SetField("doi", data['features'][n]['properties']["doi"])
feature.SetField("invstgtrs", data['features'][n]['properties']["investigators"])
feature.SetField("lat", float(data['features'][n]['properties']["lat"]))
feature.SetField("lon", float(data['features'][n]['properties']["lon"]))
feature.SetField("sitename", data['features'][n]['properties']["site_name"])
feature.SetField("sppCom", data['features'][n]['properties']["species_name_com"][0])
feature.SetField("sppSci", data['features'][n]['properties']["species_name_sci"])
feature.SetField("sppCode", data['features'][n]['properties']["species_code"])
feature.SetField("earliest", int(data['features'][n]['properties']["earliest_year"]))
feature.SetField("mostRecent", int(data['features'][n]['properties']["most_recent_year"]))
feature.SetField("studyCode", data['features'][n]['properties']["study_code"])
feature.SetField("noaaPage", data['features'][n]['properties']["noaa_online_resource_page"])
point = ogr.Geometry(ogr.wkbPoint)
point.AddPoint(float(data['features'][n]['geometry']['coordinates'][0]), float(data['features'][n]['geometry']['coordinates'][1]))
# Set the feature geometry using the polygon
feature.SetGeometry(point)
# Create the feature in the layer (shapefile)
layer.CreateFeature(feature)
# Dereference the feature
feature = None
n+=1
data_source = None
geojson_to_shapefile("C:/Users/Jacob/Projects/itrdb/data/itrdb.geojson", "C:/Users/Jacob/Projects/itrdb/data/itrdb.shp")
###Output
_____no_output_____ |
src/GloVe embeddings.ipynb | ###Markdown
Env
###Code
cd ..
###Output
/Users/svo6059/PycharmProjects/CISS_Project
###Markdown
Imports
###Code
from project.preprocessing.generator import glove_generator
from nltk import word_tokenize
from sklearn.manifold import TSNE
from sklearn.linear_model import LogisticRegression
from sklearn.base import TransformerMixin
import json
import pandas as pd
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import pandas
%load_ext autoreload
%autoreload 2
sns.set_palette('deep', color_codes=True)
from deeppavlov.models.embedders.glove_embedder import GloVeEmbedder
embedder = GloVeEmbedder(load_path="data/models/glove.txt",
pad_zero=False # means whether to pad up to the longest sample in a batch
)
###Output
[nltk_data] Downloading package punkt to /Users/svo6059/nltk_data...
[nltk_data] Package punkt is already up-to-date!
[nltk_data] Downloading package stopwords to
[nltk_data] /Users/svo6059/nltk_data...
[nltk_data] Package stopwords is already up-to-date!
[nltk_data] Downloading package perluniprops to
[nltk_data] /Users/svo6059/nltk_data...
[nltk_data] Package perluniprops is already up-to-date!
[nltk_data] Downloading package nonbreaking_prefixes to
[nltk_data] /Users/svo6059/nltk_data...
[nltk_data] Package nonbreaking_prefixes is already up-to-date!
2019-06-26 20:00:08.324 INFO in 'deeppavlov.models.embedders.glove_embedder'['glove_embedder'] at line 52: [loading GloVe embeddings from `/Users/svo6059/PycharmProjects/CISS_Project/data/models/glove.txt`]
/usr/local/anaconda3/envs/pavlov/lib/python3.6/site-packages/smart_open/smart_open_lib.py:398: UserWarning: This function is deprecated, use smart_open.open instead. See the migration notes for details: https://github.com/RaRe-Technologies/smart_open/blob/master/README.rst#migrating-to-the-new-open-function
'See the migration notes for details: %s' % _MIGRATION_NOTES_URL
###Markdown
Functions Generator
###Code
import tensorflow as tf
###Output
_____no_output_____
###Markdown
Sklearn

Draft transformer classes (kept here as notes; `GloVeWord` is unfinished):

    class GloVeWord(TransformerMixin):
        def __init__(self, max_size=None):
            if max_size is None:
                self.fit_size = True
                self.max_size = None
            else:
                self.fit_size = True
                self.max_size = max_size
                self.max_size = max([len(x) for x in X])

        def fit(self, X, y):
            if type(X[0]) is not str:
                pass
            if self.fit_size:
                pass

        def transform(self):
            ...

    class AverageSentence(TransformerMixin):
        """
        Applies glove embeddings

        Parameters
        ----------
        method : string
            possible methods are 'concat' and 'average'
        """
        def __init__(self, embedding, method='concat'):
            self.method = method

        def fit(self, X, y=None):
            assert type(X[0]) is not str
            self.max_size = max([len(x) for x in X])
            return self

        def transform(self, X, y=None):
            assert type(X[0]) is not str
            emb = embedder(X)
            if self.method == 'concat':
                return emb.reshape(X.shape[0], -1)
            elif self.method == 'average':
                return emb.mean(axis=1)
            else:
                raise NotImplementedError

Metrics

Data loading
###Code
train = pd.read_json('data/train.json')
dev = pd.read_json('data/dev.json')
%%time
train['sentence'] = train.sentence.apply(word_tokenize)
train['question'] = train.question.apply(word_tokenize)
%%time
dev['sentence'] = dev.sentence.apply(word_tokenize)
dev['question'] = dev.question.apply(word_tokenize)
print(train.shape)
print(dev.shape)
###Output
(457135, 5)
(54876, 5)
###Markdown
Body Tokenize
###Code
train.sentence.apply(len).value_counts().sort_index().plot()
train.question.apply(len).value_counts().sort_index().plot()
plt.xlim(0,70)
###Output
_____no_output_____
###Markdown
Test generator
###Code
from sklearn.metrics import precision_score
from keras_metrics import binary_f1_score, binary_precision
s = embedder(train.head(6).sentence)
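# glove_generator is imported from project/preprocessing/generator.py, whose source is
# not shown in this notebook. A minimal sketch of what it presumably yields, inferred
# only from how it is called below (200-dimensional feature vectors for the Dense input
# layer plus labels, or labels alone when only_labels=True). This is an assumption,
# not the actual implementation:
#
# def glove_generator(df, batch_size, embedder, only_labels=False):
#     while True:
#         batch = df.sample(batch_size)
#         if only_labels:
#             yield batch.label
#             continue
#         # average the GloVe vectors of each tokenized question and sentence, then
#         # concatenate them into one 200-d feature vector per (question, sentence) pair
#         q = np.stack([embedder([toks])[0].mean(axis=0) for toks in batch.question])
#         s = np.stack([embedder([toks])[0].mean(axis=0) for toks in batch.sentence])
#         yield np.hstack([q, s]), batch.label.values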
###Output
_____no_output_____
###Markdown
Shallow model
###Code
inp = tf.keras.layers.Input((200,))
x = tf.keras.layers.Dense(50, activation='relu')(inp)
out = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy',
metrics=['accuracy',
tf.keras.metrics.Precision(),
tf.keras.metrics.Recall()])
###Output
_____no_output_____
###Markdown
Unbalanced results
###Code
BATCH_SIZE = 128
epoch_steps = train.shape[0]//BATCH_SIZE
val_steps = dev.shape[0]//BATCH_SIZE
gen = glove_generator(train, BATCH_SIZE, embedder=embedder)
history = model.fit_generator(gen, steps_per_epoch=epoch_steps, epochs=10,
validation_data = glove_generator(dev, BATCH_SIZE, embedder=embedder),
validation_steps = val_steps)
def confusion_matrix_generation(trained_model, dev_set, batch=128, normalize=False, threshold=.5):
from sklearn.metrics import confusion_matrix
ypred = trained_model.predict_generator(glove_generator(dev_set, batch, embedder=embedder), steps = val_steps)
true_gen = glove_generator(dev_set, batch, embedder=embedder, only_labels=True)
ytrue = pd.concat([next(true_gen) for i in range(val_steps)]).values
conf = confusion_matrix(ytrue, ypred>threshold)
if normalize:
return (conf.T/conf.sum(axis=1)).T
else:
return conf
def plot_results(model, history, dev, normalize=True):
fig = plt.figure(figsize=(6,6), dpi=150)
shape = (2, 3)
res = pd.DataFrame(history.history).reset_index()
res = res.melt(id_vars = 'index')
res['val'] = res.variable.apply(lambda s: 'val' if 'val' in s else 'train')
res['variable'] = res.variable.apply(lambda s: s[4:] if 'val' in s else s)
ax0 = plt.subplot2grid(shape, (0,1), 1,2, fig=fig)
sns.lineplot(x = 'index', y='value', hue='variable', style='val', data=res, ax=ax0)
ax0.legend(loc=(-.6,.1), frameon=False)
sns.despine(); ax0.set_ylabel(''), ax0.set_xlabel('epoch')
ax1 = plt.subplot2grid((4, 3), (2,0), 1,1)
confusion = confusion_matrix_generation(model, dev, normalize=normalize)
sns.heatmap(confusion, annot=True, fmt='.2f' if normalize else 'd', ax=ax1, cbar=False)
ax1.set_ylabel('True label')
ax1.set_xlabel('Prediction')
plot_results(model, history, dev, False)
###Output
_____no_output_____
###Markdown
Balanced data
###Code
weight = train.label.value_counts()[0]/train.label.value_counts()[1]
inp = tf.keras.layers.Input((200,))
x = tf.keras.layers.Dense(50, activation='relu')(inp)
out = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model(inputs=inp, outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy',
metrics=['accuracy',
tf.keras.metrics.Precision(),
tf.keras.metrics.Recall()])
BATCH_SIZE = 256
epoch_steps = train.shape[0]//BATCH_SIZE
val_steps = dev.shape[0]//BATCH_SIZE
gen = glove_generator(train, BATCH_SIZE, embedder=embedder)
history = model.fit_generator(gen, steps_per_epoch=epoch_steps, epochs=10,
validation_data = glove_generator(dev, BATCH_SIZE, embedder=embedder),
validation_steps = val_steps, class_weight={0:1, 1:weight}, use_multiprocessing=True)
plot_results(model, history, dev, True)
###Output
_____no_output_____ |
DifferentRegionsCorrelatedLatents/s31_TH_MO.ipynb | ###Markdown
Focus on what matters: inferring low-dimensional dynamics from neural recordings
**By Neuromatch Academy**
__Content creators:__ Marius Pachitariu, Pedram Mouseli, Lucas Tavares, Jonny Coutinho, Blessing Itoro, Gaurang Mahajan, Rishika Mohanta
**Our 2021 Sponsors, including Presenting Sponsor Facebook Reality Labs**
---
Objective: It is very difficult to interpret the activity of single neurons in the brain, because their firing patterns are noisy, and it is not clear how a single neuron can contribute to cognition and behavior. However, neurons in the brain participate in local, regional and brainwide dynamics. No neuron is isolated from these dynamics, and much of a single neuron's activity can be predicted from the dynamics. Furthermore, only populations of neurons as a whole can control cognition and behavior. Hence it is crucial to identify these dynamical patterns and relate them to stimuli or behaviors. In this notebook, we generate simulated data from a low-dimensional dynamical system and then use seq-to-seq methods to predict one subset of neurons from another. This allows us to identify the low-dimensional dynamics that are sufficient to explain the activity of neurons in the simulation. The methods described in this notebook can be applied to large-scale neural recordings of hundreds to tens of thousands of neurons, such as the ones from the NMA-CN course.
---
Setup
###Code
# Imports
import torch
import numpy as np
import torch.nn as nn
import torch.nn.functional as F
from matplotlib import pyplot as plt
import math
from sklearn.linear_model import LinearRegression
import copy
# @title Figure settings
from matplotlib import rcParams
rcParams['figure.figsize'] = [20, 4]
rcParams['font.size'] =15
rcParams['axes.spines.top'] = False
rcParams['axes.spines.right'] = False
rcParams['figure.autolayout'] = True
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(device)
def pearson_corr_tensor(input, output):
rpred = output.detach().cpu().numpy()
rreal = input.detach().cpu().numpy()
rpred_flat = np.ndarray.flatten(rpred)
rreal_flat = np.ndarray.flatten(rreal)
corrcoeff = np.corrcoef(rpred_flat, rreal_flat)
return corrcoeff[0,1]
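# Quick check of the helper above (added example): the correlation between a tensor and
# a slightly noisy copy of itself should be close to 1.
_a = torch.randn(100, 5)
_b = _a + 0.1 * torch.randn(100, 5)
print('pearson_corr_tensor sanity check:', pearson_corr_tensor(_a, _b))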
#@title Set random seed
#@markdown Executing `set_seed(seed=seed)` you are setting the seed
# for DL its critical to set the random seed so that students can have a
# baseline to compare their results to expected results.
# Read more here: https://pytorch.org/docs/stable/notes/randomness.html
# Call `set_seed` function in the exercises to ensure reproducibility.
import random
import torch
def set_seed(seed=None, seed_torch=True):
if seed is None:
seed = np.random.choice(2 ** 32)
random.seed(seed)
np.random.seed(seed)
if seed_torch:
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
print(f'Random seed {seed} has been set.')
# In case that `DataLoader` is used
def seed_worker(worker_id):
worker_seed = torch.initial_seed() % 2**32
np.random.seed(worker_seed)
random.seed(worker_seed)
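# Example of how seed_worker would be wired into a DataLoader for reproducible batching
# (added illustration; `my_dataset` is a placeholder, no DataLoader is built in this notebook):
# g = torch.Generator()
# g.manual_seed(0)
# loader = torch.utils.data.DataLoader(my_dataset, batch_size=32,
#                                      worker_init_fn=seed_worker, generator=g)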
###Output
_____no_output_____
###Markdown
**Note:** If `cuda` is not enabled, go to `Runtime`--> `Change runtime type` and in `Hardware acceleration` choose `GPU`.
###Code
# Data Loading
#@title Data retrieval
import os, requests
fname = []
for j in range(3):
fname.append('steinmetz_part%d.npz'%j)
url = ["https://osf.io/agvxh/download"]
url.append("https://osf.io/uv3mw/download")
url.append("https://osf.io/ehmw2/download")
for j in range(len(url)):
if not os.path.isfile(fname[j]):
try:
r = requests.get(url[j])
except requests.ConnectionError:
print("!!! Failed to download data !!!")
else:
if r.status_code != requests.codes.ok:
print("!!! Failed to download data !!!")
else:
with open(fname[j], "wb") as fid:
fid.write(r.content)
alldat = np.array([])
for j in range(len(fname)):
alldat = np.hstack((alldat, np.load('steinmetz_part%d.npz'%j, allow_pickle=True)['dat']))
#@title Print Keys
print(alldat[0].keys())
#@title Define Steinmetz Class
class SteinmetzSession:
data = []
binSize = 10
nTrials = []
nNeurons = []
trialLen = 0
trimStart = "trialStart"
trimEnd = "trialEnd"
def __init__(self, dataIn):
self.data = copy.deepcopy(dataIn)
dims1 = np.shape(dataIn['spks'])
self.nTrials = dims1[1]
self.nNeurons = dims1[0]
self.trialLen = dims1[2]
def binData(self, binSizeIn): # Inputs: data, scalar for binning. Combines binSizeIn bins together to bin data smaller Ex. binSizeIn of 5 on the original dataset combines every 5 10 ms bins into one 50 ms bin across all trials.
varsToRebinSum = ['spks']
varsToRebinMean = ['wheel', 'pupil']
spikes = self.data['spks']
histVec = range(0,self.trialLen+1, binSizeIn)
spikesBin = np.zeros((self.nNeurons, self.nTrials, len(histVec)))
print(histVec)
for trial in range(self.nTrials):
spikes1 = np.squeeze(spikes[:,trial,:])
for time1 in range(len(histVec)-1):
spikesBin[:,trial, time1] = np.sum(spikes1[:, histVec[time1]:histVec[time1+1]-1], axis=1)
spikesBin = spikesBin[:,:,:-1]
self.data['spks'] = spikesBin
self.trialLen = len(histVec) -1
self.binSize = self.binSize*binSizeIn
s = "Binned spikes, turning a " + repr(np.shape(spikes)) + " matrix into a " + repr(np.shape(spikesBin)) + " matrix"
print(s)
def plotTrial(self, trialNum): # Basic function to plot the firing rate during a single trial. Used for debugging trimming and binning
plt.imshow(np.squeeze(self.data['spks'][:,trialNum,:]), cmap='gray_r', aspect = 'auto')
plt.colorbar()
plt.xlabel("Time (bins)")
plt.ylabel("Neuron #")
def realign_data_to_movement(self,length_time_in_ms): # input has to be n * nTrials * nbins
align_time_in_bins = np.round(self.data['response_time']/self.binSize*1000)+ int(500/self.binSize) # has to add 0.5 s because the first 0.5 s is pre-stimulus
length_time_in_bins = int(length_time_in_ms/self.binSize)
validtrials = self.data['response']!=0
maxtime = self.trialLen
newshape = (self.nNeurons,self.nTrials)
newshape+=(length_time_in_bins,)
newdata = np.empty(newshape)
for count,align_time_curr_trial in enumerate(align_time_in_bins):
if (validtrials[count]==0)|(align_time_curr_trial+length_time_in_bins>maxtime) :
validtrials[count] = 0
else:
newdata[:,count,:]= self.data['spks'][:,count,int(align_time_curr_trial):int(align_time_curr_trial)+length_time_in_bins]
# newdata = newdata[:,validtrials,:]
self.data['spks'] = newdata
# self.validtrials = validtrials
print('spikes aligned to movement, returning validtrials')
return validtrials
def get_areas(self):
print(set(list(self.data['brain_area'])))
def extractROI(self, region): #### extract neurons from single region
rmrt=list(np.where(self.data['brain_area']!=region))[0]
print(f' removing data from {len(rmrt)} neurons not contained in {region} ')
self.data['spks']=np.delete(self.data['spks'],rmrt,axis=0)
neur=len(self.data['spks'])
print(f'neurons remaining in trial {neur}')
self.data['brain_area']=np.delete(self.data['brain_area'],rmrt,axis=0)
self.data['ccf']=np.delete(self.data['ccf'],rmrt,axis=0)
def FlattenTs(self):
self.data['spks']=np.hstack(self.data['spks'][:])
def removeTrialAvgFR(self):
mFR = self.data['spks'].mean(1)
mFR = np.expand_dims(mFR, 1).repeat(self.data['spks'].shape[1],axis = 1)
print(np.shape(self.data['spks']))
print(np.shape(mFR))
self.data['spks'] = self.data['spks'].astype(float)
self.data['spks'] -= mFR
def permdims(self):
return torch.permute(torch.tensor(self.data['spks']),(2,1,0))
def smoothFR(self, smoothingWidth):# TODO: Smooth the data and save it back to the data structure
return 0
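# smoothFR above is left as a TODO. One possible way to smooth firing rates along the
# time axis (a hedged sketch, not the author's intended implementation) is a 1-D
# Gaussian filter over the last (time) dimension; this assumes scipy is available.
from scipy.ndimage import gaussian_filter1d

def smooth_spikes(spks, smoothing_width):
    # spks: array of shape (neurons, trials, time bins); smoothing_width: Gaussian sigma in bins
    return gaussian_filter1d(spks.astype(float), sigma=smoothing_width, axis=-1)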
###Output
_____no_output_____
###Markdown
select session and area
###Code
# set the sessions
session_num = 30
curr_session=SteinmetzSession(alldat[session_num])
# some preprocessing
validtrials = curr_session.realign_data_to_movement(500) # get 500 ms from movement time,
# cannot get realign and binning to work the same time =[
# print areas
curr_session.get_areas()
# CHANGE ME
# set areas
PA_name = 'MOs' # predicted area
IA_name = 'TH' # input area
# Set input/hyperparameters here:
ncomp = 10
learning_rate_start = 0.005
nTr = np.argwhere(validtrials) # since the other trials were defaulted to a zero value, only plot the valid trials
## plot a trial
plt.figure()
curr_session.plotTrial(nTr[1])
plt.title('All')
PA = copy.deepcopy(curr_session)
###remove all neurons not in motor cortex
PA.extractROI(PA_name)
### plot a trial from motor neuron
plt.figure()
PA.plotTrial(nTr[1])
plt.title('Predicted Area')
### permute the trials
PAdata = PA.permdims().float().to(device)
PAdata = PAdata[:,validtrials,:]
print(PAdata.shape)
if IA_name == 'noise':
# generate some negative controls:
neg_control_randn,_ = torch.max(torch.randn(PAdata.shape),0) # for now say the shape of noise matches the predicted area, I doubt that matters?
plt.imshow(neg_control_randn.numpy(),cmap = 'gray_r',aspect = 'auto')
plt.title('Random noise')
else:
IA = copy.deepcopy(curr_session)
###remove all neurons not in motor cortex
IA.extractROI(IA_name)
### plot a trial from motor neuron
plt.figure()
IA.plotTrial(nTr[1])
plt.title('Input Area')
IAdata = IA.permdims().float().to(device)
IAdata = IAdata[:,validtrials,:]
print(IAdata.shape)
#@title get indices for trials (split into ~60%, 30%,10%)
N = PAdata.shape[1]
np.random.seed(42)
ii = torch.randperm(N).tolist()
idx_train = ii[:math.floor(0.6*N)]
idx_val = ii[math.floor(0.6*N):math.floor(0.9*N)]
idx_test = ii[math.floor(0.9*N):]
#@title split into train, test and validation set
x0 = IAdata
x0_train = IAdata[:,idx_train,:]
x0_val = IAdata[:,idx_val,:]
x0_test = IAdata[:,idx_test,:]
x1 = PAdata
x1_train = PAdata[:,idx_train,:]
x1_val = PAdata[:,idx_val,:]
x1_test = PAdata[:,idx_test,:]
NN1 = PAdata.shape[2]
NN2 = IAdata.shape[2]
###Output
_____no_output_____
###Markdown
Our RNN model
###Code
class Net_singleinput(nn.Module): # our model
def __init__(self, ncomp, NN2, NN1, bidi=True): # NN2 is input dim, NN1 is output dim
super(Net_singleinput, self).__init__()
# play with some of the options in the RNN!
self.rnn1 = nn.RNN(NN2, ncomp, num_layers = 1, dropout = 0, # PA
bidirectional = bidi, nonlinearity = 'tanh')
self.fc = nn.Linear(ncomp,NN1)
def forward(self, x0,x1):
y = self.rnn1(x0)[0] # ncomp IAs
if self.rnn1.bidirectional:
# if the rnn is bidirectional, it concatenates the activations from the forward and backward pass
# we want to add them instead, so as to enforce the latents to match between the forward and backward pass
q = (y[:, :, :ncomp] + y[:, :, ncomp:])/2
else:
q = y
# the softplus function is just like a relu but it's smoothed out so we can't predict 0
# if we predict 0 and there was a spike, that's an instant Inf in the Poisson log-likelihood which leads to failure
z = F.softplus(self.fc(q), 10)
return z, q
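# Added sanity check (not part of the original notebook): with bidi=False the network
# maps (time bins, trials, input-area neurons) to (time bins, trials, predicted-area neurons)
# and also returns the ncomp-dimensional latents.
_check_net = Net_singleinput(4, 7, 5, bidi=False)   # ncomp=4, NN2=7, NN1=5
_check_x = torch.randn(20, 3, 7)                    # (time, trials, NN2)
_check_z, _check_q = _check_net(_check_x, None)
print(_check_z.shape, _check_q.shape)               # expected: torch.Size([20, 3, 5]) torch.Size([20, 3, 4])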
###Output
_____no_output_____
###Markdown
train model
###Code
# @title train loop
# you can keep re-running this cell if you think the cost might decrease further
# we define the Poisson log-likelihood loss
def Poisson_loss(lam, spk):
return lam - spk * torch.log(lam)
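# Quick numeric check of the Poisson loss above (added example):
# lam = spk = 1 gives 1 - 1*log(1) = 1, and lam = 2 with spk = 0 gives 2.
print(Poisson_loss(torch.tensor([1.0, 2.0]), torch.tensor([1.0, 0.0])))  # tensor([1., 2.])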
def train(net,train_input,train_output,val_input,val_output,niter = 400):
set_seed(42)
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate_start)
training_cost = []
val_cost = []
for k in range(niter):
### training
optimizer.zero_grad()
# the network outputs the single-neuron prediction and the latents
z,_= net(train_input,train_output)
# our log-likelihood cost
cost = Poisson_loss(z, train_output).mean()
# train the network as usual
cost.backward()
optimizer.step()
training_cost.append(cost.item())
### test on validation data
z_val,_ = net(val_input,val_output)
cost = Poisson_loss(z_val, val_output).mean()
val_cost.append(cost.item())
if k % 100 == 0:
print(f'iteration {k}, cost {cost.item():.4f}')
return training_cost,val_cost
# @title train model PA->PA only
net_PAPA = Net_singleinput(ncomp, NN1, NN1, bidi = False).to(device)
net_PAPA.fc.bias.data[:] = x1.mean((0,1))
training_cost_PAPA,val_cost_PAPA = train(net_PAPA,x1_train,x1_train,x1_val,x1_val) # train
# @title train model IA->PA only
net_IAPA = Net_singleinput(ncomp, NN2, NN1, bidi = False).to(device)
net_IAPA.fc.bias.data[:] = x1.mean((0,1))
training_cost_IAPA,val_cost_IAPA = train(net_IAPA,x0_train,x1_train,x0_val,x1_val) # train
###Output
Random seed 42 has been set.
iteration 0, cost 0.1730
iteration 100, cost 0.0830
iteration 200, cost 0.0806
iteration 300, cost 0.0807
###Markdown
Some plots and analyses
###Code
#@title plot the training side-by-side
plt.figure(figsize = [8,6])
plt.plot(training_cost_PAPA,'green')
plt.plot(val_cost_PAPA,'green',linestyle = '--')
plt.plot(training_cost_IAPA,'orange')
plt.plot(val_cost_IAPA,'orange',linestyle = '--')
plt.legend(['training cost (intrinsic)','validation cost (intrinsic)','training cost(extrinsic)',
'validation cost (extrinsic)'])
plt.title('Training cost over epochs')
plt.ylabel('cost')
plt.xlabel('epochs')
# see if the latents are correlated?
plotx = np.array(range(IAdata.shape[0]))
z_PAPA,y_PAPA= net_PAPA(x1_train,x1_train)
plt.figure(figsize = [6.4, 4.8])
plt.subplot(2,1,1)
plt.plot(plotx*10,y_PAPA[:,0,:].detach().cpu().numpy(),color = 'green')
plt.title('intrinsic prediction model latents')
plt.ylabel('A.U.')
z_IAPA,y_IAPA= net_IAPA(x0_train,x1_train)
plt.subplot(2,1,2)
plt.plot(plotx*10,y_IAPA[:,0,:].detach().cpu().numpy(),color = 'orange')
plt.title('extrinsic prediction model latents')
plt.xlabel('time (ms)')
plt.ylabel('A.U.')
plt.ylim([-1,1])
print(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,z_IAPA.flatten(start_dim = 0,end_dim = 1).T).mean())
print(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean())
print(F.cosine_similarity(z_IAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean())
plt.figure(figsize = [8,6])
plt.hist(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).detach().cpu().numpy(),color = 'green')
plt.hist(F.cosine_similarity(z_IAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).detach().cpu().numpy(),color = 'orange')
plt.legend(('intrinsic','extrinsic'))
plt.vlines(F.cosine_similarity(z_PAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean(),0,100,'r')
plt.vlines(F.cosine_similarity(z_IAPA.flatten(start_dim = 0,end_dim = 1).T,x1_train.flatten(start_dim = 0,end_dim = 1).T).mean(),0,100,'r')
plt.title('cosine similarity by neuron')
plt.ylabel('number of neurons predicted')
plt.xlabel('cosine similarity')
def regress_tensor(X,y):
X = X.detach().cpu().numpy()
y = y.flatten().detach().cpu().numpy().reshape(-1,1)
# print(X.shape)
# print(y.shape)
model = LinearRegression()
model.fit(X, y)
r_sq = model.score(X, y)
print('coefficient of determination:', r_sq)
return r_sq
rsqmat = []
for i in range(ncomp):
rsqmat.append(regress_tensor(y_IAPA.flatten(start_dim = 0,end_dim = 1),y_PAPA[:,:,i].reshape(-1,1)))
Avg_rsq = sum(rsqmat)/len(rsqmat)
print('Average Rsq for predicting the %i latents in IAPA from a linear combination of %i latents in PAPA is %2.3f'%(ncomp,ncomp,Avg_rsq))
max_rsq = max(rsqmat)
print('Max Rsq for predicting the %i latents in IAPA from a linear combination of %i latents in PAPA is %2.3f'%(ncomp,ncomp,max_rsq))
###Output
Average Rsq for predicting the 10 latents in IAPA from a linear combination of 10 latents in PAPA is 0.177
Max Rsq for predicting the 10 latents in IAPA from a linear combination of 10 latents in PAPA is 0.272
|
CRIM_CSV_Viewer_11_2020.ipynb | ###Markdown
Pandas for CRIM CSV Import the Tools
###Code
%load_ext nb_black
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
import seaborn as sns
###Output
_____no_output_____
###Markdown
Import the CSV Created by CRIM Intervals
###Code
Results = pd.read_csv('/Users/rfreedma/Documents/Python_Projects/CRIM-notebooks/CRIM/CRIM_CSV_Files/all_crim_matches_generic_6_20.csv')
Results.rename(columns=
{'Pattern Generating Match': 'Pattern_Generating_Match',
'Pattern matched':'Pattern_Matched',
'Piece Title': 'Piece_Title',
'First Note Measure Number': 'First_Note_Measure_Number',
'Last Note Measure Number': 'Last_Note_Measure_Number',
'Note Durations': 'Note_Durations'
},
inplace=True)
df = Results.drop(columns=['EMA', 'EMA url'])
df
###Output
_____no_output_____
###Markdown
Inspect It
###Code
df.head(7)
###Output
_____no_output_____
###Markdown
Basic Information
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 66396 entries, 0 to 66395
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Pattern_Generating_Match 66396 non-null object
1 Pattern_Matched 66396 non-null object
2 Piece_Title 66396 non-null object
3 Part 66396 non-null object
4 Start_Measure 66396 non-null int64
5 Stop_Measure 66396 non-null int64
6 Note_Durations 66396 non-null object
dtypes: int64(2), object(5)
memory usage: 3.5+ MB
###Markdown
Get "Counts" for Values of a Particular Column
###Code
e = df['Piece_Title'].value_counts().to_frame()
e
s = df['Pattern_Generating_Match'].value_counts()
s.to_frame()
df = Results.groupby(by='Pattern_Generating_Match')
df.head()
###Output
_____no_output_____
###Markdown
Plot Values as Histogram
###Code
plt.figure(figsize=(16,6))
sns.countplot(x='Piece_Title',data=Results)
###Output
_____no_output_____
###Markdown
Filtering: using dot notation produces a boolean Series; indexing the DataFrame with that Series produces a filtered frame
###Code
(
Results
.Piece_Title
.str.contains('Confitemini').sum()
)
Filtered_Results = Results[Results['Piece_Title'].str.contains('Confitemini')]
Filtered_Results
###Output
_____no_output_____
###Markdown
Get Data for an Individual Value within a Column. For example: a particular piece within Piece_Title
###Code
Piece_Detail = Results.Piece_Title == 'Missa Confitemini Kyrie'
Piece_Detail.head(4)
###Output
_____no_output_____
###Markdown
Sort Data According to Selected Columns
###Code
Results.sort_values(['Piece_Title','First_Note_Measure_Number'])
Filter_by_Type = Results["Part"].str.contains("Tenor")
Out = Filter_by_Type
print(Out)
Filter_by_Piece = Results["Piece_Title"].str.contains("Kyrie")
Filter_by_Piece
plt.figure(figsize=(12,6))
Results.First_Note_Measure_Number.value_counts().plot();
#sns.set_theme(style="whitegrid", palette="muted")
# Load the penguins dataset
#df = sns.load_dataset("Brumel")
plt.figure(figsize=(16,26))
# Draw a categorical scatterplot to show each observation
ax = sns.swarmplot(data=Results, y="Pattern_Matched", x="First_Note_Measure_Number", hue="Part");
ax.set(ylabel="")
#sns.set_theme(style="whitegrid", palette="muted")
# Load the penguins dataset
#df = sns.load_dataset("Brumel")
plt.figure(figsize=(16,26))
# Draw a categorical scatterplot to show each observation
ax = sns.boxplot(data=Results, y="Last_Note_Measure_Number", x="First_Note_Measure_Number", hue="Part");
ax.set(ylabel="")
sorted_output = Results.sort_values(['Piece_Title','First_Note_Measure_Number'])
#sorted_output.to_csv("sorted_patterns.csv")
#print(sorted_output)
pd.DataFrame(sorted_output)
grouped = Results.groupby(['Pattern_Generating_Match', 'Note_Durations'])
#grouped.describe()
#gb = df.groupby("A")
#gb.count() # or,
#grouped.get_group(['Pattern_Generating_Match', 'Note_Durations'])
#grouped = Heth.groupby('Pattern_Generating_Match').apply(print)
grouped
pd.DataFrame(grouped)
#DF_output = pd.DataFrame(grouped)
#DF_output.to_csv("grouped_test.csv")
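# A compact way to summarize this groupby (added example): count the rows in each
# (pattern, durations) group and look at the largest groups.
group_sizes = grouped.size().sort_values(ascending=False)
group_sizes.head(10)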
for Pattern_Generating_Match, Pattern_Generating_Match_df in grouped:
print(Pattern_Generating_Match)
print( )
print(Pattern_Generating_Match_df[['Piece_Title','First_Note_Measure_Number','Part']].head())
print( )
pd.DataFrame(Pattern_Generating_Match_df)
###Output
_____no_output_____
###Markdown
View Results of CRIM Classifier- Update filename to match the output of the one produced by CRIM Intervals Engine
###Code
pd.read_csv('test1.csv', usecols=['Pattern Generating Match', 'Classification Type', 'Soggetti 1 Part', 'Soggetti 1 Measure', 'Soggetti 2 Part', 'Soggetti 2 Measure', 'Soggetti 3 Part', 'Soggetti 3 Measure', 'Soggetti 4 Part', 'Soggetti 4 Measure'])
classified_data = pd.read_csv('test1.csv', usecols=['Pattern Generating Match', 'Classification Type', 'Soggetti 1 Part', 'Soggetti 1 Measure', 'Soggetti 2 Part', 'Soggetti 2 Measure', 'Soggetti 3 Part', 'Soggetti 3 Measure', 'Soggetti 4 Part', 'Soggetti 4 Measure'])
sns.catplot(x="Soggetti 1 Measure", y="Classification Type", data=classified_data)
#measure_list = ["Soggetti 1 Measure", "Soggetti 2 Measure"]
#voice_list = ["Soggetti 1 Part", "Soggetti 2 Part"]
# Draw a categorical scatterplot to show each observation
#ax = sns.boxplot(data=classified_data, y="column_list", x="voice_list", hue="Classification Type");
#ax.set(ylabel="")
###Output
_____no_output_____ |
_notebooks/2021-01-07-keras-starting-stoping-resuming.ipynb | ###Markdown
Keras Starting, Stopping, Resuming
> Keras Starting, Stopping, Resuming. This is the 1st step to perform when training a model. It is an exploratory approach to identify suitable learning rates. Once we have a suitable learning rate we can further continue with initial learning rate finder, cycles, and decay learning rate schedulers.
- toc: true
- badges: true
- comments: true
- categories: [Keras]
- image: images/chart-preview.png
1. Why do we need to start, stop, and resume training?
This is the first step required when training a model. It is an exploratory approach to identify suitable learning rates. Once we have a suitable learning rate, we can continue fine-tuning the model's accuracy using an initial learning rate, decay, and cycle schedulers.
There are a number of reasons why we need to start, stop, or resume the training of a model. The two main ones are:
- The training session is interrupted and training stops (because of a power outage, or exceeding a GPU session limit)
- We want to adjust the learning rate directly, "on the fly", to improve the accuracy of the model. This is usually done by lowering the learning rate by an order of magnitude.
The loss function of a neural network starts very high but drops very quickly. The accuracy of the model is very low at the beginning but rises very quickly. Eventually accuracy and loss both reach a plateau.
- The loss function starts very high, but then drops quickly
- The accuracy is very low at first, but then rises quickly
- Eventually loss and accuracy reach a plateau
What happens around epoch 30? Why does the loss drop so dramatically? And why does the accuracy increase so sharply? The reason for this behaviour is:
- Training was stopped
- The learning rate was lowered by an order of magnitude (lowering the learning rate by an order of magnitude is the standard practice)
- Training was then resumed
If training keeps being continued and the learning rate keeps being reduced, the learning rate will eventually become very small. The smaller the learning rate, the smaller its influence on the accuracy. In the end there are two problems:
- The learning rate will be very small, which in turn makes the model weight updates very small, so the model can no longer make meaningful progress.
- We start to overfit because of the small learning rate. The model descends into the low-loss regions of the loss landscape, overfits the training data, and does not generalize to the validation data.
2. Why not use learning rate schedulers or learning rate decay?
If the goal is to improve model accuracy by lowering the learning rate, why not simply fall back on learning rate schedulers or learning rate decay? The problem is that you may not have a good idea of the scheduler and decay parameter values:
- The approximate number of epochs to train for
- What an appropriate initial learning rate is
- Which learning rate range to use for CLRs
Being able to adjust the learning rate and resume training at the point where we stopped is something that learning rate schedulers and decay generally do not offer.
3. Advantages of ctrl + c training
- Finer control over the model
- Offers the possibility of manually pausing training at a specific epoch
- Once you have run a few experiments with "ctrl + c" training, you will have a good idea of suitable hyperparameters. At that point you can switch to learning rate schedulers and learning rate decay to further increase the accuracy of the model.
###Code
import tensorflow as tf
from tensorflow.keras.datasets import fashion_mnist
import cv2
import argparse
import numpy as np
from resnet import ResNet
from callbacks.epochcheckpoint import EpochCheckpoint
from callbacks.trainingmonitor import TrainingMonitor
import os
import sklearn
import keras.backend as K
###Output
_____no_output_____
###Markdown
4. Argparser for model start, stop, resume - checkpoints paths: at each x-th epoch the model will be saved - if model is given than model is loaded - if start-epoch is given then this epoch will be loaded for plot display
###Code
# argparser for model_checkpoints, start-epoch
ap = argparse.ArgumentParser()
ap.add_argument("-c", "--checkpoints", default = "checkpoints", help="path to output checkpoint directory")
ap.add_argument("-m", "--model", default = "checkpoints/epoch_25.hdf5", type=str, help="path to *specific* model checkpoint to load")
ap.add_argument("-s", "--start-epoch", type=int, default=25, help="epoch to restart training at")
args = vars(ap.parse_args([]))
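# When run as a standalone script rather than a notebook, the same options would be
# passed on the command line, e.g. (illustrative only, the script name is hypothetical):
#   python train_resume.py --checkpoints checkpoints --model checkpoints/epoch_25.hdf5 --start-epoch 25
# Here ap.parse_args([]) parses an empty argument list, so the defaults above are used.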
###Output
_____no_output_____
###Markdown
3. Load training tf dataset
###Code
#load training test data
((trainX, trainY), (testX, testY)) = fashion_mnist.load_data()
print(trainX.shape, trainY.shape, testX.shape, testY.shape)
###Output
(60000, 28, 28) (60000,) (10000, 28, 28) (10000,)
###Markdown
4. Load, rescale, reshape images using OpenCV
###Code
#fashion_mnist_dataset contains images of (28, 28), but our model was trained for images of (32,32)
#resize all images to (32, 32)
trainX = np.array([cv2.resize(image, (32, 32)) for image in trainX])
testX = np.array([cv2.resize(image,(32, 32)) for image in testX])
#scale images between (0, 1)
trainX = trainX.astype("float32")/ 255.
testX = testX.astype("float32")/ 255.
#reshape data to include batch and channel dimensions --> (batch/len(dataset), size1, size2, no_channels)
trainX = trainX.reshape(len(trainX), 32, 32, 1)
testX = testX.reshape(len(testX), 32, 32, 1)
print(trainX.shape, testX.shape, trainY.shape, testY.shape)
###Output
(60000, 32, 32, 1) (10000, 32, 32, 1) (60000,) (10000,)
###Markdown
5. Label Binarizer - Y-data is given as numbers between 0...9 ->corresponding to 10 categories -> its shape is (no of obsevations, ) - Y-data is transformed into a (no of observations, 10)-matrix - obs1 :(0, 0, 0, 0, 0, 1, 0, 0, 0, 0)
###Code
# binarize labels
from sklearn.preprocessing import LabelBinarizer
lb = LabelBinarizer()
trainY = lb.fit_transform(trainY)
testY = lb.transform(testY)
# initialize data augumentation for training and testing
#trainAug = tf.keras.preprocessing.image.ImageDataGenerator()
###Output
_____no_output_____
###Markdown
6. Model start or load - if the model is loaded than we still can make changes to it and continue running it, i.e. modify the learning rate
###Code
if args["model"] == None:
optimizer = tf.keras.optimizers.SGD(lr = 0.001)
loss = tf.keras.losses.BinaryCrossentropy()
model = ResNet.build(32, 32, 1, 10, (9, 9, 9),(64, 64, 128, 256), reg=0.0001)
model.compile(optimizer = optimizer, loss = loss, metrics = ["accuracy"])
else:
print("INFO: loading model", args["model"], "...")
model = tf.keras.models.load_model(args["model"])
print("INFO lr = {}", format(K.get_value(model.optimizer.lr)))
K.set_value(model.optimizer.lr, 1e-01)
print("INFO lr = {}", format(K.get_value(model.optimizer.lr)))
###Output
INFO: loading model checkpoints/epoch_25.hdf5 ...
INFO lr = {} 0.10000000149011612
INFO lr = {} 0.10000000149011612
###Markdown
7. Callbacks The model checkpoints will be saved in HDF5 format (`.hdf5` and `.h5` are the same format). Whether a checkpoint contains only the weights or the whole model (architecture, weights, and optimizer state) depends on whether the checkpoint callback calls `model.save_weights` or `model.save`; since the notebook later reloads a checkpoint with `tf.keras.models.load_model`, the full model is being saved here.
###Code
plotPath = os.path.sep.join(["output", "resnet_fashion_mnist.png"])
jsonPath = os.path.sep.join(["output", "resnet_fashion_mnist.json"])
# construct the set of callbacks
callbacks = [EpochCheckpoint(args["checkpoints"], every=1, startAt=args["start_epoch"]),
TrainingMonitor(plotPath, jsonPath=jsonPath, startAt=args["start_epoch"])]
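# EpochCheckpoint is a custom callback imported from callbacks/epochcheckpoint.py and its
# source is not shown here. A minimal sketch of what such a callback might look like,
# inferred only from the constructor arguments used above (this is an assumption, not the
# actual implementation):
#
# class EpochCheckpoint(tf.keras.callbacks.Callback):
#     def __init__(self, outputPath, every=5, startAt=0):
#         super().__init__()
#         self.outputPath = outputPath
#         self.every = every
#         self.intEpoch = startAt
#     def on_epoch_end(self, epoch, logs=None):
#         if (self.intEpoch + 1) % self.every == 0:
#             p = os.path.sep.join([self.outputPath, "epoch_{}.hdf5".format(self.intEpoch + 1)])
#             self.model.save(p, overwrite=True)
#         self.intEpoch += 1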
trainX, trainY = trainX[:64, :, :, :], trainY[:64]
testX, testY = testX[:64, :, :, :], testY[:64]
model.fit(trainX, trainY, batch_size=8,\
validation_data=(testX, testY),\
steps_per_epoch=len(trainX)//16,\
epochs=3, callbacks=callbacks)
print("Done")
###Output
Epoch 1/3
4/4 [==============================] - ETA: 0s - loss: 0.6501 - accuracy: 0.6250checkpoints 25
4/4 [==============================] - 3s 763ms/step - loss: 0.6501 - accuracy: 0.6250 - val_loss: 0.7446 - val_accuracy: 0.2656
Epoch 2/3
4/4 [==============================] - ETA: 0s - loss: 0.6804 - accuracy: 0.5625checkpoints 26
4/4 [==============================] - 3s 816ms/step - loss: 0.6804 - accuracy: 0.5625 - val_loss: 0.7338 - val_accuracy: 0.2812
Epoch 3/3
4/4 [==============================] - ETA: 0s - loss: 0.6872 - accuracy: 0.4062checkpoints 27
4/4 [==============================] - 4s 882ms/step - loss: 0.6872 - accuracy: 0.4062 - val_loss: 0.7354 - val_accuracy: 0.2344
Done
|
Copy_of_Cats_vs_Dogs_with_Data_Augmentation.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Dogs vs Cats Image Classification With Image Augmentation
In this tutorial, we will discuss how to classify images into pictures of cats or pictures of dogs. We'll build an image classifier using `tf.keras.Sequential` model and load data using `tf.keras.preprocessing.image.ImageDataGenerator`.
Specific concepts that will be covered: In the process, we will build practical experience and develop intuition around the following concepts
* Building _data input pipelines_ using the `tf.keras.preprocessing.image.ImageDataGenerator` class - How can we efficiently work with data on disk to interface with our model?
* _Overfitting_ - what is it, how to identify it, and how can we prevent it?
* _Data Augmentation_ and _Dropout_ - Key techniques to fight overfitting in computer vision tasks that we will incorporate into our data pipeline and image classifier model.
We will follow the general machine learning workflow:
1. Examine and understand data
2. Build an input pipeline
3. Build our model
4. Train our model
5. Test our model
6. Improve our model/Repeat the process
**Before you begin**
Before running the code in this notebook, reset the runtime by going to **Runtime -> Reset all runtimes** in the menu above. If you have been working through several notebooks, this will help you avoid reaching Colab's memory limits.
Importing packages
Let's start by importing required packages:
* os - to read files and directory structure
* numpy - for some matrix math outside of TensorFlow
* matplotlib.pyplot - to plot the graph and display images in our training and validation data
###Code
from __future__ import absolute_import, division, print_function, unicode_literals
try:
# Use the %tensorflow_version magic if in colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Data Loading To build our image classifier, we begin by downloading the dataset. The dataset we are using is a filtered version of Dogs vs. Cats dataset from Kaggle (ultimately, this dataset is provided by Microsoft Research).In previous Colabs, we've used TensorFlow Datasets, which is a very easy and convenient way to use datasets. In this Colab however, we will make use of the class `tf.keras.preprocessing.image.ImageDataGenerator` which will read data from disk. We therefore need to directly download *Dogs vs. Cats* from a URL and unzip it to the Colab filesystem.
###Code
_URL = 'https://storage.googleapis.com/mledu-datasets/cats_and_dogs_filtered.zip'
zip_dir = tf.keras.utils.get_file('cats_and_dogs_filterted.zip', origin=_URL, extract=True)
###Output
_____no_output_____
###Markdown
The dataset we have downloaded has following directory structure.cats_and_dogs_filtered|__ train |______ cats: [cat.0.jpg, cat.1.jpg, cat.2.jpg ....] |______ dogs: [dog.0.jpg, dog.1.jpg, dog.2.jpg ...]|__ validation |______ cats: [cat.2000.jpg, cat.2001.jpg, cat.2002.jpg ....] |______ dogs: [dog.2000.jpg, dog.2001.jpg, dog.2002.jpg ...] We'll now assign variables with the proper file path for the training and validation sets.
###Code
base_dir = os.path.join(os.path.dirname(zip_dir), 'cats_and_dogs_filtered')
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'validation')
train_cats_dir = os.path.join(train_dir, 'cats') # directory with our training cat pictures
train_dogs_dir = os.path.join(train_dir, 'dogs') # directory with our training dog pictures
validation_cats_dir = os.path.join(validation_dir, 'cats') # directory with our validation cat pictures
validation_dogs_dir = os.path.join(validation_dir, 'dogs') # directory with our validation dog pictures
###Output
_____no_output_____
###Markdown
Understanding our data Let's look at how many cats and dogs images we have in our training and validation directory
###Code
num_cats_tr = len(os.listdir(train_cats_dir))
num_dogs_tr = len(os.listdir(train_dogs_dir))
num_cats_val = len(os.listdir(validation_cats_dir))
num_dogs_val = len(os.listdir(validation_dogs_dir))
total_train = num_cats_tr + num_dogs_tr
total_val = num_cats_val + num_dogs_val
print('total training cat images:', num_cats_tr)
print('total training dog images:', num_dogs_tr)
print('total validation cat images:', num_cats_val)
print('total validation dog images:', num_dogs_val)
print("--")
print("Total training images:", total_train)
print("Total validation images:", total_val)
###Output
_____no_output_____
###Markdown
Setting Model Parameters For convenience, let us set up variables that will be used later while pre-processing our dataset and training our network.
###Code
BATCH_SIZE = 100
IMG_SHAPE = 150 # Our training data consists of images with width of 150 pixels and height of 150 pixels
###Output
_____no_output_____
###Markdown
After defining our generators for training and validation images, **flow_from_directory** method will load images from the disk and will apply rescaling and will resize them into required dimensions using single line of code. Data Augmentation Overfitting often occurs when we have a small number of training examples. One way to fix this problem is to augment our dataset so that it has sufficient number and variety of training examples. Data augmentation takes the approach of generating more training data from existing training samples, by augmenting the samples through random transformations that yield believable-looking images. The goal is that at training time, your model will never see the exact same picture twice. This exposes the model to more aspects of the data, allowing it to generalize better.In **tf.keras** we can implement this using the same **ImageDataGenerator** class we used before. We can simply pass different transformations we would want to our dataset as a form of arguments and it will take care of applying it to the dataset during our training process.To start off, let's define a function that can display an image, so we can see the type of augmentation that has been performed. Then, we'll look at specific augmentations that we'll use during training.
###Code
# This function will plot images in the form of a grid with 1 row and 5 columns where images are placed in each column.
def plotImages(images_arr):
fig, axes = plt.subplots(1, 5, figsize=(20,20))
axes = axes.flatten()
for img, ax in zip(images_arr, axes):
ax.imshow(img)
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Flipping the image horizontally We can begin by randomly applying horizontal flip augmentation to our dataset and seeing how individual images will look after the transformation. This is achieved by passing `horizontal_flip=True` as an argument to the `ImageDataGenerator` class.
###Code
image_gen = ImageDataGenerator(rescale=1./255, horizontal_flip=True)
train_data_gen = image_gen.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE))
###Output
_____no_output_____
###Markdown
To see the transformation in action, let's take one sample image from our training set and repeat it five times. The augmentation will be randomly applied (or not) to each repetition.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Rotating the image The rotation augmentation will randomly rotate the image up to a specified number of degrees. Here, we'll set it to 45.
###Code
image_gen = ImageDataGenerator(rescale=1./255, rotation_range=45)
train_data_gen = image_gen.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE))
###Output
_____no_output_____
###Markdown
To see the transformation in action, let's once again take a sample image from our training set and repeat it. The augmentation will be randomly applied (or not) to each repetition.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Applying Zoom We can also apply Zoom augmentation to our dataset, zooming images up to 50% randomly.
###Code
image_gen = ImageDataGenerator(rescale=1./255, zoom_range=0.5)
train_data_gen = image_gen.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE, IMG_SHAPE))
###Output
_____no_output_____
###Markdown
One more time, take a sample image from our training set and repeat it. The augmentation will be randomly applied (or not) to each repetition.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Putting it all together We can apply all these augmentations, and even others, with just one line of code, by passing the augmentations as arguments with proper values.Here, we have applied rescale, rotation of 45 degrees, width shift, height shift, horizontal flip, and zoom augmentation to our training images.
###Code
image_gen_train = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
train_data_gen = image_gen_train.flow_from_directory(batch_size=BATCH_SIZE,
directory=train_dir,
shuffle=True,
target_size=(IMG_SHAPE,IMG_SHAPE),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Let's visualize how a single image would look like five different times, when we pass these augmentations randomly to our dataset.
###Code
augmented_images = [train_data_gen[0][0][0] for i in range(5)]
plotImages(augmented_images)
###Output
_____no_output_____
###Markdown
Creating Validation Data generator Generally, we only apply data augmentation to our training examples, since the original images should be representative of what our model needs to manage. So, in this case we are only rescaling our validation images and converting them into batches using ImageDataGenerator.
###Code
image_gen_val = ImageDataGenerator(rescale=1./255)
val_data_gen = image_gen_val.flow_from_directory(batch_size=BATCH_SIZE,
directory=validation_dir,
target_size=(IMG_SHAPE, IMG_SHAPE),
class_mode='binary')
###Output
_____no_output_____
###Markdown
Model Creation
Define the model
The model consists of four convolution blocks with a max pool layer in each of them. Before the final Dense layers, we're also applying a Dropout probability of 0.5. It means that 50% of the values coming into the Dropout layer will be set to zero. This helps to prevent overfitting. Then we have a fully connected layer with 512 units, with a `relu` activation function. The model will output class probabilities for two classes, dogs and cats, using `softmax`.
###Code
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(32, (3,3), activation='relu', input_shape=(150, 150, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(128, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512, activation='relu'),
tf.keras.layers.Dense(2, activation='softmax')
])
###Output
_____no_output_____
###Markdown
Compiling the modelAs usual, we will use the `adam` optimizer. Since we output a softmax categorization, we'll use `sparse_categorical_crossentropy` as the loss function. We would also like to look at training and validation accuracy on each epoch as we train our network, so we are passing in the metrics argument.
###Code
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
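# (Added note) class_mode='binary' in the generators yields integer labels 0/1, which is
# exactly what sparse_categorical_crossentropy expects with the 2-unit softmax output above.
# An equivalent alternative would be a single sigmoid output unit with loss='binary_crossentropy'.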
###Output
_____no_output_____
###Markdown
Model SummaryLet's look at all the layers of our network using **summary** method.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Train the model It's time we train our network.Since our batches are coming from a generator (`ImageDataGenerator`), we'll use `fit_generator` instead of `fit`.
###Code
epochs=100
history = model.fit_generator(
train_data_gen,
steps_per_epoch=int(np.ceil(total_train / float(BATCH_SIZE))),
epochs=epochs,
validation_data=val_data_gen,
validation_steps=int(np.ceil(total_val / float(BATCH_SIZE)))
)
###Output
_____no_output_____
###Markdown
Visualizing results of the training We'll now visualize the results we get after training our network.
###Code
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']
loss = history.history['loss']
val_loss = history.history['val_loss']
epochs_range = range(epochs)
plt.figure(figsize=(8, 8))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')
plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
###Output
_____no_output_____ |
nbs/2/1-Numerical-Data-Pandas-Primer.ipynb | ###Markdown
1-D Series
###Code
s = pd.Series([4, 7, -5, 3])
s
s.index
s.values
s[1:3]
s2 = s**2
s2
s2+s
print(np.sum(s))
print(np.mean(s))
print(np.std(s))
s3 = pd.Series([4, 7, -5, 3], index=['d', 'b', 'a', 'c'])
s3
# !!!
s2+s3
s4 = pd.Series({'d':10, 'b':12, 'a':3, 'c':9})
s4
s3+s4
###Output
_____no_output_____
###Markdown
x-D Dataframes The most important feature pandas gives us access to is the `DataFrame`. Dataframes are two-dimensional structures that you can think of very much like a spreadsheet with named columns and rows. In fact, it supports reading in CSV and Excel files. First, let's read a CSV containing population information of the States of the USA.
###Code
# Create a DataFrame, `df`, from the csv located in `data/population.csv`
df = pd.read_csv('../../data/population.csv')
###Output
_____no_output_____
###Markdown
To see what our data looks like, we can use `df.head(n)` to see the first `n` rows, with n=5 the default:
###Code
df.head()
###Output
_____no_output_____
###Markdown
We see that for each state we have two IDs and then 6 columns of years - in this case each column is the population of that state during the given year.We can acess columns by referencing the column name just like a python dictionary:
###Code
df['2010'].head()
###Output
_____no_output_____
###Markdown
We can get multiple columns at once by passing a list of column names:
###Code
df[['2010', '2011']].head()
###Output
_____no_output_____
###Markdown
And then we can access groups of rows using a range of row IDs:
###Code
df[5:10]
###Output
_____no_output_____
###Markdown
Accessing individual rows is different. You can't just use `df[i]`. Instead you need to use df.loc:
###Code
df.loc[20]
type(df.loc[20])
###Output
_____no_output_____
###Markdown
Pandas gives us an easy way to get summary statistics of our data:
###Code
df.describe()
###Output
_____no_output_____
###Markdown
One thing you might notice is that describe only lists the numeric columns, but `Id2` is included in that even though it would be better to treat it as a string. pandas tries to guess the datatype of each column and in this case, all of the values in `Id2` are integers, so it gets treated as an integer.We can see the datatype details:
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
We can cast `Id2` to a string using `astype` and then override the original column:
###Code
df['Id2'] = df['Id2'].astype(str)
df.dtypes
###Output
_____no_output_____
###Markdown
Or we could have specified the data type when we originally read the CSV:
###Code
# Pass a dictionary to the dtype parameter with `'column': dtype`
df = pd.read_csv('../../data/population.csv', dtype={'Id2': str})
df.dtypes
###Output
_____no_output_____
###Markdown
Operations and FilteringEach column of data behaves very much like a normal numpy array and thus can be used for mathematical operations. For example, to get the population change from 2014 to 2015 for each state:
###Code
df['2015'] - df['2014']
###Output
_____no_output_____
###Markdown
Rather than continually computing that value, we can save it to a new column in the DataFrame. Let's make a new column called 'change':
###Code
df['pop_change'] = df['2015'] - df['2014']
df.head()
###Output
_____no_output_____
###Markdown
Just like numpy, we can also do element-wise comparisons. For example, to find out whether a state's population decreased over that year:
###Code
df['pop_change'] < 0
###Output
_____no_output_____
###Markdown
The `True` values are the states with negative population change (decrease). But that boolean array by itself isn't very useful. We can use that array as an index to filter our DataFrame:
###Code
df[df['pop_change'] < 0]
###Output
_____no_output_____
###Markdown
Now we have a subset of the DataFrame with only the decreasing populations. Statistical Operations
###Code
print(df.mean())
print(df.std())
print(df.max())
print(df.sum())
# or over single columns
print(df[['2010','2011']].mean())
###Output
2010 6.020546e+06
2011 6.065338e+06
dtype: float64
###Markdown
Merging DataFramesIf you have data across multiple files, as long as there is a common column they can be joined. To start with, let's read in a CSV which contains the number of housing units in the state for each year from 2010-2015.
###Code
housing = pd.read_csv('../../data/housing.csv', dtype={'Id2': str})
housing.head()
###Output
_____no_output_____
###Markdown
Since the Id column is shared, it can easily be merged with our original DataFrame:
###Code
merged = df.merge(housing, on='Id')
merged.head()
###Output
_____no_output_____
###Markdown
Since the column names are all shared, pandas appends '_x' and '_y' to columns from the left and right dataframes, respectively.This isn't very user-friendly, so we can use the parameter `suffixes` to specify custom labels to append. Furthermore, we can also specify `Id2` and `Geography` in `on` so we don't duplicate those columns.
###Code
merged = df.merge(housing, on=['Id', 'Id2', 'Geography'], suffixes=('_population', '_housing'))
merged.head()
###Output
_____no_output_____
###Markdown
We can also notice that when we did the merge, we lost one row. That is because the housing dataset didn't contain data for Puerto Rico.
###Code
print('Population:', len(df), 'Merged:', len(merged))
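# (Added example) If we wanted to keep all 52 rows, including Puerto Rico, a left join
# keeps every row of the population table and fills the missing housing columns with NaN:
merged_left = df.merge(housing, on=['Id', 'Id2', 'Geography'], how='left',
                       suffixes=('_population', '_housing'))
# len(merged_left) == 52: the unmatched Puerto Rico row is kept, with NaN housing values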
###Output
Population: 52 Merged: 51
###Markdown
Now we can do something involving both datasets. For example, finding the ratio of people to houses:
###Code
merged['ratio'] = merged['2015_population']/merged['2015_housing']
merged['ratio'].head()
###Output
_____no_output_____
###Markdown
Now let's use `sort_values` to view the states with the lowest ratio of people to houses and view just the state name and ratio columns:
###Code
# Sort the data by ratio
merged_sorted = merged.sort_values(by=['ratio'])
# Just get the `Geography` and `ratio` columns
merged_subset = merged_sorted[['Geography', 'ratio']]
# View the first 5
merged_subset.head()
###Output
_____no_output_____
###Markdown
And now to view the top 5 use ascending=False:
###Code
merged.sort_values(by=['ratio'], ascending=False)[['Geography', 'ratio']].head()
###Output
_____no_output_____
###Markdown
Grouping Rows by Value
Sometimes you'd like to aggregate groups of similar rows. For instance, let's compare the change in housing stock between the states with decreasing population and those with increasing population. First let's make a column for the housing change and a column with either True or False for whether the population is increasing.
###Code
merged['housing_change'] = merged['2015_housing'] - merged['2014_housing']
merged['pop_change_increasing'] = merged['pop_change'] > 0
###Output
_____no_output_____
###Markdown
Then use `groupby` to group our rows by whether they had an increasing or decreasing population change from 2014-2015:
###Code
grouped = merged.groupby('pop_change_increasing')
###Output
_____no_output_____
###Markdown
Then we can run aggregate functions on our groups, or `describe` to run the same summary statistics we did before:
###Code
grouped.describe()['housing_change']
###Output
_____no_output_____
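###Markdown
Besides `describe`, we could apply specific aggregation functions to the grouped column directly; a small sketch (not in the original notebook):
###Code
# Aggregate the housing change per group with a few named functions
grouped['housing_change'].agg(['mean', 'median', 'count'])
###Output
_____no_output_____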
###Markdown
We can see that the average housing increase for states with decreasing population is lower. But the change in housing for all those states is still positive.
Statistical Operations
###Code
grouped.mean()
grouped.mean()['housing_change']
###Output
_____no_output_____ |
Data Science Course/1. Programming/3. Python (with solutions)/Module 1 - Python Intro and Numpy/Practice Solution/02-Numpy Exercises - Solutions.ipynb | ###Markdown
02-Numpy Exercises - Solutions
____ KeytoDataScience.com
Now that we've learned about NumPy let's test your knowledge. We'll start off with a few simple tasks and then you'll be asked some more complicated questions.
1. NumPy Arrays
Import NumPy as np
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
Create an array of 10 zeros
###Code
np.zeros(10)
###Output
_____no_output_____
###Markdown
Create an array of 10 ones
###Code
np.ones(10)
###Output
_____no_output_____
###Markdown
Create an array of 10 fives
###Code
np.ones(10) * 5
###Output
_____no_output_____
###Markdown
Create an array of the integers from 10 to 50
###Code
np.arange(10,51)
###Output
_____no_output_____
###Markdown
Create an array of all the even integers from 10 to 50
###Code
np.arange(10,51,2)
###Output
_____no_output_____
###Markdown
Create a 3x3 matrix with values ranging from 0 to 8
###Code
np.arange(9).reshape(3,3)
###Output
_____no_output_____
###Markdown
Create a 3x3 identity matrix
###Code
np.eye(3)
###Output
_____no_output_____
###Markdown
Use NumPy to generate a random number between 0 and 1
###Code
np.random.rand(1)
###Output
_____no_output_____
###Markdown
Use NumPy to generate an array of 25 random numbers sampled from a standard normal distribution
###Code
np.random.randn(25)
###Output
_____no_output_____
###Markdown
Create the following matrix:
###Code
np.arange(1,101).reshape(10,10) / 100
###Output
_____no_output_____
###Markdown
Create an array of 20 linearly spaced points between 0 and 1:
###Code
np.linspace(0,1,20)
###Output
_____no_output_____
###Markdown
2. Numpy Indexing and Selection
Now you will be given a few matrices, and be asked to replicate the resulting matrix outputs:
###Code
mat = np.arange(1,26).reshape(5,5)
mat
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[2:,1:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3,4]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[:3,1:2]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[4,:]
# WRITE CODE HERE THAT REPRODUCES THE OUTPUT OF THE CELL BELOW
# BE CAREFUL NOT TO RUN THE CELL BELOW, OTHERWISE YOU WON'T
# BE ABLE TO SEE THE OUTPUT ANY MORE
mat[3:5,:]
###Output
_____no_output_____
###Markdown
3. Numpy Operations
Get the sum of all the values in mat
###Code
mat.sum()
###Output
_____no_output_____
###Markdown
Get the standard deviation of the values in mat
###Code
mat.std()
###Output
_____no_output_____
###Markdown
Get the sum of all the columns in mat
###Code
mat.sum(axis=0)
###Output
_____no_output_____ |
examples/CentralityAnalysis/PCN_CentralityAnalysis.ipynb | ###Markdown
This notebook contains an example of a node centrality analysis algorithm, in this case eigenvector centrality, applied to a Protein Contact Network (PCN) of the SARS-CoV-2 spike protein.
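Before running it on the real protein, a minimal toy illustration of eigenvector centrality using only networkx (an added sketch, independent of the PCN-specific helpers used below):
###Code
import networkx as nx

# Hypothetical toy graph: node 1 touches the most nodes, so it should receive the highest score
toy = nx.Graph([(0, 1), (1, 2), (1, 3), (3, 4)])
nx.eigenvector_centrality(toy)
###Output
_____no_output_____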
###Code
#handle different path separators
from sys import platform
if platform == "linux" or platform == "linux2":
# linux
add_slash_to_path = '/'
elif platform == "darwin":
# OS X
add_slash_to_path = '/'
elif platform == "win32":
# Windows...
add_slash_to_path = '\\'
import os
try:
from pcn.pcn_miner import pcn_miner, pcn_pymol_scripts #installed with pip
except:
try:
import sys #git cloned
cwd = os.getcwd()
exd = os.path.abspath(os.path.join(cwd, os.pardir))
pcnd = os.path.abspath(os.path.join(exd, os.pardir)) + add_slash_to_path + "pcn"
sys.path.append(pcnd)
from pcn_miner import pcn_miner, pcn_pymol_scripts
except:
raise ImportError("PCN-Miner is not correctly installed.")
import numpy as np
import subprocess
import networkx as nx
output_path = ""
adj_path = "Adj\\"
protein = "6vxx"
protein_path = "{}.pdb".format(protein)
atoms = pcn_miner.readPDBFile(protein_path) #read
coordinates = pcn_miner.getResidueCoordinates(atoms)
coordinates
dict_residue_name = pcn_miner.associateResidueName(coordinates)
residue_names = np.array(list (dict_residue_name.items()))
residue_names
A = pcn_miner.adjacent_matrix(output_path, coordinates, protein, 4, 8)
A
G = nx.from_numpy_array(A)
residue_names_1 = np.array(residue_names[:, 1], dtype = str)
centrality_measures = pcn_miner.eigenvector_c(G, residue_names_1)
centrality_measures
pcn_miner.save_centralities(output_path, centrality_measures, protein, "eigenvector_centrality")
pcn_pymol_scripts.pymol_plot_centralities(output_path, centrality_measures, protein_path, "eigenvector_centrality")
filepath = "Centralities\eigenvector_centrality\Sessions\{}_eigenvector_centrality_session.pse".format(protein)
if platform == "win32":
os.startfile(filepath)
else:
subprocess.run(["pymol", filepath])
###Output
_____no_output_____ |
content/courses/ml_intro/12_validation/05_validation.ipynb | ###Markdown
Generalizing to different populations
The fact that the learning curve shows a convergence between the training and test samples (at least when our sample is larger) provides some assurance that our model will continue to perform comparably well when tested on new observations sampled from the same population. This does *not*, however, mean that performance will remain comparable when tested on new *populations*. If our goal is to generalize beyond the population we sampled from in our training sample, it's advisable to compute validation curves that evaluate generalization performance in as realistic a way as possible.
For example, if we intend to apply our age-prediction model to countries that are undersampled in our existing data (or not sampled at all), we might want to quantify how well the model generalizes across countries that *are* adequately sampled. Let's take a look at the country representation in our Johnson (2014) dataset:
###Code
# Show 20 most common countries in the dataset
data['COUNTRY'].value_counts()[:20]
###Output
_____no_output_____
###Markdown
There's far more data from US participants than other countries, so let's train our linear regression modelโonce again predicting age from the 300 itemsโon half of the US subset. Then we'll evaluate its performance both in the other half of the US subset, and in the full sample for several other countries (all those with more than 500 data points).
###Code
# Split US data in two
us_data = data.query('COUNTRY == "USA"')
n_usa = len(us_data)
inds = np.random.choice(n_usa, n_usa // 2, replace=False)
train_mask = np.zeros(n_usa, dtype=bool)
train_mask[inds] = True
us_train = us_data.iloc[train_mask]
us_test = us_data.iloc[~train_mask]  # complement of the training rows (~ on the integer index array would not give this)
# Train model and evaluate in-sample
model = LinearRegression()
items, age = get_features(us_train, 'items', 'AGE')
model.fit(items, age)
train_score = r2_score(age, model.predict(items))
print(f"R^2 in training half of US sample: {train_score:.2f}")
# Evaluate in testing half of US data
items, age = get_features(us_test, 'items', 'AGE')
test_score = r2_score(age, model.predict(items))
print(f"R^2 in testing half of US sample: {test_score:.2f}\n")
# Get data for all countries other than USA with >= 500 observations
countries = data.groupby('COUNTRY').filter(lambda x: len(x) >= 500)
countries = countries.query('COUNTRY != "USA"')
# Loop over countries and test performance in each one
results = []
for name, country_data in countries.groupby('COUNTRY'):
items, age = get_features(country_data, 'items', 'AGE')
country_score = r2_score(age, model.predict(items))
n_obs = len(country_data)
results.append([name, round(country_score, 2), n_obs])
results = pd.DataFrame(results, columns=['country', 'R^2', 'n'])
print("Other countries:")
results.sort_values('R^2', ascending=False)
###Output
R^2 in training half of US sample: 0.51
R^2 in testing half of US sample: 0.50
Other countries:
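###Markdown
As an added sketch (not part of the original notebook), scikit-learn's leave-one-group-out cross-validation expresses the same "train on some countries, test on a held-out country" idea more directly, assuming the same `countries` DataFrame and `get_features` helper defined above, and that `get_features` returns one row per input row in the same order:
###Code
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Each fold holds out one country entirely and scores R^2 on it
items_all, age_all = get_features(countries, 'items', 'AGE')
scores = cross_val_score(LinearRegression(), items_all, age_all,
                         groups=countries['COUNTRY'], cv=LeaveOneGroupOut(),
                         scoring='r2')
scores
###Output
_____no_output_____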
|
Data analysis- Campus recruitment.ipynb | ###Markdown
Introduction
This week, we are looking into the campus recruitment dataset. First off, what exactly is a job placement? A job placement is very similar to an internship, just much longer. Doing a placement as part of a course provides a huge benefit: it gives you real working experience and increases your employability when you are ready to enter the job market. No, you won't just be making coffee, although you will be responsible for a fair amount of general administrative duties. Since you are considered an employee of the company, you will have the opportunity to develop your skills through meatier assignments. After you have completed your job placement, you may be given the opportunity to join the company if you exceed their expectations! For this dataset, we have created the following objectives to answer frequently asked questions.
Objectives
1. Which factors influence whether a candidate gets placed?
2. Does percentage matter for one to get placed?
3. Which degree specialization is most in demand by corporates?
4. Play with the data, conducting all statistical tests.
Importing required packages
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import LabelEncoder
# Creating a function to print
def overview():
data =pd.read_csv('../input/factors-affecting-campus-placement/Placement_Data_Full_Class.csv')
print("First 5 lines of data:\n")
print(data.head())
print("\n\n\n")
print("There are {} rows and {} columns".format(data.shape[0], data.shape[1]))
print("\n\n\n")
print("Data types:\n")
print(data.dtypes)
print("\n\n\n")
print("% of missing values per column:\n")
print(data.isnull().mean().round(2)*100)
print("Statistical summary:\n")
print(data.describe())
return data
data = overview()
###Output
First 5 lines of data:
sl_no gender ssc_p ssc_b hsc_p hsc_b hsc_s degree_p \
0 1 M 67.00 Others 91.00 Others Commerce 58.00
1 2 M 79.33 Central 78.33 Others Science 77.48
2 3 M 65.00 Central 68.00 Central Arts 64.00
3 4 M 56.00 Central 52.00 Central Science 52.00
4 5 M 85.80 Central 73.60 Central Commerce 73.30
degree_t workex etest_p specialisation mba_p status salary
0 Sci&Tech No 55.0 Mkt&HR 58.80 Placed 270000.0
1 Sci&Tech Yes 86.5 Mkt&Fin 66.28 Placed 200000.0
2 Comm&Mgmt No 75.0 Mkt&Fin 57.80 Placed 250000.0
3 Sci&Tech No 66.0 Mkt&HR 59.43 Not Placed NaN
4 Comm&Mgmt No 96.8 Mkt&Fin 55.50 Placed 425000.0
There are 215 rows and 15 columns
Data types:
sl_no int64
gender object
ssc_p float64
ssc_b object
hsc_p float64
hsc_b object
hsc_s object
degree_p float64
degree_t object
workex object
etest_p float64
specialisation object
mba_p float64
status object
salary float64
dtype: object
% of missing values per column:
sl_no 0.0
gender 0.0
ssc_p 0.0
ssc_b 0.0
hsc_p 0.0
hsc_b 0.0
hsc_s 0.0
degree_p 0.0
degree_t 0.0
workex 0.0
etest_p 0.0
specialisation 0.0
mba_p 0.0
status 0.0
salary 31.0
dtype: float64
Statistical summary:
sl_no ssc_p hsc_p degree_p etest_p mba_p \
count 215.000000 215.000000 215.000000 215.000000 215.000000 215.000000
mean 108.000000 67.303395 66.333163 66.370186 72.100558 62.278186
std 62.209324 10.827205 10.897509 7.358743 13.275956 5.833385
min 1.000000 40.890000 37.000000 50.000000 50.000000 51.210000
25% 54.500000 60.600000 60.900000 61.000000 60.000000 57.945000
50% 108.000000 67.000000 65.000000 66.000000 71.000000 62.000000
75% 161.500000 75.700000 73.000000 72.000000 83.500000 66.255000
max 215.000000 89.400000 97.700000 91.000000 98.000000 77.890000
salary
count 148.000000
mean 288655.405405
std 93457.452420
min 200000.000000
25% 240000.000000
50% 265000.000000
75% 300000.000000
max 940000.000000
###Markdown
Dealing with NaN values - About 31% of the salary values are missing (67 of the 215 rows), so it's not practical to remove those rows, as doing so would discard a large part of the data. Since a missing salary corresponds to a candidate who was not placed, we simply fill those NaN values with 0.
###Code
data = data.fillna(0)
data.isnull().sum()
###Output
_____no_output_____
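###Markdown
If we did want median imputation instead (as an alternative to filling with 0), a sketch with scikit-learn's SimpleImputer could look like this; note this is an added illustration and is not what the rest of this notebook uses:
###Code
from sklearn.impute import SimpleImputer

# Hypothetical alternative: replace missing salaries with the median salary instead of 0
imputer = SimpleImputer(strategy='median')
# data[['salary']] = imputer.fit_transform(data[['salary']])  # would replace the fillna(0) approach used above
###Output
_____no_output_____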
###Markdown
Can gender affect salary?
###Code
plt.figure(figsize = (8, 20))
sns.boxplot(data = data, x = 'gender', y = 'salary',showfliers = False).set_title("Boxplot showing salary by gender") # outliers not shown here
###Output
_____no_output_____
###Markdown
- Finding the median for females, since it's hard to read off the plot.
###Code
data[data['gender'] == 'F'].salary.median()
###Output
_____no_output_____
###Markdown
- It seems that the medians for males and females are very close, while the distribution of salaries is wider for males.
Do academic results affect the salary obtained?
- Note that, because the missing salaries were filled with 0 above, candidates who were not placed appear in these plots with a salary of 0.
###Code
# Secondary Education percentage- 10th Grade vs Salary
sns.regplot(data = data, x ='ssc_p', y = 'salary' ).set_title("Regression plot: Secondary Education percentage- 10th Grade vs Salary")
# Higher Secondary Education percentage- 12th Grade vs Salary
sns.regplot(data = data, x ='hsc_p', y = 'salary' ).set_title("Regression plot: Higher Secondary Education percentage- 12th Grade vs Salary")
# Degree percentage vs Salary
sns.regplot(data = data, x ='degree_p', y = 'salary' ).set_title("Regression plot: Degree percentage vs Salary")
# Employability test percentage vs salary
sns.regplot(data = data, x ='etest_p', y = 'salary' ).set_title("Regression plot: Employability test percentage vs salary")
# MBA test percentage vs salary
sns.regplot(data = data, x ='mba_p', y = 'salary').set_title("Regression plot: MBA test percentage vs salary ")
###Output
_____no_output_____
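###Markdown
To put a number on what the scatter plots suggest, we could also compute the correlations directly; a small added check (not in the original notebook), restricted to placed candidates (salary > 0):
###Code
# Correlation of each score with salary, for candidates who were actually placed
placed = data[data['salary'] > 0]
placed[['ssc_p', 'hsc_p', 'degree_p', 'etest_p', 'mba_p', 'salary']].corr()['salary']
###Output
_____no_output_____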
###Markdown
- We can clearly see that there is no strong correlation between academic scores and salary.
Can type of specialisation affect placement?
###Code
# Look at placement between gender
plt.rc('axes', labelsize=15) # fontsize of the x and y labels
plt.rc('xtick', labelsize=13) # fontsize of the tick labels
plt.rc('ytick', labelsize=13)
plt.figure(figsize = (8, 10))
sns.countplot(data = data, x = 'gender', hue = 'status', palette = "RdBu").set_title("Barplot showing placement between gender")
###Output
_____no_output_____
###Markdown
In absolute counts, it seems that more males obtained a job placement.
###Code
# Look at placement among specialization in higher secondary education
plt.rc('axes', labelsize=15) # fontsize of the x and y labels
plt.rc('xtick', labelsize=13) # fontsize of the tick labels
plt.rc('ytick', labelsize=13)
plt.figure(figsize = (8, 10))
sns.countplot(data = data, x = 'hsc_s', hue = 'status', palette = "RdBu").set_title("Barplot showing placement among specialisation")
# Look at placement among degree specialization
plt.rc('axes', labelsize=15) # fontsize of the x and y labels
plt.rc('xtick', labelsize=13) # fontsize of the tick labels
plt.rc('ytick', labelsize=13)
plt.figure(figsize = (8, 10))
sns.countplot(data = data, x = 'degree_t', hue = 'status', palette = "RdBu").set_title("Barplot showing placement among specialisation (degree)")
# Look at placement among master specialization
plt.rc('axes', labelsize=15) # fontsize of the x and y labels
plt.rc('xtick', labelsize=13) # fontsize of the tick labels
plt.rc('ytick', labelsize=13)
plt.figure(figsize = (8, 10))
sns.countplot(data = data, x = 'specialisation', hue = 'status', palette = "RdBu").set_title("Barplot showing placement among specialisation (masters)")
###Output
_____no_output_____
###Markdown
- From what we see here, it seems that business-related specialisations tend to have a higher chance of getting a job placement.
Will work experience affect placement?
###Code
# Look at placement among work experience
plt.rc('axes', labelsize=15) # fontsize of the x and y labels
plt.rc('xtick', labelsize=13) # fontsize of the tick labels
plt.rc('ytick', labelsize=13)
plt.figure(figsize = (8, 10))
sns.countplot(data = data, x = 'workex', hue = 'status', palette = "RdBu").set_title("Barplot showing placement among different work experience")
###Output
_____no_output_____
###Markdown
- It seems like work experience has no effect on job placement.
Using logistic regression to predict the chance of getting a job placement
###Code
# Use label encoder to change categorical data to numerical
le = LabelEncoder()
# Implementing LE on gender
le.fit(data.gender.drop_duplicates())
data.gender = le.transform(data.gender)
# Implementing LE on ssc_b
le.fit(data.ssc_b.drop_duplicates())
data.ssc_b = le.transform(data.ssc_b)
# Implementing LE on hsc_b
le.fit(data.hsc_b.drop_duplicates())
data.hsc_b = le.transform(data.hsc_b)
# Implementing LE on hsc_s
le.fit(data.hsc_s.drop_duplicates())
data.hsc_s = le.transform(data.hsc_s)
# Implementing LE on degree_t
le.fit(data.degree_t.drop_duplicates())
data.degree_t = le.transform(data.degree_t)
# Implementing LE on workex
le.fit(data.workex.drop_duplicates())
data.workex = le.transform(data.workex)
# Implementing LE on specialisation
le.fit(data.specialisation.drop_duplicates())
data.specialisation = le.transform(data.specialisation)
# Implementing LE on status
le.fit(data.status.drop_duplicates())
data.status = le.transform(data.status)
plt.figure(figsize=(15,10))
corrMatrix = data.corr()
sns.heatmap(corrMatrix, annot=True)
plt.show()
###Output
_____no_output_____
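###Markdown
The repeated label-encoding blocks above could equally be written as a loop over the categorical columns; a compact sketch of the same step (an alternative form, not an additional step to run):
###Code
# Hypothetical compact form of the encoding performed above
cat_cols = ['gender', 'ssc_b', 'hsc_b', 'hsc_s', 'degree_t', 'workex', 'specialisation', 'status']
for col in cat_cols:
    data[col] = LabelEncoder().fit_transform(data[col])
###Output
_____no_output_____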
###Markdown
Creating the test split - I will exclude salary from X, since only candidates who got a placement have a non-zero salary, so keeping it would leak the placement outcome.
###Code
# Assigning X and y
X = data.drop(['status', 'sl_no', 'salary'], axis=1)
y = data['status']
# Implementing train and test splits
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 42)
# Looking into the shape of training and test dataset
print(X_train.shape)
print(X_test.shape)
# instantiate the model
logreg = LogisticRegression(solver='liblinear', random_state=0)
# Fitting the model
logreg.fit(X_train, y_train)
y_pred_test = logreg.predict(X_test)
print('Model accuracy score: {0:0.4f}'. format(accuracy_score(y_test, y_pred_test)))
###Output
Model accuracy score: 0.9070
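###Markdown
Accuracy alone can be misleading when the classes are imbalanced (roughly two thirds of the candidates were placed), so an added sketch of a fuller evaluation could be:
###Code
from sklearn.metrics import confusion_matrix, classification_report

# Per-class view of the predictions on the test split
print(confusion_matrix(y_test, y_pred_test))
print(classification_report(y_test, y_pred_test))
###Output
_____no_output_____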
|
src/haskell-programming-from-first-principles/11-algebraic-datatypes.ipynb | ###Markdown
Chapter 11: Algebraic datatypes
Multiple choice
###Code
data Weekday =
Monday
| Tuesday
| Wednesday
| Thursday
| Friday
###Output
_____no_output_____
###Markdown
`Weekday` is a type with five data constructors
###Code
f Friday = "Miller Time"
:t f
-- f :: Weekday -> String
###Output
_____no_output_____
###Markdown
Types defined with the `data` keyword must begin with a capital letter. The function `g xs = xs !! (length xs - 1)` delivers the final element of `xs`.
Ciphers
---
###Code
import Data.Char
punctuations = " "
base = ord 'a'
end = ord 'z'
size = end - base + 1
import Data.Maybe
data LetterOrPunctuation = Letter Char | Punctuation Char deriving Show
unLetter :: LetterOrPunctuation -> Char
unLetter (Letter x) = x
unLetter (Punctuation x) = x
getLetter :: Char -> Maybe LetterOrPunctuation
getLetter x
| x `elem` punctuations = (Just . Punctuation) x
| l < base = Nothing
| l > end = Nothing
| otherwise = (Just . Letter) x
where l = ord x
getLetterFromInt :: Int -> Maybe LetterOrPunctuation
getLetterFromInt = getLetter . chr
unsafeGetLetter :: Char -> LetterOrPunctuation
unsafeGetLetter = fromJust . getLetter
unsafeGetLetterFromInt :: Int -> LetterOrPunctuation
unsafeGetLetterFromInt = fromJust . getLetterFromInt
caesarLetter :: Int -> LetterOrPunctuation -> LetterOrPunctuation
caesarLetter _ (Punctuation x) = Punctuation x
caesarLetter shift (Letter x) = (unsafeGetLetterFromInt . f . ord) x where
f i = (i - base + shift) `mod` size + base
caesarLetter 3 (Letter 'a')
getLetters :: String -> Maybe [LetterOrPunctuation]
getLetters = traverse getLetter
unsafeGetLetters :: String -> [LetterOrPunctuation]
unsafeGetLetters = fromJust . getLetters
unLetters :: [LetterOrPunctuation] -> String
unLetters = fmap unLetter
caesar :: Int -> [LetterOrPunctuation] -> [LetterOrPunctuation]
caesar shift = fmap (caesarLetter shift)
unLetters <$> caesar 3 <$> getLetters "qwer"
unLetters <$> caesar 3 <$> getLetters "qw er"
uncaesar :: Int -> [LetterOrPunctuation] -> [LetterOrPunctuation]
uncaesar shift = caesar (-shift)
unLetters . uncaesar 3 <$> caesar 3 <$> getLetters "qwer"
unLetters . uncaesar 3 <$> caesar 3 <$> getLetters "qw er"
###Output
_____no_output_____
###Markdown
---
###Code
zipIgnoringPunctuation :: [a] -> [LetterOrPunctuation] -> [(Maybe a, LetterOrPunctuation)]
zipIgnoringPunctuation = f where
f [] _ = []
f _ [] = []
f xs (y@(Punctuation _):yt) = (Nothing, y):f xs yt
f (x:xt) (y:yt) = (Just x, y):f xt yt
zipIgnoringPunctuation [1, 2, 3, 4] <$> getLetters "qwer"
zipIgnoringPunctuation [1, 2, 3, 4] <$> getLetters "qw er"
toShift :: LetterOrPunctuation -> Maybe Int
toShift (Letter x) = Just $ ord x - base
toShift (Punctuation _) = Nothing
import Control.Monad (join)
import Data.Bifunctor (first)
vigenere' :: (Int -> Int) -> [LetterOrPunctuation] -> [LetterOrPunctuation] -> [LetterOrPunctuation]
vigenere' mapShift word s = encoded where
shifts = cycle . fmap (fmap mapShift . toShift) $ word
dirtyPairs = zipIgnoringPunctuation shifts s
pairs = fmap (first join) dirtyPairs
-- f Nothing l = l
-- f (Just s) l = caesarLetter s l
f s l = fromMaybe l (caesarLetter <$> s <*> pure l)
encoded = fmap (uncurry f) pairs
vigenere :: [LetterOrPunctuation] -> [LetterOrPunctuation] -> [LetterOrPunctuation]
vigenere = vigenere' id
unLetters <$> (vigenere <$> getLetters "ally" <*> getLetters "meet at dawn")
unvigenere :: [LetterOrPunctuation] -> [LetterOrPunctuation] -> [LetterOrPunctuation]
unvigenere = vigenere' (*(-1))
input = getLetters "meet at dawn"
keyword = getLetters "ally"
code = (vigenere <$> keyword <*>)
uncode = (unvigenere <$> keyword <*>)
coded = code input
uncoded = uncode coded
unLetters <$> uncoded
###Output
_____no_output_____
###Markdown
As-patterns
###Code
import Data.Char
isSubsequenceOf :: Eq a => [a] -> [a] -> Bool
isSubsequenceOf [] _ = True
isSubsequenceOf _ [] = False
isSubsequenceOf sub@(c:cs) (c':cs')
| c == c' = isSubsequenceOf cs cs'
| otherwise = isSubsequenceOf sub cs'
isSubsequenceOf "blah" "blahwoot"
isSubsequenceOf "blah" "wootblah"
isSubsequenceOf "blah" "wboloath"
isSubsequenceOf "blah" "wootbla"
isSubsequenceOf "blah" ""
isSubsequenceOf "" "blahwoot"
capitalizeWords :: String -> [(String, String)]
capitalizeWords = fmap f . words where
capitalize [] = []
capitalize (c:cs) = toUpper c:cs
tuplify a = (a, a)
f = fmap capitalize . tuplify
capitalizeWords "hello world"
capitalizeWords :: String -> [(String, String)]
capitalizeWords = fmap f . words where
f [] = ([], [])
f s@(c:cs) = (s, toUpper c:cs)
capitalizeWords "hello world"
###Output
_____no_output_____
###Markdown
Language exercises
###Code
capitalizeWord :: String -> String
capitalizeWord [] = []
capitalizeWord (c:cs) = toUpper c:cs
capitalizeWord "Titter"
capitalizeWord "titter"
import Data.List (intercalate)
import Data.List.Split (splitOn)
capitalizeParagraph :: String -> String
capitalizeParagraph = intercalate sep . fmap capitalizeWord . splitOn sep
where sep = ". "
capitalizeParagraph "blah. woot ha."
###Output
_____no_output_____
###Markdown
Phone exercise
###Code
import Data.List
import Data.Maybe
newtype PositiveInt = PositiveInt Int deriving Show
makePositiveInt :: Int -> Maybe PositiveInt
makePositiveInt x
| x > 0 = (Just . PositiveInt) x
| otherwise = Nothing
unsafeMakePositiveInt :: Int -> PositiveInt
unsafeMakePositiveInt = fromJust . makePositiveInt
data DaPhoneButton =
One
| Two
| Three
| Four
| Five
| Six
| Seven
| Eight
| Nine
| Star
| Zero
| Hash
deriving Show
data DaPhoneRawInput = DaPhoneRawInput DaPhoneButton PositiveInt deriving Show
data DaPhoneControl = ToUpper deriving (Show, Eq)
data DaPhoneInput = LetterInput Char | ControlInput DaPhoneControl deriving (Show, Eq)
data DaPhoneChar =
CharSimple Char
| CharComplex DaPhoneControl Char
deriving Show
sets :: DaPhoneButton -> [DaPhoneInput]
sets One = fmap LetterInput "1"
sets Two = fmap LetterInput "abc2"
sets Three = fmap LetterInput "def3"
sets Four = fmap LetterInput "ghi4"
sets Five = fmap LetterInput "jkl5"
sets Six = fmap LetterInput "mno6"
sets Seven = fmap LetterInput "pqrs7"
sets Eight = fmap LetterInput "tuv8"
sets Nine = fmap LetterInput "wxyz9"
sets Star = ControlInput ToUpper : fmap LetterInput "*"
sets Zero = fmap LetterInput " +_0"
sets Hash = fmap LetterInput ".,#"
cycles :: DaPhoneButton -> [DaPhoneInput]
cycles = cycle . sets
input2raw :: DaPhoneInput -> DaPhoneRawInput
input2raw i
| Just n <- i `elemIndex` sets One = DaPhoneRawInput One (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Two = DaPhoneRawInput Two (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Three = DaPhoneRawInput Three (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Four = DaPhoneRawInput Four (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Five = DaPhoneRawInput Five (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Six = DaPhoneRawInput Six (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Seven = DaPhoneRawInput Seven (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Eight = DaPhoneRawInput Eight (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Nine = DaPhoneRawInput Nine (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Star = DaPhoneRawInput Star (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Zero = DaPhoneRawInput Zero (PositiveInt (n + 1))
| Just n <- i `elemIndex` sets Hash = DaPhoneRawInput Hash (PositiveInt (n + 1))
preprocess :: DaPhoneRawInput -> DaPhoneInput
preprocess (DaPhoneRawInput button (PositiveInt n)) = cycles button !! (n - 1)
process' :: [DaPhoneInput] -> String
process' [] = []
process' (ControlInput ToUpper:[]) = undefined
process' (ControlInput ToUpper:ControlInput ToUpper:_) = error "impossible"
process' (ControlInput ToUpper:LetterInput c:is) = toUpper c : process' is
process' (LetterInput c:is) = c : process' is
process :: [DaPhoneRawInput] -> String
process = process' . fmap preprocess
convertChar :: Char -> DaPhoneChar
convertChar c
| isUpper c = CharComplex ToUpper (toLower c)
| otherwise = CharSimple c
char2input :: DaPhoneChar -> [DaPhoneInput]
char2input (CharSimple c) = [LetterInput c]
char2input (CharComplex control c) = [ControlInput control, LetterInput c]
convertString :: String -> [DaPhoneChar]
convertString = fmap convertChar
reverseProcess :: String -> [DaPhoneRawInput]
reverseProcess = fmap input2raw . (>>= char2input) . convertString
convertString "Wanna"
reverseProcess "Wanna"
convo :: [String]
convo =
[
"Wanna play 20 questions",
"Ya",
"U 1st haha",
"Lol ok. Have u ever tasted alcohol lol",
"Lol ya",
"Wow ur cool haha. Ur turn",
"Ok. Do u think I am pretty Lol",
"Lol ya",
"Haha thanks just making sure rofl ur turn"
]
fmap convertString convo
keypresses :: DaPhoneRawInput -> PositiveInt
keypresses (DaPhoneRawInput _ n) = n
unPositiveInt :: PositiveInt -> Int
unPositiveInt (PositiveInt n) = n
fingerTaps :: Char -> Int
fingerTaps = sum . fmap (unPositiveInt . keypresses . input2raw) . char2input . convertChar
convertString "Wanna"
fmap fingerTaps "Wanna"
fmap fingerTaps "wanna"
import Data.Function (on)
mostPopular :: Ord a => [a] -> a
mostPopular = head . maximumBy (compare `on` length) . group . sort
mostPopularLetter :: String -> Char
mostPopularLetter = mostPopular . filter (/= ' ')
l = mostPopularLetter "asdfasdfzzzZZZZZ "
l
mostPopularLetter ""
fingerTaps l
mostPopularLetterOverall :: [String] -> Char
mostPopularLetterOverall = mostPopularLetter . join
mostPopularLetterOverall convo
mostPopularWord :: String -> String
mostPopularWord = mostPopular . words
mostPopularWord "hello world hello"
mostPopularWordOverall :: [String] -> String
mostPopularWordOverall = mostPopularWord . unwords
mostPopularWordOverall convo
mostPopularWordOverallIgnoringCase = mostPopularWord . fmap toLower . unwords $ convo
mostPopularWordOverallIgnoringCase
###Output
_____no_output_____
###Markdown
Hutton's Razor
###Code
data Expr
= Lit Integer
| Add Expr Expr
eval :: Expr -> Integer
eval (Lit x) = x
eval (Add e1 e2) = ((+) `on` eval) e1 e2
eval (Add (Lit 1) (Lit 9001))
9002
instance Show Expr where
show (Lit x) = show x
show (Add e1 e2) = show' e1 ++ " + " ++ show' e2 where
show' e@(Lit _) = show e
show' e = "(" ++ show e ++ ")"
Add (Lit 1) (Add (Lit 2) (Lit 55))
a1 = Add (Lit 9001) (Lit 1)
a2 = Add a1 (Lit 20001)
a3 = Add (Lit 1) a2
a3
###Output
_____no_output_____ |
[Dacon] Jeju_Big-2007/Untitled.ipynb | ###Markdown
1,2,3,4 -> 7
2,3,4,5 -> 8
3,4,5,6 -> 9
4,5,6,7 -> 10
5,6,7,8 -> 11
6,7,8,9 -> 12
7,8,9,10 -> 1
8,9,10,11 -> 2
9,10,11,12 -> 3
10,11,12,1 -> 4
###Code
def build_dataset(tts):
t1, t2, t3, t4, ty = tts
x1 = df_num[(df_merged['REG_YEAR'] + df_merged['REG_MONTH']) == t1]['AMT'].values
x2 = df_num[(df_merged['REG_YEAR'] + df_merged['REG_MONTH']) == t2]['AMT'].values
x3 = df_num[(df_merged['REG_YEAR'] + df_merged['REG_MONTH']) == t3]['AMT'].values
x4 = df_num[(df_merged['REG_YEAR'] + df_merged['REG_MONTH']) == t4]['AMT'].values
y = df_num[(df_merged['REG_YEAR'] + df_merged['REG_MONTH']) == ty]['AMT'].values
return np.array([x1,x2,x3,x4,y])
reg_comb = [
['201901', '201902', '201903', '201904', '201907'],
['201902', '201903', '201904', '201905', '201908'],
['201903', '201904', '201905', '201906', '201909'],
['201904', '201905', '201906', '201907', '201910'],
['201905', '201906', '201907', '201908', '201911'],
['201906', '201907', '201908', '201909', '201912'],
['201907', '201908', '201909', '201910', '202001'],
['201908', '201909', '201910', '201911', '202002'],
['201909', '201910', '201911', '201912', '202003'],
]
reg_test = ['201910', '201911', '201912', '202001', '202004']
ds = []
for reg in reg_comb:
xy = build_dataset(reg)
ds.append(xy)
ds = np.array(ds)
xy = build_dataset(reg_test)
x_te = np.array([np.log1p(xy[:4])])
from tensorflow.keras import backend as k
from tensorflow.keras.layers import Input, Dense, Dropout, concatenate
from tensorflow.keras.layers import Bidirectional, LSTM, Attention
from tensorflow.keras.models import Model
from tensorflow.keras import regularizers
from tensorflow.keras.optimizers import Adam, RMSprop
x_trn = np.log1p(ds[:-1,:4])
y_trn = np.log1p(ds[:-1,-1:])
x_val = np.log1p(ds[-1:, :4])
y_val = np.log1p(ds[-1:, -1:])
k.clear_session()
# xYear = Input(batch_shape=(None, 1))
# xMonth = Input(batch_shape=(None, 12))
# xYearEmb = Dense(5)(xYear)
# xMonthEmb = Dense(5)(xMonth)
xInput = Input(batch_shape=(None, 4, 697))
xDrop = Dropout(0.3)(xInput)
xLstm = Bidirectional(LSTM(64, return_sequences=True))(xDrop)
xLstm = Bidirectional(LSTM(64))(xLstm)
xDense = Dense(128, activation='relu', kernel_regularizer=regularizers.l2(0.01))(xLstm)
# xConcat = concatenate([xLstm, xYearEmb, xMonthEmb])
xOutput = Dense(697)(xDense)
xOutput = k.expand_dims(xOutput, axis=1)
model = Model(xInput, xOutput)
model.compile(
loss='mean_squared_error',
optimizer=Adam(learning_rate=0.005))
print(model.summary())
model.fit(x_trn, y_trn, epochs=500, validation_data=[x_val, y_val], verbose=1)
###Output
Train on 8 samples, validate on 1 samples
Epoch 1/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0848 - val_loss: 6.9309
Epoch 2/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0840 - val_loss: 6.7770
Epoch 3/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0826 - val_loss: 6.9015
Epoch 4/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0806 - val_loss: 6.8181
Epoch 5/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0787 - val_loss: 6.8442
Epoch 6/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0775 - val_loss: 6.8705
Epoch 7/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0769 - val_loss: 6.7987
Epoch 8/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0767 - val_loss: 6.8991
Epoch 9/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0763 - val_loss: 6.7861
Epoch 10/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0755 - val_loss: 6.8885
Epoch 11/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0742 - val_loss: 6.8087
Epoch 12/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0727 - val_loss: 6.8494
Epoch 13/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0714 - val_loss: 6.8473
Epoch 14/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0705 - val_loss: 6.8111
Epoch 15/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0699 - val_loss: 6.8746
Epoch 16/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0694 - val_loss: 6.7921
Epoch 17/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0688 - val_loss: 6.8787
Epoch 18/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0679 - val_loss: 6.7955
Epoch 19/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0668 - val_loss: 6.8617
Epoch 20/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0657 - val_loss: 6.8147
Epoch 21/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0646 - val_loss: 6.8356
Epoch 22/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0636 - val_loss: 6.8369
Epoch 23/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0627 - val_loss: 6.8125
Epoch 24/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0620 - val_loss: 6.8536
Epoch 25/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0613 - val_loss: 6.7960
Epoch 26/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0606 - val_loss: 6.8641
Epoch 27/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0599 - val_loss: 6.7848
Epoch 28/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0592 - val_loss: 6.8706
Epoch 29/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0584 - val_loss: 6.7771
Epoch 30/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0577 - val_loss: 6.8738
Epoch 31/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0569 - val_loss: 6.7726
Epoch 32/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0561 - val_loss: 6.8729
Epoch 33/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0553 - val_loss: 6.7724
Epoch 34/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0544 - val_loss: 6.8667
Epoch 35/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0534 - val_loss: 6.7775
Epoch 36/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0524 - val_loss: 6.8554
Epoch 37/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0513 - val_loss: 6.7864
Epoch 38/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0503 - val_loss: 6.8417
Epoch 39/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0493 - val_loss: 6.7960
Epoch 40/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0483 - val_loss: 6.8292
Epoch 41/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0474 - val_loss: 6.8036
Epoch 42/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0465 - val_loss: 6.8197
Epoch 43/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0456 - val_loss: 6.8082
Epoch 44/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0448 - val_loss: 6.8129
Epoch 45/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0439 - val_loss: 6.8107
Epoch 46/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0431 - val_loss: 6.8075
Epoch 47/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0422 - val_loss: 6.8126
Epoch 48/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0414 - val_loss: 6.8018
Epoch 49/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0406 - val_loss: 6.8163
Epoch 50/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0398 - val_loss: 6.7925
Epoch 51/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0390 - val_loss: 6.8263
Epoch 52/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0382 - val_loss: 6.7732
Epoch 53/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0377 - val_loss: 6.8541
Epoch 54/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0375 - val_loss: 6.7284
Epoch 55/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0383 - val_loss: 6.9261
Epoch 56/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0411 - val_loss: 6.6434
Epoch 57/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0475 - val_loss: 7.0249
Epoch 58/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0527 - val_loss: 6.6297
Epoch 59/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0483 - val_loss: 6.8740
Epoch 60/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0342 - val_loss: 6.8663
Epoch 61/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0328 - val_loss: 6.6552
Epoch 62/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0408 - val_loss: 6.9314
Epoch 63/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0368 - val_loss: 6.7797
Epoch 64/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0287 - val_loss: 6.7051
Epoch 65/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0317 - val_loss: 6.9204
Epoch 66/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0335 - val_loss: 6.7428
Epoch 67/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0275 - val_loss: 6.7422
Epoch 68/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0265 - val_loss: 6.8966
Epoch 69/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0291 - val_loss: 6.7294
Epoch 70/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0256 - val_loss: 6.7675
Epoch 71/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0232 - val_loss: 6.8734
Epoch 72/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0251 - val_loss: 6.7252
Epoch 73/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0233 - val_loss: 6.7832
Epoch 74/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0205 - val_loss: 6.8518
Epoch 75/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0216 - val_loss: 6.7247
Epoch 76/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0207 - val_loss: 6.7937
Epoch 77/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0181 - val_loss: 6.8330
Epoch 78/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0183 - val_loss: 6.7272
Epoch 79/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0180 - val_loss: 6.8014
Epoch 80/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0158 - val_loss: 6.8153
Epoch 81/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0153 - val_loss: 6.7307
Epoch 82/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0153 - val_loss: 6.8063
Epoch 83/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0135 - val_loss: 6.7984
Epoch 84/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0126 - val_loss: 6.7361
Epoch 85/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0125 - val_loss: 6.8091
Epoch 86/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0113 - val_loss: 6.7826
Epoch 87/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0101 - val_loss: 6.7435
Epoch 88/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0098 - val_loss: 6.8087
Epoch 89/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0089 - val_loss: 6.7679
Epoch 90/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0077 - val_loss: 6.7528
Epoch 91/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0071 - val_loss: 6.8045
Epoch 92/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0065 - val_loss: 6.7560
Epoch 93/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0054 - val_loss: 6.7636
Epoch 94/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0045 - val_loss: 6.7958
Epoch 95/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0040 - val_loss: 6.7482
Epoch 96/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0032 - val_loss: 6.7741
Epoch 97/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0022 - val_loss: 6.7828
Epoch 98/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0015 - val_loss: 6.7463
Epoch 99/500
8/8 [==============================] - 0s 2ms/sample - loss: 2.0008 - val_loss: 6.7814
Epoch 100/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9999 - val_loss: 6.7678
Epoch 101/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9990 - val_loss: 6.7508
Epoch 102/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9984 - val_loss: 6.7825
Epoch 103/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9976 - val_loss: 6.7543
Epoch 104/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9967 - val_loss: 6.7599
Epoch 105/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9959 - val_loss: 6.7755
Epoch 106/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9952 - val_loss: 6.7472
Epoch 107/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9945 - val_loss: 6.7687
Epoch 108/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9936 - val_loss: 6.7625
Epoch 109/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9928 - val_loss: 6.7490
Epoch 110/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9921 - val_loss: 6.7708
Epoch 111/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9914 - val_loss: 6.7499
Epoch 112/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9906 - val_loss: 6.7572
Epoch 113/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9898 - val_loss: 6.7631
Epoch 114/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9890 - val_loss: 6.7454
Epoch 115/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9883 - val_loss: 6.7634
Epoch 116/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9875 - val_loss: 6.7503
Epoch 117/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9867 - val_loss: 6.7508
Epoch 118/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9860 - val_loss: 6.7594
Epoch 119/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9852 - val_loss: 6.7434
Epoch 120/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9845 - val_loss: 6.7575
Epoch 121/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9837 - val_loss: 6.7476
Epoch 122/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9829 - val_loss: 6.7476
Epoch 123/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9822 - val_loss: 6.7540
Epoch 124/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9814 - val_loss: 6.7413
Epoch 125/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9807 - val_loss: 6.7532
Epoch 126/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9799 - val_loss: 6.7427
Epoch 127/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9792 - val_loss: 6.7464
Epoch 128/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9784 - val_loss: 6.7471
Epoch 129/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9777 - val_loss: 6.7399
Epoch 130/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9769 - val_loss: 6.7488
Epoch 131/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9762 - val_loss: 6.7376
Epoch 132/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9754 - val_loss: 6.7460
Epoch 133/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9747 - val_loss: 6.7389
Epoch 134/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9739 - val_loss: 6.7410
Epoch 135/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9732 - val_loss: 6.7411
Epoch 136/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9724 - val_loss: 6.7363
Epoch 137/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9717 - val_loss: 6.7420
Epoch 138/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9710 - val_loss: 6.7332
Epoch 139/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9702 - val_loss: 6.7413
Epoch 140/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9695 - val_loss: 6.7317
Epoch 141/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9688 - val_loss: 6.7395
Epoch 142/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9680 - val_loss: 6.7309
Epoch 143/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9673 - val_loss: 6.7373
Epoch 144/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9666 - val_loss: 6.7301
Epoch 145/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9658 - val_loss: 6.7353
Epoch 146/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9651 - val_loss: 6.7289
Epoch 147/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9644 - val_loss: 6.7338
Epoch 148/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9636 - val_loss: 6.7272
Epoch 149/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9629 - val_loss: 6.7332
Epoch 150/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9622 - val_loss: 6.7244
Epoch 151/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9615 - val_loss: 6.7339
Epoch 152/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9607 - val_loss: 6.7196
Epoch 153/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9600 - val_loss: 6.7375
Epoch 154/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9593 - val_loss: 6.7107
Epoch 155/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9587 - val_loss: 6.7477
Epoch 156/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9581 - val_loss: 6.6919
Epoch 157/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9577 - val_loss: 6.7748
Epoch 158/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9576 - val_loss: 6.6497
Epoch 159/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9585 - val_loss: 6.8433
Epoch 160/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9613 - val_loss: 6.5656
Epoch 161/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9680 - val_loss: 6.9683
Epoch 162/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9775 - val_loss: 6.4996
Epoch 163/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9827 - val_loss: 6.9150
Epoch 164/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9679 - val_loss: 6.6812
Epoch 165/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9522 - val_loss: 6.6121
Epoch 166/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9568 - val_loss: 6.9155
Epoch 167/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9660 - val_loss: 6.5888
Epoch 168/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9586 - val_loss: 6.7208
Epoch 169/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9489 - val_loss: 6.8406
Epoch 170/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9549 - val_loss: 6.5778
Epoch 171/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9579 - val_loss: 6.7773
Epoch 172/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9487 - val_loss: 6.7758
Epoch 173/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9480 - val_loss: 6.5948
Epoch 174/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9530 - val_loss: 6.7990
Epoch 175/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9482 - val_loss: 6.7315
Epoch 176/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9443 - val_loss: 6.6188
Epoch 177/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9480 - val_loss: 6.8008
Epoch 178/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9464 - val_loss: 6.7004
Epoch 179/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9422 - val_loss: 6.6420
Epoch 180/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9437 - val_loss: 6.7931
Epoch 181/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9440 - val_loss: 6.6796
Epoch 182/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9405 - val_loss: 6.6642
Epoch 183/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9403 - val_loss: 6.7811
Epoch 184/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9413 - val_loss: 6.6657
Epoch 185/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9389 - val_loss: 6.6837
Epoch 186/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9376 - val_loss: 6.7652
Epoch 187/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9385 - val_loss: 6.6571
Epoch 188/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9372 - val_loss: 6.7008
Epoch 189/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9354 - val_loss: 6.7475
Epoch 190/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9357 - val_loss: 6.6540
Epoch 191/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9352 - val_loss: 6.7148
Epoch 192/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9335 - val_loss: 6.7281
Epoch 193/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9331 - val_loss: 6.6558
Epoch 194/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9330 - val_loss: 6.7246
Epoch 195/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9317 - val_loss: 6.7084
Epoch 196/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9308 - val_loss: 6.6627
Epoch 197/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9307 - val_loss: 6.7289
Epoch 198/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9299 - val_loss: 6.6903
Epoch 199/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9288 - val_loss: 6.6738
Epoch 200/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9284 - val_loss: 6.7264
Epoch 201/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9279 - val_loss: 6.6761
Epoch 202/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9270 - val_loss: 6.6871
Epoch 203/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9262 - val_loss: 6.7170
Epoch 204/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9258 - val_loss: 6.6684
Epoch 205/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9251 - val_loss: 6.6996
Epoch 206/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9242 - val_loss: 6.7023
Epoch 207/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9236 - val_loss: 6.6685
Epoch 208/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9231 - val_loss: 6.7071
Epoch 209/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9224 - val_loss: 6.6862
Epoch 210/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9216 - val_loss: 6.6760
Epoch 211/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9210 - val_loss: 6.7061
Epoch 212/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9205 - val_loss: 6.6738
Epoch 213/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9197 - val_loss: 6.6871
Epoch 214/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9190 - val_loss: 6.6964
Epoch 215/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9184 - val_loss: 6.6698
Epoch 216/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9178 - val_loss: 6.6955
Epoch 217/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9171 - val_loss: 6.6824
Epoch 218/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9164 - val_loss: 6.6750
Epoch 219/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9158 - val_loss: 6.6950
Epoch 220/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9152 - val_loss: 6.6716
Epoch 221/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9146 - val_loss: 6.6844
Epoch 222/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9139 - val_loss: 6.6852
Epoch 223/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9133 - val_loss: 6.6704
Epoch 224/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9127 - val_loss: 6.6890
Epoch 225/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9120 - val_loss: 6.6731
Epoch 226/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9114 - val_loss: 6.6774
Epoch 227/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9107 - val_loss: 6.6833
Epoch 228/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9101 - val_loss: 6.6685
Epoch 229/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9095 - val_loss: 6.6833
Epoch 230/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9089 - val_loss: 6.6721
Epoch 231/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9082 - val_loss: 6.6737
Epoch 232/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9076 - val_loss: 6.6793
Epoch 233/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9070 - val_loss: 6.6670
Epoch 234/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9064 - val_loss: 6.6790
Epoch 235/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9057 - val_loss: 6.6691
Epoch 236/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9051 - val_loss: 6.6718
Epoch 237/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9045 - val_loss: 6.6743
Epoch 238/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9039 - val_loss: 6.6657
Epoch 239/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9033 - val_loss: 6.6752
Epoch 240/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9027 - val_loss: 6.6652
Epoch 241/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9020 - val_loss: 6.6709
Epoch 242/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9014 - val_loss: 6.6683
Epoch 243/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9008 - val_loss: 6.6653
Epoch 244/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9002 - val_loss: 6.6706
Epoch 245/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8996 - val_loss: 6.6622
Epoch 246/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8990 - val_loss: 6.6696
Epoch 247/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8984 - val_loss: 6.6622
Epoch 248/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8978 - val_loss: 6.6661
Epoch 249/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8971 - val_loss: 6.6637
Epoch 250/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8965 - val_loss: 6.6621
Epoch 251/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8959 - val_loss: 6.6649
Epoch 252/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8953 - val_loss: 6.6592
Epoch 253/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8947 - val_loss: 6.6647
Epoch 254/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8941 - val_loss: 6.6576
Epoch 255/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8935 - val_loss: 6.6634
Epoch 256/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8929 - val_loss: 6.6570
Epoch 257/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8923 - val_loss: 6.6614
Epoch 258/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8917 - val_loss: 6.6567
Epoch 259/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8911 - val_loss: 6.6592
Epoch 260/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8905 - val_loss: 6.6565
Epoch 261/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8900 - val_loss: 6.6572
Epoch 262/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8894 - val_loss: 6.6560
Epoch 263/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8888 - val_loss: 6.6554
Epoch 264/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8882 - val_loss: 6.6554
Epoch 265/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8876 - val_loss: 6.6537
Epoch 266/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8870 - val_loss: 6.6548
Epoch 267/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8864 - val_loss: 6.6519
Epoch 268/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8858 - val_loss: 6.6544
Epoch 269/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8853 - val_loss: 6.6497
Epoch 270/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8847 - val_loss: 6.6547
Epoch 271/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8841 - val_loss: 6.6464
Epoch 272/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8835 - val_loss: 6.6566
Epoch 273/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8829 - val_loss: 6.6408
Epoch 274/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8824 - val_loss: 6.6623
Epoch 275/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8819 - val_loss: 6.6295
Epoch 276/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8814 - val_loss: 6.6774
Epoch 277/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8810 - val_loss: 6.6043
Epoch 278/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8810 - val_loss: 6.7166
Epoch 279/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8817 - val_loss: 6.5472
Epoch 280/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8842 - val_loss: 6.8160
Epoch 281/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8905 - val_loss: 6.4392
Epoch 282/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9040 - val_loss: 6.9830
Epoch 283/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9200 - val_loss: 6.3815
Epoch 284/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9229 - val_loss: 6.8502
Epoch 285/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8939 - val_loss: 6.6496
Epoch 286/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8757 - val_loss: 6.4770
Epoch 287/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8911 - val_loss: 6.8965
Epoch 288/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.9000 - val_loss: 6.5211
Epoch 289/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8822 - val_loss: 6.5914
Epoch 290/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8748 - val_loss: 6.8318
Epoch 291/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8882 - val_loss: 6.4895
Epoch 292/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8855 - val_loss: 6.6629
Epoch 293/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8722 - val_loss: 6.7652
Epoch 294/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8784 - val_loss: 6.4943
Epoch 295/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8828 - val_loss: 6.6990
Epoch 296/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8721 - val_loss: 6.7160
Epoch 297/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8727 - val_loss: 6.5106
Epoch 298/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8783 - val_loss: 6.7124
Epoch 299/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8715 - val_loss: 6.6799
Epoch 300/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8692 - val_loss: 6.5286
Epoch 301/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8739 - val_loss: 6.7145
Epoch 302/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8702 - val_loss: 6.6540
Epoch 303/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8669 - val_loss: 6.5472
Epoch 304/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8702 - val_loss: 6.7115
Epoch 305/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8685 - val_loss: 6.6343
Epoch 306/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8651 - val_loss: 6.5641
Epoch 307/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8670 - val_loss: 6.7050
Epoch 308/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8666 - val_loss: 6.6186
Epoch 309/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8636 - val_loss: 6.5802
Epoch 310/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8643 - val_loss: 6.6969
Epoch 311/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8646 - val_loss: 6.6065
Epoch 312/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8622 - val_loss: 6.5952
Epoch 313/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8620 - val_loss: 6.6870
Epoch 314/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8626 - val_loss: 6.5971
Epoch 315/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8609 - val_loss: 6.6092
Epoch 316/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8601 - val_loss: 6.6755
Epoch 317/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8605 - val_loss: 6.5909
Epoch 318/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8595 - val_loss: 6.6219
Epoch 319/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8584 - val_loss: 6.6623
Epoch 320/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8585 - val_loss: 6.5880
Epoch 321/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8580 - val_loss: 6.6328
Epoch 322/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8568 - val_loss: 6.6478
Epoch 323/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8566 - val_loss: 6.5887
Epoch 324/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8564 - val_loss: 6.6408
Epoch 325/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8554 - val_loss: 6.6330
Epoch 326/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8548 - val_loss: 6.5932
Epoch 327/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8547 - val_loss: 6.6449
Epoch 328/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8540 - val_loss: 6.6189
Epoch 329/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8532 - val_loss: 6.6008
Epoch 330/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8529 - val_loss: 6.6442
Epoch 331/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8525 - val_loss: 6.6075
Epoch 332/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8518 - val_loss: 6.6105
Epoch 333/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8513 - val_loss: 6.6384
Epoch 334/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8509 - val_loss: 6.6003
Epoch 335/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8504 - val_loss: 6.6200
Epoch 336/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8497 - val_loss: 6.6286
Epoch 337/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8493 - val_loss: 6.5986
Epoch 338/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8489 - val_loss: 6.6267
Epoch 339/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8483 - val_loss: 6.6169
Epoch 340/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8477 - val_loss: 6.6022
Epoch 341/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8473 - val_loss: 6.6282
Epoch 342/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8468 - val_loss: 6.6066
Epoch 343/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8463 - val_loss: 6.6093
Epoch 344/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8458 - val_loss: 6.6238
Epoch 345/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8453 - val_loss: 6.6009
Epoch 346/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8448 - val_loss: 6.6163
Epoch 347/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8443 - val_loss: 6.6149
Epoch 348/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8438 - val_loss: 6.6013
Epoch 349/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8433 - val_loss: 6.6194
Epoch 350/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8429 - val_loss: 6.6057
Epoch 351/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8423 - val_loss: 6.6063
Epoch 352/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8418 - val_loss: 6.6162
Epoch 353/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8414 - val_loss: 6.6004
Epoch 354/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8409 - val_loss: 6.6118
Epoch 355/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8404 - val_loss: 6.6086
Epoch 356/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8399 - val_loss: 6.6012
Epoch 357/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8394 - val_loss: 6.6130
Epoch 358/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8390 - val_loss: 6.6013
Epoch 359/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8385 - val_loss: 6.6059
Epoch 360/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8380 - val_loss: 6.6083
Epoch 361/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8375 - val_loss: 6.5990
Epoch 362/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8371 - val_loss: 6.6088
Epoch 363/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8366 - val_loss: 6.6012
Epoch 364/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8361 - val_loss: 6.6020
Epoch 365/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8356 - val_loss: 6.6060
Epoch 366/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8352 - val_loss: 6.5977
Epoch 367/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8347 - val_loss: 6.6051
Epoch 368/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8342 - val_loss: 6.5998
Epoch 369/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8338 - val_loss: 6.5995
Epoch 370/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8333 - val_loss: 6.6031
Epoch 371/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8328 - val_loss: 6.5962
Epoch 372/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8324 - val_loss: 6.6021
Epoch 373/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8319 - val_loss: 6.5975
Epoch 374/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8314 - val_loss: 6.5979
Epoch 375/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8310 - val_loss: 6.5999
Epoch 376/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8305 - val_loss: 6.5947
Epoch 377/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8300 - val_loss: 6.5996
Epoch 378/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8296 - val_loss: 6.5948
Epoch 379/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8291 - val_loss: 6.5966
Epoch 380/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8287 - val_loss: 6.5964
Epoch 381/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8282 - val_loss: 6.5935
Epoch 382/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8278 - val_loss: 6.5968
Epoch 383/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8273 - val_loss: 6.5922
Epoch 384/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8269 - val_loss: 6.5953
Epoch 385/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8264 - val_loss: 6.5927
Epoch 386/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8260 - val_loss: 6.5927
Epoch 387/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8255 - val_loss: 6.5934
Epoch 388/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8251 - val_loss: 6.5906
Epoch 389/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8246 - val_loss: 6.5932
Epoch 390/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8242 - val_loss: 6.5896
Epoch 391/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8237 - val_loss: 6.5919
Epoch 392/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8233 - val_loss: 6.5895
Epoch 393/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8228 - val_loss: 6.5901
Epoch 394/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8224 - val_loss: 6.5896
Epoch 395/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8219 - val_loss: 6.5883
Epoch 396/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8215 - val_loss: 6.5895
Epoch 397/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8211 - val_loss: 6.5869
Epoch 398/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8206 - val_loss: 6.5889
Epoch 399/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8202 - val_loss: 6.5859
Epoch 400/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8198 - val_loss: 6.5879
Epoch 401/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8193 - val_loss: 6.5853
Epoch 402/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8189 - val_loss: 6.5867
Epoch 403/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8184 - val_loss: 6.5848
Epoch 404/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8180 - val_loss: 6.5855
Epoch 405/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8176 - val_loss: 6.5842
Epoch 406/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8171 - val_loss: 6.5844
Epoch 407/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8167 - val_loss: 6.5836
Epoch 408/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8163 - val_loss: 6.5833
Epoch 409/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8159 - val_loss: 6.5830
Epoch 410/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8154 - val_loss: 6.5822
Epoch 411/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8150 - val_loss: 6.5823
Epoch 412/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8146 - val_loss: 6.5812
Epoch 413/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8142 - val_loss: 6.5817
Epoch 414/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8137 - val_loss: 6.5801
Epoch 415/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8133 - val_loss: 6.5813
Epoch 416/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8129 - val_loss: 6.5787
Epoch 417/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8125 - val_loss: 6.5812
Epoch 418/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8121 - val_loss: 6.5769
Epoch 419/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8116 - val_loss: 6.5818
Epoch 420/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8112 - val_loss: 6.5740
Epoch 421/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8108 - val_loss: 6.5840
Epoch 422/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8104 - val_loss: 6.5687
Epoch 423/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8100 - val_loss: 6.5900
Epoch 424/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8096 - val_loss: 6.5577
Epoch 425/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8093 - val_loss: 6.6052
Epoch 426/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8091 - val_loss: 6.5333
Epoch 427/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8093 - val_loss: 6.6439
Epoch 428/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8101 - val_loss: 6.4778
Epoch 429/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8127 - val_loss: 6.7425
Epoch 430/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8192 - val_loss: 6.3672
Epoch 431/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8339 - val_loss: 6.9460
Epoch 432/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8576 - val_loss: 6.2607
Epoch 433/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8797 - val_loss: 6.9514
Epoch 434/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8586 - val_loss: 6.4403
Epoch 435/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8149 - val_loss: 6.4753
Epoch 436/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8099 - val_loss: 6.8694
Epoch 437/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8387 - val_loss: 6.3472
Epoch 438/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8372 - val_loss: 6.6570
Epoch 439/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8071 - val_loss: 6.7119
Epoch 440/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8121 - val_loss: 6.3639
Epoch 441/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8300 - val_loss: 6.7301
Epoch 442/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8138 - val_loss: 6.6069
Epoch 443/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8030 - val_loss: 6.4075
Epoch 444/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8175 - val_loss: 6.7430
Epoch 445/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8145 - val_loss: 6.5493
Epoch 446/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8014 - val_loss: 6.4527
Epoch 447/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8084 - val_loss: 6.7295
Epoch 448/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8117 - val_loss: 6.5168
Epoch 449/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8015 - val_loss: 6.4899
Epoch 450/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8029 - val_loss: 6.7065
Epoch 451/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8080 - val_loss: 6.4999
Epoch 452/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8014 - val_loss: 6.5210
Epoch 453/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7997 - val_loss: 6.6824
Epoch 454/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8044 - val_loss: 6.4923
Epoch 455/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8008 - val_loss: 6.5453
Epoch 456/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7978 - val_loss: 6.6579
Epoch 457/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.8012 - val_loss: 6.4898
Epoch 458/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7998 - val_loss: 6.5645
Epoch 459/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7965 - val_loss: 6.6347
Epoch 460/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7985 - val_loss: 6.4917
Epoch 461/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7985 - val_loss: 6.5794
Epoch 462/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7955 - val_loss: 6.6129
Epoch 463/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7962 - val_loss: 6.4964
Epoch 464/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7970 - val_loss: 6.5902
Epoch 465/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7947 - val_loss: 6.5925
Epoch 466/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7944 - val_loss: 6.5039
Epoch 467/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7953 - val_loss: 6.5972
Epoch 468/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7939 - val_loss: 6.5740
Epoch 469/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7930 - val_loss: 6.5136
Epoch 470/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7936 - val_loss: 6.6001
Epoch 471/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7929 - val_loss: 6.5577
Epoch 472/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7918 - val_loss: 6.5250
Epoch 473/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7920 - val_loss: 6.5987
Epoch 474/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7919 - val_loss: 6.5446
Epoch 475/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7908 - val_loss: 6.5377
Epoch 476/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7906 - val_loss: 6.5933
Epoch 477/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7906 - val_loss: 6.5352
Epoch 478/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7899 - val_loss: 6.5501
Epoch 479/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7893 - val_loss: 6.5842
Epoch 480/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7893 - val_loss: 6.5304
Epoch 481/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7890 - val_loss: 6.5611
Epoch 482/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7883 - val_loss: 6.5725
Epoch 483/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7881 - val_loss: 6.5302
Epoch 484/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7879 - val_loss: 6.5689
Epoch 485/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7873 - val_loss: 6.5599
Epoch 486/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7869 - val_loss: 6.5345
Epoch 487/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7867 - val_loss: 6.5721
Epoch 488/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7864 - val_loss: 6.5485
Epoch 489/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7859 - val_loss: 6.5420
Epoch 490/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7856 - val_loss: 6.5703
Epoch 491/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7853 - val_loss: 6.5405
Epoch 492/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7849 - val_loss: 6.5506
Epoch 493/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7845 - val_loss: 6.5638
Epoch 494/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7842 - val_loss: 6.5371
Epoch 495/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7839 - val_loss: 6.5577
Epoch 496/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7835 - val_loss: 6.5547
Epoch 497/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7831 - val_loss: 6.5388
Epoch 498/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7828 - val_loss: 6.5607
Epoch 499/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7825 - val_loss: 6.5459
Epoch 500/500
8/8 [==============================] - 0s 2ms/sample - loss: 1.7821 - val_loss: 6.5441
###Markdown
Real Test
###Code
pred = model.predict([x_te])
pred_decode = 10**pred
(pred_decode[0,0]<0).sum()
df_rt = df_merged[(df_merged['REG_YEAR'] + df_merged['REG_MONTH']) == '202004']
df_rt['AMT'] = pred_decode[0, 0]
gb_rt = df_rt.groupby(['CARD_SIDO_NM', 'STD_CLSS_NM']).agg({'AMT':'sum'})
# Create the submission file
subm = pd.read_csv('input/submission.csv', index_col=0)
subm['AMT'] = np.concatenate([gb_rt['AMT'].values, gb_rt['AMT'].values])
subm.index.name = 'id'
subm.to_csv('submission_bigcat.csv', encoding='utf-8-sig')
subm.head()
subm
###Output
_____no_output_____ |
02_bootcamp/03-functions.ipynb | ###Markdown
Functions

```python
def function_name(arguments):
    # body of the function
    do_some_logic_with_arguments()
    # optionally return something
    return result
```
###Code
def add(x, y):
s = x + y
return s
###Output
_____no_output_____
###Markdown
What if we want to add more numbers?
###Code
def add(x, y, z):
s = x + y + z
return s
###Output
_____no_output_____
###Markdown
What if we want to add an unknown number of numbers?
###Code
def add(*numbers):
s = 0
for num in numbers:
s += num
return s
add(1, 2, 3)
add(1, 2)
add(1)
add(1, 2, 3, 4, 5, 6, 7)
###Output
_____no_output_____
###Markdown
Tip: Positional Arguments are Tuples!
###Code
def add(*numbers):
print("Type of input:", type(numbers))
s = 0
for num in numbers:
s += num
return s
add(1, 2, 3)
###Output
_____no_output_____
###Markdown
Functions with Named Arguments
###Code
data = [{'name': "BYJU's", 'industry': 'e-tech', 'funding': 200000000},
{'name': 'Flipkart', 'industry': 'e-commerce', 'funding': 2500000000},
{'name': 'Shuttl', 'industry': 'transport', 'funding': 8048394}]
def narrate(name, funding, template, industry=None):
return template.format(name=name, funding=funding)
template = "{name} received a funding of {funding} USD."
narrate("BYJU", 20_000_000, template)
narrate(20_000_000, "Flipkart", template)
narrate(funding="1M", name="Shuttl", template=template)
for startup in data:
s = narrate(template=template, **startup)
print(s)
###Output
_____no_output_____
###Markdown
Tip: Named arguments are dictionaries!
###Code
template = "{name} is a startup in the {industry} industry with a funding of USD {funding}."
def narrate(template, **named_args):
print("Type of named_args:", type(named_args))
return template.format(**named_args)
narrate(template, **data[0])
for startup in data:
s = narrate(template, **startup)
print(s)
###Output
_____no_output_____
###Markdown
Exercise: Write a function which, given the dataset, finds the name of the startup with the highest funding.
###Code
# enter code here
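# One possible solution (a sketch; assumes the `data` list of dicts defined above):
def top_funded(startups):
    # Return the name of the startup with the highest funding.
    return max(startups, key=lambda s: s['funding'])['name']

top_funded(data)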
###Output
_____no_output_____ |
DC Time series Analysis.ipynb | ###Markdown
Autocorrelation function

Check autocorrelations at different lags.
###Code
# Import the acf module and the plot_acf module from statsmodels
from statsmodels.tsa.stattools import acf
from statsmodels.graphics.tsaplots import plot_acf
# Compute the acf array of HRB
acf_array = acf(HRB['Earnings'])
print(acf_array)
# Plot the acf function
plot_acf(HRB, alpha=1)
plt.show()
# Import the plot_acf module from statsmodels and sqrt from math
from statsmodels.graphics.tsaplots import plot_acf
from math import sqrt
# Compute and print the autocorrelation of MSFT weekly returns
autocorrelation = returns['Adj Close'].autocorr()
print("The autocorrelation of weekly MSFT returns is %4.2f" %(autocorrelation))
# Find the number of observations by taking the length of the returns DataFrame
nobs = len(returns)
# Compute the approximate confidence interval
conf = 1.96/sqrt(nobs)
print("The approximate confidence interval is +/- %4.2f" %(conf))
# Plot the autocorrelation function with 95% confidence intervals and 20 lags using plot_acf
plot_acf(returns, alpha=0.05, lags=20)
plt.show()
###Output
_____no_output_____
###Markdown
White noise

* Constant mean and variance
* Zero autocorrelation at all lags
###Code
# Import the plot_acf module from statsmodels
from statsmodels.graphics.tsaplots import plot_acf
# Simulate white noise returns
returns = np.random.normal(loc=0.02, scale=0.05, size=1000)
# Print out the mean and standard deviation of returns
mean = np.mean(returns)
std = np.std(returns)
print("The mean is %5.3f and the standard deviation is %5.3f" %(mean,std))
# Plot returns series
plt.plot(returns)
plt.show()
# Plot autocorrelation function of white noise returns
plot_acf(returns, lags=20)
plt.show()
###Output
_____no_output_____
###Markdown
Random walk
###Code
# Generate 500 random steps with mean=0 and standard deviation=1
steps = np.random.normal(loc=0, scale=1, size=500)
# Set first element to 0 so that the first price will be the starting stock price
steps[0]=0
# Simulate stock prices, P with a starting price of 100
P = 100 + np.cumsum(steps)
# Plot the simulated stock prices
plt.plot(P)
plt.title("Simulated Random Walk")
plt.show()
# Generate 500 random steps
steps = np.random.normal(loc=0.001, scale=0.01, size=500) + 1
# Set first element to 1
steps[0]=1
# Simulate the stock price, P, by taking the cumulative product
P = 100 * np.cumprod(steps)
# Plot the simulated stock prices
plt.plot(P)
plt.title("Simulated Random Walk with Drift")
plt.show()
###Output
_____no_output_____
###Markdown
Run the Augmented Dickey-Fuller test from the statsmodels library to show that the series does indeed follow a random walk. With the ADF test, the "null hypothesis" (the hypothesis that we either reject or fail to reject) is that the series follows a random walk. Therefore, a low p-value (say, less than 5%) means we can reject the null hypothesis that the series is a random walk.
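For reference, the tuple returned by `adfuller()` can be unpacked by position (a sketch added here, assuming the same `AMZN` DataFrame used in the exercises below):

```python
from statsmodels.tsa.stattools import adfuller

adf_stat, pvalue, usedlag, nobs, crit_values, icbest = adfuller(AMZN['Adj Close'])
print('ADF statistic: %.3f, p-value: %.3f' % (adf_stat, pvalue))
print('Critical values:', crit_values)
```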
###Code
# Import the adfuller module from statsmodels
from statsmodels.tsa.stattools import adfuller
# Run the ADF test on the price series and print out the results
results = adfuller(AMZN['Adj Close'])
print(results)
# Just print out the p-value
print('The p-value of the test on prices is: ' + str(results[1]))
# Import the adfuller module from statsmodels
from statsmodels.tsa.stattools import adfuller
# Create a DataFrame of AMZN returns
AMZN_ret = AMZN.pct_change()
# Eliminate the NaN in the first row of returns
AMZN_ret = AMZN_ret.dropna()
# Run the ADF test on the return series and print out the p-value
results = adfuller(AMZN_ret['Adj Close'])
print('The p-value of the test on returns is: ' + str(results[1]))
###Output
_____no_output_____
###Markdown
Stationarity

* Strong stationarity: the entire distribution is time-invariant
* Weak stationarity: mean, variance and autocorrelation are time-invariant

Non-stationary series are difficult to model, but some transformations turn a non-stationary series into a stationary one. For example, a random walk is non-stationary, but white noise (its first difference) is stationary.
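A minimal sketch of common transformations (assuming a hypothetical pandas Series `prices`, with `np` imported as elsewhere in this notebook):

```python
returns = prices.pct_change().dropna()     # percent changes remove a random-walk trend
log_diff = np.log(prices).diff().dropna()  # log-differences also help stabilise the variance
seasonal = prices.diff(4).dropna()         # a lag-4 difference removes quarterly seasonality
```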
###Code
# Import the plot_acf module from statsmodels
from statsmodels.graphics.tsaplots import plot_acf
# Seasonally adjust quarterly earnings
HRBsa = HRB.diff(4)
# Print the first 10 rows of the seasonally adjusted series
print(HRBsa.head(10))
# Drop the NaN data in the first four rows
HRBsa = HRBsa.dropna()
# Plot the autocorrelation function of the seasonally adjusted series
plot_acf(HRBsa)
plt.show()
###Output
_____no_output_____
###Markdown
Describe AR Model

AR(1) model: $R_t = \mu + \phi R_{t-1} + \epsilon_t$

* If $\phi = 1$: random walk
* If $\phi = 0$: white noise
* If $-1 < \phi < 1$: stationary and stable
* If $\phi < 0$: mean reversion
* If $\phi > 0$: trend following (momentum)
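As a quick sanity check (a sketch added here, not part of the original exercises), the lag-1 autocorrelation of a simulated stationary AR(1) series should be close to $\phi$:

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

phi = 0.6
ar, ma = np.array([1, -phi]), np.array([1])
simulated = ArmaProcess(ar, ma).generate_sample(nsample=5000)
print(np.corrcoef(simulated[:-1], simulated[1:])[0, 1])  # should be close to phi = 0.6
```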
###Code
# import the module for simulating data
from statsmodels.tsa.arima_process import ArmaProcess
# Plot 1: AR parameter = +0.9
plt.subplot(2,1,1)
ar1 = np.array([1, -0.9])
ma1 = np.array([1]) # lag zero coeficient
AR_object1 = ArmaProcess(ar1, ma1)
simulated_data_1 = AR_object1.generate_sample(nsample=1000)
plt.plot(simulated_data_1)
# Plot 2: AR parameter = -0.9
plt.subplot(2,1,2)
ar2 = np.array([1, 0.9])
ma2 = np.array([1])
AR_object2 = ArmaProcess(ar2, ma2)
simulated_data_2 = AR_object2.generate_sample(nsample=1000)
plt.plot(simulated_data_2)
plt.show()
# Import the ARMA module from statsmodels
from statsmodels.tsa.arima_model import ARMA
# Fit an AR(1) model to the first simulated data
mod = ARMA(simulated_data_1, order=(1,0))
res = mod.fit()
# Print out summary information on the fit
print(res.summary())
# Print out the estimate for the constant and for phi
print("When the true phi=0.9, the estimate of phi (and the constant) are:")
print(res.params)
# Import the ARMA module from statsmodels
from statsmodels.tsa.arima_model import ARMA
# Forecast the first AR(1) model
mod = ARMA(simulated_data_1, order=(1,0))
res = mod.fit()
res.plot_predict(start=990, end=1010)
plt.show()
###Output
_____no_output_____
###Markdown
Choosing a model

Partial autocorrelation function (PACF): indicates how much is gained by adding one more lag to the autoregression.

Information criteria that penalize higher model order:
* Akaike information criterion (AIC)
* Bayesian information criterion (BIC)
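For reference (standard definitions, not taken from the course material), with $k$ estimated parameters, $n$ observations and maximised likelihood $\hat{L}$:

$$ \mathrm{AIC} = 2k - 2\ln\hat{L}, \qquad \mathrm{BIC} = k\ln(n) - 2\ln\hat{L} $$

Lower values are better; BIC penalises extra parameters more heavily as $n$ grows.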
###Code
# Import the modules for simulating data and for plotting the PACF
from statsmodels.tsa.arima_process import ArmaProcess
from statsmodels.graphics.tsaplots import plot_pacf
# Simulate AR(1) with phi=+0.6
ma = np.array([1])
ar = np.array([1, -0.6])
AR_object = ArmaProcess(ar, ma)
simulated_data_1 = AR_object.generate_sample(nsample=5000)
# Plot PACF for AR(1)
plot_pacf(simulated_data_1, lags=20)
plt.show()
# Simulate AR(2) with phi1=+0.6, phi2=+0.3
ma = np.array([1])
ar = np.array([1, -0.6, -0.3])
AR_object = ArmaProcess(ar, ma)
simulated_data_2 = AR_object.generate_sample(nsample=5000)
# Plot PACF for AR(2)
plot_pacf(simulated_data_2, lags=20)
plt.show()
# Import the module for estimating an ARMA model
from statsmodels.tsa.arima_model import ARMA
# Fit the data to an AR(p) for p = 0,...,6 , and save the BIC
BIC = np.zeros(7)
for p in range(7):
mod = ARMA(simulated_data_2, order=(p,0))
res = mod.fit()
# Save BIC for AR(p)
BIC[p] = res.bic
# Plot the BIC as a function of p
plt.plot(range(1,7), BIC[1:7], marker='o')
plt.xlabel('Order of AR Model')
plt.ylabel('Bayesian Information Criterion')
plt.show()
###Output
_____no_output_____
###Markdown
Describe MA Model

MA(1) model: $R_t = \mu + \epsilon_t + \theta \epsilon_{t-1}$

* If $\theta = 0$: white noise
* If $\theta > 0$: momentum
* If $\theta < 0$: mean reversion

MA models are stationary for all values of $\theta$. One-period autocorrelation $= \theta / (1 + \theta^2)$.
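A quick numerical check of that formula (a sketch, not part of the original exercises):

```python
import numpy as np
from statsmodels.tsa.arima_process import ArmaProcess

theta = 0.9
simulated = ArmaProcess(np.array([1]), np.array([1, theta])).generate_sample(nsample=50000)
print('theoretical:', theta / (1 + theta**2))
print('simulated  :', np.corrcoef(simulated[:-1], simulated[1:])[0, 1])
```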
###Code
# import the module for simulating data
from statsmodels.tsa.arima_process import ArmaProcess
# Plot 1: MA parameter = -0.9
plt.subplot(2,1,1)
ar1 = np.array([1])
ma1 = np.array([1, -0.9])
MA_object1 = ArmaProcess(ar1, ma1)
simulated_data_1 = MA_object1.generate_sample(nsample=1000)
plt.plot(simulated_data_1)
# Plot 2: MA parameter = +0.9
plt.subplot(2,1,2)
ar2 = np.array([1])
ma2 = np.array([1, 0.9])
MA_object2 = ArmaProcess(ar2, ma2)
simulated_data_2 = MA_object2.generate_sample(nsample=1000)
plt.plot(simulated_data_2)
plt.show()
# Import the ARMA module from statsmodels
from statsmodels.tsa.arima_model import ARMA
# Fit an MA(1) model to the first simulated data
mod = ARMA(simulated_data_1, order=(0,1))
res = mod.fit()
# Print out summary information on the fit
print(res.summary())
# Print out the estimate for the constant and for theta
print("When the true theta=-0.9, the estimate of theta (and the constant) are:")
print(res.params)
###Output
_____no_output_____
###Markdown
Describe ARMA Model

ARMA(1,1) model: $R_t = \mu + \phi R_{t-1} + \epsilon_t + \theta \epsilon_{t-1}$

It is possible to convert an AR model into an MA($\infty$) model.
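For example, `statsmodels` can compute the MA($\infty$) representation of an AR model; for an AR(1) with $\phi = 0.6$ the MA coefficients are $\phi^k$ (a sketch added for illustration):

```python
import numpy as np
from statsmodels.tsa.arima_process import arma2ma

print(arma2ma(np.array([1, -0.6]), np.array([1]), 5))  # ~[1, 0.6, 0.36, 0.216, 0.1296]
```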
###Code
# import datetime module
import datetime
# Change the first date to zero
intraday.iloc[0,0] = 0
# Change the column headers to 'DATE' and 'CLOSE'
intraday.columns = ['DATE', 'CLOSE']
# Examine the data types for each column
print(intraday.dtypes)
# Convert DATE column to numeric
intraday['DATE'] = pd.to_numeric(intraday['DATE'])
# Make the `DATE` column the new index
intraday = intraday.set_index('DATE')
# Notice that some rows are missing
print("If there were no missing rows, there would be 391 rows of minute data")
print("The actual length of the DataFrame is:", len(intraday))
# import datetime module
import datetime
# Change the first date to zero
intraday.iloc[0,0] = 0
# Change the column headers to 'DATE' and 'CLOSE'
intraday.columns = ['DATE', 'CLOSE']
# Examine the data types for each column
print(intraday.dtypes)
# Convert DATE column to numeric
intraday['DATE'] = pd.to_numeric(intraday['DATE'])
# Make the `DATE` column the new index
intraday = intraday.set_index('DATE')
# From previous step
intraday = intraday.reindex(range(391), method='ffill')
# Change the index to the intraday times
intraday.index = pd.date_range(start='2017-09-01 9:30', end='2017-09-01 16:00', freq='1min')
# Plot the intraday time series
intraday.plot(grid=True)
plt.show()
# Import plot_acf and ARMA modules from statsmodels
from statsmodels.graphics.tsaplots import plot_acf
from statsmodels.tsa.arima_model import ARMA
# Compute returns from prices and drop the NaN
returns = intraday.pct_change()
returns = returns.dropna()
# Plot ACF of returns with lags up to 60 minutes
plot_acf(returns, lags = 60)
plt.show()
# Fit the data to an MA(1) model
mod = ARMA(returns, order=(0,1))
res = mod.fit()
print(res.params)
###Output
_____no_output_____
###Markdown
Cointegration Models

P(t) and Q(t) are random walks and are not individually forecastable, but a linear combination of P(t) and Q(t) might be forecastable. If that is true, P and Q are cointegrated.

Example: an owner and a dog may each follow a random walk, but the distance between them is mean-reverting.

Steps:
* Regress P on Q and get the slope c
* Run the augmented Dickey-Fuller test on P(t) - c·Q(t) to test for a random walk
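The two steps above can be wrapped in a small helper (a hypothetical sketch, not part of the course code; `P` and `Q` are price Series):

```python
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def cointegration_pvalue(P, Q):
    """Regress P on Q, then run the ADF test on the spread P - c*Q."""
    c = sm.OLS(P, sm.add_constant(Q)).fit().params[1]  # slope of the regression
    return adfuller(P - c * Q)[1]                      # p-value of the ADF test
```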
###Code
# Plot the prices separately
plt.subplot(2,1,1)
plt.plot(7.25*HO, label='Heating Oil')
plt.plot(NG, label='Natural Gas')
plt.legend(loc='best', fontsize='small')
# Plot the spread
plt.subplot(2,1,2)
plt.plot(7.25*HO-NG, label='Spread')
plt.legend(loc='best', fontsize='small')
plt.axhline(y=0, linestyle='--', color='k')
plt.show()
# Import the adfuller module from statsmodels
from statsmodels.tsa.stattools import adfuller
# Compute the ADF for HO and NG to test if we can reject the hypothesis of random walks on the series
result_HO = adfuller(HO['Close'])
print("The p-value for the ADF test on HO is ", result_HO[1])
result_NG = adfuller(NG['Close'])
print("The p-value for the ADF test on NG is ", result_NG[1])
# Compute the ADF of the spread to test if we can reject the hypothesis of random walks for the linear combination
result_spread = adfuller(7.25 * HO['Close'] - NG['Close'])
print("The p-value for the ADF test on the spread is ", result_spread[1])
# Import the statsmodels module for regression and the adfuller function
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
# Regress BTC on ETH
ETH = sm.add_constant(ETH)
result = sm.OLS(BTC,ETH).fit()
# Compute ADF
b = result.params[1]
adf_stats = adfuller(BTC['Price'] - b*ETH['Price'])
print("The p-value for the ADF test is ", adf_stats[1])
###Output
_____no_output_____
###Markdown
Case Study - NY Temperature
###Code
# Import the adfuller function from the statsmodels module
from statsmodels.tsa.stattools import adfuller
# Convert the index to a datetime object
temp_NY.index = pd.to_datetime(temp_NY.index, format='%Y')
# Plot average temperatures
temp_NY.plot()
plt.show()
# Compute and print ADF p-value
result = adfuller(temp_NY['TAVG'])
print("The p-value for the ADF test is ", result[1])
# Import the modules for plotting the sample ACF and PACF
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf
# Take first difference of the temperature Series
chg_temp = temp_NY.diff()
chg_temp = chg_temp.dropna()
# Plot the ACF and PACF on the same page
fig, axes = plt.subplots(2,1)
# Plot the ACF
plot_acf(chg_temp, lags=20, ax=axes[0])
# Plot the PACF
plot_pacf(chg_temp, lags=20, ax=axes[1])
plt.show()
# Import the module for estimating an ARMA model
from statsmodels.tsa.arima_model import ARMA
# Fit the data to an AR(1) model and print AIC:
mod_ar1 = ARMA(chg_temp, order=(1, 0))
res_ar1 = mod_ar1.fit()
print("The AIC for an AR(1) is: ", res_ar1.aic)
# Fit the data to an AR(2) model and print AIC:
mod_ar2 = ARMA(chg_temp, order=(2, 0))
res_ar2 = mod_ar2.fit()
print("The AIC for an AR(2) is: ", res_ar2.aic)
# Fit the data to an ARMA(1,1) model and print AIC:
mod_arma11 = ARMA(chg_temp, order=(1, 1))
res_arma11 = mod_arma11.fit()
print("The AIC for an ARMA(1,1) is: ", res_arma11.aic)
# Import the ARIMA module from statsmodels
from statsmodels.tsa.arima_model import ARIMA
# Forecast temperatures using an ARIMA(1,1,1) model
mod = ARIMA(temp_NY, order=(1,1,1))
res = mod.fit()
# Plot the original series and the forecasted series
res.plot_predict(start='1872-01-01', end='2046-01-01')
plt.show()
###Output
_____no_output_____ |
house-prices-reg-techniques/e2e/feature-engineering.ipynb | ###Markdown
Feature Engineering

In this approach we are going to perform the steps below:
- Missing values
- Temporal variables
- Categorical variables: remove rare labels
- Standardise the values of the variables to the same range

Step 1: Getting all the missing values
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import missingno
from sklearn.preprocessing import MinMaxScaler

dataset = pd.read_csv("../data/train.csv")
dataset.shape
features_nan = [feature for feature in dataset.columns if dataset[feature].isnull().sum()>1 and dataset[feature].dtype=='O']
len(features_nan)
# Getting the missing value percentage
for feature in features_nan:
print("{} --> {}".format(feature, np.round(dataset[feature].isnull().mean()*100, 2)))
# Heatmap of missing values
missingno.matrix(dataset[features_nan])
###Output
_____no_output_____
###Markdown
Step 1.1: Handling categorical variables
###Code
# Function to replace missing categorical feature into a new label
def replace_cat_feature(dataset, columns):
data = dataset.copy()
data[columns] = data[columns].fillna('Missing')
return data
dataset = replace_cat_feature(dataset, features_nan)
dataset.head()
# Has no missing values
missingno.matrix(dataset[features_nan])
###Output
_____no_output_____
###Markdown
Step 1.2: Handling numerical variables
###Code
# Getting all the numerical values
numerical_with_nan = [feature for feature in dataset.columns if dataset[feature].isnull().sum() > 1 and dataset[feature].dtype != "O"]
for feature in numerical_with_nan:
print("{} -> {}".format(feature, np.round(dataset[feature].isnull().mean()*100, 4)))
# Replacing the null values with the median of the column
for feature in numerical_with_nan:
median_value = dataset[feature].median()
dataset[feature + '_nan'] = np.where(dataset[feature].isnull(), 1, 0)
dataset[feature].fillna(median_value, inplace=True)
# Getting all the nan values post transformation
print(dataset[numerical_with_nan].isnull().sum())
dataset.head()
# Getting the nan flag
for feature in numerical_with_nan:
print(dataset[feature + "_nan"].value_counts())
###Output
0 1201
1 259
Name: LotFrontage_nan, dtype: int64
0 1452
1 8
Name: MasVnrArea_nan, dtype: int64
0 1379
1 81
Name: GarageYrBlt_nan, dtype: int64
###Markdown
Step 1.3: Handling temporal variables
###Code
# Transforming the year feature
for feature in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
dataset[feature] = dataset['YrSold'] - dataset[feature]
dataset[['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']].head()
###Output
_____no_output_____
###Markdown
Step 2: Handling skewed numerical variables

Since these numerical variables are skewed, we apply a log transformation to bring their distributions closer to normal.
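A side note (an added sketch, not from the original notebook): `np.log` maps zeros to `-inf`, so for features that may contain zeros, `np.log1p` (i.e. $\log(1+x)$) is a safer choice:

```python
import numpy as np

x = np.array([0.0, 9.0, 99.0])
print(np.log1p(x))  # [0.         2.30258509 4.60517019] - no -inf for the zero entry
```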
###Code
# Viz before log transformation
num_features = ['LotFrontage', 'LotArea', '1stFlrSF', 'GrLivArea', 'SalePrice']
for feature in num_features:
sns.histplot(data=dataset, x=feature, kde=True)
plt.show()
for feature in num_features:
dataset[feature] = np.log(dataset[feature])
# Viz. all the features after log transformation
for feature in num_features:
sns.histplot(data=dataset, x=feature, kde=True)
plt.show()
###Output
_____no_output_____
###Markdown
Step 3: Handling rare categorical features
###Code
categorical_features = [feature for feature in dataset.columns if dataset[feature].dtype == 'O']
len(categorical_features)
categorical_features
for feature in categorical_features:
temp = dataset.groupby(feature)['SalePrice'].count()/len(dataset)
temp_df = temp[temp>0.01].index
dataset[feature] = np.where(dataset[feature].isin(temp_df),dataset[feature], 'Rare_var')
dataset.head()
###Output
_____no_output_____
###Markdown
Step 4: Feature Scaling
###Code
feature_scaling = [feature for feature in dataset.columns if feature not in ['Id', 'SalePrice']]
len(feature_scaling)
feature_scaling
scaler = MinMaxScaler()
scaler.fit_transform(dataset[feature_scaling])
###Output
_____no_output_____ |
notebook188f2af15e-Anjali.ipynb | ###Markdown
###Code
# Author: Anjali
# Commented: Ashok K Harnal
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
df = pd.read_csv("Churn_Modelling.csv")
df.info()
df.head(10)
# Define two general functions to annotate plots with percentages:
# See StackOverflow: https://stackoverflow.com/a/62053049
# To be used whereever countplot() is being used or catplot()
def with_hue(plot, feature, Number_of_levels, hue_levels):
    """
    Annotate a hued countplot with percentages.
    Number_of_levels: number of levels in the main feature
    hue_levels: number of levels in the hue feature
    """
    a = [p.get_height() for p in plot.patches]
    patch = [p for p in plot.patches]
    for i in range(Number_of_levels):
        total = feature.value_counts().values[i]
        for j in range(hue_levels):
            percentage = '{:.1f}%'.format(100 * a[(j*Number_of_levels + i)]/total)
            x = patch[(j*Number_of_levels + i)].get_x() + patch[(j*Number_of_levels + i)].get_width() / 2 - 0.15
            y = patch[(j*Number_of_levels + i)].get_y() + patch[(j*Number_of_levels + i)].get_height()
            plot.annotate(percentage, (x, y), size=12)  # annotate the axes passed in, not a global ax
    plt.show()
def without_hue(plot, feature):
    total = len(feature)
    for p in plot.patches:
        percentage = '{:.1f}%'.format(100 * p.get_height()/total)
        x = p.get_x() + p.get_width() / 2 - 0.05
        y = p.get_y() + p.get_height()
        plot.annotate(percentage, (x, y), size=12)  # annotate the axes passed in, not a global ax
    plt.show()
sns.countplot(df['Exited'],label="Count");
###$ More informative
ax =sns.countplot(df['Exited'],label="Count");
without_hue(ax,df.Exited)
df.drop(['RowNumber','CustomerId','Surname'],axis=1,inplace=True)
df.head()
df.isnull().sum()
ge=pd.crosstab(df.Exited,df.Gender)
ge
sns.countplot(x='Exited', hue='Gender', data=df)
ax = sns.countplot(x='Exited', hue='Gender', data=df)
with_hue(ax,df.Exited, 2,2)
sns.countplot(x='Exited', hue='Geography', data=df)
###$ Helps in comparing two adjacent bars
def percent_graph(grby,hue, data):
so = data.groupby(grby)[hue].value_counts(normalize = True)
so.name = '%count'
t = so.reset_index()
sns.set_theme(style="whitegrid")
sns.barplot(x = grby, y = '%count', hue= hue, data= t)
ax = sns.countplot(x='Exited', hue='Geography', data=df)
with_hue(ax,df.Exited, 2,3)
###$ Another, more informative graph
plt.figure(figsize = (8,8))
percent_graph('Exited', 'Geography', df)
df1=pd.crosstab( df['Exited'], df['Geography'])
df1
plt.figure(figsize=(6,6))
(df1.loc[1] * 100.0 / df1.sum()).plot(x=df1.index, y=df1.values, kind='bar')
plt.ylabel('Churn percentage')
plt.title('Churn_Rate in Location')
###$ Select only continuous columns for the histograms.
## For example, do not select HasCrCard, Exited, etc.
df.hist(figsize=(12,10),bins=20)
plt.show()
sns.displot(x='CreditScore', hue='Exited', data=df)
# Divide Credit_Score into 3 bins
df['Credit_Score'] = pd.qcut(df['CreditScore'], 3)
# Barplot - Shows approximate values based
# on the height of bars.
sns.barplot(x ='Credit_Score', y ='Exited',
data = df)
df.head()
sns.displot(x='Age', hue='Exited', data=df)
sns.violinplot(x ="Gender", y ="Age", hue ="Exited",
data = df, split = True)
sns.displot(x='Balance', hue='Exited', data=df)
sns.displot(x='EstimatedSalary', hue='Exited', data=df)
sns.jointplot(x ="Balance", y ="EstimatedSalary", hue ="Exited", data = df)
_, ax = plt.subplots(1,3, figsize = (12,8))
sns.countplot(x = "NumOfProducts", hue ='Exited', data = df, ax = ax[0])
sns.countplot(x = "HasCrCard", hue='Exited', data = df, ax = ax[1])
sns.countplot(x = 'IsActiveMember' , hue = 'Exited', data = df, ax = ax[2])
df2=pd.crosstab( df['Exited'], df['HasCrCard'])
df2
plt.figure(figsize=(6,6))
(df2.loc[1] * 100.0 / df2.sum()).plot(x=df2.index, y=df2.values, kind='bar')
plt.ylabel('Churn percentage')
plt.title('Churn_Rate vs hascr')
df3=pd.crosstab( df['Exited'], df['IsActiveMember'])
df3
plt.figure(figsize=(6,6))
(df3.loc[1] * 100.0 / df3.sum()).plot(x=df3.index, y=df3.values, kind='bar')
plt.ylabel('Churn percentage')
plt.title('Churn_Rate vs active')
df[["NumOfProducts","Exited"]].groupby(["NumOfProducts"],as_index=False).mean()
df4=pd.crosstab( df['Exited'], df['Tenure'])
df4
EDA summary
1. Customer Id, Row Number and Surname have no impact on churn and provide no useful insight, so these columns can be dropped.
2. Churned customers make up about 20% of the dataset, compared to retained customers at roughly 80%.
3. Comparatively, females are more likely to exit the bank; this can be explored further by age group.
4. People in the 35-50 age group are more likely to exit the bank.
5. The churn rate decreases with age for both males and females, but it is quite high in the 40-45 age group for both.
6. There is no direct relationship between estimated salary and churn, but it can be explored further together with other variables such as Balance.
7. Customers with an estimated salary between 50,000-100,000 and a balance of 100,000-150,000 are more likely to churn.
8. Customers who hold a credit card and are active members are less likely to churn.
9. Customers with two products are likely to exit the bank.
10. By geography, customers from Germany are more likely to leave the bank.
11. Customers with a credit score below 600 have a high tendency to churn.
12. If tenure is 3 years or less, the churn rate is higher.
###Output
_____no_output_____ |
005-vk_NLP - TF IDF.ipynb | ###Markdown
TF-IDF
###Code
import nltk
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
paragraph = '''In a country like India with a galloping population, unfortunately nobody is paying attention to the issue
of population. Political parties are feeling shy, politicians are feeling shy, Parliament also does not adequately discuss
about the issue," said Naidu while addressing the 58th convocation of Indian Agricultural Research Institute (IARI).
He said, "You know how population is growing, creating problems. See the problems in Delhi, traffic, more human beings,
more vehicles, more tension, less attention. If you have tension you cannot pay attention."
Emphasising on the need to increase food production to meet demand of growing population, Naidu said,
"In future if population increases like this, and you are not able to adequately match it with increase in production,
there will be problem'''
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
ps = PorterStemmer()
wordnet = WordNetLemmatizer()
sentences = nltk.sent_tokenize(paragraph)
corpus = []
for i in range(len(sentences)):
review = re.sub("[^a-zA-Z]", ' ', sentences[i])
review = review.lower()
review = review.split()
review = [wordnet.lemmatize(word) for word in review if word not in set(stopwords.words('english'))]
review = ' '.join(review)
corpus.append(review)
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus).toarray()
X.shape
type(X)
X
print(X[:,0])
print(X[:,0:5])
###Output
[[0. 0. 0. 0. 0. ]
[0. 0.20243885 0.16332639 0.20243885 0.20243885]
[0. 0. 0. 0. 0. ]
[0. 0. 0. 0. 0. ]
[0.18444605 0. 0.14880991 0. 0. ]]
###Markdown
TF-IDF: Drawbacks of Bag of Words
###Code
# All the words are given the same importance
# No semantic information is preserved
# TF-IDF addresses both of these problems
###Output
_____no_output_____
###Markdown
Steps in TF-IDF
###Code
# 1. Lower case the corpus or paragraph.
# 2. Tokenization.
# 3. TF: Term Frequency, IDF: Inverse Document Frequency, TF-IDF = TF * IDF.
# 4. TF = No. of occurrences of a word in a document / No. of words in that document.
# 5. IDF = log(No. of documents/No. of documents containing the word)
# 6. TFIDF(word) = TF(Document, word) * IDF (word)
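# Tiny worked example of steps 4-6 above (illustrative sketch only, separate from the corpus below):
import math
toy_docs = [['good', 'boy'], ['good', 'girl'], ['boy', 'girl', 'good']]
word = 'boy'
tf = toy_docs[0].count(word) / len(toy_docs[0])                    # 1/2
idf = math.log(len(toy_docs) / sum(word in d for d in toy_docs))   # log(3/2)
print('TF-IDF of %r in document 0: %.4f' % (word, tf * idf))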
import nltk
nltk.download('punkt')
nltk.download('stopwords')
nltk.download('wordnet')
paragraph = '''In a country like India with a galloping population, unfortunately nobody is paying attention to the issue
of population. Political parties are feeling shy, politicians are feeling shy, Parliament also does not adequately discuss
about the issue," said Naidu while addressing the 58th convocation of Indian Agricultural Research Institute (IARI).
He said, "You know how population is growing, creating problems. See the problems in Delhi, traffic, more human beings,
more vehicles, more tension, less attention. If you have tension you cannot pay attention."
Emphasising on the need to increase food production to meet demand of growing population, Naidu said,
"In future if population increases like this, and you are not able to adequately match it with increase in production,
there will be problem'''
# Cleaning the Text
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer
ps = PorterStemmer()
wordnet = WordNetLemmatizer()
sentences = nltk.sent_tokenize(paragraph)
# sentences
corpus = []
for i in range(len(sentences)):
review = re.sub("[^a-zA-Z]", ' ', sentences[i])
review = review.lower()
review = review.split()
review = [wordnet.lemmatize(word) for word in review if word not in set(stopwords.words('english'))]
review = ' '.join(review)
corpus.append(review)
# Creatung the TF-IDF Model
# # Creating the TF-IDF model
# from sklearn.feature_extraction.text import TfidfVectorizer
# cv = TfidfVectorizer()
# X = cv.fit_transform(corpus).toarray()
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus)
X.toarray()
type(X)
X.shape
print(X[:,0])
print(X[:,:])
###Output
(0, 26) 0.2611488808945384
(0, 5) 0.21677716168619507
(0, 38) 0.3236873066380182
(0, 34) 0.3236873066380182
(0, 51) 0.3236873066380182
(0, 41) 0.43355432337239014
(0, 18) 0.3236873066380182
(0, 23) 0.3236873066380182
(0, 29) 0.2611488808945384
(0, 9) 0.3236873066380182
(1, 21) 0.20243884765910772
(1, 25) 0.20243884765910772
(1, 44) 0.20243884765910772
(1, 3) 0.20243884765910772
(1, 24) 0.20243884765910772
(1, 8) 0.20243884765910772
(1, 49) 0.20243884765910772
(1, 1) 0.20243884765910772
(1, 32) 0.16332638763273197
(1, 45) 0.13557565561148596
(1, 13) 0.20243884765910772
(1, 2) 0.16332638763273197
(1, 4) 0.20243884765910772
(1, 35) 0.20243884765910772
(1, 40) 0.20243884765910772
: :
(3, 11) 0.3420339209721722
(3, 46) 0.3420339209721722
(3, 42) 0.2290641031273583
(3, 5) 0.2290641031273583
(4, 30) 0.18444604729119288
(4, 0) 0.18444604729119288
(4, 17) 0.18444604729119288
(4, 12) 0.18444604729119288
(4, 31) 0.18444604729119288
(4, 43) 0.36889209458238575
(4, 16) 0.18444604729119288
(4, 22) 0.5533381418735785
(4, 33) 0.18444604729119288
(4, 14) 0.18444604729119288
(4, 37) 0.18444604729119288
(4, 7) 0.18444604729119288
(4, 48) 0.14880990958778192
(4, 42) 0.12352566750705657
(4, 19) 0.14880990958778192
(4, 32) 0.14880990958778192
(4, 45) 0.12352566750705657
(4, 2) 0.14880990958778192
(4, 5) 0.12352566750705657
(4, 41) 0.24705133501411314
(4, 29) 0.14880990958778192
|
notebooks/join_data.ipynb | ###Markdown
Fix the error in the first death for C. Valenciana
###Code
data.loc[data.loc[data.CCAA == '1'].index -1, 'muertes'] = 1
data = data.drop(data.loc[data.CCAA == '1'].index).reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Use a single name for Castilla-La Mancha
###Code
data.loc[data.CCAA == 'Castilla-LaMancha', 'CCAA'] = 'CastillaLaMancha'
###Output
_____no_output_____
###Markdown
Format the columns
###Code
data['fecha'] = pd.to_datetime(data['fecha'],format='%d.%m.%Y')
data['casos'] = pd.to_numeric(data.casos)
data['UCI'] = pd.to_numeric(data.UCI)
data['muertes'] = pd.to_numeric(data.muertes)
data = data.sort_values(by=['CCAA','fecha']).reset_index(drop = True)
for CCAA in data.CCAA.unique():
casos_hoy = data.loc[data.CCAA == CCAA,'casos'].values[1:]
casos_ayer = data.loc[data.CCAA == CCAA,'casos'].values[:-1]
data.loc[data.CCAA == CCAA,'nuevos'] = [np.nan]+list(casos_hoy-casos_ayer)
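# Illustration with made-up numbers of the cumulative-to-daily conversion used above:
# the first day has no previous value, hence the leading np.nan.
_demo_nuevos = [np.nan] + list(np.diff(np.array([10, 12, 15])))  # -> [nan, 2, 3]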
###Output
_____no_output_____
###Markdown
Weekend cases
###Code
def date_lag(vect):
days = np.array([el.day for el in vect])
dif = days[1:] - days[:-1]
return np.where(dif>1)[0]
def get_splits(df,ind):
return df.loc[:ind],df.loc[ind+1:]
def get_new_lines(df, ind):
lines = pd.DataFrame({'CCAA': df.loc[ind-1:ind,'CCAA'].values,
'fecha': [df.loc[ind,'fecha'] + pd.DateOffset(1), df.loc[ind,'fecha'] + pd.DateOffset(2)],
'casos' : [np.nan, np.nan], 'IA' : [np.nan, np.nan],
'UCI' : [np.nan, np.nan], 'muertes' : [np.nan, np.nan]})
return lines
def get_line_eq(points):
x_coords, y_coords = zip(*points)
A = np.vstack([x_coords,np.ones(len(x_coords))]).T
m, c = np.linalg.lstsq(A, y_coords, rcond=-1)[0]
return m, c
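# Quick illustrative check of get_line_eq with hypothetical points (not dataset values):
# the line through (13, 100) and (16, 130) has slope 10 and intercept -30.
_m_demo, _c_demo = get_line_eq([(13, 100), (16, 130)])  # -> approximately (10.0, -30.0)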
def fill_gaps(df, var, ind, rnd):
point0 = (df.loc[ind,'fecha'].day,df.loc[ind,var])
point1 = (df.loc[ind+3,'fecha'].day,df.loc[ind+3,var])
m, c = get_line_eq([point0,point1])
val0 = np.round(m*df.loc[ind+1,'fecha'].day+c,rnd)
val1 = np.round(m*df.loc[ind+2,'fecha'].day+c,rnd)
return val0, val1
def CCAA_correction(df):
df = df.reset_index(drop=True)
ind = date_lag(df['fecha'])
while len(ind)>0:
split1, split2 = get_splits(df, ind[0])
lines = get_new_lines(df, ind[0])
df = pd.concat([split1, lines, split2]).reset_index(drop=True)
variables = list(df.columns)
c = variables.index('fecha')+1
rounds = [0,2,0,0]
for var, r in zip(variables[c:],rounds):
df.loc[ind[0]+1, var], df.loc[ind[0]+2, var] = fill_gaps(df, var, ind[0], r)
ind = date_lag(df['fecha'])
return df
_data_ = pd.DataFrame(columns = ['CCAA', 'fecha', 'casos', 'IA', 'UCI', 'muertes'])
for CCAA in data.CCAA.unique():
data_int = CCAA_correction(data[data.CCAA == CCAA])
ind = data_int[data_int['fecha'] == '2020-03-13'].index[0]
data_int.loc[ind+1:ind+2,'IA'] = fill_gaps(data_int, 'IA', ind, 2)
data_int.loc[ind+1:ind+2,'UCI'] = fill_gaps(data_int, 'UCI', ind, 0)
_data_ = _data_.append(data_int, ignore_index=True).reset_index(drop=True)
del data
data = _data_.copy()
del _data_
data.to_csv('../data/final_data/dataCOVID19_es.csv',index=False)
data
###Output
_____no_output_____ |
examples/1.0.1_HiView_tutorial/hiview_tutorial.ipynb | ###Markdown
**This is a tutorial on visualizing hierarchical protein network modules, with a script interfacing the DDOT python package (v1.0.1) and the HiView web browser (v2.6).****Author: Fan Zheng****Date: Aug. 2020** Get started Please check that the DDOT package has been installed and all dependencies are satisfied. To complete this tutorial, you just need the upload script `tohiview.py` and a few input files for that script. We will walk through the creation of hierarchical models and their visualization in HiView.
###Code
username = 'fzheng' # replace with your username
import getpass
passwd = getpass.getpass("Password here: ")
###Output
Passwd here: ········
###Markdown
The available options for the upload script are listed below. Many options are available, but only `--ont`, `--hier_name`, `--ndex_acount` are required.`--ont` should be a 3-column tab-separated file defined in DDOT, which represents parent, child and type of the relationship.`--hier_name` is just a string to label the files. `--ndex_acount` contains 3 strings, the server name (http://test.ndexbio.org), a username, and a password.**Note:** so far we require using the NDEx test server, as this pipeline can potentially create a large number of networks in one's NDEx account.
###Code
%%bash
python ../../ddot/tohiview.py -h
###Output
usage: tohiview.py [-h] --ont ONT --hier_name HIER_NAME
[--ndex_account NDEX_ACCOUNT NDEX_ACCOUNT NDEX_ACCOUNT]
[--score SCORE] [--subnet_size SUBNET_SIZE SUBNET_SIZE]
[--node_attr NODE_ATTR] [--evinet_links EVINET_LINKS]
[--evinet_size EVINET_SIZE] [--gene_attr GENE_ATTR]
[--term_2_uuid TERM_2_UUID]
[--visible_cols [VISIBLE_COLS [VISIBLE_COLS ...]]]
[--max_num_edges MAX_NUM_EDGES] [--col_color COL_COLOR]
[--col_label COL_LABEL] [--rename RENAME] [--skip_main]
optional arguments:
-h, --help show this help message and exit
--ont ONT ontology file, 3 col table
--hier_name HIER_NAME
name of the hierarchy
--ndex_account NDEX_ACCOUNT NDEX_ACCOUNT NDEX_ACCOUNT
--score SCORE integrated edge score
--subnet_size SUBNET_SIZE SUBNET_SIZE
minimum and maximum term size to show network support
--node_attr NODE_ATTR
table file for attributes on systems
--evinet_links EVINET_LINKS
data frame for network support
--evinet_size EVINET_SIZE
data frame for network support
--gene_attr GENE_ATTR
table file for attributes on genes
--term_2_uuid TERM_2_UUID
if available, reuse networks that are already on NDEX
--visible_cols [VISIBLE_COLS [VISIBLE_COLS ...]]
a list, specified column names in the ode attribute
file will be shown as subsystem information
--max_num_edges MAX_NUM_EDGES
maximum number of edges uploaded
--col_color COL_COLOR
a column name in the node attribute file, used to
color the node (only works in node-link diagram)
--col_label COL_LABEL
a column name in the node attribute file, add as the
term label on the map
--rename RENAME if not None, rename name of subsystems specified by
this column in the node_attr file
--skip_main if true, do not update the main hierarchy
###Markdown
1. A simple hierarchy We will first create and upload a small toy hierarchy.
###Code
d = './data'
df = pd.read_csv(d + '/test1.ont', sep='\t', header=None)
df
###Output
_____no_output_____
###Markdown
Note this hierarchy is a DAG (directed acyclic graph). The node "Fine-3" has two parents: "Coarse-1" and "Coarse-2". In HiView, a circle "Fine-3" will be found nested under the circles of both "Coarse-1" and "Coarse-2". **Warning**: we require node names not containing "." and "_".
###Code
%%bash -s "$username" "$passwd"
python ../../ddot/tohiview.py --ont ./data/test1.ont --hier_name test1 --ndex_account http://test.ndexbio.org $1 $2
###Output
http://hiview.ucsd.edu/17ea0c7b-d763-11ea-9101-0660b7976219?type=test&server=http://test.ndexbio.org
###Markdown
Paste the above link to the browser to launch HiView to visualize this hierarchy (it will be a different UUID each time). In HiView, a hierarchy is represented by the "circle-packing" layout. The biggest circle represents the root (a node in the DAG with only outgoing edges); thus we require the input data to contain exactly one root. If there are multiple roots, the script will add a root on top of them. Double-clicking a circle expands deeper structures, one level at a time. 2. Adding integrated networks to communities HiView is a powerful platform to display nested communities (of multiple scales) in a network. It is often of interest to visualize edges in the source network that support a community. Precisely, for a source network $G = (V, E)$, a subnetwork of a community $s$ is defined as $G_s = (V_s, E_s)$, where $V_s \in V, E_s \in E$, and $\forall e = (u,v) \in E_s$, $u,v \in V_s$. This is achieved by the `--score` argument. It is a tab-separated file with three columns: `geneA`,`geneB`, and `score`. We recommend having the values of scores between (0,1). In this example, we will use a small sub-hierarchy with some gene-gene association scores. Let's see their format:
###Code
df_ont = pd.read_csv(d + '/test2.ont', sep='\t', header=None)
df_ont.head(3)
df_ont.tail(3)
df_score = pd.read_csv(d + '/test2_score.txt', sep='\t', header=None)
df_score.head(3)
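# For reference, a toy DataFrame in the same 3-column format expected by --score
# (gene pairs plus an association score in (0,1)); the names and values here are made up:
toy_score = pd.DataFrame([['GENEA', 'GENEB', 0.9], ['GENEA', 'GENEC', 0.4]])
# toy_score.to_csv('toy_score.txt', sep='\t', header=False, index=False)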
###Output
_____no_output_____
###Markdown
Now do the upload:
###Code
%%bash -s "$username" "$passwd"
python ../../ddot/tohiview.py --ont ./data/test2.ont --hier_name test2 --ndex_account http://test.ndexbio.org $1 $2 --score ./data/test2_score.txt
###Output
http://hiview.ucsd.edu/8ce3e160-d763-11ea-9101-0660b7976219?type=test&server=http://test.ndexbio.org
###Markdown
The communities in this hierarchy (shown in the "model view") are now each associated with a network shown in the "data view". "score" of a community. This is a concept specific to certain community detection algorithms, e.g. CliXO, which takes a weighted graph as the input, and iterate community detection at different thresholds. Thus, each community in CliXO is associated with a "score". By default, edges in a subnetwork have a uniform color in HiView. However, if communities are associated with scores, the edges will be shown with a discrete color map (which often visually highlights the community structures), determined by the score of the community itself, and the score(s) of its children community(ies). This can be achieved by adding a 4-th column to the file for the `--ont` argument, as in the following example:
###Code
df_ont = pd.read_csv(d + '/test2_ww.ont', sep='\t', header=None)
df_ont.head(3)
###Output
_____no_output_____
###Markdown
The values in the column "3" (e.g. 0.72, 0.65) indicate the "score" of the community in the column "0". The score of a parent community is required to be smaller than the scores of its children. In this example, "S22435" is the parent of "S22133", and thus 0.65 < 0.72.
###Code
%%bash -s "$username" "$passwd"
python ../../ddot/tohiview.py --ont ./data/test2_ww.ont --hier_name test2.1 --ndex_account http://test.ndexbio.org $1 $2 --score ./data/test2_score.txt
###Output
http://hiview.ucsd.edu/45b03af5-d764-11ea-9101-0660b7976219?type=test&server=http://test.ndexbio.org
###Markdown
After upload, we can see the change of edge colors in the data view. 3. Adding multiple evidence networks to communities In addition to a single master network, it is also possible to overlay more networks supporting a community and visualize them HiView. For example, if the master network is the result of integrating multiple datasets, it is often of interest to visualize the interactions in these datasets (jointly or separately). This can be achieved by passing a file to the `--evinet_links` argument. It is a two-column file, providing the name of individual datasets, and the path to the actual files containing the interactions:
###Code
%%bash
cat ./data/net_links.txt
###Output
Physical ./data/test3_ppisample.txt
Co_protein_expr ./data/test3_coxsample.txt
CCMI ./data/test3_binarysample.txt
###Markdown
A source file is a 3-column tab-separated file, which can contain binary interactions, or interactions with weights.
###Code
%%bash
cat ./data/test3_ppisample.txt |head -5
%%bash
cat ./data/test3_binarysample.txt |head -5
###Output
MTDH SUPT16H True
SUPT16H TSPYL5 True
MTDH SSRP1 True
SSRP1 TSPYL5 True
###Markdown
Now we do the upload:
###Code
%%bash -s "$username" "$passwd"
python ../../ddot/tohiview.py --ont ./data/test2_ww.ont --hier_name test3 --ndex_account http://test.ndexbio.org $1 $2 --score ./data/test2_score.txt --evinet_links ./data/net_links.txt
###Output
http://hiview.ucsd.edu/0f7f704a-d765-11ea-9101-0660b7976219?type=test&server=http://test.ndexbio.org
###Markdown
For the evidence network with real values, users can toggle the threshold to adjust the number of edges from this particular network to be shown in the data view. Large networks Large scale networks are often bottlenecks of the speed of uploading and HiView visualization (we are working on improving that). To reduce overhead, subnetwork uploading can be disabled for large communities, while still being enabled for smaller communities. It is achieved by the `--subnet_size` argument, which takes two integers, specifying the lower and upper bound of community sizes for which upload of the integrated subnetworks is enabled.Similarly `--evinet_size` takes one integer, and for communities larger than this threshold, upload of evidence networks will be disabled.We require `subnet_size[0] < evinet_size <= subnet_size[1]`. Reuse uploaded subnetworks After uploading a hierarchy with subnetworks, you will notice a file starting with `term_2_uuid` written to the working directory. This file describes the mapping between community names and community subnetworks. This file can also be later used as the input of `--term_2_uuid` argument, so subnetworks can be shared across different hierarchical models. 4. Control the information displayed in HiView update the metadata of communities Users can show some metadata associated with each community in the bottom-right area of HiView ("Subsystem details"). This is achieved by the `--node_attr` argument. The input is a data frame with rows being communities and columns being the names of those metadata.For example, assuming we can assign a robustness score for each community. We have created a file with some made-up values:
###Code
%%bash
cat ./data/test4_nodeattr.txt
###Output
robustness
S21851 0.462515
S21875 0.781374
S22133 0.686939
S22435 0.848153
S22451 0.471456
S22573 0.247138
S22871 0.201151
S23161 0.55682
S23248 0.277122
###Markdown
Now upload:
###Code
%%bash -s "$username" "$passwd"
python ../../ddot/tohiview.py --ont ./data/test2_ww.ont --hier_name test4 --ndex_account http://test.ndexbio.org $1 $2 --score ./data/test2_score.txt --term_2_uuid term_2_uuid.test3 --node_attr ./data/test4_nodeattr.txt
###Output
http://hiview.ucsd.edu/71ac490b-d76b-11ea-9101-0660b7976219?type=test&server=http://test.ndexbio.org
###Markdown
Note that we also reused uploaded subnetworks with the `--term_2_uuid` argument here. update display labels in the model view It is possible to update the displayed labels on communities in the model view without a new upload. This can be achieved by the NDEx python client, which should have been installed as a prerequisite of DDOT.
###Code
import ndex.client as nc
from ndex.networkn import NdexGraph
my_ndex = nc.Ndex("http://test.ndexbio.org", username, passwd)
###Output
_____no_output_____
###Markdown
We now change the label of `S22573` in the above model to `helloworld`.
###Code
def change_hiview_label(uuid, dict_rename):
Gcx = my_ndex.get_network_as_cx_stream(uuid).json()
k1 = [i for i in range(len(Gcx)) if 'nodes' in Gcx[i].keys()][0]
k2 = [i for i in range(len(Gcx)) if 'nodeAttributes' in Gcx[i].keys()][0]
dict_nid_label = {}
for d in Gcx[k1]['nodes']:
if d['n'].split('.')[0] in dict_rename:
dict_nid_label[d['@id']] = dict_rename[d['n'].split('.')[0]]
for i in range(len(Gcx[k2]['nodeAttributes'])):
nid = Gcx[k2]['nodeAttributes'][i]['po']
if (nid in dict_nid_label) and (Gcx[k2]['nodeAttributes'][i]['n'] == 'Label'):
Gcx[k2]['nodeAttributes'][i]['v'] = dict_nid_label[Gcx[k2]['nodeAttributes'][i]['po']]
G = NdexGraph(Gcx)
Gcx_new_stream = G.to_cx_stream()
my_ndex.update_cx_network(Gcx_new_stream, uuid)
return
uuid = '71ac490b-d76b-11ea-9101-0660b7976219'
change_hiview_label(uuid, {'S22573':'helloworld'})
###Output
consistency group max: 2
###Markdown
update node layout in the data view By default, DDOT calls the `spring_layout` function in the `NetworkX` package to create a layout for nodes in the data view. But it is not necessarily the most informative layout, especially for large networks. With NDEx python client, users can provide their own node positions from alternative algorithms, and update the layout in the HiView data view. To make an alteration, we need to know the UUID of a subnetwork. It can be found in the `term_uuid_XXX` file created after an upload.
###Code
%%bash
cat term_2_uuid.test3 |head -n 2
###Output
S21851 S21851 0e2e3818-d765-11ea-9101-0660b7976219
S21875 S21875 0e4828ba-d765-11ea-9101-0660b7976219
###Markdown
We now choose the subnetwork S21875 (the second entry listed above), and divide its x and y coordinates by 5 (to make nodes closer to each other)
###Code
uuid = '0e4828ba-d765-11ea-9101-0660b7976219'
Gcx = my_ndex.get_network_as_cx_stream(uuid).json()
G = NdexGraph(Gcx)
Gpos_new = {k:[v[0]/5, v[1]/5] for k,v in G.pos.items()} # alter node position
G.pos = Gpos_new
Gcx_new_stream = G.to_cx_stream()
my_ndex.update_cx_network(Gcx_new_stream, uuid) # update the network on NDEx
###Output
consistency group max: 2
|
prem-step/ew-inf.ipynb | ###Markdown
Eng+Wales well-mixed example model This is the inference notebook. There are various model variants as encoded by `expt_params_local` and `model_local`, which are shared by the notebooks in a given directory.Outputs of this notebook:* `ewMod-inf.pik` : result of inference computation* `ewMod-hess.npy` : hessian matrix of log-posteriorNOTE carefully : `Im` compartment is cumulative deaths, this is called `D` elsewhere Start notebook(the following line is for efficient parallel processing)
###Code
%env OMP_NUM_THREADS=1
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
import pyross
import time
import pandas as pd
import matplotlib.image as mpimg
import pickle
import os
import pprint
import scipy.stats
# comment these before commit
#print(pyross.__file__)
#print(os.getcwd())
from ew_fns import *
import expt_params_local
import model_local
###Output
_____no_output_____
###Markdown
switches etc
###Code
verboseMod=False ## print ancillary info about the model? (would usually be False, for brevity)
## Calculate things, or load from files ?
doInf = False ## do inference, or load it ?
doHes = False ## compute the Hessian, or load it ? (the computation may take a few minutes)
## time unit is one week
daysPerWeek = 7.0
## these are params that might be varied in different expts
exptParams = expt_params_local.getLocalParams()
pprint.pprint(exptParams)
## this is used for filename handling throughout
pikFileRoot = exptParams['pikFileRoot']
###Output
{'careFile': '../data/CareHomes.csv',
'chooseCM': 'premEtAl',
'dataFile': '../data/OnsData.csv',
'estimatorTol': 1e-08,
'exCare': True,
'forecastTime': 3,
'freeInitPriors': ['E', 'A', 'Is1', 'Is2', 'Is3'],
'infOptions': {'cma_population': 32,
'cma_processes': None,
'ftol': 5e-05,
'global_atol': 1.0,
'global_max_iter': 1500,
'local_max_iter': 400},
'inferBetaNotAi': True,
'numCohorts': 16,
'numCohortsPopData': 19,
'pikFileRoot': 'ewMod',
'popFile': '../data/EWAgeDistributedNew.csv',
'timeLast': 8,
'timeZero': 0}
###Markdown
convenient settings
###Code
np.set_printoptions(precision=3)
pltAuto = True
plt.rcParams.update({'figure.autolayout': pltAuto})
plt.rcParams.update({'font.size': 14})
###Output
_____no_output_____
###Markdown
LOAD MODEL
###Code
loadModel = model_local.loadModel(exptParams,daysPerWeek,verboseMod)
## should use a dictionary but...
[ numCohorts, fi, N, Ni, model_spec, estimator, contactBasis, interventionFn,
modParams, priorsAll, initPriorsLinMode, obsDeath, fltrDeath,
simTime, deathCumulativeDat ] = loadModel
###Output
** model
{'A': {'infection': [], 'linear': [['E', 'gammaE'], ['A', '-gammaA']]},
'E': {'infection': [['A', 'beta'],
['Is1', 'beta'],
['Is2', 'betaLate'],
['Is3', 'betaLate']],
'linear': [['E', '-gammaE']]},
'Im': {'infection': [], 'linear': [['Is3', 'cfr*gammaIs3']]},
'Is1': {'infection': [],
'linear': [['A', 'gammaA'],
['Is1', '-alphabar*gammaIs1'],
['Is1', '-alpha*gammaIs1']]},
'Is2': {'infection': [],
'linear': [['Is1', 'alphabar*gammaIs1'], ['Is2', '-gammaIs2']]},
'Is3': {'infection': [],
'linear': [['Is2', 'gammaIs2'],
['Is3', '-cfrbar*gammaIs3'],
['Is3', '-cfr*gammaIs3']]},
'S': {'infection': [['A', '-beta'],
['Is1', '-beta'],
['Is2', '-betaLate'],
['Is3', '-betaLate']],
'linear': []},
'classes': ['S', 'E', 'A', 'Is1', 'Is2', 'Is3', 'Im']}
typC 0.5952004647091569
** using getPriorsControl
###Markdown
Inspect most likely trajectory for model with prior mean params
###Code
x0_lin = estimator.get_mean_inits(initPriorsLinMode, obsDeath[0], fltrDeath)
guessTraj = estimator.integrate( x0_lin, exptParams['timeZero'], simTime, simTime+1)
## plots
yesPlot = model_spec['classes'].copy()
yesPlot.remove('S')
plt.yscale('log')
for lab in yesPlot :
indClass = model_spec['classes'].index(lab)
totClass = np.sum(guessTraj[:,indClass*numCohorts:(indClass+1)*numCohorts],axis=1)
plt.plot( N * totClass,'-',lw=3,label=lab)
plt.plot(N*np.sum(obsDeath,axis=1),'X',label='data')
plt.legend(fontsize=14,bbox_to_anchor=(1, 1.0))
plt.xlabel('time in weeks')
plt.ylabel('class population')
plt.show() ; plt.close()
indClass = model_spec['classes'].index('Im')
plt.yscale('log')
for coh in range(numCohorts):
plt.plot( N*guessTraj[:,coh+indClass*numCohorts],label='m{c:d}'.format(c=coh) )
plt.xlabel('time in weeks')
plt.ylabel('cumul deaths by age cohort')
plt.legend(fontsize=8,bbox_to_anchor=(1, 1.0))
plt.show() ; plt.close()
###Output
_____no_output_____
###Markdown
INFERENCEparameter count* 32 for age-dependent Ai and Af (or beta and Af)* 2 (step-like) or 3 (NPI-with-easing) for lockdown time and width (+easing param)* 1 for projection of initial condition along mode* 5 for initial condition in oldest cohort* 5 for the gammas* 1 for beta in late stagetotal: 46 (step-like) or 47 (with-easing)The following computation with CMA-ES takes some minutes depending on compute power, it should use multiple CPUs efficiently, if available. The result will vary (slightly) according to the random seed, can be controlled by passing `cma_random_seed` to `latent_infer`
###Code
def runInf() :
infResult = estimator.latent_infer(obsDeath, fltrDeath, simTime,
priorsAll,
initPriorsLinMode,
generator=contactBasis,
intervention_fun=interventionFn,
tangent=False,
verbose=True,
enable_global=True,
enable_local =True,
**exptParams['infOptions'],
)
return infResult
if doInf:
## do the computation
elapsedInf = time.time()
infResult = runInf()
elapsedInf = time.time() - elapsedInf
print('** elapsed time',elapsedInf/60.0,'mins')
# save the answer
opFile = pikFileRoot + "-inf.pik"
print('opf',opFile)
with open(opFile, 'wb') as f:
pickle.dump([infResult,elapsedInf],f)
else:
## load a saved computation
print(' Load data')
# here we load the data
# (this may be the file that we just saved, it is deliberately outside the if: else:)
ipFile = pikFileRoot + "-inf.pik"
print('ipf',ipFile)
with open(ipFile, 'rb') as f:
[infResult,elapsedInf] = pickle.load(f)
###Output
Load data
ipf ewMod-inf.pik
###Markdown
unpack results
###Code
epiParamsMAP = infResult['params_dict']
conParamsMAP = infResult['control_params_dict']
x0_MAP = infResult['x0']
CM_MAP = contactBasis.intervention_custom_temporal( interventionFn,
**conParamsMAP)
logPinf = -estimator.minus_logp_red(epiParamsMAP, x0_MAP, obsDeath, fltrDeath, simTime,
CM_MAP, tangent=False)
print('** measuredLikelihood',logPinf)
print('** logPosterior ',infResult['log_posterior'])
print('** logLikelihood',infResult['log_likelihood'])
###Output
** measuredLikelihood -254.95035320878037
** logPosterior -166.00943303559984
** logLikelihood -254.95035320878037
###Markdown
MAP dominant trajectory
###Code
estimator.set_params(epiParamsMAP)
estimator.set_contact_matrix(CM_MAP)
trajMAP = estimator.integrate( x0_MAP, exptParams['timeZero'], simTime, simTime+1)
yesPlot = model_spec['classes'].copy()
yesPlot.remove('S')
plt.yscale('log')
for lab in yesPlot :
indClass = model_spec['classes'].index(lab)
totClass = np.sum(trajMAP[:,indClass*numCohorts:(indClass+1)*numCohorts],axis=1)
plt.plot( N * totClass,'-',lw=3,label=lab)
plt.plot(N*np.sum(obsDeath,axis=1),'X',label='data')
plt.xlabel('time in weeks')
plt.ylabel('class population')
plt.legend(fontsize=14,bbox_to_anchor=(1, 1.0))
plt.show() ; plt.close()
fig,axs = plt.subplots(1,2,figsize=(10,4.5))
cohRanges = [ [x,x+4] for x in range(0,75,5) ]
#print(cohRanges)
cohLabs = ["{l:d}-{u:d}".format(l=low,u=up) for [low,up] in cohRanges ]
cohLabs.append("75+")
ax = axs[0]
ax.set_title('MAP (average dynamics)')
mSize = 3
minY = 0.12
maxY = 1.0
indClass = model_spec['classes'].index('Im')
ax.set_yscale('log')
ax.set_ylabel('cumulative M (by cohort)')
ax.set_xlabel('time/weeks')
for coh in reversed(list(range(numCohorts))) :
ax.plot( N*trajMAP[:,coh+indClass*numCohorts],'o-',label=cohLabs[coh],ms=mSize )
maxY = np.maximum( maxY, np.max(N*trajMAP[:,coh+indClass*numCohorts]))
#ax.legend(fontsize=8,bbox_to_anchor=(1, 1.0))
maxY *= 1.6
ax.set_ylim(bottom=minY,top=maxY)
#plt.show() ; plt.close()
ax = axs[1]
ax.set_title('data')
ax.set_xlabel('time/weeks')
indClass = model_spec['classes'].index('Im')
ax.set_yscale('log')
for coh in reversed(list(range(numCohorts))) :
ax.plot( N*obsDeath[:,coh],'o-',label=cohLabs[coh],ms=mSize )
## keep the same as other panel
ax.set_ylim(bottom=minY,top=maxY)
ax.legend(fontsize=10,bbox_to_anchor=(1, 1.0))
#plt.show() ; plt.close()
#plt.savefig('ageMAPandData.png')
plt.show(fig)
###Output
_____no_output_____
###Markdown
sanity check : plot the prior and inf value for one or two params
###Code
(likFun,priFun,dim) = pyross.evidence.latent_get_parameters(estimator,
obsDeath, fltrDeath, simTime,
priorsAll,
initPriorsLinMode,
generator=contactBasis,
intervention_fun=interventionFn,
tangent=False,
)
def showInfPrior(xLab) :
fig = plt.figure(figsize=(4,4))
dimFlat = np.size(infResult['flat_params'])
## magic to work out the index of this param in flat_params
jj = infResult['param_keys'].index(xLab)
xInd = infResult['param_guess_range'][jj]
## get the range
xVals = np.linspace( *priorsAll[xLab]['bounds'], 100 )
#print(infResult['flat_params'][xInd])
pVals = []
checkVals = []
for xx in xVals :
flatP = np.zeros( dimFlat )
flatP[xInd] = xx
pdfAll = np.exp( priFun.logpdf(flatP) )
pVals.append( pdfAll[xInd] )
#checkVals.append( scipy.stats.norm.pdf(xx,loc=0.2,scale=0.1) )
plt.plot(xVals,pVals,'-',label='prior')
infVal = infResult['flat_params'][xInd]
infPdf = np.exp( priFun.logpdf(infResult['flat_params']) )[xInd]
plt.plot([infVal],[infPdf],'ro',label='inf')
plt.xlabel(xLab)
upperLim = 1.05*np.max(pVals)
plt.ylim(0,upperLim)
#plt.plot(xVals,checkVals)
plt.legend()
plt.show(fig) ; plt.close()
#print('**params\n',infResult['flat_params'])
#print('**logPrior\n',priFun.logpdf(infResult['flat_params']))
showInfPrior('gammaE')
###Output
_____no_output_____
###Markdown
Hessian matrix of log-posterior(this can take a few minutes, it does not make use of multiple cores)
###Code
if doHes:
## this eps amounts to a perturbation of approx 1% on each param
## (1/4) power of machine epsilon is standard for second deriv
xx = infResult['flat_params']
eps = 100 * xx*( np.spacing(xx)/xx )**(0.25)
#print('**params\n',infResult['flat_params'])
#print('** rel eps\n',eps/infResult['flat_params'])
CM_MAP = contactBasis.intervention_custom_temporal( interventionFn,
**conParamsMAP)
estimator.set_params(epiParamsMAP)
estimator.set_contact_matrix(CM_MAP)
start = time.time()
hessian = estimator.latent_hessian(obs=obsDeath, fltr=fltrDeath,
Tf=simTime, generator=contactBasis,
infer_result=infResult,
intervention_fun=interventionFn,
eps=eps, tangent=False, fd_method="central",
inter_steps=0)
end = time.time()
print('time',(end-start)/60,'mins')
opFile = pikFileRoot + "-hess.npy"
print('opf',opFile)
with open(opFile, 'wb') as f:
np.save(f,hessian)
else :
print('Load hessian')
# reload in all cases (even if we just saved it)
ipFile = pikFileRoot + "-hess.npy"
try:
print('ipf',ipFile)
with open(ipFile, 'rb') as f:
hessian = np.load(f)
except (OSError, IOError) :
print('... error loading hessian')
hessian = None
#print(hessian)
print("** param vals")
print(infResult['flat_params'],'\n')
if np.all(hessian) != None :
print("** naive uncertainty v1 : reciprocal sqrt diagonal elements (x2)")
print( 2/np.sqrt(np.diagonal(hessian)) ,'\n')
print("** naive uncertainty v2 : sqrt diagonal elements of inverse (x2)")
print( 2*np.sqrt(np.diagonal(np.linalg.inv(hessian))) ,'\n')
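## Toy 2x2 illustration (made-up numbers, not this model's Hessian) of why the two
## estimates differ: inverting the Hessian accounts for correlations between parameters.
_H_toy = np.array([[4.0, 1.9], [1.9, 1.0]])
_v1 = 2/np.sqrt(np.diagonal(_H_toy))                   # ignores off-diagonal terms -> [1.0, 2.0]
_v2 = 2*np.sqrt(np.diagonal(np.linalg.inv(_H_toy)))    # accounts for them -> approx [3.2, 6.4]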
###Output
** param vals
[1.468e-01 8.368e-02 1.121e-01 2.718e-01 3.491e-01 5.952e-01 5.493e-01
4.932e-01 6.160e-01 6.321e-01 7.740e-01 8.211e-01 1.555e+00 1.340e+00
1.214e+00 2.618e+00 1.025e-01 1.983e+00 2.979e+00 2.408e+00 9.451e-01
9.215e-01 1.539e-01 1.382e-01 1.394e-01 1.461e-01 1.613e-01 1.498e-01
1.225e-01 1.693e-01 2.640e-01 1.362e-01 2.404e-01 2.992e-01 2.406e-01
2.390e-01 2.789e-01 2.131e-01 2.236e+00 1.866e+00 6.992e-04 7.032e-05
3.062e-05 5.750e-06 7.614e-07 5.490e-07]
** naive uncertainty v1 : reciprocal sqrt diagonal elements (x2)
[9.573e-02 6.967e-02 6.831e-02 2.735e-02 3.710e-02 2.945e-02 3.491e-02
2.811e-02 2.524e-02 2.281e-02 3.384e-02 3.479e-02 6.151e-02 5.289e-02
4.280e-02 4.018e-02 1.167e-02 1.811e-02 6.801e-02 4.836e-02 3.508e-02
3.383e-02 1.488e-01 1.286e-01 1.298e-01 1.307e-01 1.422e-01 9.883e-02
9.024e-02 1.074e-01 8.474e-02 6.262e-02 6.032e-02 4.986e-02 4.014e-02
3.941e-02 3.596e-02 1.393e-02 9.373e-03 8.408e-02 1.316e-05 1.275e-05
8.065e-06 5.691e-06 7.351e-07 5.476e-07]
** naive uncertainty v2 : sqrt diagonal elements of inverse (x2)
[9.960e-02 7.411e-02 7.885e-02 9.636e-02 1.363e-01 1.547e-01 1.417e-01
1.235e-01 1.592e-01 1.236e-01 1.578e-01 1.513e-01 2.762e-01 2.283e-01
1.990e-01 4.218e-01 5.281e-02 4.199e-01 5.277e-01 4.395e-01 1.941e-01
1.932e-01 1.496e-01 1.287e-01 1.300e-01 1.380e-01 1.589e-01 1.380e-01
1.045e-01 1.619e-01 2.599e-01 1.115e-01 1.727e-01 1.454e-01 1.217e-01
1.101e-01 1.054e-01 8.441e-02 1.211e-01 2.756e-01 1.948e-04 1.018e-04
5.022e-05 7.247e-06 7.380e-07 5.526e-07]
|
nbs/02_export.ipynb | ###Markdown
nbprocess.export- Exporting a notebook to a library
###Code
#export
from nbprocess.read import *
from nbprocess.maker import *
from nbprocess.imports import *
from fastcore.script import *
from fastcore.imports import *
from fastcore.xtras import *
from collections import defaultdict
from pprint import pformat
from inspect import signature,Parameter
import ast,contextlib,copy
from fastcore.test import *
from pdb import set_trace
from importlib import reload
import shutil
###Output
_____no_output_____
###Markdown
NotebookProcessor - Special comments at the start of a cell can be used to provide information to `nbprocess` about how to process a cell, so we need to be able to find the location of these comments.
###Code
minimal = read_nb('../tests/minimal.ipynb')
#export
def extract_comments(ss):
"Take leading comments from lines of code in `ss`, remove `#`, and split"
ss = ss.splitlines()
first_code = first(i for i,o in enumerate(ss) if not o.strip() or re.match('\s*[^#\s]', o))
return L((s.strip()[1:]).strip().split() for s in ss[:first_code]).filter()
###Output
_____no_output_____
###Markdown
nbprocess comments start with `#`, followed by whitespace-delimited tokens, which `extract_comments` extracts from the start of a cell, up until a blank line or a line containing something other than comments:
###Code
exp = """#export module
# hide
1+2
#bar"""
test_eq(extract_comments(exp), [['export', 'module'],['hide']])
#export
class NotebookProcessor:
"Base class for nbprocess notebook processors"
def __init__(self, path, debug=False): self.nb,self.path,self.debug = read_nb(path),Path(path),debug
###Output
_____no_output_____
###Markdown
Subclass `NotebookProcessor` to add methods to act on nbprocess comments. The method names are of the form `cmd_type`, where "`cmd`" is the first word of the nbprocess comment, and `type` is the `cell_type` of the cell (normally "`code`"). The methods must take at least `comment` and `code` as params, plus extra params for any additional words included in a comment. Here's an example that prints any word following a "print me" comment:
###Code
class _PrintExample(NotebookProcessor):
def printme_code(self, to_print): print(to_print)
###Output
_____no_output_____
###Markdown
We can create a processor by passing it a notebook:
###Code
everything_fn = '../tests/01_everything.ipynb'
proc = _PrintExample(everything_fn)
#export
@functools.lru_cache(maxsize=None)
def _param_count(f):
"Number of parameters accepted by function `f`"
params = list(signature(f).parameters.values())
# If there's a `*args` then `f` can take as many params as neede
if first(params, lambda o: o.kind==Parameter.VAR_POSITIONAL): return 99
return len([o for o in params if o.kind in (Parameter.POSITIONAL_ONLY,Parameter.POSITIONAL_OR_KEYWORD)])
###Output
_____no_output_____
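###Markdown
A quick check of `_param_count` on hypothetical functions (illustrative only, assuming the definitions above have been run): a bare `*args` makes a function count as accepting any number of positional parameters.
###Code
def _demo_f(a, b, *args): pass
test_eq(_param_count(_demo_f), 99)
test_eq(_param_count(lambda a, b=1: None), 2)
###Output
_____no_output_____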
###Markdown
The basic functionality of a notebook processor is to read and act on nbprocess comments.
###Code
#export
@patch
def process_comment(self:NotebookProcessor, cell_type, comment, idx):
cmd,*args = comment
self.comment,self.idx = comment,idx
cmd = f"{cmd}_{cell_type}"
if self.debug: print(cmd, args)
f = getattr(self, cmd, None)
if not f or _param_count(f)<len(args): return True
return f(*args)
###Output
_____no_output_____
###Markdown
Behind the scenes, `process_comment` is used to call subclass methods. It is passed the comment (as a list split on spaces), the notebook cell itself, and the index number of the comment. You can subclass this to change the behavior of a processor.
###Code
proc.process_comment("code",["printme","hello"], 0)
###Output
hello
###Markdown
If the wrong number of parameters is passed, the comment is silently ignored:
###Code
proc.process_comment("code", ["printme","hello","there"], 0)
#export
@patch
def process_cell(self:NotebookProcessor, cell):
comments = extract_comments(cell.source)
self.cell = cell
if not comments: return
keeps = [self.process_comment(cell.cell_type, comment, i)
for i,comment in enumerate(comments)]
self.cell.source = ''.join([o for i,o in enumerate(self.cell.source.splitlines(True))
if i>=len(keeps) or keeps[i]])
###Output
_____no_output_____
###Markdown
Override `process_cell` in a subclass to change how `process_comment` is called. `process_cell` updates the cell's source in place with the processed result.
###Code
cell = make_code_cell("#printme hello\n1+1")
proc.process_cell(cell);
###Output
hello
###Markdown
Cell comments that have been processed are removed from the cell source, unless the processing function returns `True`:
###Code
cell.source
#export
@patch
def process(self:NotebookProcessor):
"Process all cells with `process_cell` and replace `self.nb.cells` with result"
for i in range_of(self.nb.cells): self.process_cell(self.nb.cells[i])
proc.process()
###Output
testing
###Markdown
`NotebookProcessor.process` doesn't change a notebook or act on any comments, unless you subclass it.
###Code
everything = read_nb(everything_fn)
proc = NotebookProcessor(everything_fn)
proc.process()
for a_,b_ in zip(everything.cells, proc.nb.cells): test_eq(str(a_),str(b_))
###Output
_____no_output_____
###Markdown
ExportModuleProcessor -
###Code
#export
class ExportModuleProcessor(NotebookProcessor):
"A `NotebookProcessor` which exports code to a module"
def __init__(self, path, dest, mod_maker=ModuleMaker, debug=False):
dest = Path(dest)
store_attr()
super().__init__(path,debug=debug)
def process(self):
self.default_exp,self.modules,self.in_all = None,defaultdict(L),defaultdict(L)
super().process()
###Output
_____no_output_____
###Markdown
Specify `path` containing the source notebook, `dest` where the module(s) will be exported to, and optionally a class to use to create the module (`ModuleMaker`, by default).
###Code
proc = ExportModuleProcessor(everything_fn, 'tmp')
#export
@patch
def default_exp_code(self:ExportModuleProcessor, exp_to): self.default_exp = exp_to
###Output
_____no_output_____
###Markdown
You must include a `default_exp` comment somewhere in your notebook to show what module to export to by default.
###Code
proc.process()
test_eq(proc.default_exp, 'everything')
#export
@patch
def exporti_code(self:ExportModuleProcessor, exp_to=None):
"Export a cell, without including the definition in `__all__`"
self.modules[ifnone(exp_to, '#')].append(self.cell)
###Output
_____no_output_____
###Markdown
Exported cells are stored in a `dict` called `modules`, where the keys are the modules exported to. Those without an explicit module are stored in the `''` key, which will be exported to `default_exp`.
###Code
proc.process()
proc.modules['#']
#export
@patch
def export_code(self:ExportModuleProcessor, exp_to=None):
"Export a cell, adding the definition in `__all__`"
self.exporti_code(exp_to)
self.in_all[ifnone(exp_to, '#')].append(self.cell)
proc.process()
print(proc.in_all['some.thing'])
#export
@patch
def create_modules(self:ExportModuleProcessor):
"Create module(s) from notebook"
self.process()
for mod,cells in self.modules.items():
all_cells = self.in_all[mod]
name = self.default_exp if mod=='#' else mod
mm = self.mod_maker(dest=self.dest, name=name, nb_path=self.path, is_new=mod=='#')
mm.make(cells, all_cells)
###Output
_____no_output_____
###Markdown
Let's check we can import a test file:
###Code
shutil.rmtree('tmp')
proc = ExportModuleProcessor('../tests/00_some.thing.ipynb', 'tmp')
proc.create_modules()
g = exec_new('import tmp.some.thing')
test_eq(g['tmp'].some.thing.__all__, ['a'])
test_eq(g['tmp'].some.thing.a, 1)
###Output
_____no_output_____
###Markdown
We'll also check that our 'everything' file exports correctly:
###Code
proc = ExportModuleProcessor(everything_fn, 'tmp')
proc.create_modules()
g = exec_new('import tmp.everything; from tmp.everything import *')
_alls = L("a b d e m n o p q".split())
for s in _alls.map("{}_y"): assert s in g, s
for s in "c_y_nall _f_y_nall g_n h_n i_n j_n k_n l_n".split(): assert s not in g, s
for s in _alls.map("{}_y") + ["c_y_nall", "_f_y_nall"]: assert hasattr(g['tmp'].everything,s), s
###Output
_____no_output_____
###Markdown
That notebook should also export one extra function to `tmp.some.thing`:
###Code
del(sys.modules['tmp.some.thing']) # remove from module cache
g = exec_new('import tmp.some.thing')
test_eq(g['tmp'].some.thing.__all__, ['a','h_n'])
test_eq(g['tmp'].some.thing.h_n(), None)
#export
def nb_export(nbname, lib_name=None):
if lib_name is None: lib_name = get_config().lib_name
ExportModuleProcessor(nbname, lib_name).create_modules()
#export
@call_parse
def nbs_export(
path:str='.', # path or filename
recursive:bool=True, # search subfolders
symlinks:bool=True, # follow symlinks?
file_glob:str='*.ipynb', # Only include files matching glob
file_re:str=None, # Only include files matching regex
folder_re:str=None, # Only enter folders matching regex
skip_file_glob:str=None, # Skip files matching glob
skip_file_re:str=None, # Skip files matching regex
skip_folder_re:str='^[_.]' # Skip folders matching regex
):
if os.environ.get('IN_TEST',0): return
if not recursive: skip_folder_re='.'
files = globtastic(path, symlinks=symlinks, file_glob=file_glob, file_re=file_re,
folder_re=folder_re, skip_file_glob=skip_file_glob, skip_file_re=skip_file_re, skip_folder_re=skip_folder_re)
files.map(nb_export)
###Output
_____no_output_____
###Markdown
Export -
###Code
#skip
Path('../nbprocess/export.py').unlink(missing_ok=True)
nbs_export()
g = exec_new('import nbprocess.export')
assert hasattr(g['nbprocess'].export, 'nb_export')
###Output
_____no_output_____ |
materials/assignments/assignment_2.ipynb | ###Markdown
Assignment \2**Due:** Saturday, October 31 at 11:59 pm PT**Objective:**This assignment will give you experience using lists, logical operations, control flow (*for* loops and *if* statements), and NumPy arrays and functions.**Instructions:**1. This version of the assignment cannot be edited. To save an editable version, copy this Colab file to your individual class Google Drive folder ("OCEAN 215 - Autumn '20 - {your name}") by right clicking on this file and selecting "Move to".2. Open the version you copied.3. Complete the assignment by writing and executing text and code cells as specified. **To complete this assignment, you do not need any material beyond Lesson 7.** However, you may use material from beyond Lesson 7 if you wish as long as it has been discussed in a lesson or class, and is not prohibited in the question.4. When you're finished and are ready to submit the assignment, simply save the Colab file ("File" menu โ> "Save") before the deadline, close the file, and keep it in your individual class Google Drive folder.5. If you need more time, please see the section "Late work policy" in the syllabus for details.**Honor code:** In the space below, you can acknowledge and describe any assistance you've received on this assignment, whether that was from an instructor, classmate (either directly or on Piazza), and/or online resources other than official Python documentation websites like docs.python.org or numpy.org. Alternatively, if you prefer, you may acknowledge assistance at the relevant point(s) in your code using a Python comment (). You do not have to acknowledge OCEAN 215 class or lesson resources. *Acknowledge assistance here:* Question 1 (5 points) ***For* loops and list comprehensions***Useful resources:* Lesson 3 on list functions and indexing; Lesson 4 on *for* loops and list comprehensionsFor this question, you are given a list called *decimals* that contains 5 floating-point numbers:* decimals = \[0.0003, 0.2342, 0.5629, 0.6376, 0.9731]Please create a new list in which these decimal fractions have been first converted to percents (i.e. between 0-100%), then rounded to 1 decimal place, and finally converted to strings with a percent sign (%) on the end. In other words, your final result should be the following list:* percents = \['0.0%', '23.4%', '56.3%', '63.8%', '97.3%']This question has two parts, which should both result in a list identical to *percents* above:1. Create this new list using a *for* loop. Print the list.2. Create this new list using a list comprehension. Print the list.To round a number, you should use Python's *round()* function, which takes two arguments: the number, and the number of decimal places to round to. For example, *round(23.4173, 2)* gives 23.42 as the result.
###Code
# Keep this starting line of code:
decimals = [0.0003,0.2342,0.5629,0.6376,0.9731]
# Write your code for Part 1 here:
# Write your code for Part 2 here:
###Output
_____no_output_____
###Markdown
Question 2 (15 points)*Image: Bathymetry of the seafloor around Axial Seamount (source: Wikipedia, originally from NOAA/PMEL).* **Axial Seamount eruption***Useful resources:* Lesson 3 on list indexing; Lesson 4 on *for* loops and *if* statements; Lesson 7 on basic plotsDuring the eruption of a submarine volcano, the seafloor deflates rapidly as magma is expelled into the ocean. Bottom pressure gauges use pressure to measure depth (as you might recall from Assignment 1, pressure is nearly equivalent to depth in the ocean), and so these gauges can detect this deflation. After an eruption, the seafloor slowly inflates and rises as magma accumulates in preparation for the next eruption.Axial Seamount is an active submarine volcano offshore of the Oregon coast. It last erupted in 2015. Below, we provide a time series of monthly bottom pressure data released by the [Ocean Observatories Initiative \(OOI\)](https://dataexplorer.oceanobservatories.org/default-data/4) from August 15, 2014 to December 15, 2019, with measurements taken on the 15th day of each month. Times are specified in "fractional years." For example, July 1, 2020 is approximately 2020.5 in fractional years.**Please write only one *for* loop for this entire question. Do not use NumPy.** Organize your code for Parts 1-3 at the top, followed by the *print()* statements for your answers, followed by your code for the Part 4 plot at the bottom. Write code to answer the following questions:1. The seafloor dropped more than 2.0 m during the 2015 eruption. This can be used to identify when the eruption happened. Answer these questions using a *print()* statement, e.g. "Part 1a: The seafloor drop during the eruption was \ m."> **a.** Calculate the actual seafloor drop during the eruption. Round to 1 decimal place (see Question 1 for details on *round()*).>> **b.** Identify the two measurement times (in fractional years) immediately before and after the eruption.>> **c.** Find the seafloor depth measurement immediately before the eruption. Round to 1 decimal place.>2. The seafloor has been steadily inflating since the 2015 eruption. Express answers to the following questions in meters and fractional years, respectively, each rounded to 1 decimal place:> **a.** How much has the seafloor inflated between the measurement immediately after the eruption and the most recent measurement in this data?>> **b.** How much time has elapsed between these two measurements? >3. Calculate the approximate rate of seafloor inflation using the values from Part 2. Express your answer in units of centimeters per month, rounded to 1 decimal place.4. Use Matplotlib to create a plot of the data with time on the x-axis and depth on the y-axis. Shallower depths should point in the positive y-direction (you will need to search the [Matplotlib API](https://matplotlib.org/3.3.1/api/axes_api.html) to find out how to do this). **Format and label your plot such that it looks as similar to this plot as possible**:
###Code
# Keep these lines of code:
#
# Axial Seamount Central Caldera bottom pressure gauge data
# (times in fractional years; depths in meters)
times = [2014.625,2014.708,2014.792,2014.875,2014.958,2015.042,2015.125,2015.208,2015.292,2015.375,2015.458,2015.542,2015.625,2015.708,2015.792,2015.875,2015.958,2016.042,2016.125,2016.208,2016.292,2016.375,2016.458,2016.542,2016.625,2016.708,2016.792,2016.875,2016.958,2017.042,2017.125,2017.208,2017.292,2017.375,2017.458,2017.542,2017.625,2017.708,2017.792,2017.875,2017.958,2018.042,2018.125,2018.208,2018.292,2018.375,2018.458,2018.542,2018.625,2018.708,2018.792,2018.875,2018.958,2019.042,2019.125,2019.208,2019.292,2019.375,2019.458,2019.542,2019.625,2019.708,2019.792,2019.875,2019.958]
depths = [1510.24,1510.19,1510.14,1510.08,1510.04,1509.96,1509.93,1509.87,1509.83,1512.23,1512.11,1512.04,1511.96,1511.89,1511.79,1511.7,1511.64,1511.57,1511.54,1511.5,1511.47,1511.47,1511.4,1511.35,1511.33,1511.32,1511.31,1511.26,1511.21,1511.19,1511.14,1511.12,1511.05,1511.0,1511.06,1511.04,1511.01,1510.98,1510.93,1510.87,1510.87,1510.91,1510.86,1510.77,1510.75,1510.7,1510.67,1510.68,1510.64,1510.59,1510.56,1510.54,1510.45,1510.51,1510.41,1510.42,1510.39,1510.37,1510.4,1510.42,1510.41,1510.38,1510.33,1510.3,1510.26]
# Your code for Parts 1-3:
# Your print() statements for Parts 1-3:
# Your code to make the plot for Part 4:
###Output
_____no_output_____
###Markdown
Question 3 (20 points)*Image: The boundary between air and sea as seen from a research vessel.* **Air-sea heat fluxes***Useful resources:* Lesson 2 on mathematical operations, Lesson 3 on list indexing; Lesson 4 on *for* loops and *if* statements; Lesson 5 on NumPy arrays and functions**Background information:**A body of water will lose heat when exposed to cooler air, and it will lose heat faster when there is airflow over it. Think about how you might cool a cup of coffee or tea by blowing on it. The ocean behaves similarly."Air-sea heat flux" refers to the rate of this exchange of heat between the ocean surface and atmosphere. When the ocean is warmer than the air, the ocean loses heat. When the ocean is cooler than the air, the ocean gains heat. This exchange happens more rapidly when wind speeds are higher. Here is the equation that describes this heat flux. Notice that this rate of heat exchange, $H$, becomes larger if the wind speed, $U_{10}$, becomes larger:$H = \rho_a \cdot c_p \cdot C_h \cdot U_{10} \cdot (T_a - T_s)$* $H$ = air-to-sea heat flux (units: Watts/m$^2$, or W/m$^2$, where positive values represent heat moving from the air into the ocean)* $\rho_a$ = air density (1.2 kg/m$^3$)* $c_p$ = heat capacity of air (1004 J/(kg ยฐC))* $C_h$ = transfer coefficient (0.0011 \[unitless])* $U_{10}$ = wind speed 10 m above the ocean (units: m/s)* $T_a$ = air temperature 10 m above the ocean (units: ยฐC)* $T_s$ = sea surface temperature (units: ยฐC)The Irminger Sea in the North Atlantic, near Greenland, experiences some of the most extreme air-sea heat fluxes in the world. Below, we provide three lists containing monthly-average data for $T_a$, $T_s$, and $U_{10}$ from 2018 in the Irminger Sea. Here is a plot showing these monthly data:**Questions:**Answer each question below using a *print()* statement, e.g. "Part 1: The air-sea heat flux in October is \ W/m^2." For these parts, you may use *for* loops, but **not NumPy**:1. What is the air-sea heat flux, $H$, in October? Round your answer to 1 decimal place.2. Is the ocean gaining or losing heat in October? Don't do any calculations for this part; just give your answer in a *print()* statement.3. Use a *for* loop to calculate $H$ for each month, and round each value to 1 decimal place. These values should be stored in a list. Print the list, e.g. "Part 3: The monthly air-sea heat fluxes are: \{list of $H$ values} ."4. During which month is ocean heat loss at a maximum? Identify and print this month using **code only**. In other words, use a *for* loopย โย don't identify the month simply by looking at the data, and don't type the name of any months for this part.5. What are the two reasons that ocean heat loss was greatest in this month? You may wish to consult the plot and/or the $H$ equation above. There is no need to do calculations for this part; just type a short answer in a *print()* statement.6. Use a *for* loop to calculate the average $H$ value in 2018. Round your answer to 1 decimal place.For these next calculations, you may use NumPy, but **not *for* loops**:7. Convert the lists of $T_a$, $T_s$, and $U_{10}$ data to NumPy arrays. Use new variable names for these arrays. There is no need to print anything for this part.8. Calculate $H$ using the NumPy arrays from Part 7. Round the NumPy array of $H$ values to 1 decimal place using NumPy's [*.round()*](https://numpy.org/doc/stable/reference/generated/numpy.round_.html) function. 
Print the result, which should be a new NumPy array with a variable name different from the $H$ list you created in Part 3. If you did everything correctly, the numbers in this array will be the same as the numbers in the $H$ list from Part 3.9. Answer the same question from Part 4 using the NumPy array of $H$ values from Part 8. For this, use either NumPy's [*argmax()*](https://numpy.org/doc/stable/reference/generated/numpy.argmax.html) or [*argmin()*](https://numpy.org/doc/stable/reference/generated/numpy.argmin.html) function. As before, identify and print this month using code only. You should get the same answer as you did in Part 4.10. Calculate the average $H$ value in 2018 using the NumPy array of $H$ values from Part 8. For this, use NumPy's [*mean()*](https://numpy.org/doc/stable/reference/generated/numpy.mean.html) function. Round your answer to 1 decimal place. You should get the same answer as you did in Part 6.
###Code
# Keep these lines of code:
#
# Monthly ERA-Interim data for the North Atlantic (Irminger Sea at 59.5°N, 40.0°W) from 2018
months = ['January','February','March','April','May','June','July','August','September','October','November','December']
U_10 = [12.91,14.63,8.58,9.32,10.88,7.73,7.15,8.09,8.49,9.77,11.34,9.14]
T_a = [0.55,-1.95,1.52,2.81,2.71,5.13,7.06,8.19,7.08,4.56,3.68,2.61]
T_s = [4.18,3.86,3.72,3.91,4.35,5.29,6.76,8.21,7.37,6.24,5.09,4.42]
# Your code for Part 1:
# Your code for Part 3:
# Your code for Part 4:
# Your code for Part 6:
# Your code for Part 7:
# Your code for Part 8:
# Your code for Part 9:
# Your code for Part 10:
# Your print() statements for Parts 1-6 and 8-10:
###Output
_____no_output_____
###Markdown
Question 4 (10 points)*Image: The research vessel R/V Polarstern.* **Thermosalinograph time series***Useful resources:* Lesson 5 on NumPy arrays and functions, Lesson 6 on multidimensional arrays and datetime objectsDuring an oceanographic cruise, research vessels will often have an instrument called a thermosalinograph taking temperature measurements throughout the duration of the cruise. To get the most accurate picture of the surrounding conditions, there are usually 2 different temperature sensors in the water, situated at different depths. For this problem you are provided two NumPy arrays of sample data from a cruise in 2003 aboard the [R/V Polarstern](https://www.awi.de/en/expedition/ships/polarstern.html).**Data arrays:*** **T_data** contains 4 columns (longitude, latitude, temperature [˚C] at 5 meters, and temperature [˚C] at 11 meters)* **time_data** contains a single dimension with strings containing the date/time information for the temperature measurements with the format %Y-%m-%d %H:%M:%S**Questions:**Answer each question below using a *print()* statement, e.g. "Part 1a: The measurements were taken over \ hours."1. Create a new 1-dimensional array with the date strings in the **time_data** array converted into datetime objects. Answer the following:> **a.** Over how many hours were these measurements taken?>> **b.** What is the frequency of these measurements (e.g. how often did measurements occur)? You can assume that the measurements were all collected at the same frequency.>2. Find the maximum and minimum values of latitude and longitude in **T_data**.3. Create and print a new 1-dimensional array containing temperatures averaged between 5 meters and 11 meters for each measurement time. If ocean temperatures vary linearly between these two depths, what is the approximate depth in the ocean that these average temperatures represent?4. Reshape the average temperature array from Part 3 into a 2-dimensional array with the same number of rows as the number of hours that these measurements span (your solution from Part 1a). Calculate and print a new array that represents the average temperature in each row of the reshaped array. (Notice that this is equivalent to the hourly average temperature!) Data is contained in the cell below. *Make sure to run the cell below before running your solution cell. Please do not alter the data arrays. Write your code and answers in the cell at the bottom, not this cell.*
###Code
# Do not alter the code in this cell
import numpy as np
# Data in this array consists of 4 columns:
# Latitude, longitude, T at 5 m (˚C), T at 11 m (˚C)
T_data = np.array([[51.7439,2.4476,14.726,14.736],[51.7147,2.4071,14.746,14.756],[51.6851,2.3664,14.796,14.816],[51.6561,2.3254,14.856,14.866],
[51.627,2.2854,14.866,14.876],[51.5981,2.2454,14.896,14.916],[51.5689,2.2055,14.936,14.946],[51.5404,2.1661,14.946,14.956],
[51.5122,2.127,14.936,14.946],[51.4831,2.087,14.956,14.966],[51.4545,2.0478,15.016,15.026],[51.4271,2.01,15.106,15.116],
[51.3959,1.9686,15.136,15.146],[51.3635,1.9252,15.086,15.086],[51.3304,1.8848,14.826,14.826],[51.2986,1.8437,14.616,14.626],
[51.2679,1.8036,14.527,14.547],[51.2371,1.7642,14.636,14.646],[51.207,1.7255,14.666,14.686],[51.1782,1.6886,14.766,14.786],
[51.1497,1.6519,14.736,14.756],[51.1215,1.6156,14.716,14.726],[51.0984,1.581,14.656,14.666],[51.077,1.5485,14.567,14.577],
[51.0586,1.5198,14.467,14.477],[51.0354,1.4841,14.247,14.257],[51.0088,1.4431,14.117,14.147],[50.9829,1.4033,14.307,14.327],
[50.957,1.3635,14.337,14.347],[50.9314,1.324,14.307,14.327],[50.9077,1.2801,14.327,14.337],[50.8867,1.2301,14.207,14.217],
[50.8654,1.1789,14.157,14.177],[50.8436,1.1266,14.167,14.187],[50.8213,1.0736,14.137,14.157],[50.7988,1.0196,14.257,14.277],
[50.776,0.9649,14.437,14.447],[50.7527,0.9096,14.626,14.646],[50.7295,0.8538,14.796,14.806],[50.7059,0.7976,14.836,14.846],
[50.6826,0.7407,14.806,14.816],[50.6626,0.6806,14.806,14.816],[50.6388,0.6227,14.826,14.836],[50.615,0.5641,14.826,14.836],
[50.6005,0.4986,14.786,14.796],[50.5881,0.4317,14.786,14.786],[50.5756,0.3649,14.756,14.766],[50.5632,0.2975,14.826,14.836],
[50.5509,0.2306,14.886,14.896],[50.5386,0.1641,15.006,15.016],[50.5263,0.0974,15.176,15.186],[50.5138,0.0313,15.196,15.196],
[50.5018,-0.0345,15.186,15.196],[50.4897,-0.0997,15.286,15.296],[50.4778,-0.1644,15.346,15.356],[50.466,-0.2284,15.386,15.396],
[50.454,-0.2916,15.376,15.386],[50.4426,-0.3536,15.366,15.376],[50.4313,-0.4153,15.416,15.416],[50.4168,-0.4275,15.456,15.466],
[50.409,-0.4882,15.436,15.446],[50.4017,-0.5474,15.466,15.476],[50.3933,-0.6047,15.426,15.426],[50.3796,-0.6583,15.396,15.406],
[50.3668,-0.7114,15.396,15.406],[50.3524,-0.763,15.396,15.406],[50.3396,-0.8151,15.396,15.406],[50.3288,-0.8668,15.476,15.486],
[50.3223,-0.9188,15.556,15.566],[50.316,-0.97,15.616,15.636],[50.3092,-1.0191,15.696,15.706],[50.3024,-1.0675,15.746,15.756]])
# Data in this array has only one dimension:
# Date/Time (Y-m-d H:M:S) string
time_data = np.array(['2003-10-23 06:25:00','2003-10-23 06:35:00',
'2003-10-23 06:45:00','2003-10-23 06:55:00','2003-10-23 07:05:00',
'2003-10-23 07:15:00','2003-10-23 07:25:00','2003-10-23 07:35:00',
'2003-10-23 07:45:00','2003-10-23 07:55:00','2003-10-23 08:05:00',
'2003-10-23 08:15:00','2003-10-23 08:25:00','2003-10-23 08:35:00',
'2003-10-23 08:45:00','2003-10-23 08:55:00','2003-10-23 09:05:00',
'2003-10-23 09:15:00','2003-10-23 09:25:00','2003-10-23 09:35:00',
'2003-10-23 09:45:00','2003-10-23 09:55:00','2003-10-23 10:05:00',
'2003-10-23 10:15:00','2003-10-23 10:25:00','2003-10-23 10:35:00',
'2003-10-23 10:45:00','2003-10-23 10:55:00','2003-10-23 11:05:00',
'2003-10-23 11:15:00','2003-10-23 11:25:00','2003-10-23 11:35:00',
'2003-10-23 11:45:00','2003-10-23 11:55:00','2003-10-23 12:05:00',
'2003-10-23 12:15:00','2003-10-23 12:25:00','2003-10-23 12:35:00',
'2003-10-23 12:45:00','2003-10-23 12:55:00','2003-10-23 13:05:00',
'2003-10-23 13:15:00','2003-10-23 13:25:00','2003-10-23 13:35:00',
'2003-10-23 13:45:00','2003-10-23 13:55:00','2003-10-23 14:05:00',
'2003-10-23 14:15:00','2003-10-23 14:25:00','2003-10-23 14:35:00',
'2003-10-23 14:45:00','2003-10-23 14:55:00','2003-10-23 15:05:00',
'2003-10-23 15:15:00','2003-10-23 15:25:00','2003-10-23 15:35:00',
'2003-10-23 15:45:00','2003-10-23 15:55:00','2003-10-23 16:05:00',
'2003-10-23 16:15:00','2003-10-23 16:25:00','2003-10-23 16:35:00',
'2003-10-23 16:45:00','2003-10-23 16:55:00','2003-10-23 17:05:00',
'2003-10-23 17:15:00','2003-10-23 17:25:00','2003-10-23 17:35:00',
'2003-10-23 17:45:00','2003-10-23 17:55:00','2003-10-23 18:05:00',
'2003-10-23 18:15:00','2003-10-23 18:25:00'])
###Output
_____no_output_____
###Markdown
Write your code and answers in the cell below:
###Code
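# A sketch of one possible solution (illustrative, not an official answer key).
# It assumes the data cell above has been run, so T_data, time_data, and np are available.
from datetime import datetime

dt = np.array([datetime.strptime(t, '%Y-%m-%d %H:%M:%S') for t in time_data])

# Part 1a/1b: total span in hours and the sampling interval in minutes
hours = (dt[-1] - dt[0]).total_seconds() / 3600
step_minutes = (dt[1] - dt[0]).total_seconds() / 60
print('Part 1a: The measurements were taken over {} hours.'.format(hours))
print('Part 1b: Measurements occurred every {} minutes.'.format(step_minutes))

# Part 2: latitude / longitude extremes (first two columns of T_data)
print('Part 2: latitude {} to {}, longitude {} to {}'.format(
    T_data[:, 0].min(), T_data[:, 0].max(), T_data[:, 1].min(), T_data[:, 1].max()))

# Part 3: mean of the 5 m and 11 m temperatures; if temperature varies linearly with
# depth, this average represents roughly 8 m (the midpoint of the two sensor depths).
T_avg = T_data[:, 2:4].mean(axis=1)
print('Part 3: {}'.format(T_avg))

# Part 4: one row per hour of data, then the mean of each row (the hourly average)
T_hourly = T_avg.reshape(int(hours), -1).mean(axis=1)
print('Part 4: {}'.format(T_hourly))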
###Output
_____no_output_____ |
samples/jupyter-notebooks/Linear Regression Algorithms Demo.ipynb | ###Markdown
Linear Regression Algorithms using Apache SystemML. This notebook shows:- Install SystemML Python package and jar file - pip - SystemML 'Hello World'- Example 1: Matrix Multiplication - SystemML script to generate a random matrix, perform matrix multiplication, and compute the sum of the output - Examine execution plans, and increase data size to observe changed execution plans- Load diabetes dataset from scikit-learn- Example 2: Implement three different algorithms to train linear regression model - Algorithm 1: Linear Regression - Direct Solve (no regularization) - Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization) - Algorithm 3: Linear Regression - Conjugate Gradient (no regularization)- Example 3: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API- Example 4: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline-like API- Uninstall/Clean up SystemML Python package and jar file Install SystemML Python package and jar file
###Code
!pip uninstall systemml --y
!pip install --user https://repository.apache.org/content/groups/snapshots/org/apache/systemml/systemml/1.0.0-SNAPSHOT/systemml-1.0.0-20171201.070207-23-python.tar.gz
!pip show systemml
###Output
_____no_output_____
###Markdown
Import SystemML API
###Code
from systemml import MLContext, dml, dmlFromResource
ml = MLContext(sc)
print "Spark Version:", sc.version
print "SystemML Version:", ml.version()
print "SystemML Built-Time:", ml.buildTime()
ml.execute(dml("""s = 'Hello World!'""").output("s")).get("s")
###Output
_____no_output_____
###Markdown
Import numpy, sklearn, and define some helper functions
###Code
import matplotlib.pyplot as plt
import numpy as np
from sklearn import datasets
plt.switch_backend('agg')
###Output
_____no_output_____
###Markdown
Example 1: Matrix Multiplication SystemML script to generate a random matrix, perform matrix multiplication, and compute the sum of the output
###Code
script = """
X = rand(rows=$nr, cols=1000, sparsity=0.5)
A = t(X) %*% X
s = sum(A)
"""
###Output
_____no_output_____
###Markdown
ml.setStatistics(False) ml.setExplain(True).setExplainLevel("runtime")
###Code
prog = dml(script).input('$nr', 1e5).output('s')
s = ml.execute(prog).get('s')
print (s)
###Output
_____no_output_____
###Markdown
Load diabetes dataset from scikit-learn
###Code
%matplotlib inline
diabetes = datasets.load_diabetes()
diabetes_X = diabetes.data[:, np.newaxis, 2]
diabetes_X_train = diabetes_X[:-20]
diabetes_X_test = diabetes_X[-20:]
diabetes_y_train = diabetes.target[:-20].reshape(-1,1)
diabetes_y_test = diabetes.target[-20:].reshape(-1,1)
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
diabetes.data.shape
###Output
_____no_output_____
###Markdown
Example 2: Implement three different algorithms to train linear regression model Algorithm 1: Linear Regression - Direct Solve (no regularization) Least squares formulation: $w^* = \arg\min_w \|Xw-y\|^2 = \arg\min_w (y - Xw)'(y - Xw)$, which (dropping the constant term) is equivalent to minimizing $\frac{1}{2}w'(X'X)w - w'(X'y)$. Setting the gradient $dw = (X'X)w - (X'y)$ to 0 gives $w = (X'X)^{-1}(X'y) = \mathrm{solve}(X'X, X'y)$
###Code
script = """
# add constant feature to X to model intercept
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
A = t(X) %*% X
b = t(X) %*% y
w = solve(A, b)
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
w, bias = ml.execute(prog).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='blue', linestyle ='dotted')
###Output
_____no_output_____
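For comparison, the same normal-equations solve can be written in a few lines of NumPy. This is an illustrative sketch, not part of the original SystemML workflow; the names `Xb`, `w_np`, and `bias_np` are placeholders introduced here.

```python
import numpy as np

# Append a constant column so the intercept is learned as the last weight.
Xb = np.hstack([diabetes_X_train, np.ones((diabetes_X_train.shape[0], 1))])

# Solve (X'X) w = X'y, exactly as the DML script above does.
w_full = np.linalg.solve(Xb.T.dot(Xb), Xb.T.dot(diabetes_y_train))
w_np, bias_np = w_full[:-1], w_full[-1, 0]
```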
###Markdown
Algorithm 2: Linear Regression - Batch Gradient Descent (no regularization) Algorithm: `Step 1: Start with an initial point while(not converged) { Step 2: Compute gradient dw. Step 3: Compute stepsize alpha. Step 4: Update: w_new = w_old + alpha*dw }` Gradient formula: `dw = r = (X'X)w - (X'y)` Step size formula: `Find the number alpha that minimizes f(w + alpha*r): alpha = -(r'r)/(r'X'Xr)`
###Code
script = """
# add constant feature to X to model intercepts
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
max_iter = 100
w = matrix(0, rows=ncol(X), cols=1)
for(i in 1:max_iter){
XtX = t(X) %*% X
dw = XtX %*%w - t(X) %*% y
alpha = -(t(dw) %*% dw) / (t(dw) %*% XtX %*% dw)
w = w + dw*alpha
}
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
w, bias = ml.execute(prog).get('w', 'bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
###Output
_____no_output_____
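The same batch gradient descent loop translates almost line for line into NumPy. Again, this is an illustrative sketch with placeholder names (`Xb`, `w_gd`), not part of the original notebook.

```python
import numpy as np

Xb = np.hstack([diabetes_X_train, np.ones((diabetes_X_train.shape[0], 1))])
XtX, Xty = Xb.T.dot(Xb), Xb.T.dot(diabetes_y_train)
w_gd = np.zeros((Xb.shape[1], 1))
for _ in range(100):
    dw = XtX.dot(w_gd) - Xty                                      # gradient at the current weights
    alpha = -float(dw.T.dot(dw)) / float(dw.T.dot(XtX).dot(dw))   # exact line-search step size
    w_gd = w_gd + alpha * dw
```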
###Markdown
Algorithm 3: Linear Regression - Conjugate Gradient (no regularization) Problem with gradient descent: Takes very similar directions many timesSolution: Enforce conjugacy`Step 1: Start with an initial point while(not converged) { Step 2: Compute gradient dw. Step 3: Compute stepsize alpha. Step 4: Compute next direction p by enforcing conjugacy with previous direction. Step 4: Update: w_new = w_old + alpha*p}`
###Code
script = """
# add constant feature to X to model intercepts
X = cbind(X, matrix(1, rows=nrow(X), cols=1))
m = ncol(X); i = 1;
max_iter = 20;
w = matrix (0, rows = m, cols = 1); # initialize weights to 0
dw = - t(X) %*% y; p = - dw; # dw = (X'X)w - (X'y)
norm_r2 = sum (dw ^ 2);
for(i in 1:max_iter) {
q = t(X) %*% (X %*% p)
alpha = norm_r2 / sum (p * q); # Minimizes f(w - alpha*r)
w = w + alpha * p; # update weights
dw = dw + alpha * q;
old_norm_r2 = norm_r2; norm_r2 = sum (dw ^ 2);
p = -dw + (norm_r2 / old_norm_r2) * p; # next direction - conjugacy to previous direction
i = i + 1;
}
bias = as.scalar(w[nrow(w),1])
w = w[1:nrow(w)-1,]
"""
prog = dml(script).input(X=diabetes_X_train, y=diabetes_y_train).output('w', 'bias')
w, bias = ml.execute(prog).get('w','bias')
w = w.toNumPy()
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w*diabetes_X_test)+bias, color='red', linestyle ='dashed')
###Output
_____no_output_____
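For readers following along in NumPy, here is an illustrative translation of the conjugate-gradient cell above; the names `Xb`, `w_cg`, `r`, and `p` are placeholders introduced here.

```python
import numpy as np

Xb = np.hstack([diabetes_X_train, np.ones((diabetes_X_train.shape[0], 1))])
w_cg = np.zeros((Xb.shape[1], 1))
r = -Xb.T.dot(diabetes_y_train)           # gradient at w = 0
p = -r
norm_r2 = float(r.T.dot(r))
for _ in range(20):
    q = Xb.T.dot(Xb.dot(p))
    alpha = norm_r2 / float(p.T.dot(q))
    w_cg = w_cg + alpha * p               # take the step along the current direction
    r = r + alpha * q                     # update the gradient/residual
    old_norm_r2, norm_r2 = norm_r2, float(r.T.dot(r))
    p = -r + (norm_r2 / old_norm_r2) * p  # enforce conjugacy with the previous direction
```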
###Markdown
Example 3: Invoke existing SystemML algorithm script LinearRegDS.dml using MLContext API
###Code
prog = dmlFromResource('scripts/algorithms/LinearRegDS.dml').input(X=diabetes_X_train, y=diabetes_y_train).input('$icpt',1.0).output('beta_out')
w = ml.execute(prog).get('beta_out')
w = w.toNumPy()
bias=w[1]
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, (w[0]*diabetes_X_test)+bias, color='red', linestyle ='dashed')
###Output
_____no_output_____
###Markdown
Example 4: Invoke existing SystemML algorithm using scikit-learn/SparkML pipeline-like API The *mllearn* API allows a Python programmer to invoke SystemML's algorithms using a scikit-learn-like API as well as Spark's MLPipeline API.
###Code
from pyspark.sql import SQLContext
from systemml.mllearn import LinearRegression
sqlCtx = SQLContext(sc)
regr = LinearRegression(sqlCtx)
# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)
predictions = regr.predict(diabetes_X_test)
# Use the trained model to perform prediction
%matplotlib inline
plt.scatter(diabetes_X_train, diabetes_y_train, color='black')
plt.scatter(diabetes_X_test, diabetes_y_test, color='red')
plt.plot(diabetes_X_test, predictions, color='black')
###Output
_____no_output_____
###Markdown
Uninstall/Clean up SystemML Python package and jar file
###Code
!pip uninstall systemml --y
###Output
_____no_output_____ |
lessons/04-working-with-libraries/03e-exercise-soultion.ipynb | ###Markdown
Exercise Solution: Exploring Data With Jupyter, Pandas, and Matplotlib Fact Finding: Find the answer to each of these questions:* What was the most expensive property sold in the dataset?* How many sales were for less than $10? * How could this possibly be right? (Hint: read the data documentation on Kaggle...)* How many of the properties sold were built prior to 1950?* What is the smallest gross square feet property sold? * What was the largest?* Which zip code had the fewest number of sales?
###Code
# Before we can start, we should import the libraries we're going to use
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# Then load the data
path_to_ny_sales = '../../datasets/nyc-property/nyc-rolling-sales.csv'
sales_df = pd.read_csv(path_to_ny_sales)
sales_df.head()
# To make things cleaner, I'm also going to drop rows that have a missing:
# price, gross square feet, land square feet, or year built.
# This code is in the example notebook
columns_to_convert = [
'LAND SQUARE FEET',
'GROSS SQUARE FEET',
'SALE PRICE',
'YEAR BUILT'
]
for column_name in columns_to_convert:
sales_df[column_name] = pd.to_numeric(sales_df[column_name], errors='coerce')
sales_df = sales_df[sales_df[column_name].notna()]
sales_df.describe()
# What was the most expensive property sold?
# Actually, we can see this in the information above from the .describe() function!
# Max sale price was 2.210000e+09 aka 2.21 BILLION DOLLARS!!
# But you can also find it this way:
sales_df['SALE PRICE'].max()
# Relevant documentation: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.max.html
# How many properties sold were built prior to 1950?
# The easiest way to do this is filter the dataframe, then count the rows.
before_1950 = sales_df[sales_df['YEAR BUILT'] < 1950]
print(len(before_1950))
# Note, you could also use the "count" function, though it gives more information than we need.
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.count.html
before_1950.count()
# Smallest and largest gross square feet
# Once again, this information was in the "describe()" output
# Smallest: 0 square feet (weird right?)
# Largest: 3.750565e+06 aka 3,750,565 square feet (HOLY S***)
# Again, you could also find these values using .min() and .max() on the proper columns:
print(sales_df['GROSS SQUARE FEET'].min())
print(sales_df['GROSS SQUARE FEET'].max())
# But, inquiring minds want to know! Let's find out what the smallest non-zero property is
# I imagine any 0 values are more likely "missing" than being a property that actually
# doesn't have a size...
non_zero_gross_sq_feet = sales_df[sales_df['GROSS SQUARE FEET'] != 0]
non_zero_gross_sq_feet['GROSS SQUARE FEET'].min() # 60. Wow, that's a small property.
# Which zip code had the fewest number of sales?
# The easiest way to do this is to use the .value_counts() function on the ZIP CODE column.
# This will tell us how many times each zip code appears in the overall data.
# Documentation: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.value_counts.html
print(sales_df['ZIP CODE'].value_counts())
# The data is returned in sorted order based on the count.
# So, zip code 11201 has the most sales: 1324
# and 10803 only has 1 sale.
# We also learned that there are 180 zip codes represented in this dataset.
###Output
11201 1324
11235 1312
11234 1165
11229 916
11215 899
...
10167 1
11005 1
10006 1
10044 1
10803 1
Name: ZIP CODE, Length: 180, dtype: int64
1324
###Markdown
Chart Making: Create the following charts:* A bar chart showing how many properties were sold in each borough. * Use the data documentation to find the names of each borough rather than the 1-5 values.* A pie chart showing the share of sales by borough. * Use the data documentation to find the names of each borough rather than the 1-5 values.* A bar chart showing the average (mean) sale price of property in each zip code.* A scatterplot showing the sales price by the gross square feet. * **Bonus points**: show the least squares regression line as well!
###Code
# A bar chart showing how many properties were sold in each borough.
# We can use value_counts for this too:
sales_by_borough = sales_df['BOROUGH'].value_counts()
# Replace the numbered boroughs with their names:
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.rename.html
sales_by_borough.rename(index={
1: 'Manhattan',
2: 'Bronx',
3: 'Brooklyn',
4: 'Queens',
5: 'Staten Island'
},
inplace=True
)
# Make the plot using pandas!
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.bar.html
sales_by_borough.plot.bar()
# We already have the data, so a pie chart is very easy to make (as long as you know what function to use):
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.pie.html
sales_by_borough.plot.pie()
# A barchart showing the average (mean) sale price of property in each zip code.
# This one is a bit trickier... we need to group the data based on the zip code:
# https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.groupby.html
sales_and_zip = sales_df[['ZIP CODE', 'SALE PRICE']]
sales_grouped_by_zip = sales_and_zip.groupby(['ZIP CODE']).mean()
sales_grouped_by_zip.plot.bar(figsize=(100, 20))
# Hmmmm... that's not very interesting or usable.
# 180 values is just too many, and the range of sale prices is also too broad.
sales_and_zip = sales_df[['ZIP CODE', 'SALE PRICE']]
sales_grouped_by_zip = sales_and_zip.groupby(['ZIP CODE']).mean()
# Now, lets sort it by average sale price and just display the top 15
sorted_sales_by_zip = sales_grouped_by_zip.sort_values(by=['SALE PRICE'], ascending=False)
top_15 = sorted_sales_by_zip[0:15]
top_15.plot.bar()
# Fun fact, the 10167 zip code is ONE CITY BLOCK on Park Ave.
# (https://www.zip-codes.com/zip-code/10167/zip-code-10167.asp)
# https://en.wikipedia.org/wiki/245_Park_Avenue
# Okay, one more: let's leave out the 245 Park Avenue sale
# and look at the next top 15 zips to get a better picture overall:
sorted_sales_by_zip[1:16].plot.bar()
# Finally, A scatterplot showing the sales price by the gross square feet.
sales_df.plot.scatter(x='GROSS SQUARE FEET', y='SALE PRICE')
# To get the regression line we need to do a bit more work and use some lower level libraries directly
# pyplot plot function: https://matplotlib.org/3.2.1/api/_as_gen/matplotlib.pyplot.plot.html
# numpy polyfit: https://numpy.org/doc/1.18/reference/generated/numpy.polyfit.html
x = sales_df['GROSS SQUARE FEET']
y = sales_df['SALE PRICE']
slope, y_intercept = np.polyfit(x, y, 1) # one is for "first degree polynomial" aka, a line.
# Plot the scatter, then the line, then show the plot:
plt.plot(x, y, 'o') # 'o' is for "dots"
plt.plot(x, y_intercept + slope * x, '-') # '-' is for "line"
###Output
393539.4820124962 207.02811915510165
|
12. kNN (1).ipynb | ###Markdown
Chapter 12. k-Nearest Neighbors (kNN) 1. Voting functions for the k-NN classifier
###Code
from typing import List
from collections import Counter
###Output
_____no_output_____
###Markdown
Plurality voting: the label with the most votes wins
###Code
def raw_majority_vote(labels: List[str]) -> str:
votes = Counter(labels)
winner, _ = votes.most_common(1)[0]
return winner
###Output
_____no_output_____
###Markdown
If there is a tie, reduce k by one and vote again
###Code
def majority_vote(labels: List[str]) -> str:
"""Assumes that labels are ordered from nearest to farthest."""
vote_counts = Counter(labels)
winner, winner_count = vote_counts.most_common(1)[0]
num_winners = len([count
for count in vote_counts.values()
if count == winner_count])
if num_winners == 1:
return winner # unique winner, so return it
else:
return majority_vote(labels[:-1]) # try again without the farthest
###Output
_____no_output_____
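A quick, illustrative check of the tie-breaking behaviour (the labels are made up and assumed to be ordered nearest-first):

```python
# 'a' and 'b' are tied 2-2, so the farthest label is dropped and 'b' wins the re-vote.
majority_vote(['a', 'b', 'b', 'a'])   # -> 'b'
```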
###Markdown
Data structure for the k-NN classifier's labeled data points
###Code
from typing import NamedTuple
from scratch.linear_algebra import Vector, distance
class LabeledPoint(NamedTuple):
point: Vector
label: str
###Output
_____no_output_____
###Markdown
The k-NN classifier
###Code
def knn_classify(k: int,
labeled_points: List[LabeledPoint],
new_point: Vector) -> str:
# Order the labeled points from nearest to farthest.
by_distance = sorted(labeled_points,
key=lambda lp: distance(lp.point, new_point))
# Find the labels for the k closest
k_nearest_labels = [lp.label for lp in by_distance[:k]]
# and let them vote.
return majority_vote(k_nearest_labels)
###Output
_____no_output_____
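A tiny, illustrative example with made-up points (it relies on the `LabeledPoint` class and the `distance` function imported above):

```python
points = [LabeledPoint([0.0, 0.0], "a"),
          LabeledPoint([1.0, 0.0], "a"),
          LabeledPoint([5.0, 5.0], "b")]
knn_classify(2, points, [0.5, 0.1])   # both nearest neighbours are "a" -> "a"
```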
###Markdown
2. (key, value) data structures. Option 1: using collections.namedtuple
###Code
import datetime
from collections import namedtuple
StockPrice1 = namedtuple('StockPrice', ['symbol', 'date', 'closing_price'])
price = StockPrice1('MSFT', datetime.date(2018, 12, 14), 106.03)
###Output
_____no_output_____
###Markdown
Option 2: using typing.NamedTuple
###Code
from typing import NamedTuple
class StockPrice(NamedTuple):
symbol: str
date: datetime.date
closing_price: float
def is_high_tech(self) -> bool:
"""It's a class, so we can add methods too"""
return self.symbol in ['MSFT', 'GOOG', 'FB', 'AMZN', 'AAPL']
price = StockPrice('MSFT', datetime.date(2018, 12, 14), 106.03)
assert price.symbol == 'MSFT'
assert price.closing_price == 106.03
assert price.is_high_tech()
###Output
_____no_output_____
###Markdown
Option 3: using dataclasses.dataclass
###Code
from dataclasses import dataclass
@dataclass
class StockPrice2:
symbol: str
date: datetime.date
closing_price: float
def is_high_tech(self) -> bool:
"""It's a class, so we can add methods too"""
return self.symbol in ['MSFT', 'GOOG', 'FB', 'AMZN', 'AAPL']
price = StockPrice2('MSFT', datetime.date(2018, 12, 14), 106.03)
assert price.symbol == 'MSFT'
assert price.closing_price == 106.03
assert price.is_high_tech()
###Output
_____no_output_____
###Markdown
3. k-NN example (iris species classification): downloading the dataset
###Code
import requests
data = requests.get("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data")
with open("iris.data", "w") as f:
f.write(data.text)
###Output
_____no_output_____
###Markdown
Reading the data
###Code
from typing import Dict
import csv
from collections import defaultdict
###Output
_____no_output_____
###Markdown
Parsing the data
###Code
def parse_iris_row(row: List[str]) -> LabeledPoint:
"""
sepal_length, sepal_width, petal_length, petal_width, class
"""
measurements = [float(value) for value in row[:-1]]
# class is e.g. "Iris-virginica"; we just want "virginica"
label = row[-1].split("-")[-1]
return LabeledPoint(measurements, label)
###Output
_____no_output_____
###Markdown
Reading the CSV file
###Code
with open("iris.data") as f:
reader = csv.reader(f)
# for i, row in enumerate(reader):
# print(i, row)
iris_data = [parse_iris_row(row) for row in reader if row]
# print(iris_data)
###Output
_____no_output_____
###Markdown
Building a dictionary of points grouped by iris species
###Code
# We'll also group just the points by species/label so we can plot them.
points_by_species: Dict[str, List[Vector]] = defaultdict(list)
for iris in iris_data:
points_by_species[iris.label].append(iris.point)
# print(points_by_species)
###Output
_____no_output_____
###Markdown
Exploring the data
###Code
metrics = ['sepal length', 'sepal width', 'petal length', 'petal width']
pairs = [(i, j) for i in range(4) for j in range(4) if i < j]
print(pairs)
marks = ['+', '.', 'x'] # we have 3 classes, so 3 markers
from matplotlib import pyplot as plt
# plt.figure(figsize=(30, 20))
fig, ax = plt.subplots(2, 3, figsize=(12, 7))
for row in range(2):
for col in range(3):
i, j = pairs[3 * row + col]
ax[row][col].set_title(f"{metrics[i]} vs {metrics[j]}", fontsize=8)
ax[row][col].set_xticks([])
ax[row][col].set_yticks([])
for mark, (species, points) in zip(marks, points_by_species.items()):
xs = [point[i] for point in points]
ys = [point[j] for point in points]
ax[row][col].scatter(xs, ys, marker=mark, label=species)
ax[-1][-1].legend(loc='lower right', prop={'size': 6})
plt.show()
###Output
_____no_output_____
###Markdown
Splitting the dataset
###Code
import random
from scratch.machine_learning import split_data
random.seed(12)
iris_train, iris_test = split_data(iris_data, 0.70)
assert len(iris_train) == 0.7 * 150
assert len(iris_test) == 0.3 * 150
###Output
_____no_output_____
###Markdown
Prediction
###Code
from typing import Tuple
# track how many times we see (predicted, actual)
confusion_matrix: Dict[Tuple[str, str], int] = defaultdict(int)
num_correct = 0
for iris in iris_test:
predicted = knn_classify(5, iris_train, iris.point)
actual = iris.label
if predicted == actual:
num_correct += 1
confusion_matrix[(predicted, actual)] += 1
pct_correct = num_correct / len(iris_test)
print(pct_correct, confusion_matrix)
###Output
0.9777777777777777 defaultdict(<class 'int'>, {('setosa', 'setosa'): 13, ('versicolor', 'versicolor'): 15, ('virginica', 'virginica'): 16, ('virginica', 'versicolor'): 1})
###Markdown
4. The curse of dimensionality
###Code
import random
def random_point(dim: int) -> Vector:
return [random.random() for _ in range(dim)]
def random_distances(dim: int, num_pairs: int) -> List[float]:
return [distance(random_point(dim), random_point(dim))
for _ in range(num_pairs)]
import tqdm
dimensions = range(1, 101)
avg_distances = []
min_distances = []
random.seed(0)
for dim in tqdm.tqdm(dimensions, desc="Curse of Dimensionality"):
distances = random_distances(dim, 10000) # 10,000 random pairs
avg_distances.append(sum(distances) / 10000) # track the average
min_distances.append(min(distances)) # track the minimum
min_avg_ratio = [min_dist / avg_dist
for min_dist, avg_dist in zip(min_distances, avg_distances)]
from matplotlib import pyplot as plt
plt.plot(dimensions, avg_distances)
plt.plot(dimensions, min_distances)
plt.title("10,000 random distances")
plt.legend(["average distance", "minimum distance"], loc='upper left')
plt.xlabel("# of dimensions")
plt.ylabel("distance")
plt.show()
from matplotlib import pyplot as plt
plt.plot(dimensions, min_avg_ratio)
plt.title("Minimum Distances/Average Distance")
plt.xlabel("# of dimensions")
plt.ylabel("ratio")
plt.show()
from mpl_toolkits.mplot3d import Axes3D
dim = 3
num_pairs = 50
points = [random_point(dim) for _ in range(num_pairs)]
x_coord, y_coord, z_coord = zip(*points)
fig, ax = plt.subplots(1, 1)
ax = Axes3D(fig, elev=-150, azim=110)
ax.scatter(x_coord, y_coord, z_coord,
cmap=plt.cm.Set1, edgecolor='k', s=40)
###Output
_____no_output_____ |
Solutions/pandas_9.ipynb | ###Markdown
Assignment 9.1 The records in `df_1` have erroneously been set to be recorded from Jan 1, 2011. The records were in fact from May 2, 2011. Adjust the index accordingly. Hint 1: A pandas DataFrame has a tshift method, that allows you to shift the DatetimeIndex by a given number of steps at the DatetimeIndex's frequency Hint 2: Pandas assumes that individual indices are independent. It will therefore shift every index separately. To keep the proper relation between dates, calculate the offset for the first date in days and offset all indices by that number of days Hint 3: Pandas has a generic DateOffset object in pd.tseries.offsets.DateOffset, that allows you to specify the number of years, months and days to offset a date by Hint 4: Subtracting two pandas dates returns a timedelta. A timedelta has an attribute days, that returns the number of days between the two dates as an integer
###Code
df_1 = df_1.tshift((df_1.index[0]+pd.tseries.offsets.DateOffset(months=4, days=1)-df_1.index[0]).days)
###Output
_____no_output_____
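An equivalent, more explicit version of the same shift (illustrative; like the solution above, it assumes the index has a daily frequency and starts on Jan 1, 2011):

```python
# Days between the recorded start (Jan 1, 2011) and the true start (May 2, 2011)
offset_days = (pd.Timestamp('2011-05-02') - pd.Timestamp('2011-01-01')).days   # 121
df_1 = df_1.tshift(offset_days)   # shift every index entry forward by that many daily periods
```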
###Markdown
Assignment 9.2 Every Thursday the business tallies the total number of events in the previous week (Thursday to Wednesday). Compute the total number of events in every Thursday to Wednesday period and return a pandas Series where the index is the date of the Thursday the week's data is being tallied. Hint 1: The pandas offset object pd.tseries.offsets.Week has a parameter weekday, that allows you to specify which day of the week a date should be offset to. Note: When specifying e.g. Monday as the weekday, every Monday will be offset by a full week as pandas offsets to the following weekday. The parameter weekday is defined by an integer, where Monday is 0 and Sunday is 6 Hint 2: Pandas groupby can be passed a Series not in the DataFrame, e.g. an offset index, and it will then group by the values in that Series
###Code
df_1.groupby(df_1.index+pd.tseries.offsets.Week(weekday=3)).sum()
###Output
_____no_output_____
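The same grouping written out in two steps (illustrative variable names):

```python
# Every date rolls forward to the next Thursday, i.e. the day the week's data is tallied.
thursday_key = df_1.index + pd.tseries.offsets.Week(weekday=3)
weekly_totals = df_1.groupby(thursday_key).sum()
```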
###Markdown
Assignment 9.3 For each calendar month find the date where the most events occurred. Hint 1: Pandas has an offset object MonthBegin, that offsets a date to the beginning of a month Hint 2: Pandas groupby can be passed a Series not in the DataFrame, e.g. an offset index, and it will then group by the values in that Series Hint 3: A pandas DataFrame has a method nlargest which can return the rows where a column contains the largest values
###Code
df_1.reset_index().groupby(df_1.index+pd.tseries.offsets.MonthBegin()).apply(lambda df: df.nlargest(1,'No. Events')).set_index(df_1.index.name)
###Output
_____no_output_____
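The same chain unpacked step by step (illustrative variable names):

```python
# One grouping key per calendar month (each date rolls forward to the next month's first day)
month_key = df_1.index + pd.tseries.offsets.MonthBegin()

peak_days = (df_1.reset_index()                                    # keep the date as a column
                 .groupby(month_key)                               # group by calendar month
                 .apply(lambda df: df.nlargest(1, 'No. Events'))   # row with the most events
                 .set_index(df_1.index.name))                      # restore the date index
```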
###Markdown
Assignment 9.4 The event counts in `df_1` signal the number of contacts the business gets in a day. To plan the weekend staffing, the capacity is planned as the average of the week (Monday to Friday) plus one standard deviation for the same period. On how many of the weekend days did the number of contacts exceed capacity? Hint 1: A DatetimeIndex has the attributes week and dayofweek, that return the week number and the day-of-week number, respectively, for each date. Hint 2: Pandas groupby can be passed a Series not in the DataFrame, e.g. an offset index, and it will then group by the values in that Series
###Code
df_1.groupby(df_1.index.week).apply(lambda df: df[df.index.dayofweek>=5] > (df[df.index.dayofweek<5].mean()+df[df.index.dayofweek<5].std())).reset_index(level=0, drop=True).rename({'No. Events': 'Exceeded capacity'}, axis=1).sum()
###Output
_____no_output_____
###Markdown
Assignment 9.5 For each month, find the daily average of No. Events up until, but not including, the first day that No. Events exceeds 60. Hint 1: Applying the cummax method to a Boolean pandas Series will return 0 up until the first True value, and then 1 from there on Hint 2: A DatetimeIndex has the attribute month, that returns the month number of each date Hint 3: Pandas groupby can be passed a Series not in the DataFrame, e.g. an offset index, and it will then group by the values in that Series
###Code
tmp = df_1[~(df_1['No. Events']>60).groupby(df_1.index.month).cummax()]
tmp.groupby(tmp.index.month).mean()
###Output
_____no_output_____ |
docs/samples/contrib/mlworkbench/image_classification_flower/Flower Classification (large dataset experience).ipynb | ###Markdown
ML Workbench Sample --- Image Classification _Introduction of ML Workbench_ ML Workbench provides an easy command line interface for the machine learning life cycle, which involves four stages:* analyze: gather stats and metadata of the training data, such as numeric stats, vocabularies, etc. Analysis results are used in transforming raw data into numeric features, which can be consumed by training directly. * transform: explicitly transform raw data into numeric features which can be used for training.* train: train the model using transformed data.* predict/batch_predict: given a few instances of prediction data, make predictions instantly / with a large number of instances of prediction data, make predictions in a batched fashion. There are "local" and "cloud" run modes for each stage. The "cloud" run mode is recommended if your data is big. ML Workbench supports numeric, categorical, text, and image training data. For each type, there is a set of "transforms" to choose from. The "transforms" indicate how to convert the data into numeric features. For images, each image is converted to a fixed-size vector representing high-level features. _Transfer learning using Inception Package - Cloud Run Experience With Large Data_ ML Workbench supports image transforms (image to vec) with transfer learning. This notebook codifies the capabilities discussed in this [blog post](https://cloud.google.com/blog/big-data/2016/12/how-to-train-and-classify-images-using-google-cloud-machine-learning-and-cloud-dataflow). In a nutshell, it uses the pre-trained Inception model as a starting point and then uses transfer learning to train it further on additional, customer-specific images. For explanation, simple flower images are used. Compared to training from scratch, the time and costs are drastically reduced. This notebook does preprocessing, training and prediction by calling the CloudML API instead of running them "locally" in the Datalab container. It uses the full dataset.
###Code
# ML Workbench magics (%%ml) are under google.datalab.contrib namespace. It is not enabled by default and you need to import it before use.
import google.datalab.contrib.mlworkbench.commands
###Output
_____no_output_____
###Markdown
Setup
###Code
# Create a temp GCS bucket. If the bucket already exists and you don't have permissions, rename it.
!gsutil mb gs://flower-datalab-demo-bucket-large-data
###Output
_____no_output_____
###Markdown
Next cell, we will create a dataset representing our training data.
###Code
%%ml dataset create
name: flower_data_full
format: csv
train: gs://cloud-datalab/sampledata/flower/train3000.csv
eval: gs://cloud-datalab/sampledata/flower/eval670.csv
schema:
- name: image_url
type: STRING
- name: label
type: STRING
###Output
_____no_output_____
###Markdown
AnalyzeAnalysis step includes computing numeric stats (i.e. min/max), categorical classes, text vocabulary and frequency, etc. Run "%%ml analyze --help" for usage. The analysis results will be used for transforming raw data into numeric features that the model can deal with. For example, to convert categorical value to a one-hot vector ("Monday" becomes [1, 0, 0, 0, 0, 0, 0]). The data may be very large, so sometimes a cloud run is needed by adding --cloud flag. Cloud run will start BigQuery jobs, which may incur some costs.In this case, analysis step only collects unique labels.Note that we run analysis only on training data, but not evaluation data.
###Code
%%ml analyze --cloud
output: gs://flower-datalab-demo-bucket-large-data/analysis
data: flower_data_full
features:
image_url:
transform: image_to_vec
label:
transform: target
# Check analysis results
!gsutil list gs://flower-datalab-demo-bucket-large-data/analysis
###Output
_____no_output_____
###Markdown
Transform: With the analysis results we can transform raw data into numeric features. This needs to be done for both training and eval data. The data may be very large, so sometimes a cloud pipeline is needed by adding --cloud. Cloud run is implemented by DataFlow jobs, so it may incur some costs. In this case, transform is required. It downloads each image, resizes it, and generates embeddings from each image by running a pretrained TensorFlow graph. Note that it creates two jobs --- one for training data and one for eval data.
###Code
# Remove previous results
!gsutil -m rm gs://flower-datalab-demo-bucket-large-data/transform
%%ml transform --cloud
analysis: gs://flower-datalab-demo-bucket-large-data/analysis
output: gs://flower-datalab-demo-bucket-large-data/transform
data: flower_data_full
###Output
_____no_output_____
###Markdown
After transformation is done, create a new dataset referencing the training data.
###Code
%%ml dataset create
name: flower_data_full_transformed
format: transformed
train: gs://flower-datalab-demo-bucket-large-data/transform/train-*
eval: gs://flower-datalab-demo-bucket-large-data/transform/eval-*
###Output
_____no_output_____
###Markdown
Train: Training starts from transformed data. If the training workload is too much for the local VM, --cloud is recommended so training happens in the cloud, in a distributed way. Run %%ml train --help for details. Training in the cloud is implemented with Cloud ML Engine. It may incur some costs.
###Code
# Remove previous training results.
!gsutil -m rm -r gs://flower-datalab-demo-bucket-large-data/train
%%ml train --cloud
output: gs://flower-datalab-demo-bucket-large-data/train
analysis: gs://flower-datalab-demo-bucket-large-data/analysis
data: flower_data_full_transformed
model_args:
model: dnn_classification
hidden-layer-size1: 100
top-n: 0
cloud_config:
region: us-central1
scale_tier: BASIC
###Output
_____no_output_____
###Markdown
After training is complete, you should see model files like the following.
###Code
# List the model files
!gsutil list gs://flower-datalab-demo-bucket-large-data/train/model
###Output
_____no_output_____
###Markdown
Batch Prediction: Batch prediction performs prediction in a batched fashion. The data can be large, and is specified by files. Note that we use the "evaluation_model", which sits in "evaluation_model_dir". There are two models created in training. One is a regular model under the "model" dir; the other is the "evaluation_model". The difference is that the regular one takes prediction data without the target, while the evaluation model takes data with the target and outputs the target as is. So the evaluation model is good for evaluating the quality of the model, because the targets and predicted values are included in the output.
###Code
%%ml batch_predict --cloud
model: gs://flower-datalab-demo-bucket-large-data/train/evaluation_model
output: gs://flower-datalab-demo-bucket-large-data/evaluation
cloud_config:
region: us-central1
data:
csv: gs://cloud-datalab/sampledata/flower/eval670.csv
# after prediction is done, check the output
!gsutil list -l -h gs://flower-datalab-demo-bucket-large-data/evaluation
# Take a look at the file.
!gsutil cat -r -500 gs://flower-datalab-demo-bucket-large-data/evaluation/prediction.results-00000-of-00006
###Output
_____no_output_____
###Markdown
Prediction results are in JSON format. We can load the results into a BigQuery table and perform analysis.
###Code
import google.datalab.bigquery as bq
schema = [
{'name': 'predicted', 'type': 'STRING'},
{'name': 'target', 'type': 'STRING'},
{'name': 'daisy', 'type': 'FLOAT'},
{'name': 'dandelion', 'type': 'FLOAT'},
{'name': 'roses', 'type': 'FLOAT'},
{'name': 'sunflowers', 'type': 'FLOAT'},
{'name': 'tulips', 'type': 'FLOAT'},
]
bq.Dataset('image_classification_results').create()
t = bq.Table('image_classification_results.flower').create(schema = schema, overwrite = True)
t.load('gs://flower-datalab-demo-bucket-large-data/evaluation/prediction.results-*', mode='overwrite', source_format='json')
###Output
_____no_output_____
###Markdown
Check wrong predictions.
###Code
%%bq query
SELECT * FROM image_classification_results.flower WHERE predicted != target
%%ml evaluate confusion_matrix --plot
bigquery: image_classification_results.flower
%%ml evaluate accuracy
bigquery: image_classification_results.flower
###Output
_____no_output_____
###Markdown
Online Prediction and Build Your Own Prediction ClientPlease see "Flower Classification (small dataset experience)" notebook for how to deploy the trained model and build your own prediction client. Cleanup
###Code
!gsutil -m rm -rf gs://flower-datalab-demo-bucket-large-data
###Output
_____no_output_____ |
day2/CNN.ipynb | ###Markdown
Neural Networks for image classification importing libraries
###Code
import keras
from keras.datasets import cifar10
from keras.layers import Dense, Dropout, Flatten,Lambda,Reshape,Flatten,BatchNormalization
from keras.layers import Conv2D, MaxPooling2D,multiply,concatenate,Convolution2D
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Dense
from keras import optimizers
from keras.constraints import Constraint
from keras import backend as K
from keras.layers import Activation
from keras.callbacks import callbacks
from keras.callbacks import ModelCheckpoint
import random
import os
import numpy as np
from matplotlib import pyplot as plt
from progressbar import ProgressBar
from keras.utils import np_utils
from sklearn.model_selection import train_test_split
from sklearn.manifold import TSNE
from sklearn.metrics import log_loss
###Output
_____no_output_____
###Markdown
Setting up data The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class.There are 50000 training images and 10000 test images.The dataset is divided into five training batches and one test batch, each with 10000 images. The test batch contains exactly 1000 randomly-selected images from each class. The training batches contain the remaining images in random order, but some training batches may contain more images from one class than another. Between them, the training batches contain exactly 5000 images from each class.
###Code
batch_size = 40
epochs = 12
img_rows, img_cols = 32, 32
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 3, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 3, img_rows, img_cols)
input_shape = (3, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 3)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 3)
input_shape = (img_rows, img_cols, 3)
X = np.reshape(np.vstack([x_train,x_test]),(-1,img_rows,img_cols,3))
Y = np.reshape(np.vstack([y_train,y_test]),(-1,1))
x_train = np.array(x_train).reshape(-1,img_rows,img_cols,3) - np.mean(X,axis = 0)
x_test = np.array(x_test).reshape(-1,img_rows,img_cols,3) - np.mean(X,axis = 0)
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)
print (np.shape(x_train))
print (np.shape(x_test))
print (np.shape(y_train))
print (np.shape(y_test))
###Output
(50000, 32, 32, 3)
(10000, 32, 32, 3)
(50000, 10)
(10000, 10)
###Markdown
Helper functions: batch: returns a batch of points (samples and labels) sampled from the training data; calc_acc: gives accuracy on test data given a model; get_model: returns a small convolutional network; compile_model: returns a compiled model given an architecture
###Code
def batch(batch_size, x = x_train, y = y_train):
samples = np.array(random.sample(range(1, len(x)), batch_size))
return x[samples],y[samples]
def calc_acc(model,x_test = x_test, y_test = y_test):
s1 = np.argmax(model.predict(x_test),axis=1)
s2 = np.argmax(y_test,axis=1)
c = 0
for i in range(len(s1)):
if s1[i] == s2[i]:
c +=1
return (c/np.shape(x_test)[0])*100
def get_model():
input_l = Input(shape=(32, 32, 3,))
lay1_1 = Convolution2D(96, (3, 3), padding = 'same',activation='relu')(input_l)
lay1_2 = Convolution2D(96, (3, 3), padding = 'same',activation='relu')(lay1_1)
lay1_2 = Dropout(0.5)(lay1_2)
norm_1 = BatchNormalization()(lay1_2)
lay1_3 = Convolution2D(96, (3, 3), padding = 'same',activation='relu', subsample = (2,2))(norm_1)
flat = Flatten()(lay1_3)
out = Dense(10, activation='softmax',kernel_initializer='glorot_uniform')(flat)
model = Model(inputs=[input_l], outputs=[out])
return model
def compile_model(model):
sgd = optimizers.SGD(lr=0.001, momentum=0.9, clipnorm=1.0, clipvalue=0.5)
model.compile(optimizer=keras.optimizers.Adam(lr = 0.0001) ,loss=keras.losses.categorical_crossentropy,metrics = ['accuracy'])
return model
model = get_model()
###Output
/home/u980159/anaconda3/envs/my_space/lib/python3.7/site-packages/ipykernel_launcher.py:7: UserWarning: Update your `Conv2D` call to the Keras 2 API: `Conv2D(96, (3, 3), padding="same", activation="relu", strides=(2, 2))`
import sys
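A quick, illustrative sanity check of the `batch` helper defined above (the expected shapes follow from the CIFAR-10 arrays prepared earlier):

```python
xb, yb = batch(32)
print(xb.shape, yb.shape)   # expected: (32, 32, 32, 3) and (32, 10)
```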
###Markdown
Model summary
###Code
model = compile_model(model)
model.summary()
###Output
Model: "model_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_4 (InputLayer) (None, 32, 32, 3) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 32, 32, 96) 2688
_________________________________________________________________
conv2d_7 (Conv2D) (None, 32, 32, 96) 83040
_________________________________________________________________
dropout_3 (Dropout) (None, 32, 32, 96) 0
_________________________________________________________________
batch_normalization_2 (Batch (None, 32, 32, 96) 384
_________________________________________________________________
conv2d_8 (Conv2D) (None, 16, 16, 96) 83040
_________________________________________________________________
flatten_2 (Flatten) (None, 24576) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 245770
=================================================================
Total params: 414,922
Trainable params: 414,730
Non-trainable params: 192
_________________________________________________________________
###Markdown
Training
###Code
no_of_epochs = 15
history_c = model.fit(x_train, y_train, batch_size=batch_size, epochs=no_of_epochs,validation_data=(x_test, y_test), verbose=1)
###Output
WARNING:tensorflow:From /home/u980159/anaconda3/envs/my_space/lib/python3.7/site-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.
Train on 50000 samples, validate on 10000 samples
Epoch 1/15
50000/50000 [==============================] - 28s 558us/step - loss: 1.6053 - accuracy: 0.4329 - val_loss: 1.2234 - val_accuracy: 0.5597
Epoch 2/15
50000/50000 [==============================] - 26s 528us/step - loss: 1.1904 - accuracy: 0.5825 - val_loss: 1.0250 - val_accuracy: 0.6331
Epoch 3/15
50000/50000 [==============================] - 26s 530us/step - loss: 0.9965 - accuracy: 0.6516 - val_loss: 0.9510 - val_accuracy: 0.6680
Epoch 4/15
50000/50000 [==============================] - 27s 530us/step - loss: 0.8725 - accuracy: 0.6947 - val_loss: 0.9250 - val_accuracy: 0.6827
Epoch 5/15
50000/50000 [==============================] - 27s 532us/step - loss: 0.7915 - accuracy: 0.7248 - val_loss: 0.8917 - val_accuracy: 0.6941
Epoch 6/15
50000/50000 [==============================] - 27s 532us/step - loss: 0.7281 - accuracy: 0.7469 - val_loss: 0.8877 - val_accuracy: 0.6996
Epoch 7/15
50000/50000 [==============================] - 27s 532us/step - loss: 0.6666 - accuracy: 0.7689 - val_loss: 0.8856 - val_accuracy: 0.7022
Epoch 8/15
50000/50000 [==============================] - 27s 534us/step - loss: 0.6082 - accuracy: 0.7875 - val_loss: 0.8628 - val_accuracy: 0.7079
Epoch 9/15
50000/50000 [==============================] - 27s 531us/step - loss: 0.5659 - accuracy: 0.8020 - val_loss: 0.8650 - val_accuracy: 0.7085
Epoch 10/15
50000/50000 [==============================] - 27s 532us/step - loss: 0.5199 - accuracy: 0.8179 - val_loss: 0.8575 - val_accuracy: 0.7106
Epoch 11/15
50000/50000 [==============================] - 27s 534us/step - loss: 0.4801 - accuracy: 0.8314 - val_loss: 0.8858 - val_accuracy: 0.7129
Epoch 12/15
50000/50000 [==============================] - 27s 531us/step - loss: 0.4444 - accuracy: 0.8431 - val_loss: 0.9254 - val_accuracy: 0.7018
Epoch 13/15
50000/50000 [==============================] - 26s 530us/step - loss: 0.4151 - accuracy: 0.8539 - val_loss: 0.8882 - val_accuracy: 0.7205
Epoch 14/15
50000/50000 [==============================] - 26s 530us/step - loss: 0.3789 - accuracy: 0.8655 - val_loss: 0.9135 - val_accuracy: 0.7186
Epoch 15/15
50000/50000 [==============================] - 27s 536us/step - loss: 0.3563 - accuracy: 0.8751 - val_loss: 0.9252 - val_accuracy: 0.7202
###Markdown
Analysis
###Code
b_l = history_c.history['val_loss']
v_l = history_c.history['loss']
acc_t = history_c.history['val_accuracy']
acc_tr = history_c.history['accuracy']
fig = plt.figure(figsize=(8,10))
plt.xlim(0, no_of_epochs)
plt.ylim(0, np.max(v_l)+1)
ax = fig.gca()
ax.set_xticks(np.arange(0, no_of_epochs, 1))
ax.set_yticks(np.arange(0, np.max(v_l)+1, 0.1))
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(20)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(20)
plt.plot(v_l[:no_of_epochs],'r-s' ,linewidth=4)
plt.plot(b_l[:no_of_epochs],'g-o',linewidth=4)
plt.grid()
plt.title(' Model Loss',fontsize=30)
plt.ylabel('loss',fontsize=30)
plt.xlabel('epoch',fontsize=30)
plt.legend(['train', 'test'], loc='upper left', prop={"size":30})
plt.show()
fig = plt.figure(figsize=(8,10))
plt.xlim(0.2, no_of_epochs)
plt.ylim(0.2, 1)
ax = fig.gca()
ax.set_xticks(np.arange(0.2, no_of_epochs, 1))
ax.set_yticks(np.arange(0.2, 1, 0.1))
for tick in ax.xaxis.get_major_ticks():
tick.label.set_fontsize(20)
for tick in ax.yaxis.get_major_ticks():
tick.label.set_fontsize(20)
plt.plot(acc_t[:no_of_epochs],'r-s' ,linewidth=4)
plt.plot(acc_tr[:no_of_epochs],'g-o',linewidth=4)
plt.grid()
plt.title(' Model Accuracy',fontsize=30)
plt.ylabel('accuracy',fontsize=30)
plt.xlabel('epoch',fontsize=30)
plt.legend(['test', 'train'], loc='upper left', prop={"size":30})
plt.show()
print (calc_acc(model))
print (calc_acc(model=model,x_test=x_train,y_test=y_train))
labels = ['airplane',
'automobile',
'bird',
'cat',
'deer',
'dog',
'frog',
'horse',
'ship',
'truck']
s1 = np.argmax(model.predict(x_test),axis=1)
s2 = np.argmax(y_test,axis=1)
c = []
pr_l = []
for i in range(len(s1)):
if not s1[i] == s2[i]:
c.append(i)
pr_l.append("Label = "+labels[s2[i]]+" Predicted_Label = "+labels[s1[i]])
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(s1,s2)
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.colorbar()
tick_marks = np.arange(10)
plt.xticks(tick_marks, labels, rotation=45)
plt.yticks(tick_marks, labels)
plt.show()
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
for i,ele in enumerate(x_test[:100]):
if i in c:
plot_var = ele
plt.imshow(plot_var)
plt.title(pr_l[c.index(i)])  # pr_l is indexed by position in the misclassified list, not by the test index
plt.show()
break
model.save_weights('CNN.h5')
###Output
_____no_output_____ |
Xu Dong's Add-hoc Analyze & Visualization on sales data.ipynb | ###Markdown
Ad-hoc analysis and visualization. Source: Kaggle.com 1.0--Customer needs description
###Code
display_png(file="C:/Users/Xavier/Downloads/stadium exercise B/exercise B.png")
###Output
_____no_output_____
###Markdown
1.1 Explore the Data
###Code
library(plyr)
library(dplyr)
library(tidyr)
library(readr)
library(magrittr)
library(ggplot2)
library("IRdisplay")
library(chron)
library('scales')
library(reshape2)
library(ggmap)
library(tidyverse)
library(fiftystater)
library(viridis)
library(mapdata)
library(maptools)
library(maps)
snkr_data<-read.csv('C:/Users/Xavier/Downloads/stadium exercise B/sg_data_analyst_exercise_b.csv')
head(snkr_data,10)
summary(snkr_data)
###Output
_____no_output_____
###Markdown
2.1 Top Selling Product for Each Store
###Code
#aggregate(y~x,data,func,func argument)---y means response variable,x means grouping parameters,func...)
seller_data<-aggregate(qty_ordered~product_id+store_id, snkr_data, 'sum')
seller_data[1:10,]
# first group by store, then by product, and finally sum the quantity of orders
order_seller<-seller_data[order(-seller_data$qty_ordered),]
# order the above result by qty_ordered DESC
result<-order_seller[which(order_seller$store_id==5),][1,]
# filter with store_id condition
result
###Output
_____no_output_____
###Markdown
The topseller function returns the best seller for any single store or any range of stores. The range does not need to start from store_id 1, and out-of-range requests are rejected.
###Code
topseller<-function(i,j){
if (i>j) {a<-min(c(i,j))
b<-max(c(i,j))
i<-a
j<-b}
if (i>length(unique(snkr_data$store_id)))
{
return('request not available')}
else
{
if (j>length(unique(snkr_data$store_id)))
{j<-5}
seller_data<-aggregate(qty_ordered~product_id+store_id, snkr_data, sum)
order_seller<-seller_data[order(-seller_data$qty_ordered),]
mylist<-c(i:j)
output<-order_seller[1:(j-i+1),]
for (i in i:j) {
x=match(i,mylist)
output[x,]<-order_seller[which(order_seller$store_id==i),][1,]
}
return(output)}
}
topseller(6,10)
topseller(2,2)
topseller(1,5)
topseller(2,10)
topseller(5,2)
topseller(10,2)
###Output
_____no_output_____
###Markdown
2.2 Average Order Value for Each Store
###Code
#order_value is the product of quantity and unit price
snkr_data$order_value<-with(snkr_data,unit_price*qty_ordered)
t=ncol(snkr_data)
snkr_data<-snkr_data[c(1:(t-2),t,(t-1))]
snkr_data[1:10,]
length(snkr_data$order_id);length(unique(snkr_data$order_id))
# Now, we know that there are some duplicated order_id, so I cannot simply use aggregate 'mean' function
avg_order_value<-function(i,j){
n=length(unique(snkr_data$store_id))
output<-data.frame('store_id'=rep(1,n),"avg_order_value"=rep(0,n))
output<-output[1:(j-i+1),]
mylist<-c(i:j)
for (i in i:j){
x=match(i,mylist)
subset<-snkr_data[snkr_data$store_id==i,]
len=length(unique(subset$order_id))
sum=sum(subset$order_value)
avg=floor(sum/len)
output[x,1:ncol(output)]=c(i,avg)
}
return(output)
}
avg_order_value(2,5)
###Output
_____no_output_____
###Markdown
2.3 Average Daily Quantity Ordered
###Code
class(snkr_data$order_timestamp)
snkr_data$order_date<-as.Date(snkr_data$order_timestamp,'%Y-%m-%d')
snkr_data[1:10,]
snkr_data$order_date<-as.factor(snkr_data$order_date)
str(snkr_data$order_date)
daily_order<-aggregate(qty_ordered~store_id+order_date,snkr_data,FUN='sum')
daily_order[order(daily_order$store_id,daily_order$order_date),]
avg_daily_qty<-function(){
output<-aggregate(order_value~store_id, snkr_data, mean)
output$avg_daily_qty_round<-1
daily<-aggregate(qty_ordered~store_id+order_date,snkr_data,FUN='sum')
i=length(unique(snkr_data$store_id))
for (i in 1:i){
subtest<-snkr_data[daily$store_id==i,]
len=length(unique(subtest$order_date))
sum=sum(subtest$qty_ordered)
avg=floor(sum/len)
avg2=round(sum/len,digits=2)
output[i,(ncol(output)-1):ncol(output)]=c(avg,avg2)
}
names(output)[names(output) == "order_value"] <- "avg_daily_qty_floor"
return(output)
}
avg_daily_qty()
###Output
_____no_output_____
###Markdown
3.1 Visualization--Each Store's Revenue Share
###Code
str(snkr_data)
snkr_data$store_id<-as.factor(snkr_data$store_id)
v1<-aggregate(order_value~order_date+store_id,snkr_data,sum)
v1<-v1[order(v1$order_date,v1$store_id),]
names(v1)[names(v1) == "order_value"] <- "revenue"
v2<-aggregate(order_value~order_date,snkr_data,sum)
names(v2)[names(v2) == "order_value"] <- "total_revenue"
v3<-merge(v1,v2, by='order_date', all.x=TRUE)
v3$rev_percent<-with(v3,round((100*revenue/total_revenue),digits=2))
v3
v3 <- ddply(v3, .(order_date),transform, pos = 100-cumsum(rev_percent) + (0.5 * rev_percent))
#to display correct position on the stacked bars, I should recode the position
colorfill=c('darkgoldenrod1','beige','rosybrown','lightpink1','lightblue2')
p1<-ggplot(v3,aes(y = rev_percent, x = order_date, fill = store_id)) +
# fill is the bar
geom_bar(stat="identity",width=0.55,color="seashell3") +
geom_text(data=v3, aes(x = order_date, y =pos,label = paste0(rev_percent,'%')), size=2.5) +
theme(legend.position="bottom", legend.direction="horizontal") +
scale_y_continuous(labels = dollar_format(suffix = "%", prefix = "")) +
ggtitle("Each Store's Share of Revenue By Day")+
theme(plot.title = element_text(size=10,hjust=0.5,vjust=6,face="bold"),
panel.background = element_rect(fill = 'white', colour = 'seashell3'),
plot.margin = unit(c(3, 1, 3, 1), "cm"),
axis.text.x=element_text(angle=50, size=8, vjust=0.5,face='bold',color='black'))+
scale_fill_manual(values=colorfill)+
labs(x="Order Date", y="Revenue Percentage")
p1
###Output
_____no_output_____
###Markdown
3.2 Visualization--Geomap for Unshipped Quantity
###Code
snkr_data[1:10,]
snkr_data$unshipped_qty<-with(snkr_data,qty_ordered-qty_shipped)
str(snkr_data)
nrow(snkr_data)
snkr_data<-snkr_data[snkr_data$shipping_region!='Armed Forces Europe'&snkr_data$shipping_region!='Armed Forces Pacific',]
snkr_data<-snkr_data[snkr_data$shipping_region!='Puerto Rico',]
nrow(snkr_data)
#remove two Armed Forces outside the country, as they cannot be shown in the map
unship_data<-aggregate(unshipped_qty~shipping_region,snkr_data,FUN='sum')
unship_data[1:5,];nrow(unship_data)
summary(unship_data)
unship_data$state<-str_to_lower(unship_data$shipping_region)
unship_data[1:5,];nrow(unship_data)
us <- map_data("state")  # lower-48 state polygons used by geom_map below
head(us,6)
gglabel=fifty_states %>%
group_by(id) %>%
summarise(lat = mean(c(max(lat), min(lat))),
long = mean(c(max(long), min(long))))
gglabel[1:5,];nrow(gglabel)
gglabel$state_abb<-state.abb[match(gglabel$id,str_to_lower(state.name))]
gglabel[1:5,];nrow(gglabel)
gglabel[gglabel$state_abb=='NY',]
gg <- ggplot()+
geom_map(data=us, map=us,
aes(long,lat,map_id=region),
color="#2b2b2b", fill=NA, size=0.5)+
geom_map(data=unship_data, map=us,
aes(fill=unshipped_qty,
map_id=state),
color="grey", size=0.15)+
geom_text(data = gglabel,aes(x = long, y = lat, label = state_abb ),color='black',size=2.5) +
coord_map("polyconic")+
ggtitle("Unshipped Quantity by Region")+
scale_fill_distiller(palette = 'Blues', direction = 1,name = "Unshipped\nQuantity\n /Pairs")+
#distiller is for continuous vars, fill is the area, color is the line
scale_x_continuous(breaks = NULL) +
scale_y_continuous(breaks = NULL) +
labs(x = "", y = "")+
theme(plot.title = element_text(size=10,hjust=0.5,vjust=0,face='bold'),
panel.background = element_rect(fill = 'white', color = 'white'),
legend.position="bottom",legend.title=element_text(size=8))+
annotate("text", x = -80, y = 50, label = "blank state means missing value",size=2.5)
gg
###Output
Warning message:
"Ignoring unknown aesthetics: x, y"Warning message:
"Removed 1 rows containing missing values (geom_text)."
###Markdown
4. Reference links: http://t-redactyl.io/blog/2016/01/creating-plots-in-r-using-ggplot2-part-3-bar-plots.html http://www.sthda.com/english/wiki/ggplot2-barplots-quick-start-guide-r-software-and-data-visualization http://zevross.com/blog/2014/08/04/beautiful-plotting-in-r-a-ggplot2-cheatsheet-3/ https://stackoverflow.com/questions/48832201/plot-a-numerical-values-in-united-states-map-based-on-abbreviated-state-names https://ggplot2.tidyverse.org/reference/scale_brewer.html
###Code
snkr_data<-subset(snkr_data,select=-c(X))
snkr_data<-read.csv('D:/BaiduNetdiskDownload/snkr_data1016.csv')
write.csv(map_data,file='D:/BaiduNetdiskDownload/map_data1016.csv')
write.csv(snkr_data,file='D:/BaiduNetdiskDownload/snkr_data1016.csv')
###Output
_____no_output_____ |